Intel RMS25KB080 (LSI 2308 HBA) - Abysmal RAID 10, "background initialize"


danwood82

Member
Feb 23, 2013
Hi all,

I finally got my new server setup going: an Intel S2600COE motherboard with the included RMS25KB080 RAID host adapter (which appears to be a standard LSI 2308 controller with Intel firmware), and 8x Seagate Constellation CS 3TB drives.

I've got the machine up and running, and everything looks peachy... I've installed Windows, drivers, updates, the usual routine, and now I've set up the RAID controller to have a single 12TB RAID 10 array.

The only problem now is that it appears to be atrociously slow... way slower than a single drive for writes, and about the same as a single drive for reads (roughly 50MB/s and 250MB/s respectively).

In the RAID BIOS and in "Intel RAID Web Console 2", it reports the array is "Optimal" but also running "Background Initialize: 0%"... and it's been on 0% for hours!

I don't understand... it's a newly created array, so by implication the data is already mirrored and consistent, by virtue of there not being any yet. Does it really need to run this initialization? Will it really take days/weeks to complete? Will performance definitely improve dramatically once it completes, or does this look like a problem elsewhere?
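
For reference, here's the kind of quick-and-dirty sequential test I'm using to sanity-check those numbers (a rough Python sketch, assuming the array volume is mounted as D:\ and has room for a 2 GiB test file - ATTO or CrystalDiskMark would be the proper tools):

```python
# Quick-and-dirty sequential write/read check against the array volume.
# Assumptions: the RAID 10 volume is mounted as D:\ with room for a
# 2 GiB test file. Numbers are ballpark only - the read pass may be
# partly served from the Windows file cache, so treat it as an upper bound.
import os
import time

TEST_FILE = r"D:\speedtest.bin"
CHUNK = 4 * 1024 * 1024              # 4 MiB per write
SIZE = 2 * 1024 * 1024 * 1024        # 2 GiB total
block = os.urandom(CHUNK)

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())             # force it out to the disks
print("write: %.0f MB/s" % (SIZE / (time.time() - start) / 1e6))

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
print("read:  %.0f MB/s" % (SIZE / (time.time() - start) / 1e6))

os.remove(TEST_FILE)
```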
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
RAID10 does require a lengthy initialization. While there is "no data" on the drives from your point of view, the drives themselves can have ones and zeroes on them, and each mirrored pair must be made to be identical during initialization - at least for hardware RAID controllers. Performance will be poor to very poor during the process. If you look, are the drive lights flickering away?

That said, I do recommend that you test all drives individually before combining them into any array. If possible, break the array and run ATTO or IOMeter or whatever on the individual drives. My process includes running an IOMeter "speed test" on each drive followed by many hours of "stress test" - often overnight. Only then do I start the long process of building an array or adding disks to an array. There is no worse feeling than having a disk take a dive during a ten-day RAID6 array expand operation.
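
If you want something scriptable for the speed-test pass, a minimal per-drive sequential read check along these lines works as a first sanity check (a rough sketch only, assuming Windows, Administrator rights, and that the data disks are exposed individually - i.e. before the array exists - as \\.\PhysicalDrive1 through \\.\PhysicalDrive8; adjust to your layout and keep it away from the OS disk):

```python
# Minimal per-drive sequential read check - a rough sanity test only,
# not a substitute for ATTO/IOMeter. Assumptions: Windows, run as
# Administrator, data disks exposed individually as PhysicalDrive1..8.
import time

CHUNK = 1024 * 1024            # 1 MiB, sector-aligned
TOTAL = 1024 * CHUNK           # read 1 GiB from the start of each drive

for n in range(1, 9):          # PhysicalDrive1..8 - adjust to your layout
    path = r"\\.\PhysicalDrive" + str(n)
    try:
        with open(path, "rb", buffering=0) as dev:
            start = time.time()
            done = 0
            while done < TOTAL:
                block = dev.read(CHUNK)
                if not block:
                    break
                done += len(block)
        secs = time.time() - start
        print("%s: %.0f MB/s" % (path, done / secs / 1e6))
    except OSError as exc:
        print("%s: %s" % (path, exc))
```

IOMeter or ATTO will still give you better numbers and queue-depth control; this just flags a drive that is obviously slow or throwing errors.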


mobilenvidia

Moderator
Sep 25, 2011
New Zealand
Background Initialise defaults to 30% resource use, so this will cripple your writes until it's done.

12TB will take a long time; my 6x2TB drives take about 24hrs to do, but I'm a RAID6 man.

You can stop the initialise if you want to do some testing.
For a live system setup I would highly recommend what dba said above: test all drives before setting up the array, then let it merrily do the initialise over a few days (meaning the machine stays on 24/7).
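
Rough math on why 12TB takes days - a back-of-envelope sketch only, assuming the background init walks all 12TB of mirrored data once at an effective throttled rate of around 35MB/s (just a guess, given the 30% background setting and typical drive speeds):

```python
# Back-of-envelope estimate only - the real rate depends on the
# controller and the background-task setting.
ARRAY_TB = 12
RATE_MBPS = 35    # assumed effective background-init throughput (a guess)

hours = ARRAY_TB * 1e6 / RATE_MBPS / 3600
print("~%.0f hours (~%.1f days)" % (hours, hours / 24))
```

That lands at roughly 95 hours, i.e. about 4 days of background initialise.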
 

danwood82

Member
Feb 23, 2013
Well, it appears it is getting there, very slowly but surely - about 1% per hour... so I'm looking at roughly 4 days of initialization! :p
I did pre-empt the drive-testing advice - all 8 drives have been thoroughly tested and surface-scanned individually.

So, it shouldn't matter at all if I start copying data to the drive before initialization completes, right?

Also, if the system shut down improperly at some point in the future, or had some other glitch, would the synchronization check take this long again, and/or degrade performance this much?

I suspect it may be prudent to get a UPS hooked up to this beast...
 

mrkrad

Well-Known Member
Oct 13, 2012
No kidding - an HP P420 can initialize 70 3TB drives in a minute. (Psst: you don't need to initialize a drive by writing to it unless you're going to do some sort of scrubbing, like ZFS/ReFS.) Definitely not for RAID-1+0 or 1; probably a good idea with RAID-5 or 6.

The HP controllers can set the rebuild or transform rate anywhere from 10% to 110% (the sector scan can actually run faster than normal reads/writes during a burn-in).

If you don't remember the old days of ESX 4.0: back then the LSI controllers would get stuck in initialize FOREVER if you didn't let it finish before booting the hypervisor. Do you want me to remind you how it feels to have an SSD stuck initializing for a week or two before you realize it? Ouchies.

I'm not a big fan of initialize, and quite honestly, if you have a modern SSD or a SAS drive with SED, one might ask why bother? (The ES.3 can even be ordered in FIPS 140-2 enhanced-encryption models for a few bucks more.)