LSI 9271-8i array reconstruction


poto

Active Member
May 18, 2013
I'm in the process of converting/expanding an 8x 4TB RAID 5 array into a 14x 4TB RAID 6 array, and thought it might be helpful to others to know the times involved and the impact on performance.

Equipment:
LSI 9271-8i RAID controller
HGST 724040AL 4TB SATA HDD (qty 14)
RES2CV360 SAS expander
Intel 520 SSD, 4x 240GB (CacheCade)

Workload:
medium/light - home/lab NAS, streaming media
current array holds 21TB of data, with 18% free space

Progress:
4 days 15 hrs - 80%
5 days 18 hrs 20 min - 100%

ETA:
1 day 5 hrs
expansion complete
(changes to reconstruction priority rate in MSM did not seem to have any effect)
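
For anyone curious, the final figure is almost exactly what a straight-line extrapolation from the 80% checkpoint above predicts; a quick back-of-envelope sketch in Python (numbers taken from this post, nothing controller-specific):

# Linear extrapolation of expansion time from the observed checkpoint:
# 80% complete after 4 days 15 hours.
hours_at_80pct = 4 * 24 + 15            # 111 hours
total_est = hours_at_80pct / 0.80       # ~138.75 hours at 100%
remaining = total_est - hours_at_80pct  # ~27.75 hours

print(f"estimated total:     {total_est / 24:.1f} days ({total_est:.1f} h)")
print(f"estimated remaining: {remaining / 24:.1f} days ({remaining:.1f} h)")
# -> roughly 5.8 days total and ~1.2 days remaining, in line with the
#    "1 day 5 hrs" ETA and the final 5 days 18 hrs 20 min.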

Impact:
The onboard read/write cache is disabled during reconstruction, as is CacheCade.
I'm too cautious to hammer on the array with random I/O, but sequential read and write are 15MB/s and 90MB/s, respectively.

Why?
On previous expansions I have always wiped the array, created a new one, and reloaded from backups, so I was curious to see how long an in-place expansion would take. The new HDDs were stress-tested for 48hrs and backups are available, so why not.

Conclusions/remaining questions:
Impact is too high for production, but OK for a home lab
Total array rebuild time: 5 days 18hr 20min

(The controller kicked off a background init after the expansion, which seems redundant after a complete rebuild. I aborted it and planned to wait for the scheduled patrol read, but apparently the controller is not to be denied - the init restarted.)

Time required to expand the NTFS partition - effectively none, it was immediate.

Formatted space increased from 25.4TB to 43.6TB
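
Those before/after figures line up with what the drive counts predict once you account for decimal TB drives vs. the binary TiB Windows reports; a quick sanity check (the small remaining gap is presumably filesystem/metadata overhead):

# RAID usable-capacity sanity check: drives are sold in decimal TB,
# Windows reports binary TiB, hence 25.4/43.6 rather than 28/48.
TB = 1000**4
TiB = 1024**4
drive = 4 * TB

raid5_old = (8 - 1) * drive    # 8-drive RAID 5: one drive's worth of parity
raid6_new = (14 - 2) * drive   # 14-drive RAID 6: two drives' worth of parity

print(f"old RAID 5: {raid5_old / TiB:.1f} TiB")   # ~25.5
print(f"new RAID 6: {raid6_new / TiB:.1f} TiB")   # ~43.7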

Good to know it can be done if needed, but as Chuckleb said, only if you have to.

Benchmarks:
HD Tune average read - 742MB/s
Anvil - 3,704
Sequential network copy - read 950MB/s, write 650MB/s
(probably limited by older/slower HDDs on the target array; will update with an SSD target, pending time and ambition)
-----------------------------------------------------------------------
CrystalDiskMark 4.0.3 x64 (C) 2007-2015 hiyohiyo
Crystal Dew World : Crystal Dew World
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 738.775 MB/s
Sequential Write (Q= 32,T= 1) : 409.090 MB/s
Random Read 4KiB (Q= 32,T= 1) : 247.399 MB/s [ 60400.1 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 82.007 MB/s [ 20021.2 IOPS]
Sequential Read (T= 1) : 770.921 MB/s
Sequential Write (T= 1) : 394.301 MB/s
Random Read 4KiB (Q= 1,T= 1) : 22.250 MB/s [ 5432.1 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 49.727 MB/s [ 12140.4 IOPS]

Test : 4096 MiB [S: 59.3% (26492.8/44705.9 GiB)] (x2)
Date : 2015/08/06 8:18:24
OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)
 

mrkrad

Well-Known Member
Oct 13, 2012
Did you alter the priority % for rebuild/transform, or use the stock MegaRAID values? Most RAID controllers allow you to set a priority for reconstruction/transformation, even if only low/normal/high or a percentage.
 

poto

Active Member
May 18, 2013
Did you alter the priority % for rebuild/transform, or use the stock MegaRAID values? Most RAID controllers allow you to set a priority for reconstruction/transformation, even if only low/normal/high or a percentage.
I changed Set Adjustable Task Rates > Reconstruction Rate (%) from the stock 30% to 50% for a couple of hours, then 75% for a couple more. The ETA never budged, so I returned the setting to 30% and left it alone for the remainder.
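
For anyone who wants to measure this rather than trust the MSM ETA, a rough logger like the sketch below can record percent-per-hour, so a rate change's effect (or lack of one) is visible. It assumes MegaCli is installed and that your version supports LDRecon -ShowProg; the exact flags and output format vary by release, so treat the command as a placeholder rather than gospel.

# Hypothetical reconstruction-progress logger (command syntax is an
# assumption - check your MegaCli documentation before relying on it).
import re, subprocess, time

CMD = ["MegaCli64", "-LDRecon", "-ShowProg", "-L0", "-a0"]

last = None
while True:
    out = subprocess.run(CMD, capture_output=True, text=True).stdout
    m = re.search(r"(\d+)\s*%", out)   # pull the "Completed NN%" figure
    if m:
        pct, now = int(m.group(1)), time.time()
        if last is not None:
            hours = (now - last[1]) / 3600
            print(f"{pct}% complete, ~{(pct - last[0]) / hours:.2f} %/hour")
        last = (pct, now)
    time.sleep(1800)                   # poll every 30 minutes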
 

denpa

Member
Feb 21, 2015
I just completed an expansion on my array, so I'll share how it went.

Setup:
Intel RS25SB008 RAID Controller (LSI 9286-8iCV)
9x Western Digital RE 4TB SAS WD4001FYYG (7 re-certified)
2x HGST Ultrastar 7K4000 4TB SAS (1 adding, 1 hotspare)
Dell J23 enclosure (SAS 3Gb/s)

Use: personal file storage
Expanded from 28TB RAID 6 (25.4TB in Windows) to 32TB RAID 6 (29.1TB in Windows)
Approximate time taken: 5-6 days

One of the WD drives failed during the array expansion, which slowed down the final day; it probably would have finished on the 5th day had the drive not failed.
Immediately after the expansion finished, the array rebuilt onto the hotspare. That completed, and it is now doing a background initialization.

With no other load, HDTune benchmarked at 0.2MB/s during the expansion, and accessing the network share was noticeably laggy - files took a while to load. I guess that's what having no cache at all does to transfer rates. I wouldn't even imagine doing this in a production environment.

The new available space did not show in Disk Management until after the expansion finished.

After experiencing another drive failure during expansion (I'd had two previous failures during consistency checks/initialization), I'm not sure whether I would rather expand the array again or copy from backup if I had to add more drives in the future. When I expanded from a 6-drive RAID 5 to a 9-drive RAID 6, I had to remake the array from scratch because the card wouldn't let me expand for some reason, and I was lucky all the data copied back without any UREs. I chose to expand this time to avoid reading 11TB from Seagate Archive 8TB drives that are rated at only 1 URE per 10^14 bits. How do you decide which to do?
I have learned my lesson on buying re-certified drives though. Never again.
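
For what it's worth, the arithmetic behind that URE worry: reading ~11TB from drives rated at 1 unrecoverable error per 10^14 bits works out to close to even odds of hitting at least one, which is why the full copy-back felt risky (the rating is a spec-sheet worst case, so real drives usually do better):

# Expected UREs when reading 11 TB at a 1-per-1e14-bit error rating,
# and the chance of at least one error under a simple Poisson model.
import math

bits_read = 11e12 * 8            # 11 TB expressed in bits
expected = bits_read / 1e14      # ~0.88 expected UREs
p_at_least_one = 1 - math.exp(-expected)

print(f"expected UREs: {expected:.2f}")
print(f"P(>=1 URE):    {p_at_least_one:.0%}")   # ~59%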
 

ninja6o4

Member
Jul 2, 2014
I thought I would post my experience that the reconstruction rate priority option appears to do NOTHING.
I have the 9271-8i and went from 4x 6TB to 5x 6TB. The process took approximately 5 days, during which the server was completely useless - the load was so high that I couldn't even stream a video off the array while it rebuilt.

I will be ditching LSI and looking at something else. I need OCE, and I need the array to remain usable while it runs.

Does anyone have experience with other cards and their OCE capability? I have previously used OCE on both Adaptec and the 3ware 9690SA, and it did not impact the user experience the way this LSI card did.
 

ninja6o4

Member
Jul 2, 2014
The time was not the issue; it was the raw I/O usage impacting ordinary use during the OCE, and the fact that their own priority setting for this appears not to work.
I tried a test copy off the RAID while it was in the OCE process and was literally getting 150-300 KiB/s.
I think I will have to try and find an old 3ware card here or on ebay. I truly regret selling my old 9690.
 

ninja6o4

Member
Jul 2, 2014
lol ty for the offer, I'm going to go with an Adaptec 6805 + Intel RES2SV240 expander. It seems like a winning combo.
 

chrispitude

Member
Dec 14, 2017
Allentown, PA
I ran into the same thing with my LSI 9267-8i. I started with 5 x 3TB WD Red in RAID5. After adding a 6th drive and starting the migration to RAID6, the estimated completion was 5 days. Changing the reconstruction rate had no effect, and the disk was unusably slow (an "ls -R" printed 2-3 directories per second instead of a blur of scrolling).

It was MUCH faster to create a RAID6 array from scratch and restore the contents from backup than to do a RAID5->6 migration. As others have said, the duration wouldn't be an issue if the array were usable in the meantime, but it's not.

Another weirdness - starting the migration switched the write policy from Write Back to Write Through. However, after aborting the migration, "Write Back" was no longer shown as an option when creating a new array. This persisted even when doing a Clear Configuration. The only thing that brought it back was going into WebBIOS and resetting the controller to factory defaults.
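
If anyone else ends up in that stuck Write Through state, one thing worth trying before a full factory reset is checking and re-setting the virtual drive cache policy from the OS with storcli, assuming the utility is installed - the property names below are from memory, so confirm them against your storcli documentation first:

# Hypothetical check/restore of the write-cache policy via storcli
# (command names are an assumption - verify with your storcli docs).
import subprocess

def storcli(*args):
    return subprocess.run(["storcli64", *args], capture_output=True, text=True).stdout

print(storcli("/c0/v0", "show", "all"))        # look for the cache/write policy line
print(storcli("/c0/v0", "set", "wrcache=wb"))  # ask for Write Back again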