LSI 9270-8i slow init is slooooooow


lunadesign

Active Member
Aug 7, 2013
I've got an LSI 9270-8i. I've just added two 4TB WD Se (WD4000F9YZ) drives in RAID 1 and started a slow init using MSM. It's been running for almost 2 hours and it is only 2% done and estimating another 1 day, 21 hours.

I've previously run a slow init on an older system with a 9260-8i and two 3TB WD Red (WD30EFRX) drives in RAID 1. On this slower system with the slower (but somewhat smaller) drives, I think it took about 7 hours.

Am I doing something wrong with the 9270-8i?
 

BigXor

Active Member
May 6, 2011
Pennsylvania, USA
bigxor.com
I have an LSI SAS2208-based controller. Check your drives' sequential write speeds. My experience with 4TB drives has been 7-10 hours depending on rotational speed. RAID 1 will only initialize as fast as the drives can write. Also check that both drives are initializing at the same time by scrolling down the progress window in MSM.
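As a rough sanity check (my numbers, not the OP's): a full-pass init can't finish faster than capacity divided by sustained write speed. A back-of-envelope sketch, assuming ~120 MB/s sequential writes for a 4TB 7200rpm drive:

```shell
# Back-of-envelope init time: capacity / sustained write speed.
# 120 MB/s is an assumed mid-range figure for a 4TB 7200rpm drive.
capacity_bytes=4000000000000
write_bytes_per_sec=120000000
awk -v c="$capacity_bytes" -v w="$write_bytes_per_sec" \
    'BEGIN { printf "~%.1f hours\n", c / w / 3600 }'
```

That lands right in the 7-10 hour range, so estimates much beyond that suggest a cache or policy problem rather than the drives themselves.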

Also check that the virtual drive is set to Write Back cache and Direct I/O, not Cached I/O.
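For reference, the same checks can be done from the command line with LSI's StorCLI (the controller/VD numbers below are examples; MSM exposes the same properties in the VD settings):

```shell
# Show the virtual drive's current cache and I/O policies (c0 = controller 0, v0 = VD 0)
storcli /c0/v0 show all

# Set Write Back cache and Direct I/O, if they aren't already
storcli /c0/v0 set wrcache=wb
storcli /c0/v0 set iopolicy=direct
```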
 

lunadesign

Active Member
Aug 7, 2013
Thanks BigXor. Yes, the two are being initialized at the same time.

My recollection was that when I used the WD tools to write zeroes onto them (part of my burn-in testing), they took about 10 hours, definitely not 2 days.

And yes, the Virtual Drive is set up with Write Back Cache and Direct I/O.

I'm tempted to cancel the operation and try it from WebBIOS.
 

lunadesign

Active Member
Aug 7, 2013
I've got the latest firmware. However, it appears a new driver and MSM were released three days ago. I've got the driver and MSM from the previous release in May.

I'm curious if MSM goes through the driver. If so, I could see it being different than WebBIOS.
 

lunadesign

Active Member
Aug 7, 2013
Here's an interesting update...

I spoke with a helpful LSI Tech Support rep and he suggested I change the "Drive Cache" from my usual "Unchanged" setting to "Enabled". So, from within MSM, I stopped the slow init, changed the "Drive Cache" setting on the VD and restarted the slow init.

Now, it's moving along at the expected pace and is estimating 5.5 hours to completion. It got to 6% in 25 minutes whereas earlier today it took at least 3 hours to get there.
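(For anyone doing this from the command line instead of MSM, the equivalent StorCLI setting would be along these lines, with example controller/VD numbers:)

```shell
# Enable the physical drives' own write cache on the virtual drive
# ("Drive Cache" = Enabled in MSM terms); /c0/v0 are example IDs
storcli /c0/v0 set pdcache=on
```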

Also, looking at the Windows Event Viewer, I noticed that before the change, I had a bunch of "disk" warnings ("An error was detected on device \Device\Harddisk1\DR1 during a paging operation") and "Ntfs (Microsoft-Windows-Ntfs)" warnings ("The system failed to flush data to the transaction log..."). After the change, I have none (at least so far).

I'm not quite sure I fully understand what this means but at least it appears I'm seeing the expected results now.
 

lunadesign

Active Member
Aug 7, 2013
Update #2

Just for yucks, I cancelled the slow init, changed the VD "Drive Cache" setting to "Disabled" and restarted the slow init. The performance is super slow (nearly 45 hours estimated to complete the slow init) like before. However, I'm not seeing all the Event Viewer messages like before. Not sure why.

I understand that in case of a power failure, it's better to rely on the RAID controller's battery-backed cache than the HDD's unprotected onboard cache. However, I never would have expected disabling the drive cache to result in performance nearly nine times slower, especially for something as boring as a single process writing a constant stream of zeroes.
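One rough illustration of why (my own back-of-envelope assumptions, not measured figures): with the drive cache off, each write must reach the platter before the drive acknowledges it, so in the worst case the drive completes roughly one write per revolution. At 7200 rpm that's 120 writes per second; with, say, 256 KB per write, throughput caps out around 30 MB/s, versus 100+ MB/s for cached sequential streaming:

```shell
# Worst case with write cache off: ~1 write completed per revolution.
# 7200 rpm and 256 KB writes are illustrative assumptions.
awk 'BEGIN {
    revs_per_sec = 7200 / 60            # 120 revolutions/s
    bytes_per_write = 256 * 1024        # 256 KB
    mb_s = revs_per_sec * bytes_per_write / 1000000
    printf "~%.1f MB/s\n", mb_s
}'
```

At ~30 MB/s, a 4TB pass takes well over 30 hours, which is in the same ballpark as the 45-hour estimate.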

Am I missing something here?
 

mrkrad

Well-Known Member
Oct 13, 2012
You are disabling the HARD DRIVES' own write cache, which cripples the performance of the drives themselves (SSDs and HDDs alike), not the controller's onboard battery-backed write cache!
 

lunadesign

Active Member
Aug 7, 2013
Yes, I know that. The funny thing is that I thought all drives defaulted to having that on. However, it appears one or both of my drives didn't and my use of the "Unchanged" setting in the LSI tools allowed that to continue, which led to my initial results.

What surprised me is that the performance was so much slower with drive cache disabled even in a boring workload like a slow init. It makes me wonder why anyone would ever turn the drive cache off.
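If you want to see what a drive's onboard cache is actually set to (independent of the controller's "Unchanged" policy), tools like hdparm or smartctl can query it; the device paths and megaraid index below are examples:

```shell
# Linux: query the drive's write-cache flag directly
hdparm -W /dev/sdX

# Through a MegaRAID controller, smartctl can address individual member drives
smartctl -g wcache -d megaraid,0 /dev/sdX
```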

I also still don't know what caused all those warnings in the Windows event log.
 

lunadesign

Active Member
Aug 7, 2013
Update #3

It *appears* that the Windows errors I was seeing were caused by the drives having previously been set up in another system, with the old volume/partition info still present. So when I set these drives up as RAID 1, Windows saw that info and thought there was an NTFS partition there when the array really wasn't ready.

When I re-ran my test with totally clean drives and made sure they were "Offline" in Disk Management during the slow init, I didn't get any Windows errors. Silly me!
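For anyone repurposing drives the same way: wiping the stale metadata first avoids this. On Windows, diskpart's clean command zeroes the partition table (the disk number below is an example; double-check it before running, since clean is destructive):

```shell
# Windows diskpart session (run as Administrator; "clean" destroys partition info)
diskpart
> list disk
> select disk 1
> clean
```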