Solution to Supermicro/LSI/Intel RAID hell

kmo12345

New Member
May 3, 2018
4
0
1
34
Hi all, the last few months have been storage hell, with RAID issues on a number of servers I look after. Clearly I am doing something wrong and I'm hoping this is a safe place to ask for some advice.

Back story: I have looked after a dozen or so "small business" servers for over fifteen years with relatively few issues. This isn't my day job but mostly just something I spend a few hours on every other week to help out various friends and family who have small or home businesses. Most of the servers are white box, with the exception of some older HP DL380s and ML350s. The first few white boxes were inherited and had RAID 5 running on Intel SW RAID or LSI MegaRAID controllers. This was back when 200 GB enterprise drives were considered huge and RAID 5 with a hot spare actually worked.

Many many years ago I had a RAID 5 failure during rebuild and educated myself on the nightmare that is RAID 5. Since then I've either reconfigured all of the arrays as RAID 1 or 10 or replaced the servers. I try to have backups going to rotated external hard drives, a NAS located elsewhere on the premises, and/or a cloud-based backup solution. In the last few years I have switched to Hyper-V for almost all of these servers so I can run a Linux web server, PBX, or pfSense in addition to a Windows domain controller or file server on the same box.

Present issues: Last month I had a RAID 10 array fail during rebuild. I have had a dozen hard drives die in RAID 10 arrays and never thought this was possible. The motherboard was a Supermicro X10SRH-CLN4F and I was using the onboard LSI/Avago/Broadcom 3008 in IR mode to do RAID 10. The user was reporting that accessing files on a VM was slow, so I poked around on the server (including in MegaRAID Storage Manager) and didn't see any issues. I figured I would do all possible driver and software updates. After installing the latest storage drivers and the newest MegaRAID Storage Manager, I rebooted the server and instantly began to get dozens of emails from MegaRAID about unrecoverable read errors. Somehow the 3008 had either stopped doing patrol reads or stopped reporting errors, and multiple drives in the RAID 10 had gone bad, making the array unrecoverable.

I pulled two drives from a NAS to make a RAID 1 on the Intel RSTe controller and copied all the VHDXs over to the new array, with the exception of one which couldn't be copied due to unreadable data. I restored that one from backup and thought I was in the clear. I sent the failing RAID 10 drives back to WD under warranty and started testing the other ones using the Western Digital tool. They seem to be fine, but the tool can't read SMART data through the LSI controller in IR mode.
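A minimal sketch of reading SMART data directly with smartmontools instead of the WD tool, assuming smartmontools is installed and the drives behind the IR-mode HBA show up as individual devices (device names and the -d hint are assumptions and may need adjusting):

```python
# Sketch: pull SMART attributes from drives behind an IR-mode HBA by
# calling smartctl (assumes smartmontools is installed; adjust devices).
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # adjust to your drives

for dev in DEVICES:
    # -H: overall health, -A: attribute table, -d sat: SATA drive reached
    # through the SAS HBA's SCSI-to-ATA translation
    result = subprocess.run(
        ["smartctl", "-H", "-A", "-d", "sat", dev],
        capture_output=True, text=True
    )
    print(f"=== {dev} ===")
    # Keep only the lines that matter when hunting failing disks
    for line in result.stdout.splitlines():
        if any(key in line for key in
               ("overall-health", "Reallocated_Sector",
                "Current_Pending_Sector", "Offline_Uncorrectable")):
            print(line)
```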

I started manual consistency checks on three other servers also using the LSI 3008 controllers and found another array with consistency errors. It didn't fail a rebuild, but this really has me worried. Why is this controller not checking consistency? I was planning on moving over to the Intel RSTe controller, but now the temporary RAID 1 array on the Intel controller seems to be stuck on "verifying and repairing", and the <3 month old NAS drives appear to have multiple bad blocks that can't be repaired.
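A rough sketch of periodically checking whether patrol read and consistency checks are actually running, assuming StorCLI is installed and can see the controller (IR-mode 3008 firmware may not expose all of these MegaRAID features, so treat the exact commands as a starting point, not gospel):

```python
# Sketch: dump patrol read / consistency check status so a dead background
# check gets noticed before a rebuild does. Assumes storcli64 is installed
# and the controller is visible to it.
import subprocess

STORCLI = "storcli64"  # or the full path to the storcli64 binary

for args in (["/c0", "show", "patrolread"],
             ["/c0", "show", "cc"],
             ["/c0/vall", "show"]):
    result = subprocess.run([STORCLI, *args], capture_output=True, text=True)
    print(f"--- {STORCLI} {' '.join(args)} ---")
    print(result.stdout)
```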

Can I trust any of these onboard RAID solutions? Should I recommend we buy PCIe RAID controllers for all of the servers I'm looking after? I have read about Windows Storage Spaces, is that any safer? What is a "small business" single Hyper-V server guy supposed to use these days?
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
Personally, I don't trust hardware or "fake-RAID" (pretends to be hardware, but is actually software). That is not to say that true hardware RAID can't work. Just that to get hardware that can properly do it costs more than I'm willing to spend as a home lab guy. I also dislike the proprietary formatting which means needing to have a backup controller of the exact same model and firmware version on the shelf.

IMO, if you are serious about data safety and cost is an issue, ZFS is where it's at. I know nothing of Hyper-V or Storage Spaces, as I don't do Microsoft, so I can't give specific advice on that platform. If it can pass a controller to a VM, you could probably do FreeNAS. If you're willing to flip things around a bit, you can do ESXi+FreeNAS+Windows or Proxmox+Windows, among other things. That might be more invasive than you're prepared for, though.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
The LSI 3008 onboard is the same as a PCIe version except it is on the mobo not in a PCIe slot. The RSTe is the one I'd stay away from.

Let me ask this, what kind of case is this in? Do you have cooling to the SAS chip onboard? I've seen poor cooling cause LSI array issues.
 

ecosse

Active Member
Jul 2, 2013
463
111
43
You might want to post under the Windows section for a view on Storage Spaces (I thought you needed multiple servers to gain resilience, but I've not really investigated it in any detail). I don't think you are doing anything particularly wrong. Not sure if you've set up monitoring so that you at least get notified when things aren't working.
I've not had cooling issues with LSI RAID cards (well, none that haven't been my fault!) but I've had proper issues with Areca RAID in that area, so that is definitely something to look at. If there isn't a chip fan you can always stick one on yourself.
 

serverpanda

New Member
Mar 31, 2018
6
1
1
47
I was not happy with my benchmarks on Storage Spaces, but that was with a rather unusual hardware config. One thing to keep in mind is that Microsoft changed ReFS support in Win 10 Pro (see here). I think Storage Spaces only makes sense with ReFS, so factor that in.
 

kmo12345

New Member
May 3, 2018
4
0
1
34
Hi all,

Thanks for the information. I am going to set up a new home server soon, so maybe I will check out Proxmox or ESXi. I used ESX about ten years ago, when Hyper-V had a hard time virtualizing Linux and BSD, but never used it again once Microsoft got their act together with other OS support.

I switched to the RSTe because I was worried I couldn't trust the LSI controller after the RAID 10 disaster I had. The fact that MSM didn't warn me or even show any errors at all until I updated it was quite a turn-off. Once I get all the drives back from WD I guess I will recreate the RAID on the LSI again.

The case is currently open as there wasn't enough room for the two NAS drives which I'm using while I get the LSI drives sorted. I will try adding a fan above the LSI controller.

Kevin
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
The problem you had with traditional RAID 1 or 5/6 is called the write hole problem: the "write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays.

Think of it like this: when you write something to a RAID array, the controller (or the driver, when using fake-RAID) updates the first disk(s) and then the other disk(s) sequentially. A crash in between leaves a corrupt RAID and/or a corrupted filesystem.

In the case of a mirror, it can happen that you read bad data from one disk or the other, with no way to detect it because there are no real data checksums. The only way to reduce this risk on a traditional RAID is a cache plus BBU/flash backup that protects the last atomic writes, so that is mandatory with traditional filesystems and RAID holding critical data.

The other option is a new-generation filesystem like ZFS. As it is copy-on-write, an atomic write operation (write data, write metadata, update the RAID) is done completely or not at all, avoiding partially written information on disk or in the RAID. And because ZFS adds real data checksums at the data-block level, it can detect wrong data when reading a mirror, read the other copy with correct data instead, and repair the RAID on the fly.
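To make the checksum idea concrete, a toy sketch (plain Python, not ZFS itself) of how a per-block checksum lets a mirror tell which copy is bad and repair it, something a plain RAID 1 cannot do:

```python
# Toy illustration, not real ZFS: with a per-block checksum stored alongside
# the data, a mirror can tell WHICH copy went bad and rewrite it, instead of
# only noticing that the two copies disagree.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# "Write" a block to both sides of a mirror and record its checksum
good_block = b"customer database page 42"
stored_sum = checksum(good_block)
mirror = {"disk0": good_block, "disk1": good_block}

# Simulate silent corruption on one side (bad sector, interrupted write, bit rot)
mirror["disk1"] = b"customer database page 4\x00"

# On read: verify each copy against the stored checksum and self-heal
for disk, block in mirror.items():
    if checksum(block) != stored_sum:
        print(f"{disk}: checksum mismatch, rewriting from the good copy")
        good = next(b for b in mirror.values() if checksum(b) == stored_sum)
        mirror[disk] = good
    else:
        print(f"{disk}: checksum OK")
```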

And you do not need a special RAID adapter, as this is software RAID: independent of any particular controller, and with a modern CPU and mainboard RAM it is much faster than a hardware RAID adapter.