Intel and AMD RAID with many disks


Petr

New Member
Apr 22, 2017
Hello.

Does anyone have experience running an Intel desktop-class chipset (say Z390, C246 and similar - not pure server parts like the C626) or an AMD equivalent (say X470) with as many as 6 to 8 SATA drives in multiple RAID arrays? I plan to run 2x RAID1 arrays of server-class SSDs (one of them the boot array) + 1x RAID array of HDDs + 1x separate HDD.

I am mainly interested in long-term reliability; pure performance is not that important.

Thank you.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I used to run softraid across onboard Intel SATA (and sometimes a cheap add-in card) until I moved to LSI HBAs (mostly because those add-in cards were always slow and unreliable, and HBAs made cable management so much easier). I've never had any problems with the Intel SATA controllers apart from the Cougar Point debacle. My AMD B450 HTPC isn't running any RAID arrays but the SATA controller has been working just fine.

You might count it as server-class, but I've been running a 6-spindle RAID6 array from the onboard SATA controllers on an Atom C3758 SoC for 18 months without incident. It's not SAS, so it should be functionally identical to the SATA controllers in desktop and workstation chipsets too.

You don't mention an OS though - my only real experience with my home kit is with linux mdadm/ZoL.
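
If you do end up on mdadm for a layout like yours, the setup is only a couple of commands. A rough sketch (device names like /dev/sdc are placeholders for your actual drives, and this assumes your installer already handles the boot mirror):

    # Second RAID1 mirror from the two server-class SSDs:
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    # Array from the HDDs (RAID1 shown as an example; pick the level you need):
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf
    # Persist the config so the arrays assemble at boot (Debian/Ubuntu paths):
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    # Keep an eye on them:
    cat /proc/mdstat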
 

Petr

New Member
Apr 22, 2017
Thank you. I also have good experience with Intel RAID (but only with a single array). Unfortunately my experience with AMD RAID is somewhat negative, so I would like to check with others.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I assume by Intel RAID you mean the "fakeraid", as it's known, that's configured in the BIOS...? If so, no, I've never tried it - it's mostly a Windows-only thing, and it can introduce recovery problems if you ever need to bring the array up in another system. All of my experience is with Linux mdadm/ZFS pure softraid; I've not tried "fakeraid" under Windows.
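
That recovery angle is a big part of why I stick with pure softraid: the array metadata lives on the disks themselves, so bringing an mdadm array up on a different box is roughly just this (a sketch; device names are placeholders):

    # After moving the disks to the new machine:
    mdadm --examine /dev/sd[b-e]    # confirm the superblocks are visible
    mdadm --assemble --scan         # assemble every array found on the disks
    cat /proc/mdstat                # check that the arrays came up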
 

Patriot

Moderator
Apr 18, 2011
Your experience is likely to be much the same on AMD as on Intel, since both use the Windows storport miniport driver with a compatibility shim layer for their chipset. I have not had any reliability issues with either AMD or Intel chipset RAID: I have had drives fail and rebuild properly, and I have grown an array by pulling a drive, replacing it with a larger one, then doing the same again after the rebuild.

I have had copious issues with the current AMD Windows installer when booting from NVMe and trying to do SATA RAID: you have to load the SATA-only RAID driver during OS install to get a working setup, and then side-install the OS-side monitoring tools. Evidently they never tested my setup and assumed anyone with an NVMe drive was doing NVMe RAID (the installer always tries to install NVMe RAID if you have an NVMe drive present, even from the SATA-only package).

And for Intel... I tried upgrading an LGA 1155 box to rust larger than 500 GB: dropped a 3 TB drive in and it showed up as 720 GB or so. Windows 7 with UEFI shouldn't have the >2 TB issue... but it did. I had to power off, unplug the mirror, power on, disable RAID, format the 3 TB drive from the OS, power off, replug the drives and re-enable RAID, and all was happy again.

Chipset RAID, except perhaps NVMe RAID, has clearly passed its prime. Anyone needing volumes that large uses a NAS.

Through all of my recent struggles all I could think was, this is so much easier in linux.
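
For comparison, the same fail/replace/grow dance is a handful of commands with mdadm (a rough sketch; /dev/md0 and the /dev/sdX names are placeholders):

    # Swap out a failed or undersized member:
    mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
    mdadm /dev/md0 --add /dev/sdc        # rebuild starts automatically
    # Once every member has been replaced with a larger disk:
    mdadm --grow /dev/md0 --size=max     # claim the new capacity
    resize2fs /dev/md0                   # then grow the filesystem (ext4 example)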
 

madbrain

Active Member
Jan 5, 2019
I am running a home NAS now with consumer stuff. Until recently, I had all 6 drives below attached to the motherboard's Intel SATA ports. This week I added the two LSI controllers for more ports and moved the HDDs to one of them. This was completely seamless in Ubuntu 18.04. I am running ZoL. My main volume is RAIDZ2 with 5 x 10TB drives.
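For reference, a pool like that is a single command to create - a rough sketch, with the pool name "tank" and the device ids made up (I'd point it at /dev/disk/by-id paths rather than /dev/sdX so the names survive moving between controllers):

    zpool create tank raidz2 \
        /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
        /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4 \
        /dev/disk/by-id/ata-DRIVE5
    # If the pool doesn't show up after a controller move:
    zpool import -d /dev/disk/by-id tank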

Z170 chipset, Asus Z170-AR motherboard (3-year-old mobo)
Skylake 6600K CPU running at 4.4 GHz (3-year-old chip)
NH-D14 cooler
Cooler Master HAF-XM case (6-year-old case)
Raidmax 1200 PSU (3-year-old PSU)
32GB DDR4-3000 Patriot RAM, at 2400 MHz (3-year-old RAM, unreliable beyond 2400 unfortunately!)
1 x Kingston 96GB SSD (SATA II) mounted to the back of the motherboard for the OS
5 x WD 10TB easystore, shucked in December
1 x LSI 9207-8i, PCIe 3.0 x8
1 x LSI 9207-4i4e, PCIe 3.0 x8
1 x Aquantia AQN-107 10 GbE NIC, PCIe 3.0 x4, running at x2 but still manages 10 Gbit in iperf
Space for many more drives in the case: 1 x 3.5" left inside, 2 x 3.5" through the x-dock in the front
In one 5.25" bay, I added a 4 x 2.5" SSD SATA dock
Another 5.25" bay has a SATA dock with 1 x 3.5" and 1 x 2.5"
The other 5.25" bay is force-converted to 3.5" in the HAF-XM and currently has a USB 3.0 hub / card reader; it could be swapped for a dual SSD SATA dock
Altogether, I have 18 internal SATA ports, between the 6 on the motherboard, and the 12 from the two LSI controllers.
6 of those are connected to the HDDs and SSD, and 8 of them to existing docks.
I still have 4 free SATA ports internally - 3 on the Intel, and one on the LSI.
I could use those 4 free ports for 4 more SSDs, using the space that remains in the case.
And of course I have the one external miniSAS SFF-8088 left for expansion.

The only issue I have encountered is that one HDD disconnects sometimes; I think one of my easystore drives is intermittently bad.
This was the case on the Intel SATA controller and is still the case on the LSI. I still don't know which drive it is exactly, but I'm going to track it down. Once I do, I think I can successfully unshuck it and return it to BB.
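Tracking it down should mostly be a matter of matching the errors to a serial number - roughly something like this (the pool name "tank" and the /dev/sdX names are placeholders):

    zpool status -v tank                # see which vdev is racking up errors
    ls -l /dev/disk/by-id/ | grep sdX   # map the kernel name to a serial-bearing id
    smartctl -i /dev/sdX                # confirm the serial printed on the drive label
    dmesg | grep -i sdX                 # look for link resets around the disconnects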
I wouldn't blame either controller. If I only wanted 6 drives total, the Intel would be fine, IMO, as long as those are HDDs.
The Intel controller has a bandwidth limitation of PCIe 2.0 x4, which is 1 GB/s. If your drives are SSDs, that's a major issue. Even with 5 HDDs, it was getting close. Those easystores average 170 MB/s on the whole surface, but have some peaks at 230 MB/s. 5 x 230 = 1150 MB/s, which is more than the bandwidth of the Intel controller. If you want an array of fast SSDs, forget the Intel SATA controller; it will be the bottleneck.
The PCIe 3.0 x8 LSI controllers each have 7800 MB/s of bandwidth.

Why so many hotswap SATA docks? Mostly because I am worried about backing up my ZFS array, and I want to use a bunch of old 1.5 / 3.0 / 4.0 / 6.0 TB drives to do that - I have a pair of each of them. And I need many hotswap bays for that.
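The backup itself would just be incremental zfs send/receive onto a pool built from those old drives - a minimal sketch, with "tank" and "backup" as made-up pool names:

    # One-off full copy:
    zfs snapshot -r tank@backup1
    zfs send -R tank@backup1 | zfs receive -F backup/tank
    # Later runs only send what changed since the previous snapshot:
    zfs snapshot -r tank@backup2
    zfs send -R -i tank@backup1 tank@backup2 | zfs receive -F backup/tank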
I also have eSATA port multiplier enclosures. Those require a controller which supports them, though, and the LSI doesn't. Even with just the first drive it doesn't work reliably, which is surprising - most controllers work fine that way. I think I will return my SFF-8088 to 4 x eSATA breakout cable to Amazon. I'm now looking for a cheap external SAS enclosure for backup purposes.