Best hardware RAID card for massive storage server (24 HDDs)


ekiro

New Member
Mar 16, 2019
20
2
3
I'm going to build a massive storage server with 24x 12TB HDDs in RAID 10. I'm curious what hardware RAID cards you recommend for such a setup? For cables, I'm looking for ones that fan out from one controller port to multiple SATA connectors. What do you suggest?
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
3x H310 in IR mode might work... I always flash them to IT mode for ZFS, so I can't say for sure.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
No, I think @ttabbal is talking about using the cards in JBOD (IT mode), with each card supporting 8 disks using fan-out cables. Each disk is presented individually to your host OS, and you install the ZFS file system (or use it if your OS already has it rolled in) and use that to pool your disks, provide redundancy and otherwise manage your storage :)
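To make that concrete, here's a rough sketch of the kind of layout ZFS lets you build once every disk shows up individually. The device names and the mirrored (RAID 10 style) arrangement are placeholder assumptions for a 24-bay box, not a tested recipe, and in practice you'd want /dev/disk/by-id paths:

```python
# Rough sketch: build a "zpool create" command for 24 disks presented
# individually by IT-mode HBAs, paired into mirrors (RAID 10 style).
# Device names are placeholders; /dev/disk/by-id paths are safer in practice.

disks = [f"/dev/sd{chr(c)}" for c in range(ord("b"), ord("b") + 24)]  # sdb..sdy

# Each "mirror diskA diskB" vdev gets striped with the others by ZFS.
vdevs = [f"mirror {disks[i]} {disks[i + 1]}" for i in range(0, len(disks), 2)]

# Print the command instead of running it, so this stays a dry run.
print("zpool create tank " + " ".join(vdevs))
```

The same approach works with RAIDZ vdevs instead of mirrors; either way the redundancy lives in ZFS rather than on the card.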
 

ekiro

New Member
Mar 16, 2019
20
2
3
The RAID cards I'm eyeing are the LSI/Broadcom MegaRAID SAS 9361-8i and the Microsemi Adaptec SmartHBA 2100-8i. Any thoughts on these? The MegaRAID has been around for years. It seems kind of dated, but if it works, it works?

I will be using a MiniSAS HD to MiniSAS cable, and the backplane has a SAS2 expander. Turns out this is the best setup: the server is a Supermicro 4U with 24 HDD bays, and its backplane has a SAS2 expander port.

24 HDDs in RAID 10 because I need the redundancy. Apparently using RAID 5/6 for disks over 4TB is very bad, or so I'm told. I assume it would be bad for 12TB disks. Anyone have thoughts on this?
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
If you have a SAS2 expander on the backplane, there's no real reason to go with a SAS3 RAID card. I don't have much experience with HW RAID, but I'd venture to say LSI 2208-based cards or Adaptec 7 series cards with the respective cache & BBU should be more than enough.

ASR-71605 cards with 1GB cache & supercap are < $75. They are SFF-8643, so you would need an SFF-8643 to SFF-8087 cable to the expander backplane.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,708
515
113
Canada
If you are using an expander backplane, you'll only need the one card. Pretty much any HBA would do you, but the LSI-based cards are very well tried and tested with these backplanes, so that's what I would stick with.

As for which RAID level, or how you go about providing redundancy, it really depends on how performant you need the array to be, how risk averse you are, etc. If I were looking at using a hardware-based RAID engine and needing performance, I would want mirrored pairs. If I could live with a disk causing issues and needed some capacity, I would go with double-parity RAID, so something like RAID 6. I would not recommend RAID 5 now for anything beyond a quick temporary space to stick stuff in transit to permanent storage.

However, I see little benefit these days in using hardware-based RAID in anything but the most stringent of circumstances. Software-based redundancy solutions really are up there with the best RAID cards now, and there are plenty of solutions to choose from. Modern CPUs can easily handle the XOR calcs of even the largest array without breaking a sweat, plus you have the added benefit of not being tied to any particular card should it fail at some point. For me, it's a "no brainer" :)

I would personally be looking at using ZFS and creating either 4 vdevs of 6 disks each or 3 vdevs of 8 disks each, in RAIDZ2. I need and want a bit of capacity, but I also want that little bit of a safety net as well. That gives me time to correct issues before I have to consider taking the pool down. If a disk fails, it can simply be replaced, and the pool can stay active in degraded mode until a resilver is completed.
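For the capacity trade-off between those layouts, the back-of-the-envelope numbers on 12TB disks work out roughly like this (a sketch that ignores ZFS overhead and the TB vs TiB difference):

```python
# Back-of-the-envelope usable capacity for 24x 12TB disks in different ZFS layouts.
# Ignores ZFS metadata/slop overhead and TB vs TiB differences.

DISK_TB = 12

def raidz2_capacity(vdevs: int, disks_per_vdev: int) -> int:
    # RAIDZ2 loses 2 disks' worth of space per vdev to parity.
    return vdevs * (disks_per_vdev - 2) * DISK_TB

def mirror_capacity(pairs: int) -> int:
    # Mirrored pairs lose half the raw space.
    return pairs * DISK_TB

print("4x 6-disk RAIDZ2:  ", raidz2_capacity(4, 6), "TB usable")   # 192 TB
print("3x 8-disk RAIDZ2:  ", raidz2_capacity(3, 8), "TB usable")   # 216 TB
print("12x mirrored pairs:", mirror_capacity(12), "TB usable")     # 144 TB
```

So the 3x 8-disk layout buys roughly 24TB more usable space than 4x 6-disk, and either one gives considerably more than mirrors.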

If I needed to bring down the time that takes, or needed more performance, mirrored pairs would be the way to go. But the way I look at it, with RAIDZ2 I can have 2 disks fail (per vdev) and still be running, though common sense says replace any disk that fails immediately.

Backups! Backups! Backups! THERE IS NO RAID solution that will save you in a catastrophic failure, so make regular backups and do what nobody else ever does: test them, dammit :)
 

ekiro

New Member
Mar 16, 2019
20
2
3
What would the rebuild time be for a RAID 6 rebuild using 24x 12TB HDDs?

I will definitely be going with ZFS. I will have to research it a little more, but from everything I've heard it's the best FS now and for the future.

I should point out that this server will host many large video files, 100MB to 3GB each, with the average being around 500MB.

The system will have 254GB of RAM. If that fails to alleviate the load on the disks, then I will add a PCIe NVMe drive for caching. But I'm not worried about that for now. I think what's best to figure out right now is the RAID and FS setup, because once it's set I cannot change it.

Any more input from people is much appreciated.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
ekiro said: "Apparently using RAID 5/6 for disks over 4TB is very bad so I'm told. I think it would be bad for 12TB disks. Anyone have thoughts on this?"
A PCIe 3.0 x8 slot can do ~8GB/s.

Assuming you use a single card to connect to all 24x 12TB drives, the math looks like:

Total data to read = 288TB
Total data to read in GB = 288,000

288,000 / 8 GB/s = 36,000 seconds = 600 minutes = 10 hours.

This is in completely ideal conditions, so let's throw some randomness in there and double it. That's ~20 hours.

Sounds about right to me, and taking it to an even worse-case possibility, ~40 hours, which is about 2 days. For 288TB, that's not bad at all, assuming you use RAID 6. I shudder at the thought of using RAID 5 for this.
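If anyone wants to play with the numbers, here's the same arithmetic spelled out. The ~8GB/s link speed and the 2x/4x fudge factors are just the assumptions above; in practice the drives themselves are more likely to be the ceiling than the PCIe slot, but the order of magnitude holds:

```python
# Reproduces the rebuild-time estimate above: read the full 288TB of raw capacity
# through a PCIe 3.0 x8 link at ~8 GB/s, then apply fudge factors for real-world
# conditions. All figures are the assumptions from the post, not measurements.

total_tb = 24 * 12                 # 288 TB raw
total_gb = total_tb * 1000         # 288,000 GB
link_gb_per_s = 8                  # ~PCIe 3.0 x8 throughput

ideal_hours = total_gb / link_gb_per_s / 3600
print(f"Ideal:      {ideal_hours:.0f} h")       # ~10 h
print(f"Doubled:    {ideal_hours * 2:.0f} h")   # ~20 h
print(f"Worst case: {ideal_hours * 4:.0f} h")   # ~40 h (~2 days)
```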
 