Best hardware RAID card for massive storage server (24 HDDs)

Discussion in 'RAID Controllers and Host Bus Adapters' started by ekiro, Mar 30, 2019.

  1. ekiro

    ekiro New Member

    Joined:
    Mar 16, 2019
    Messages:
    20
    Likes Received:
    1
    I'm going to build a massive storage server with 24x 12TB HDDs in RAID 10. I'm curious which hardware RAID cards you'd recommend for such a setup. For cabling, I'm looking for fan-out cables that break out one controller port into multiple SATA connectors. What do you suggest?
     
    #1
  2. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    723
    Likes Received:
    193
    3x H310 in IR mode might work. I always flash mine to IT mode for ZFS, so I can't say for sure.
     
    #2
  3. ekiro

    ekiro New Member

    Joined:
    Mar 16, 2019
    Messages:
    20
    Likes Received:
    1
    Are you talking about using the board's soft RAID?
     
    #3
  4. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,525
    Likes Received:
    434
    No, I think @ttabbal is talking about using the cards in JBOD (IT mode), with each card supporting 8 disks via fan-out cables. Each disk is presented individually to your host OS; you then install ZFS (or use it if your OS already ships with it) and use that to pool your disks, provide redundancy, and otherwise manage your storage :)
     
    #4
  5. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,681
    Likes Received:
    411
    Why 24 HDDs in RAID 10?
     
    #5
  6. ekiro

    ekiro New Member

    Joined:
    Mar 16, 2019
    Messages:
    20
    Likes Received:
    1
    The RAID cards I'm eyeing are the LSI/Broadcom MegaRAID SAS 9361-8i and the Microsemi Adaptec SmartHBA 2100-8i. Any thoughts on these? The MegaRAID has been around for years; it seems kind of dated, but if it works, it works?

    I will be using a MiniSAS to MiniSAS HD cable. The backplane has a SAS2 expander, which turns out to be the best setup here. The server is a Supermicro 4U chassis with 24 HDD bays, and its backplane has a SAS2 expander port.

    24 HDDs in RAID 10 because I need the redundancy. I'm told that using RAID 5/6 with disks over 4TB is a very bad idea, and I'd assume that's even more true for 12TB disks. Anyone have thoughts on this?
     
    #6
  7. nthu9280

    nthu9280 Well-Known Member

    Joined:
    Feb 3, 2016
    Messages:
    1,418
    Likes Received:
    358
    If you have a SAS2 expander on the backplane, there's no real reason to go with a SAS3 RAID card. I don't have much experience with HW RAID, but I'd venture that LSI 2208-based cards or Adaptec 7-series cards with their respective cache & BBU should be more than enough.

    ASR-71605 cards with 1GB cache & supercap are < $75. They use SFF-8643 connectors, so you would need an 8643-to-8087 cable to the expander backplane.
     
    #7
    pricklypunter likes this.
  8. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,525
    Likes Received:
    434
    If you are using an expander backplane, you'll only need the one card. Pretty much any HBA would do, but the LSI-based cards are very well tried and tested with these backplanes, so that's what I would stick with.

    As for which RAID level, or how you go about providing redundancy, it really depends on how performant you need the array to be, how risk-averse you are, etc. If I were using a hardware RAID engine and needed performance, I would want mirrored pairs. If I could live with a disk causing issues and needed some capacity, I would go with double-parity RAID, i.e. something like RAID 6. I would not recommend RAID 5 now for anything beyond a quick temporary space to stick stuff in transit to permanent storage.

    However, I see little benefit these days in using hardware RAID in anything but the most stringent of circumstances. Software-based redundancy solutions really are up there with the best RAID cards now, and there are plenty of solutions to choose from. Modern CPUs can easily handle the XOR calcs of even the largest array without breaking a sweat, plus you have the added benefit of not being tied to any particular card should it fail at some point. For me, it's a "no brainer" :)
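
    To put some rough numbers on the RAID 5 worry, here's a back-of-the-envelope sketch. It assumes the common 1-in-10^14-bits unrecoverable read error (URE) spec for consumer drives; enterprise drives are often rated 10^15, which changes the picture considerably:

    ```python
    # Rough odds of hitting at least one unrecoverable read error (URE)
    # while rebuilding a degraded array. The 1e-14 figure is the usual
    # consumer-drive spec-sheet rating -- an assumption, not a measurement.

    URE_PER_BIT = 1e-14
    DISK_TB = 12

    def p_at_least_one_ure(tb_read, ure_per_bit=URE_PER_BIT):
        bits = tb_read * 1e12 * 8            # TB -> bytes -> bits
        return 1 - (1 - ure_per_bit) ** bits

    # Single-parity (RAID 5) rebuild with 24 drives: read all 23 survivors in full.
    print(f"RAID 5 style rebuild, read 23 x 12TB: {p_at_least_one_ure(23 * DISK_TB):.0%}")
    # Mirror (RAID 10) rebuild: re-copy one 12TB partner disk.
    print(f"Mirror rebuild, read 1 x 12TB: {p_at_least_one_ure(DISK_TB):.0%}")
    ```

    That's why single parity is scary at this size: on a classic hardware controller, one URE during a degraded rebuild can take out the whole array, while double parity still has something to fall back on (and ZFS would only flag the affected file rather than losing the pool).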

    I would personally be looking at using ZFS and creating either 4 vdevs of 6 disks each or 3 vdevs of 8 disks each, in RAIDZ2. I need and want a bit of capacity, but I also want that little bit of a safety net as well. This gives me time to correct issues before I have to consider taking the array down. If a disk fails, it can simply be replaced and the array can stay active in degraded mode until a resilver completes.

    If I needed to bring down the time that takes, or needed more performance, mirrored pairs would be the way to go. The way I look at it, with RAIDZ2 I can have 2 disks fail and still be running, though common sense says replace any disk that fails immediately.
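
    If it helps to see that trade-off in numbers, here's a quick sketch with 12TB drives. These are raw capacities before ZFS metadata and overhead, just to compare the layouts mentioned above:

    ```python
    # Usable space vs failure tolerance for 24 x 12TB drives.
    # Raw numbers before ZFS overhead -- for comparing layouts only.

    DISK_TB = 12

    layouts = {
        # name: (vdev count, disks per vdev, redundant disks per vdev)
        "4 x 6-disk RAIDZ2": (4, 6, 2),
        "3 x 8-disk RAIDZ2": (3, 8, 2),
        "12 x 2-disk mirrors": (12, 2, 1),
    }

    for name, (vdevs, width, redundant) in layouts.items():
        usable = vdevs * (width - redundant) * DISK_TB
        worst = redundant           # survivable failures if they all land in one vdev
        best = vdevs * redundant    # survivable failures if spread across vdevs
        print(f"{name}: {usable} TB usable, survives {worst} (worst case) "
              f"to {best} (best case) failed disks")
    ```

    Mirrors give up a lot of capacity, but a resilver only has to copy one disk's worth of data from its partner, which is why they rebuild so much faster.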

    Backups! Backups! Backups! THERE IS NO RAID solution that will save you in a catastrophic failure, so make regular backups and do what nobody else ever does, test them dammit :)
     
    #8
  9. ekiro

    ekiro New Member

    Joined:
    Mar 16, 2019
    Messages:
    20
    Likes Received:
    1
    What would the rebuild time be for a RAID 6 array built from 24x 12TB HDDs?

    I will definitely be going with ZFS. I'll have to research it a little more, but from everything I've heard it's the best FS now and for the future.

    I should point out that this server will host many large video files, roughly 100MB to 3GB each, with the average around 500MB.

    The system will have 254GB of RAM. If that fails to alleviate the load on the disks, I will add a PCIe NVMe drive for caching. But I'm not worried about that for now. I think what's best to figure out right now is the RAID and FS setup, because once it's set I cannot change it.

    Any more input from people is much appreciated.
     
    #9
  10. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,932
    Likes Received:
    423
    If you're going with ZFS, you don't want a hardware RAID card. You'd be best off going with an LSI HBA that supports IT mode and a SAS expander.
     
    #10
  11. kapone

    kapone Active Member

    Joined:
    May 23, 2015
    Messages:
    620
    Likes Received:
    248
    A PCIe 3.0 x8 slot can do ~8GB/s.

    Assuming you use a single card to connect to all 24x 12TB drives, the math looks like:

    Total data to read = 288TB
    Total data to read in GB = 288,000

    288,000 / 8 = 36,000 seconds = 600 minutes = 10 hours.

    This is under completely ideal conditions, so let's throw some randomness in there and double it. That's ~20 hours.

    Sounds about right to me, and taking it to an even worse case, ~40 hours, which is about 2 days. For 288TB, that's not bad at all, assuming you use RAID 6. I shudder at using RAID 5 for this.
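
    Same estimate as a throwaway script, if anyone wants to plug in their own numbers. The 8GB/s figure is just the PCIe 3.0 x8 ceiling used above, not what 24 spinning disks behind a SAS2 expander will actually sustain:

    ```python
    # Time to read the whole array once, assuming the PCIe 3.0 x8 link
    # (~8 GB/s) is the only bottleneck -- ideal-case estimate only.

    TOTAL_TB = 24 * 12              # 288 TB raw
    LINK_GB_PER_S = 8               # ~8 GB/s for PCIe 3.0 x8

    ideal_s = TOTAL_TB * 1000 / LINK_GB_PER_S   # 36,000 seconds
    ideal_h = ideal_s / 3600                    # 10 hours

    print(f"Ideal:           {ideal_h:.0f} hours")
    print(f"Doubled (2x):    {ideal_h * 2:.0f} hours")
    print(f"Worse case (4x): {ideal_h * 4:.0f} hours")
    ```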
     
    #11
    ekiro and itronin like this.