PCI-E 3.0 4x 6/8-port HBA recommendation

Discussion in 'RAID Controllers and Host Bus Adapters' started by antioch18, Nov 20, 2019.

  1. antioch18

    antioch18 New Member

    Joined:
    Dec 17, 2018
    Messages:
    16
    Likes Received:
    0
    Hello!

    I'm looking for a PCIe 3.0 4x HBA that supports 6 (preferably 8) internal SATA3 drives (RAID is not needed) for a NAS I'm planning.

    I would appreciate it if anyone can share recommendations with good price/performance ratio.

    Thanks a bunch! :)
     
    #1
  2. Deslok

    Deslok Well-Known Member

    Joined:
    Jul 15, 2015
    Messages:
    1,044
    Likes Received:
    112
Is there a reason it has to be x4 specifically? If your slot is open-ended, or x4 electrically but x8/x16 physically, any x8 HBA will work.
     
    #2
  3. BeTeP

    BeTeP Active Member

    Joined:
    Mar 23, 2019
    Messages:
    314
    Likes Received:
    143
There is the Adaptec ASA-6805H - it's a PM8001-based PCIe 2.0 x4 HBA. But personally I would just cut open the connector on the motherboard and use any x8 card.
     
    #3
  4. antioch18

    antioch18 New Member

    Joined:
    Dec 17, 2018
    Messages:
    16
    Likes Received:
    0
    It is both physically and electrically 4x.

    Assuming that I am able to cut open the connector without damaging the connector or motherboard, what are the drawbacks of using an 8x HBA in a 4x slot? I am not familiar with this at all so any explanation is much appreciated.

Additionally, any recommendations for a quality x8 device that will run well in x4 mode will be equally appreciated. :)
     
    #4
  5. Deslok

    Deslok Well-Known Member

    Joined:
    Jul 15, 2015
    Messages:
    1,044
    Likes Received:
    112
Anything SAS2008 or higher listed here https://forums.servethehome.com/ind...and-hba-complete-listing-plus-oem-models.599/ that isn't proprietary (like, say, an ASUS PIKE2008) should fit what you're looking for. As for performance, most will be PCIe 2.0 (unless you opt for the expense of a 3.0 card), which at x4 gives you 2 GB/s. You'd fully saturate that with 4 SATA SSDs doing 100% sequential reads, but a more typical bulk-storage configuration (2 SSDs and 6 HDDs, or 8 HDDs) wouldn't be restricted in most scenarios: assuming a typical 250 MB/s sequential read/write for a 7200rpm 3.5" HDD, it would take 8 of them to hit 2 GB/s, and they can't sustain that across the full platter.
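That saturation math can be sketched in a few lines of Python. The per-drive figures are the same ballpark assumptions as above (roughly 550 MB/s per SATA SSD, 250 MB/s per 7200rpm HDD), not measurements:

```python
# Rough saturation check for an HBA behind a 2 GB/s (PCIe 2.0 x4) link.
# Per-drive throughput figures are illustrative assumptions, not measurements.

LINK_MBPS = 2000  # PCIe 2.0 x4, ignoring protocol overhead

def aggregate_mbps(ssds, hdds, ssd_mbps=550, hdd_mbps=250):
    """Best-case combined sequential throughput of the attached drives."""
    return ssds * ssd_mbps + hdds * hdd_mbps

def saturates_link(ssds, hdds):
    return aggregate_mbps(ssds, hdds) > LINK_MBPS

print(saturates_link(4, 0))  # 4 SATA SSDs, pure sequential read: True (2200 > 2000)
print(saturates_link(0, 8))  # 8 HDDs only reach the limit in an unsustainable best case: False
```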
     
    #5
  6. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    293
    Likes Received:
    64
  7. BeTeP

    BeTeP Active Member

    Joined:
    Mar 23, 2019
    Messages:
    314
    Likes Received:
    143
There is no reason to pay more than $20 for an LSI SAS2008-based card.
     
    #7
  8. llowrey

    llowrey Member

    Joined:
    Feb 26, 2018
    Messages:
    68
    Likes Received:
    40
    I'm running a few 2008's in x16 physical but x4 electrical slots with no issues.

I 100% recommend going PCIe 3. Due to PCIe packet overhead, you really only get about 80% of the available bandwidth, so 4 lanes at PCIe 2 speed would net you only ~1,600 MB/s. That's fine for spinners but not so nice for SSDs.

    Here's a PCIe3 x8 LSI 2308 (HP 220) for $40:

    HP H220 6Gbps SAS PCI-E 3.0 HBA LSI 9205-8i P15 IT Mode From US Ship | eBay

    I have several of those in service but only in x8 slots so I can't confirm x4 operation but I'd be shocked if they wouldn't work just fine.
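The ~80% overhead estimate above works out like this; a minimal sketch, where the post-encoding per-lane figures (500 MB/s for PCIe 2.0, 985 MB/s for PCIe 3.0) and the 0.8 efficiency factor are rough assumptions, since real efficiency varies with payload size:

```python
# Effective bandwidth after packet overhead, assuming ~80% of the raw
# (post-encoding) lane bandwidth is usable. Figures are rough assumptions.

RAW_MBPS_PER_LANE = {2: 500, 3: 985}  # PCIe 2.0 / 3.0, per lane
EFFICIENCY = 0.8

def effective_mbps(gen, lanes):
    return int(RAW_MBPS_PER_LANE[gen] * lanes * EFFICIENCY)

print(effective_mbps(2, 4))  # ~1600 MB/s: fine for spinners
print(effective_mbps(3, 4))  # ~3152 MB/s: headroom for a few SATA SSDs
```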
     
    #8
  9. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    293
    Likes Received:
    64
Hehe, "only". Good luck maxing that out with 8 drives in RAID 6 (Z2 or similar). That said, for $40 I'd go with the PCIe 3.0 model for better future-proofing :)
     
    #9
  10. Deslok

    Deslok Well-Known Member

    Joined:
    Jul 15, 2015
    Messages:
    1,044
    Likes Received:
    112
1,600 MB/s would be more than enough for a RAID 10, even if sequential throughput weren't the primary target.
     
    #10
  11. zer0sum

    zer0sum Active Member

    Joined:
    Mar 8, 2013
    Messages:
    277
    Likes Received:
    89
My go-to HBAs are as follows...

    IBM M1215 SAS/SATA 12G HBA, breakout cables to 8 x SATA ports (flashed to LSI SAS3008 IT mode firmware) - $80

    HP H220 SAS/SATA 6G HBA, breakout cables to 8 x SATA ports (flashed to LSI SAS2308 IT mode firmware) - $30
     
    #11
  12. zack$

    zack$ Active Member

    Joined:
    Aug 16, 2018
    Messages:
    233
    Likes Received:
    86
The 9211-4i (x4 PCIe 2.0) is the last of the LSI x4 cards, I think.

    All the 2308 and 3008 cards are x8 (pcie 3.0).

If you don't want to modify your PCIe slot and must have more than 4 drives, why not use an expander with the 9211-4i? An Intel RES2SV240 will do the job and runs off Molex power.
     
    #12
  13. BeTeP

    BeTeP Active Member

    Joined:
    Mar 23, 2019
    Messages:
    314
    Likes Received:
    143
The ServeRAID H1110 is another option with 4x SAS2 lanes on PCIe 2.0 x4.
     
    #13
  14. antioch18

    antioch18 New Member

    Joined:
    Dec 17, 2018
    Messages:
    16
    Likes Received:
    0
Thanks, all. I've thought about it and decided that the PCIe 3.0 HP H220 is my best bet. $30 is a fine price for a future-proof (higher-bandwidth) solution. I just have a few more questions for consideration (caveat: I am not at all familiar with HBAs, so I appreciate your patience):
    • Where can I find the correct v20 IT mode firmware for this device?
    • I was planning to run a 6xHDD RAIDZ2 array with this HBA - is it recommended to use all 6 drives on the same HBA, or am I ok to put 4 directly on the motherboard and the remaining 2 on the HBA?
    • Would anyone please share a simple and safe method for how to open a PCIe slot? (I'd rather not need to buy a dremel for this)
  • Considering getting this and hacking on it rather than on the motherboard slot, but I'm not sure how I'd mount the card to the case given that this offsets the height -- thoughts? PCIe 4X riser card
     
    #14
  15. zer0sum

    zer0sum Active Member

    Joined:
    Mar 8, 2013
    Messages:
    277
    Likes Received:
    89
    #15
  16. antioch18

    antioch18 New Member

    Joined:
    Dec 17, 2018
    Messages:
    16
    Likes Received:
    0
    What is the reasoning to put them all on the same controller?
     
    #16
  17. zer0sum

    zer0sum Active Member

    Joined:
    Mar 8, 2013
    Messages:
    277
    Likes Received:
    89
Cable management is a lot easier/cleaner.
You can pass the physical HBA through to a virtual machine.
Throughput can be better, but that really depends on your motherboard.
Using an H330/M1215 you can get 12G speeds for SSDs.

Although if you have SSDs you might want to leave them on motherboard ports, as trim is a lot easier to deal with :)
     
    #17
    antioch18 likes this.
  18. antioch18

    antioch18 New Member

    Joined:
    Dec 17, 2018
    Messages:
    16
    Likes Received:
    0
    All excellent points, thank you!

I will eventually add some lightweight SSDs to the system, and given that I'm putting the x8 card in a x4 slot and loading it up with 6-8 HDDs, the headroom for the SSDs to flex their muscles will be diminished, so saving the higher-speed motherboard ports for the SSDs does make sense. But I am curious about two of those remarks.
Why would throughput be better with all of the HDs in the RAIDZ2 on the same HBA? My naive assumption was that doing so would create a bottleneck, since again, I'm putting it in a x4 slot.

    Also, TRIM doesn't work well/is a headache through HBAs?

    As always, thank you for taking the time to share your knowledge. :)
     
    #18
    Last edited: Nov 22, 2019
  19. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,288
    Likes Received:
    758
Whether you connect disks to one HBA or several does not matter with ZFS. Just be sure that an HBA delivers the performance needed for the accumulated maximum throughput of all the disks connected to it, and check this against the number and generation of its PCIe lanes. PCI Express - Wikipedia

For an 8-port HBA with 8 disks connected, this means:
with mechanical disks, around 8 x 250 MB/s = ~2 GB/s (min PCIe 1.x x8)
with 6G SATA SSDs, around 8 x 500 MB/s = ~4 GB/s (min PCIe 2.0 x8)
with 12G SAS SSDs, around 8 x 1 GB/s = ~8 GB/s (min PCIe 3.0 x8)

When using dual-port 12G SAS you may even increase throughput with two HBAs, but this is mostly done in a cluster/HA config. If you do not want to work at the absolute limits, double the minimal demand.
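This sizing rule can be sketched as a small helper; the nominal per-lane bandwidths are rounded assumptions matching the figures above:

```python
# Sketch of the sizing rule: sum the best-case throughput of the attached
# drives and find the smallest PCIe generation whose x8 link covers it.
# Per-lane figures are nominal, rounded values in MB/s (post-encoding).

PER_LANE_MBPS = {1: 250, 2: 500, 3: 1000}

def min_gen_for(total_drive_mbps, lanes=8):
    """Smallest PCIe generation whose x<lanes> link covers the load."""
    for gen in sorted(PER_LANE_MBPS):
        if PER_LANE_MBPS[gen] * lanes >= total_drive_mbps:
            return gen
    return None  # no listed generation is sufficient

print(min_gen_for(8 * 250))   # 8 HDDs         -> 1 (PCIe 1.x x8)
print(min_gen_for(8 * 500))   # 8 SATA SSDs    -> 2
print(min_gen_for(8 * 1000))  # 8 12G SAS SSDs -> 3
```

Doubling `total_drive_mbps` before the call implements the "double the minimal demand" headroom advice.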

Trim in RAID is one of the most advanced storage features. It requires the newest Open-ZFS OSes (based on Illumos or ZoL), and in general trim on ZFS does not work with all SSDs; desktop SSDs in particular are less well supported. It also does not help on a server with a steady write load, where you generally want high-IOPS server-class SSDs anyway. The best results with trim may be expected in an environment with a mixed workload and SSDs with lower write-IOPS capabilities.
     
    #19
    Last edited: Nov 23, 2019