PCI-E 3.0 4x 6/8-port HBA recommendation


antioch18

New Member
Dec 17, 2018
Hello!

I'm looking for a PCIe 3.0 4x HBA that supports 6 (preferably 8) internal SATA3 drives (RAID is not needed) for a NAS I'm planning.

I would appreciate it if anyone can share recommendations with good price/performance ratio.

Thanks a bunch! :)
 

Deslok

Well-Known Member
Jul 15, 2015
Is there a reason it has to be x4 explicitly? If your slot is open-ended, or x4 electrically but x8/x16 physically, any x8 HBA will work.
 

BeTeP

Well-Known Member
Mar 23, 2019
There is the Adaptec ASA-6805H - it's a PM8001-based PCIe 2.0 x4 HBA. But personally I would just cut open the connector on the motherboard and use any x8 card.
 

antioch18

New Member
Dec 17, 2018
It is both physically and electrically 4x.

Assuming that I am able to cut the connector open without damaging it or the motherboard, what are the drawbacks of using an x8 HBA in an x4 slot? I'm not familiar with this at all, so any explanation is much appreciated.

Additionally, any recommendations for a quality x8 device that will run well in x4 mode would be equally appreciated. :)
 

Deslok

Well-Known Member
Jul 15, 2015
Anything SAS2008 or higher listed here https://forums.servethehome.com/ind...and-hba-complete-listing-plus-oem-models.599/ that isn't proprietary (like, say, an ASUS PIKE 2008) should fit what you're looking for. As for performance, most will be PCIe 2.0 (unless you opt for the expense of a 3.0 card), which at x4 gives you 2 GB/s. You'd only fully saturate that with 4 SATA SSDs doing 100% sequential reads; a more typical bulk-storage configuration (2 SSDs and 6 HDDs, or 8 HDDs) wouldn't be restricted in most scenarios. Assuming a typical 250 MB/s sequential read/write for a 7200 rpm 3.5" HDD, it would take 8 of them to hit 2 GB/s, and they can't sustain that across the full platter.
 

llowrey

Active Member
Feb 26, 2018
Assuming that I am able to cut the connector open without damaging it or the motherboard, what are the drawbacks of using an x8 HBA in an x4 slot?
I'm running a few 2008's in x16 physical but x4 electrical slots with no issues.

I 100% recommend going PCIe3. Due to PCIe packet overhead, you really only get about 80% of the available bandwidth. So, 4 lanes at PCIe2 speed would net you only ~1,600MB/s. That's fine for spinners but not so nice for SSDs.
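
For anyone who wants to plug in their own numbers, here's a rough back-of-the-envelope sketch; the 80% efficiency factor is just the rule of thumb above, and the per-lane rates are approximate, so treat the output as an estimate:

```python
# Rough usable bandwidth of a PCIe slot, assuming ~80% of the raw link
# rate survives packet/protocol overhead (rule of thumb, not an exact figure).
RAW_GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985}  # approx. raw GB/s per lane
EFFICIENCY = 0.80  # assumed overhead factor

def usable_gbps(gen: int, lanes: int) -> float:
    """Estimated usable GB/s for a given PCIe generation and lane count."""
    return RAW_GBPS_PER_LANE[gen] * lanes * EFFICIENCY

print(f"PCIe 2.0 x4: ~{usable_gbps(2, 4):.1f} GB/s")  # -> ~1.6 GB/s
print(f"PCIe 3.0 x4: ~{usable_gbps(3, 4):.1f} GB/s")  # -> ~3.2 GB/s
```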

Here's a PCIe3 x8 LSI 2308 (HP H220) for $40:

HP H220 6Gbps SAS PCI-E 3.0 HBA LSI 9205-8i P15 IT Mode From US Ship | eBay

I have several of those in service, but only in x8 slots, so I can't confirm x4 operation; I'd be shocked if they didn't work just fine.
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
So, 4 lanes at PCIe2 speed would net you only ~1,600MB/s. That's fine for spinners but not so nice for SSDs.
Hehe, "only". Good luck maxing it out with 8 drives in RAID 6 (Z2 or similar). That said, for $40 I'd go with the PCIe 3.0 model for better future-proofing :)
 

zer0sum

Well-Known Member
Mar 8, 2013
My go-to HBAs are as follows...

IBM M1215 SAS/SATA 12G HBA, breakout cables to 8 x SATA ports (flashed to LSI SAS3008 IT mode firmware) - $80

HP H220 SAS/SATA 6G HBA, breakout cables to 8 x SATA ports (flashed to LSI SAS2308 IT mode firmware) - $30
 

zack$

Well-Known Member
Aug 16, 2018
The 9211-4i (x4 PCIe 2.0) is the last of the LSI x4 cards, I think.

All the 2308 and 3008 cards are x8 (PCIe 3.0).

If you don't want to modify your PCIe slot and must have more than 4 drives, why not use an expander with the 9211-4i? An Intel RES2SV240 will do the job and runs off Molex power.
 

antioch18

New Member
Dec 17, 2018
Thanks, all. I've thought about it and decided that the PCIe 3.0 HP H220 is my best bet. $30 is a fine price for a future-proof (higher-bandwidth) solution. I just have a few more questions for consideration (caveat: I am not at all familiar with HBAs, so I appreciate your patience):
  • Where can I find the correct v20 IT mode firmware for this device?
  • I was planning to run a 6xHDD RAIDZ2 array with this HBA - is it recommended to use all 6 drives on the same HBA, or am I ok to put 4 directly on the motherboard and the remaining 2 on the HBA?
  • Would anyone please share a simple and safe method for opening up a PCIe slot? (I'd rather not have to buy a Dremel for this)
    • I'm considering getting this riser and hacking on it rather than the motherboard slot, but I'm not sure how I'd mount the card to the case given that it offsets the height -- thoughts? PCIe 4X riser card
 

zer0sum

Well-Known Member
Mar 8, 2013
What is the reasoning for putting them all on the same controller?
  • Cable management is a lot easier/cleaner
  • You can pass the physical HBA through to a virtual machine
  • Throughput can be better, but that really depends on your motherboard
  • Using an H330/M1215 you can get 12G speeds for SSDs

Although if you have SSDs you might want to leave them on motherboard ports, as TRIM is a lot easier to deal with :)
 

antioch18

New Member
Dec 17, 2018
All excellent points, thank you!

I will eventually add some lightweight SSDs to the system, and given that I'm putting the x8 card in an x4 slot and loading it up with 6-8 HDDs, the headroom for SSDs to flex their muscles would be diminished, so saving the higher-speed motherboard ports for the SSDs does make sense. But I am curious about these two remarks:
Throughput can be better, but that really depends on your motherboard
...
Although if you have SSDs you might want to leave them on motherboard ports, as TRIM is a lot easier to deal with :)
Why would throughput be better if all of the HDDs in the RAIDZ2 were on the same HBA? My naive assumption was that doing so would create a bottleneck since, again, I'm putting it in an x4 slot.

Also, TRIM doesn't work well/is a headache through HBAs?

As always, thank you for taking the time to share your knowledge. :)
 

gea

Well-Known Member
Dec 31, 2010
Whether you connect disks to one HBA or several does not matter to ZFS. Just be sure that each HBA delivers the performance needed for the combined maximum throughput of all the disks connected to it. Check this against the number and generation of PCIe lanes: PCI Express - Wikipedia. A rough sketch of that check follows the list below.

For an 8-port HBA with 8 disks connected, this means:
  • mechanical disks: around 8 x 250 MB/s = around 2 GB/s (min. PCIe 1.x x8)
  • 6G SATA SSDs: around 8 x 500 MB/s = around 4 GB/s (min. PCIe 2.0 x8)
  • 12G SAS SSDs: around 8 x 1 GB/s = around 8 GB/s (min. PCIe 3.0 x8)
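
To illustrate that check, here is a minimal sketch, assuming the same round per-drive figures as above and approximate raw per-lane rates (the helper function is just for illustration, not from any library):

```python
# Sanity-check an HBA slot: does the slot's raw bandwidth cover the
# combined sequential throughput of the drives behind it?
PCIE_GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 1.0}  # approx. raw GB/s per lane

def slot_covers_drives(gen: int, lanes: int, drives: int, gbps_per_drive: float) -> bool:
    """True if raw slot bandwidth >= aggregate drive throughput."""
    return PCIE_GBPS_PER_LANE[gen] * lanes >= drives * gbps_per_drive

print(slot_covers_drives(1, 8, 8, 0.25))  # 8 HDDs on PCIe 1.x x8 -> True
print(slot_covers_drives(2, 8, 8, 0.50))  # 8 SATA SSDs on PCIe 2.0 x8 -> True
print(slot_covers_drives(2, 4, 8, 0.25))  # 8 HDDs on PCIe 2.0 x4 (the OP's slot) -> True
```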

When using dual-port 12G SAS you may even be able to go further with two HBAs, but that is mostly done in a cluster/HA config. If you do not want to run at the absolute limits, double the minimum requirement.

TRIM in RAID is one of the more advanced storage features. It requires the newest Open-ZFS operating systems (based on Illumos or ZoL). In general, TRIM on ZFS does not work with every SSD; desktop SSDs in particular tend to be less well supported. It also does not help much on a server with a steady write load, where you generally want high-IOPS server-class SSDs anyway. The best results from TRIM can be expected in an environment with a mixed workload and SSDs with lower write-IOPS capability.
 