HBA PCIe 2.0 x8 card on PCIe 3.0 x4 (electrical)


djeflig

New Member
I've filled up my motherboard's 8 SATA ports and now I'm looking for an HBA to get 8 more. Alas, the only slot I have free is PCIe 3.0, x8 physical but only x4 electrical. I'm planning to expand my ZFS pool and I'm wondering whether having only 4 lanes would introduce any serious bottlenecks.
LSI PCIe 2.0 x8 cards seem to be the cheapest, so I wouldn't mind one of those if it would work without any speed loss in this configuration.
If that isn't a viable option, maybe a PCIe 3.0 card running on 4 lanes would at least be equivalent to a PCIe 2.0 card running on 8?
Anyway, my goal is primarily to find something on the second-hand market.
 

Mithril

Active Member
*Most* of the time, putting a card into a different-generation slot works, and *most* of the time the same is true for a slot with fewer electrical lanes than the card has (there are rare exceptions where, despite the PCIe spec, it doesn't work; I've seen cards refuse to run at x1, and very old PCIe 1.0 cards refuse to work in a 3.0 slot).

PCIe 2.0 x4 is about 2 GB/s usable, or roughly 250 MB/s per drive assuming 8 drives all reading/writing at the same time. That's going to be plenty for spinners, and a ceiling for SATA SSDs only if all of them are doing sustained sequential transfers at once. PCIe 3.0 would double that (~500 MB/s per drive) and would really only be needed for 12Gb SAS SSDs or for going to more drives (via a drive shelf, expander, etc.).
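If it helps, here's that back-of-the-envelope math as a quick Python sketch (the per-lane figures are the usual usable-throughput approximations after encoding overhead, not exact numbers, and the function name is just for illustration):

```python
# Rough usable MB/s per lane, one direction: 8b/10b encoding for gen 1/2,
# 128b/130b for gen 3. Approximations, not exact spec figures.
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def per_drive_mbps(gen, lanes, drives):
    """Rough MB/s available to each drive if all of them transfer at once."""
    total = PER_LANE_MBPS[gen] * lanes
    return total / drives

print(per_drive_mbps(2, 4, 8))  # PCIe 2.0 x4, 8 drives -> 250.0 MB/s each
print(per_drive_mbps(3, 4, 8))  # PCIe 3.0 x4, 8 drives -> ~492 MB/s each
```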

What I would double-check personally is WHERE those x4 lanes connect. If they go to the PCH/chipset, then you'll be competing with everything else hanging off the PCH. That may not be an issue, but it's something to be aware of; how much of a potential issue it is depends on the link speed between the PCH and the CPU, and on what else the motherboard (or you) connects to the PCH.

Slight tangent: it's one of the practical reasons I'm glad AMD pushed PCIe gen 4 into consumer products; it let them double the bandwidth to the chipset.
 

djeflig

New Member
Thanks for your input, Mithril, your knowledge is highly appreciated.
I've got a Supermicro X11SSH-F motherboard, a Xeon E3-1240 v6 CPU, and currently 32 GB of ECC RAM.
Looking at the block diagram, it seems that PCIe slot is connected to the PCH. I've also got an NVMe SSD on the board, and combined with those 8 SATA ports I guess I might be in trouble.
[Attachment: blockdiagram.jpg — X11SSH-F block diagram]

I've got a GPU in the x16 slot and was planning to put a couple of NVMe drives in the PCIe x8 slot (it supports bifurcation), but maybe I should revise those plans.
 

Mithril

Active Member
Assuming "DMI3" is accurate: Direct Media Interface - Wikipedia You're looking at essentially 4 PCIe gen 3 lines in terms of bandwidth. So that may or may not be a bottleneck. You may need to sit down and figure out your "worst case" in each direction for data transfer to see how likely you'd actually run into issues.
 

EffrafaxOfWug

Radioactive Member
I assume you've already populated slots 5 and 6 with something that can't be moved to the PCH slot 4?

In any case, assuming you're using HDDs as opposed to SSDs, I doubt you're ever going to exceed 200MB/s per drive under ideal conditions, and you'll likely not exceed 100MB/s under most realistic scenarios; only if you're populating the HBA with SSDs would you get 500MB/s throughput on each drive (and then only under largely sequential workloads).
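A quick sanity check of those figures against the PCIe 2.0 x4 link (the per-drive numbers are the rough estimates above, not measurements):

```python
# Will 8 drives on the HBA actually saturate a PCIe 2.0 x4 link?
LINK_MBPS = 500 * 4  # PCIe 2.0 x4, ~2 GB/s usable in one direction

for label, per_drive in [("HDD ideal", 200),
                         ("HDD realistic", 100),
                         ("SATA SSD sequential", 500)]:
    aggregate = 8 * per_drive
    verdict = "bottleneck" if aggregate > LINK_MBPS else "fits"
    print(f"{label}: {aggregate} MB/s -> {verdict}")
# HDD ideal: 1600 MB/s -> fits
# HDD realistic: 800 MB/s -> fits
# SATA SSD sequential: 4000 MB/s -> bottleneck
```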