PCIe 5.0: any adapters to 2x PCIe 4.0 x16 or to 8 NVMe Gen 4 SSDs?

HunterAP

New Member
Apr 13, 2022
I got a used 12600K from a friend and am planning to build a NAS out of it with TrueNAS SCALE.
The one thing that excites me is that this system would have a PCIe 5.0 x16 slot, which is a ton of bandwidth for things like M.2 NVMe drives - but I haven't found any adapters or HBAs that support it.
My idea would be to put 8 M.2 NVMe Gen 4 SSDs on that slot. Since a Gen 4 NVMe SSD uses 4 lanes of PCIe 4.0, and that's equivalent in bandwidth to 2 lanes of PCIe 5.0, I figure I can cram in 8 drives.
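Back-of-the-envelope, the lane math does work out on paper (a quick sketch assuming the standard per-lane PCIe transfer rates and 128b/130b encoding, nothing specific to this board):

```python
# Standard per-lane raw rates in GT/s; 128b/130b encoding gives the
# usable bytes/s figure. These are spec numbers, not measurements.
GTS = {"gen3": 8, "gen4": 16, "gen5": 32}

def lane_gbps(gen):
    """Approximate usable GB/s per lane after 128b/130b encoding."""
    return GTS[gen] * (128 / 130) / 8

slot = 16 * lane_gbps("gen5")   # one PCIe 5.0 x16 slot
drive = 4 * lane_gbps("gen4")   # one Gen4 x4 NVMe SSD

print(f"Gen5 x16 slot: {slot:.1f} GB/s")    # ~63.0 GB/s
print(f"Gen4 x4 drive: {drive:.1f} GB/s")   # ~7.9 GB/s
print(f"drives the slot could feed: {slot / drive:.0f}")  # 8
```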

The issue is that I can't seem to find hardware that could make this work. PCIe 5.0 x16 to 8x NVMe Gen4 M.2 HBAs don't seem to exist (at least anywhere I've looked, and they'd require complicated PLX switch chips and controllers), and the alternative of splitting the PCIe 5.0 x16 slot into two PCIe 4.0 x16 links would also require some adapter with PLX chips that I can't seem to find anywhere.

Is this kind of hardware even remotely available for someone to find, either as consumer hardware or used enterprise gear?
 

Rand__

Well-Known Member
Mar 6, 2014
It might be a tad new ;)

Give it 6 months and it will look better; for anything you can get now, you will be paying early adopter fees.
 

NateS

Active Member
Apr 19, 2021
Sacramento, CA, US
There's no way around needing a switch chip if you're hoping to split the bandwidth of gen5 lanes across twice the number of gen4 lanes. Bifurcation just lets you split the x16 into x4/x4/x4/x4, but each link is still gen5, and if you connect a gen4 drive to it, the link will simply drop down to gen4 speeds; only a switch chip in the middle converts the extra speed into extra lanes.
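To put rough numbers on the difference (a sketch using the standard per-lane PCIe rates; real drives won't hit the theoretical ceiling):

```python
# Compare bifurcation vs. a switch chip for Gen4 x4 NVMe drives,
# using assumed spec rates with 128b/130b encoding.
GEN4_LANE = 16 * (128 / 130) / 8   # ~1.97 GB/s usable per Gen4 lane

# Bifurcation: x16 -> x4/x4/x4/x4. Each link trains down to Gen4 when
# a Gen4 drive is attached, so you get 4 drives at full speed.
bifurcated = 4 * 4 * GEN4_LANE

# Switch: a Gen5 x16 uplink (~63 GB/s) can feed 8 Gen4 x4 downstream
# links, so all 8 drives keep full bandwidth through one slot.
switched = 8 * 4 * GEN4_LANE

print(f"bifurcation, 4 drives: {bifurcated:.0f} GB/s aggregate")  # ~32
print(f"switch, 8 drives:      {switched:.0f} GB/s aggregate")    # ~63
```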

And yeah, agreed with Rand, there's not a lot of them out yet. The closest thing I know of that would let you do what you're thinking of is this, and it's still quite expensive: PCIe Gen5 x16 MCIO Host Card with Broadcom Atlas2 A0 PCIe Switch - Serial Cables

At this point in time, if you need to connect 8 gen4 drives with full bandwidth to each one, it's probably more cost-effective to just step up to a workstation or server platform that has the lanes you need, rather than trying to use a switch to shoehorn them into a consumer platform. This may change as gen5 switch chips begin to come down in price.
 

ericloewe

Active Member
Apr 24, 2017
If there's a silver lining, it's that with NVMe having gained a ton of traction, pricing on PCIe switches is going to be pressured down by competition, relative to the PLX-only world that most of the PCIe 3.0 generation saw.
 

HunterAP

New Member
Apr 13, 2022
Yeah I figured the tech would be too new for something like that. I can live with 4 usable drives for the time being, and maybe one day get an adapter for my system when they do come out.
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
Just so you know about prices though: an x16 PCIe Gen3 switch-chip-based card will run ~$400 USD depending on the connectors, and an x16 PCIe Gen4 switch card is ~$1,500 USD depending on the connectors.

You can check out the sell sheet for Gen5 switch chips here, but I can't find any of the part numbers from the usual distributors, which means that they're both (a) expensive, and (b) not in mass production yet.

I haven't talked to anyone at Broadcom regarding their Gen5 offering, but if I had to guess, the chips alone for a 1k pallet would be about $750k USD with a 48-week lead time (that's assuming a middle-of-the-road 36-lane switch offering, x16 with 10 partitions).

It might be cheaper just to grab an EPYC Rome QS chip if you want the 8-drive Gen4 NVMe setup... or a HoneyBadger... or both. Plus you could probably get that in a month.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
I haven't seen any press releases for PCIe Gen5-to-Gen4 switches yet, but it's safe to assume they will be quite expensive. At that point, trying to do this on an ADL platform probably wouldn't make sense even if the hardware existed.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Usually, they are not Gen5-to-Gen4 switches. I think all of the modern PCIe switches support an uplink at one speed and downstream links at slower speeds. What you would look for is a 48-lane or larger switch:
  • 16x for the PCIe Gen5 uplink
  • 32x (or more) for other devices
Most of the switch volume will not happen until, say, Q4 this year, after Sapphire Rapids and Genoa are in the market and Gen5 platforms begin to ramp.

While it should be possible, it is probably a few months until this becomes really practical, because the switches are driven by server, not consumer, product cycles.
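As a rough sketch of that lane budget (the counts here are illustrative, not a specific part number):

```python
# Lane budget for a hypothetical Gen5 switch: one x16 uplink plus
# eight Gen4 x4 NVMe downstream ports. Counts are illustrative.
uplink_lanes = 16          # PCIe Gen5 uplink to the host
downstream_lanes = 8 * 4   # eight Gen4 x4 ports

print(f"switch lanes needed: {uplink_lanes + downstream_lanes}")  # 48

# Bandwidth balance in usable GB/s (128b/130b encoding assumed):
up = uplink_lanes * 32 * (128 / 130) / 8        # Gen5 uplink, ~63 GB/s
down = downstream_lanes * 16 * (128 / 130) / 8  # Gen4 downstream, ~63 GB/s
print(f"uplink {up:.0f} GB/s vs downstream {down:.0f} GB/s")
```

The uplink and downstream aggregates land at roughly the same ~63 GB/s, which is why a 48-lane part maps neatly onto this use case.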
 

jpmomo

Active Member
Aug 12, 2018
Another issue with the ADL platform is the architecture of that CPU. I tried to use one of these systems recently to leverage the PCIe Gen5 x16 slot for a Mellanox CX-7 PCIe Gen5 NIC; that was the only PCIe Gen5 platform I could find, as I couldn't get any of the server-class reference systems. Most of the M.2 NVMe RAID-type adapters still depend on the host CPU, and the big/little core architecture of the ADL CPUs was not powerful enough for my use cases (at least compared to the EPYC or Xeon Ice Lake CPUs).

I would tend to agree with bryan_v regarding a cheap EPYC CPU. But I understand the original intention was to try to find some way to leverage the newfound ADL CPU, which does support 1 PCIe Gen5 x16 slot.
 

mirrormax

Active Member
Apr 10, 2020
I think the closest you can get is the HighPoint 4.0 adapter for 8x M.2, the SSD7540, which does up to 28GB/s with either Gen3 or Gen4 M.2 SSDs.
If you really need more bandwidth than that, I'd look to another system than Z690 with more PCIe 4.0 bandwidth.
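That ~28GB/s figure lines up with the card's host link being the bottleneck rather than the drives; a quick check, assuming standard PCIe rates (the SSD7540 presents a Gen4 x16 uplink):

```python
# Usable bandwidth of a Gen4 x16 host link, 128b/130b encoding assumed.
gen4_x16 = 16 * 16 * (128 / 130) / 8
print(f"Gen4 x16 usable: {gen4_x16:.1f} GB/s")  # ~31.5 GB/s theoretical
# ~28GB/s real-world after protocol overhead matches HighPoint's number.
```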
 

mirrormax

Active Member
Apr 10, 2020
jpmomo said: "The big/little core architecture of the ADL CPUs was not powerful enough for my use cases (at least compared to the EPYC or Xeon Ice Lake CPUs)."
Out of curiosity, what were you attempting to do? I'm in the process of setting up an ADL server for a workload that mixes single-threaded and some MT work. I might test it on an EPYC first, but the 5.3+ GHz on ADL looks very tempting for my workload, which doesn't scale well past 1 NUMA node.
 

jpmomo

Active Member
Aug 12, 2018
mirrormax said: "Out of curiosity, what were you attempting to do?"

You can see my other thread here: https://forums.servethehome.com/index.php?threads/alder-lake-on-esx.35905/

I was trying to find a platform to use for my proprietary app, which acts as a sort of traffic generator. I wanted to leverage the PCIe Gen5 slot along with a PCIe Gen5 NIC. My app runs as a VM, so I needed to first install a hypervisor on the server/PC.

I was able to force an install of VMware onto the ADL system. Unfortunately, it didn't perform as hoped, even with a PCIe Gen4 NIC.

I wound up punting and will need to wait for a proper server platform and CPU that supports PCIe Gen5.