PCIe Gen 4 bifurcation risers and NVMe SSDs


alexhaj

New Member
Jan 12, 2018
Which cables are you using to connect your drives?

Have you tried forcing Gen3 link speed?
Hi, I am using these cables:

I am worried they might not be compatible with the DeLock 89030 Card. What do you think?

And I've confirmed the PCIe slot is working by installing an M.2 adapter with an M.2 card, and it shows up in the BIOS NVMe section...

Please help.
 

DRW

Member
May 1, 2021
Hi, I am using these cables:

I am worried they might not be compatible with the DeLock 89030 Card. What do you think?

And I've confirmed the PCIe slot is working by installing an M.2 adapter with an M.2 card, and it shows up in the BIOS NVMe section...

Please help.
It might be that you are losing too much signal at Gen4 without a retimer card. I assume that's why they asked if you tried forcing Gen3, since you have more leeway there.
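
If it helps, here's roughly how I'd check what the links actually trained at (just a sketch, assuming pciutils is installed; it compares advertised vs. negotiated speed for every NVMe controller):
Code:
# Show advertised (LnkCap) vs. negotiated (LnkSta) PCIe link speed/width
# for every NVMe controller. A Gen4 drive showing 8GT/s in LnkSta while
# LnkCap says 16GT/s points at a signal-integrity problem on that path.
for dev in $(lspci -Dnn | awk '/Non-Volatile memory controller/ {print $1}'); do
    echo "== $dev =="
    sudo lspci -s "$dev" -vv | grep -E 'LnkCap:|LnkSta:'
done
Forcing Gen3 itself is usually a per-slot link speed option in the BIOS/UEFI, so there's no command for that part here.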
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Hi, I am using these cables:

I am worried they might not be compatible with the DeLock 89030 Card. What do you think?
The pinouts for both the card and the cable are on the product pages ... What do you think?
(No, I haven't vetted them [that's your mission]; but I trust IOI at least 10x as much as uSata.)
 

alexhaj

New Member
Jan 12, 2018
OK, I finally got it to work. There are switches on the riser that needed to be manually set to the ON position. In their default state they are in AUTO mode, which is supposed to turn them on only when a cable is plugged in. Even though my cable was plugged in, they still were not turning on, and this might be due to the cable I was using to connect my U.2 SSDs.

Interestingly, ASRock apparently makes this card, though I can't find it anywhere on Google:
ASRock Rack > RB1U4OCU_G4
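
In case anyone else hits this, the quick sanity check I'd run after flipping the switches (a sketch, assuming nvme-cli and pciutils are installed) is shown below:
Code:
# All four U.2 drives behind the riser should enumerate:
sudo nvme list                                       # one namespace per drive
lspci -nn | grep -i 'non-volatile memory'            # one PCIe function per drive
sudo dmesg | grep -iE 'nvme|pcieport' | tail -n 40   # look for link-training complaints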
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
OK, I finally got it to work. There are switches on the riser that needed to be manually set to the ON position. In their default state they are in AUTO mode, which is supposed to turn them on only when a cable is plugged in. Even though my cable was plugged in, they still were not turning on, and this might be due to the cable I was using to connect my U.2 SSDs.

Interestingly, ASRock apparently makes this card, though I can't find it anywhere on Google:
ASRock Rack > RB1U4OCU_G4
For weird add-on parts like that, I usually send an email to an SMB server distributor and say we need the part for a business prototype: 2-3 units for now, plus indicative pricing for >500 units that can be used to price the prototype.

It's usually enough to get their attention, as these parts have to be sourced further upstream, at the national/continental distributor level.
 

j.battermann

Member
Aug 22, 2016
Forgot to update this thread.

2x AOC-SLG4-4E4T-O driving 8x Samsung PM9A3 1.92TB U.2 Gen4 drives, in an EPYC 7443P 24-core system with 256GB 8-channel DDR4.

Currently running as an 8-way RAID0 mdadm array on a Proxmox host. Quick FIO direct=1 sequential benchmark showing ~50GB/s read, ~20GB/s write.
Quick question here: are you running these AOC-SLG4-4E4T-O cards in a Supermicro board or in, e.g., an ASRock Rack one? And do you know whether they are somehow locked to Supermicro boards? I am currently looking for an easy way to use four U.2 drives (albeit Gen3 Intel P4510s) in an ASRock Rack ROMED8-2T system and remembered this thread.

Also, which cable(s) are these again... if you don't mind sharing?

Thanks!
-JB
 

ectoplasmosis

Active Member
Jul 28, 2021
Quick question here: are you running these AOC-SLG4-4E4T-O cards in a Supermicro board or in, e.g., an ASRock Rack one? And do you know whether they are somehow locked to Supermicro boards? I am currently looking for an easy way to use four U.2 drives (albeit Gen3 Intel P4510s) in an ASRock Rack ROMED8-2T system and remembered this thread.

Also, which cable(s) are these again... if you don't mind sharing?

Thanks!
-JB

I'm using these cards with ROMED8-2T boards. They should work with any motherboard that supports PCIe bifurcation.

The cables are: Supermicro CBL-SAST-0953
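
For reference, roughly how the 8-way stripe and the quick sequential test from my earlier post could be reproduced; the device names and fio options below are illustrative assumptions, not my exact commands:
Code:
# Build an 8-way mdadm RAID0 across the U.2 drives (destructive!)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/nvme[0-7]n1

# Sequential read, 1 MiB blocks, queue depth 8, O_DIRECT ("SEQ1M Q8T1"-style)
sudo fio --name=seq-read --rw=read --bs=1M --iodepth=8 --numjobs=1 \
    --direct=1 --ioengine=libaio --size=50G --filename=/dev/md0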
 

vcc3

Member
Aug 19, 2018
I got the opportunity to play with some PCIe Gen 4 NVMe SSDs and want to share my observations in this thread, since it already contains many interesting and useful posts.

For my testings I used the following hardware:
  • CPU: AMD EPYC 7402P
  • Mainboard: Supermicro H12SSL-i
  • Case: Supermicro CSE-LA26AC12-R920LP1
  • HD Backplane (included in the SM case): BPN-SAS3-LA26A-N12 (2U 12-Slot LFF Backplane Supports 12 x SAS3/SATA3/NVMe4 3.5"/2.5" Storage Devices)
  • NVMe SSDs:
    • 1 x Intel - DC P4510 2TB SSD (SSDPE2KX020T801) - PCIe Gen 3
    • 3 x Solidigm SSD D7-P5520 3.84TB, U.2 (SSDPF2KX038T1N1) - PCIe Gen 4
  • PCIe / NVMe - riser/adapters
    • Gen4 Retimer Adapter: Linkreal Adapter-LRNV9F24 (PCI Express 4.0 x16 to Two SlimSAS SFF-8654 8i Retimer)
    • Gen4 Passive Adapter: Linkreal Adapter-LRNV9F14 (PCI Express 4.0 x16 to Two SlimSAS SFF-8654 8i)
    • Gen3 Passive Adapter: Ceacent ANU04PE16 (NVMe Controller PCIe 3.0 X16 to 4 port SFF8643 SSD Exp Riser)
  • PCIe Cables
    • PCIe 4.0 cable from Gen4 Adapters to the HD Backplane: Linkreal SFF-8654 8i to SFF-8654 8i, length: 60cm
    • PCIe 3.0 cable from Gen3 Passive Adapter to NVMe SSDs (without backplane): Linkreal SFF-8643 to U.2 SFF-8639 Mini SAS cable, length: 80cm

The Supermicro H12SSL-i mainboard has 7 PCIe slots. Slots 2 and 4 are x8 slots; slots 1, 3, 5, 6 and 7 are x16 slots. Slot 7 is closest to the CPU and slot 1 is furthest from it, so I assume PCIe signal quality is best at slot 7 and worst at slot 1.
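
To double-check which physical slot a given adapter actually ended up in, something like this can help (a sketch; it assumes dmidecode is installed and that the BIOS fills in the SMBIOS slot table, whose exact fields vary per board):
Code:
# List physical slot designations, usage and the PCIe bus address the BIOS
# reports for each slot (SMBIOS type 9).
sudo dmidecode -t slot | grep -E 'Designation|Type|Current Usage|Bus Address'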

[Photos: PCIe Gen4 NVMe retimer adapter, PCIe Gen4 NVMe backplane, PCIe Gen4 NVMe passive adapters, PCIe Gen3 NVMe passive adapter]

I have tried to reproduce the test from @lunadesign with the hardware listed above. I interpreted his "SEQ1M Q8T1 Read" test as the following command:
Code:
sudo fio --name="SEQ1M Q8T1 Read" --rw=read --bs=1024k --iodepth=8 --numjobs=1 --direct=1 --ioengine=libaio --size=50000M --filename=/dev/nvme0n1
Here are my test results and observations for the 4 NVMe SSDs on different adapters and cables.

Gen4 Retimer Adapter at PCIe Slot 1 => Backplane => SSD
All 3 Gen4 NVMe SSDs negotiated Gen4 - as expected!

individual tests
  • 1 x Gen3 P4510: read: IOPS=2921, BW=2922MiB/s (3064MB/s)(48.8GiB/17113msec)
  • 3 x Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec)

parallel tests
  1. Gen4 P5520: read: IOPS=7122, BW=7123MiB/s (7468MB/s)(48.8GiB/7020msec)
  2. Gen3 P4510: read: IOPS=2919, BW=2919MiB/s (3061MB/s)(48.8GiB/17128msec)
  3. Gen4 P5520: read: IOPS=7126, BW=7127MiB/s (7473MB/s)(48.8GiB/7016msec)
  4. Gen4 P5520: read: IOPS=7128, BW=7129MiB/s (7475MB/s)(48.8GiB/7014msec)




Gen4 Passive Adapter at PCIe Slot 1 => Backplane => SSD
One of the 3 Gen4 NVMe SSDs negotiated only Gen3 - interesting! This is reproducible, even after multiple reboots and after re-plugging the adapter and cables.

individual tests
  • 1 x Gen3 P4510: read: IOPS=2916, BW=2916MiB/s (3058MB/s)(48.8GiB/17146msec)
  • 2 x Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec)
  • 1 x Gen4 P5520: read: IOPS=3565, BW=3566MiB/s (3739MB/s)(48.8GiB/14022msec) (downgraded to 8GT/s (Gen 3))

parallel tests
  1. Gen3 P4510: read: IOPS=2916, BW=2916MiB/s (3058MB/s)(48.8GiB/17145msec)
  2. Gen4 P5520: read: IOPS=7126, BW=7127MiB/s (7473MB/s)(48.8GiB/7016msec)
  3. Gen4 P5520: read: IOPS=3565, BW=3565MiB/s (3739MB/s)(48.8GiB/14024msec) (downgraded to 8GT/s (Gen 3))
  4. Gen4 P5520: read: IOPS=7127, BW=7128MiB/s (7474MB/s)(48.8GiB/7015msec)


Gen4 Passive Adapter at PCIe Slot 7 => Backplane => SSD
All 3 Gen4 NVMe SSDs negotiated Gen4 - interesting!

individual tests
  • 1 x Gen3 P4510: read: IOPS=2912, BW=2913MiB/s (3054MB/s)(48.8GiB/17166msec)
  • 3 x Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec)

parallel tests
  1. Gen3 P4510: read: IOPS=2910, BW=2911MiB/s (3052MB/s)(48.8GiB/17179msec)
  2. Gen4 P5520: read: IOPS=7127, BW=7128MiB/s (7474MB/s)(48.8GiB/7015msec)
  3. Gen4 P5520: read: IOPS=7126, BW=7127MiB/s (7473MB/s)(48.8GiB/7016msec)
  4. Gen4 P5520: read: IOPS=7128, BW=7129MiB/s (7475MB/s)(48.8GiB/7014msec)


Gen4 Passive Adapter at PCIe Slot 3 => Backplane => SSD
All 3 Gen4 NVMe SSDs negotiated Gen4 - interesting!

individual tests
  • 1 x Gen3 P4510: read: IOPS=2913, BW=2913MiB/s (3055MB/s)(48.8GiB/17164msec)
  • 3 x Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec)

parallel tests
  1. Gen3 P4510: read: IOPS=2911, BW=2911MiB/s (3053MB/s)(48.8GiB/17174msec)
  2. Gen4 P5520: read: IOPS=7124, BW=7125MiB/s (7471MB/s)(48.8GiB/7018msec)
  3. Gen4 P5520: read: IOPS=7126, BW=7127MiB/s (7473MB/s)(48.8GiB/7016msec)
  4. Gen4 P5520: read: IOPS=7128, BW=7129MiB/s (7475MB/s)(48.8GiB/7014msec)


Gen3 Passive Adapter at PCIe Slot 3 => SSD (Without Backplane)
Even on the passive Gen3 adapter, both Gen4 NVMe SSDs negotiated Gen4 (16GT/s) after every reboot - very interesting!

individual tests
  • 1 x Gen3 P4510: read: IOPS=2913, BW=2913MiB/s (3055MB/s)(48.8GiB/17164msec)
  • 2 x Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec) (Negotiated as Gen4)

parallel tests
  1. Gen3 P4510: read: IOPS=2913, BW=2913MiB/s (3055MB/s)(48.8GiB/17164msec)
  2. Gen4 P5520: read: IOPS=7128, BW=7129MiB/s (7475MB/s)(48.8GiB/7014msec) (Negotiated as Gen4)
  3. Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec) (Negotiated as Gen4)


Gen3 Passive Adapter at PCIe Slot 1 => SSD (Without Backplane)
After connecting the components for the first time, one of the two Gen4 NVMe SSDs negotiated Gen3 and the other Gen4, and this did not change after a reboot. However, after swapping the two SSDs, both negotiated Gen4, even across reboots.

individual tests
  • 1 x Gen3 P4510: read: IOPS=2909, BW=2910MiB/s (3051MB/s)(48.8GiB/17184msec)
  • 2 x Gen4 P5520: read: IOPS=7129, BW=7130MiB/s (7476MB/s)(48.8GiB/7013msec) (Negotiated as Gen4)

parallel tests
  1. Gen3 P4510: read: IOPS=2908, BW=2908MiB/s (3050MB/s)(48.8GiB/17192msec)
  2. Gen4 P5520: read: IOPS=7127, BW=7128MiB/s (7474MB/s)(48.8GiB/7015msec) (Negotiated as Gen4)
  3. Gen4 P5520: read: IOPS=7128, BW=7129MiB/s (7475MB/s)(48.8GiB/7014msec) (Negotiated as Gen4)


----------------------
I never observed any PCIe errors from cat /var/log/syslog | grep pcie or sudo dmesg | grep pci.

Conclusion
If NVMe backplanes are used, passive PCIe Gen4 adapters are at the edge of the signaling budget. However, if you use the PCIe slots near the CPU, it still works.
For me the most important observation is that at least the combination of AMD EPYC Rome, Supermicro H12SSL-i and Solidigm D7-P5520 fails in the best possible way when the PCIe Gen4 signaling gets too bad: it simply drops to PCIe Gen3 speed (8GT/s) and does not produce any errors, problems or instabilities - great news for me!
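
For anyone who wants to check for such a silent downgrade on their own system, here is a small sketch (it assumes pciutils; the sysfs AER counters only exist where the kernel/device exposes AER, and some devices legitimately downclock their link at idle, so treat hits as hints, not proof):
Code:
# Flag any PCIe device whose negotiated link speed (LnkSta) is below its
# advertised capability (LnkCap).
for dev in /sys/bus/pci/devices/*; do
    addr=${dev##*/}
    out=$(sudo lspci -s "$addr" -vv 2>/dev/null)
    cap=$(grep -oP 'LnkCap:.*Speed \K[0-9.]+GT/s' <<< "$out")
    sta=$(grep -oP 'LnkSta:\s*Speed \K[0-9.]+GT/s' <<< "$out")
    if [ -n "$cap" ] && [ -n "$sta" ] && [ "$cap" != "$sta" ]; then
        echo "$addr: capable of $cap, running at $sta"
    fi
done

# AER correctable-error counters (where exposed) should stay at zero:
grep -H . /sys/bus/pci/devices/*/aer_dev_correctable 2>/dev/null | grep -v ' 0$'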
 

Docop

Member
Jul 19, 2016
Well, no matter what, if you do not have a retimer the Gen4 spec still applies, and the allowed signal path length is... very short. And every connector adds signal loss.
 

abufrejoval

Member
Sep 1, 2022
Has anyone tried hooking up NVMe SSDs to something like this? Or is a retimer required with PCIe Gen 4?


[attached image]
I wonder what you're trying to achieve: when it comes to simply using an x16 slot to mount more M.2 form factor NVMe drives, there are far simpler boards with very short traces. I've used this, which is supposed to support PCIe v4 and sports a little fan, but also similar ones designed for PCIe v3 that are purely passive.

I haven't actually tried to use this at PCIe v4 speeds, I basically just decided that an extra 10 bucks were well spent in case it would go into a v4 system eventually (or the fan might be useful).

[attached photo of the adapter card]
I had been looking for ways to recycle smaller NVMe drives that were evicted from laptops or their mainboard M.2 slots as capacities and speeds grew. But with good Samsung EVOs dropping to double digits at 2TB, it's hard to justify spending money on cables, connectors, docks, retimers or redrivers, each of which can easily cost the equivalent of a terabyte or two of brand-new storage.

For me the bigger issue is becoming flexibility, as the mainboard's lane allocation to slots may not actually suit my needs, and there I've come to like cables like this:
[attached photo of an M.2-to-x4-slot cable]
which allow me to turn M.2 connectors from the mainboard (or the card above) into x4 slots, e.g. to hold 10GBase-T Ethernet NICs.

In other words you can almost translate the logical flexibility that bifurcation gives you into a physical equivalent.

In a way slots and the mainboard traces which enable them have become more of a burden than benefit and replacing them completely with point-to-point x4 cables (potentially aggregated) would be the smarter way forward.

I believe I see hints of that with the really big new EPYC systems, where CXL is run over cables, because it's much easier to control trace length in the 3D space of an enclosure than when pressing everything onto a very 2D PCB with its connectors and slots.

Of course you'd still need to manage mounting and cooling, but that's why we still need engineers.

E.g. take a Threadripper system with 8 PCIe x16 slots at the traditional one-slot spacing and GPUs that occupy 4 of them: you can either waste all those lanes or try to make things work with extender cables, when you could instead cluster fixed-length, high-density cables right around the CPU socket, on both sides of the mainboard, and use all that volume instead of fighting for surface area.
 

Docop

Member
Jul 19, 2016
The solution is to get a PLX switch card, which works much like a USB hub: negligible added latency, and from one slot you can fan out to eight ports. A good option is a card like the HighPoint SSD7184, as you can connect internal drives and also an external unit for flexibility.
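
To check that such a switch card is really fanning out the lanes, something like this can help (a sketch, assuming pciutils; exact device names vary by card):
Code:
# Tree view: the switch's upstream link and its downstream ports (one per
# drive) should show up as a small sub-hierarchy under one root port.
lspci -tv
# The switch itself usually appears as a PCI bridge (e.g. PLX/PEX/Broadcom):
lspci -nn | grep -iE 'plx|pex|switch'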
 

happymac

New Member
Apr 19, 2023
Hi, sorry for the naïve question, but this is my first time using a U.2 drive, and I’m a bit confused by what I’m seeing.

I have a Supermicro H12SSL-NT, an Intel D7-P5520 3.84TB U.2 SSD, and an SFF-8654 to 2x U.2 cable with 4-pin Molex connector. The cable was helpfully provided by the vendor who sold me the motherboard. It’s not clear who the OEM is, though it appears to have been assembled by Foxconn.

I’ve connected the P5520 directly to the JVNMe1 connector on the motherboard — no backplane, just a straight cable connection. However, neither the BIOS nor Linux see the drive. I’ve tried connecting the 4-pin Molex connector to 12V from the PSU, and disconnecting it. I’ve also tried setting the BIOS setting for JVNMe0 and JVNMe1 to ”NVMe”, and to “Auto” while also jumpering pins 2-3 on JCFG1 and JCFG2. None of this makes any difference, and it’s as if the drive isn’t there.

This thread is the first I’ve heard of the need for redrivers or retimers. Would you expect that I’d need one in my situation?

At the moment I’m inclined to blame the cable, and am considering getting an official Supermicro CBL-SAST-0953 to replace the cable my vendor provided, but I thought I’d ask here in case I’m missing something obvious or not understanding how to connect U.2 drives correctly.

Thanks!
 

vcc3

Member
Aug 19, 2018
If the cable is not extraordinarily long (1 meter or more), you really should not need a retimer, based on my experience.

When I started reading your post, I first thought the UEFI setting for JVNMe0/JVNMe1 was wrong. But you already checked that.

The only thing that stuck out a bit is that a 4-pin Molex connector carries not only 12 V but also 5 V. Furthermore, all such U.2 cables I have ever seen did not use a 4-pin IDE-style Molex connector but a SATA power connector, which also provides a 3.3 V rail. Then again, I have also never seen a U.2 cable with an 8i version of the SFF-8654 connector for two U.2 NVMe SSDs. Either way, I do not think this is the problem.
 

jnolla

New Member
Oct 18, 2023
Some testing on our end:

Drives: Micron_9400_MTFDKCC15T3TGH
Backplane: BPN-NVME3-216N-S4
Retimer cards: AOC-SLG4-4E4T-O
MB: H12SSL-NT

sudo zpool status idisk
pool: idisk
state: ONLINE
config:

      NAME         STATE     READ WRITE CKSUM
      idisk        ONLINE       0     0     0
        mirror-0   ONLINE       0     0     0
          nvme0n1  ONLINE       0     0     0
          nvme1n1  ONLINE       0     0     0
        mirror-1   ONLINE       0     0     0
          nvme2n1  ONLINE       0     0     0
          nvme3n1  ONLINE       0     0     0
        mirror-2   ONLINE       0     0     0
          nvme4n1  ONLINE       0     0     0
          nvme5n1  ONLINE       0     0     0
        mirror-3   ONLINE       0     0     0
          nvme6n1  ONLINE       0     0     0
          nvme7n1  ONLINE       0     0     0


sudo fio --name="SEQ1M Q8T1 Read" --rw=read --bs=1024k --iodepth=8 --numjobs=1 --direct=1 --ioengine=libaio --size=50000M
SEQ1M Q8T1 Read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
fio-3.28

Starting 1 process
SEQ1M Q8T1 Read: Laying out IO file (1 file / 50000MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=8152MiB/s][r=8152 IOPS][eta 00m:00s]
SEQ1M Q8T1 Read: (groupid=0, jobs=1): err= 0: pid=50382: Mon Oct 30 16:08:45 2023
read: IOPS=8114, BW=8114MiB/s (8508MB/s)(48.8GiB/6162msec)
slat (usec): min=48, max=1080, avg=121.99, stdev=13.27
clat (usec): min=2, max=2495, avg=862.89, stdev=42.60
lat (usec): min=124, max=3014, avg=985.02, stdev=46.98
clat percentiles (usec):
| 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 848],
| 30.00th=[ 848], 40.00th=[ 857], 50.00th=[ 865], 60.00th=[ 865],
| 70.00th=[ 873], 80.00th=[ 881], 90.00th=[ 889], 95.00th=[ 898],
| 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1614], 99.95th=[ 1647],
| 99.99th=[ 1844]
bw ( MiB/s): min= 7822, max= 8186, per=100.00%, avg=8118.50, stdev=99.86, samples=12
iops : min= 7822, max= 8186, avg=8118.50, stdev=99.86, samples=12
lat (usec) : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.03%, 1000=99.71%
lat (msec) : 2=0.24%, 4=0.01%
cpu : usr=1.12%, sys=98.72%, ctx=24, majf=0, minf=2058
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=50000,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
READ: bw=8114MiB/s (8508MB/s), 8114MiB/s-8114MiB/s (8508MB/s-8508MB/s), io=48.8GiB (52.4GB), run=6162-6162msec