Multi-NVMe (m.2, u.2) adapters that do not require bifurcation

skyjam

New Member
Jul 16, 2020
8
1
3
Hi all

I have recently ordered an LRNV9547-4I card directly from Linkrel in China.
This card utilizes a PLX 8747 chip with 48 lanes.
I paid US$258 plus shipping; delivery to Switzerland via FedEx took just a few days.
The card is working fine, though at the moment I only have two M.2 drives to test with.

I installed it in a Lenovo/IBM x3550 M4 (1U) server. I had to swap the x8 riser card for an x16 riser and modify the server a little:
The CMOS battery was originally mounted upright on the board and prevented the card from seating correctly. I removed it and soldered a new battery with attached wires (and a plug) to the board. I also had to remove a small standoff (which was useless anyway) to make room for the card.

The Lenovo BIOS does not include an NVMe driver, so it cannot boot from the M.2 drives. I tried injecting one, but afterwards online flashing failed due to a verification error. Flashing via SPI worked, but the system would not boot, so I reverted.
Anyway, I have some SATA SSDs installed behind an LSI RAID card to boot from, which is sufficient.
 
  • Like
Reactions: AveryFreeman

victorhooi

New Member
Apr 18, 2018
7
0
1
37
I'm looking at the x16 cards that support 8x U.2 drives.

The OP has given two options - here and here - one is $277, the other is $135.

Does anybody know what the difference between them is? They seem pretty comparable - so not sure why the price difference.

The first one says it uses a PEX8749 controller, the second says a PLX8748 controller (although I assume this is Broadcom PEX8748?)
 

TonyP

New Member
Jun 14, 2017
9
0
1
50
The first one says it uses a PEX8749 controller, the second says a PLX8748 controller (although I assume this is Broadcom PEX8748?)
PLX was the company (acquired by Broadcom) that produced these switching chips. The parts themselves are named PEX8648, PEX8748, and so on, so the full name would be PLX PEX8648. For the latter, I am not sure it was ever marketed under the PLX name, so it is probably just called the Broadcom PEX8748. Either way it is the same chip, and the proper name is PEX8748 regardless of when it was released :)
 

dbTH

Member
Apr 9, 2017
149
59
28
I'm looking at the x16 cards that support 8x U.2 drives.

The OP has given two options - here and here - one is $277, the other is $135.

Does anybody know what the difference between them is? They seem pretty comparable - so not sure why the price difference.

The first one says it uses a PEX8749 controller, the second says a PLX8748 controller (although I assume this is Broadcom PEX8748?)
The PEX8749 chip has more features than the PLX/PEX8748. The notable differences are DMA engines (4 vs. none), non-transparent ports (2 vs. 1) and port count (16 vs. 12). So the PEX8749 would definitely cost more, but the roughly 2x price difference may also be down to brand/make.
 

victorhooi

New Member
Apr 18, 2018
7
0
1
37
The PEX8749 chip has more features than the PLX/PEX8748. The notable differences are DMA engines (4 vs. none), non-transparent ports (2 vs. 1) and port count (16 vs. 12). So the PEX8749 would definitely cost more, but the roughly 2x price difference may also be down to brand/make.
Got it - thank you!

Do you know what exactly DMA and Non-Transparency mean here? Are they a big deal, or worth getting?
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Hi all

I have recently ordered an LRNV9547-4I card directly from Linkrel in China.
This card utilizes a PLX 8747 chip with 48 lanes.
I paid US$258 plus shipping; delivery to Switzerland via FedEx took just a few days.
The card is working fine, though at the moment I only have two M.2 drives to test with.

I installed it in a Lenovo/IBM x3550 M4 (1U) server. I had to swap the x8 riser card for an x16 riser and modify the server a little:
The CMOS battery was originally mounted upright on the board and prevented the card from seating correctly. I removed it and soldered a new battery with attached wires (and a plug) to the board. I also had to remove a small standoff (which was useless anyway) to make room for the card.

The Lenovo BIOS does not include an NVMe driver, so it cannot boot from the M.2 drives. I tried injecting one, but afterwards online flashing failed due to a verification error. Flashing via SPI worked, but the system would not boot, so I reverted.
Anyway, I have some SATA SSDs installed behind an LSI RAID card to boot from, which is sufficient.

I'd be super curious to see benchmarks of how the drives perform. If they are within 5% of "direct" (not through the PLX) that would be great to know.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
433
247
43
NH, USA
I'd be super curious to see benchmarks of how the drives perform. If they are within 5% of "direct" (not through the PLX) that would be great to know.
I would be super disappointed in any switch-based card which imposed more than 1-2% performance penalty on a drive's (raw/unswitched) performance.

I've tested the ANU28PE16 (PEX8748 chip) in an x16 slot with[**] an SK Hynix P31 1TB, vs. that same P31 via a direct adapter in the same slot. The only difference was in the random-4K-q1t1 test, where the switch might have introduced a ~1% slowdown (it was hard to tell whether the deltas were real performance differences or just test-sample variance).

[**] The P31 was connected to the switch card via a 50 cm SFF-8643-to-SFF-8639 cable and a U.2-to-M.2 adapter enclosure.
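
For anyone who wants to reproduce this kind of direct-vs-switched comparison, a minimal fio run along these lines should do it (a sketch only; the device path is a placeholder, and you run the same command once with the drive behind the switch card and once behind a direct adapter):

    # Placeholder device path; substitute the NVMe namespace under test.
    # randread keeps the test non-destructive on a raw device.
    fio --name=r4k-q1t1 --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --ioengine=libaio --time_based --runtime=60 --group_reporting

Comparing the completion-latency percentiles from the two runs is more telling than the IOPS number alone.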
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
I would be super disappointed in any switch-based card which imposed more than 1-2% performance penalty on a drive's (raw/unswitched) performance.

I've tested the ANU28PE16 (PEX8748 chip) in an x16 slot with[**] an SK Hynix P31 1TB, vs. that same P31 via a direct adapter in the same slot. The only difference was in the random-4K-q1t1 test, where the switch might have introduced a ~1% slowdown (it was hard to tell whether the deltas were real performance differences or just test-sample variance).

[**] The P31 was connected to the switch card via a 50 cm SFF-8643-to-SFF-8639 cable and a U.2-to-M.2 adapter enclosure.
Are you considering performance in terms of bandwidth, latency, or both? My expectation would be nearly zero bandwidth penalty, but definitely some latency impact. That latency penalty should be small and is likely a fixed amount, so expressed as a percentage it would look better when measured on high-latency drives and worse with something like an Optane drive.

A random-4k-q1t1 test would likely be the most sensitive to latency in terms of overall performance, so if that's what you're measuring, that makes sense.
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Are you considering performance in terms of bandwidth, latency, or both? My expectation would be nearly zero bandwidth penalty, but definitely some latency impact. That latency penalty should be small and is likely a fixed amount, so expressed as a percentage it would look better when measured on high-latency drives and worse with something like an Optane drive.

A random-4k-q1t1 test would likely be the most sensitive to latency in terms of overall performance, so if that's what you're measuring, that makes sense.
Right, "how much latency" both in cases where "lanes in equals lanes out" and in cases where we are driving say 4 NVMe on 8x lanes from the host. Thats the kind of metric I'd love to see, either directly or how it impacts "worst case" (small random IO).
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
Well, it turns out that the maximum switching latency is in the PEX8748's spec, and it's actually pretty good at only 150ns. For reference, the typical latency of the Optane P5800X, which is probably the lowest-latency NVMe drive on the market at present, is 6us (6000ns), so the latency adder would be 2.5% in that case. For a typical NAND drive, with probably ~100us latency, the additional latency added by the switch chip would be lost in the statistical noise. It would be nice if someone could measure and confirm this, but I'd be surprised if it was off by much.

Of course, that's assuming a 1:1 ratio of input to output lanes. In a situation where you have more downstream lanes to the drives than upstream to the host, like 4 x4 drives on an x8 host-connected card, the switch chip may have to buffer packets and wait for upstream bandwidth to become available, which would add more latency. That would be an interesting one to measure.
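
One way to probe that oversubscribed case (a sketch only; the four device paths and the x8 slot are assumptions) would be to saturate the uplink with sequential reads on three of the drives while measuring 4K QD1 latency on the fourth, then compare against the same QD1 run with the background load off:

    # Assumed: four NVMe drives behind the switch card in an x8 slot.
    # Background load: large sequential reads on three drives to fill the uplink.
    fio --name=uplink-load --filename=/dev/nvme1n1:/dev/nvme2n1:/dev/nvme3n1 \
        --direct=1 --rw=read --bs=1M --iodepth=32 --time_based --runtime=120 &
    # Foreground: 4K QD1 random-read latency on the remaining drive.
    fio --name=qd1-latency --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --ioengine=libaio --time_based --runtime=60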

 
  • Like
Reactions: abq

Mithril

Active Member
Sep 13, 2019
354
106
43
Thanks for digging up that info @NateS !

Sounds like 1:1 should be "fine" for 99.9% of use cases, but oversubscribed is still a big question.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
433
247
43
NH, USA
Are you considering performance in terms of bandwidth, latency, or both?
Actually, as many aspects of "performance" as I can think of [and have the hardware/ability/insight to explore] :)
My expectation would be that there would be nearly zero bandwidth penalty, but there would definitely be a latency impact. That latency penalty should be small, and will likely be fixed, so expressed as a percentage it would look better when measured on high latency drives, and worse with something like an Optane drive.
Completely agree. (In jest, it's a little "stinging" to hear my fave, the P31, referred to as high-latency, but :))
A random-4k-q1t1 test would likely be the most sensitive to latency in terms of overall performance, so if that's what you're measuring, that makes sense.
Might we paraphrase, and say that r4kq1t1 is the best ("conventional") test for exposing/highlighting an SSD's latency? And if one wants to show that latency in the best light, they should minimize their system-call overhead [hint: io_uring], because your competitor probably is.
[EDIT] (Since an r4kq1t1 test really doesn't impose much syscall overhead.) When it comes to max-IOPS testing (e.g. r4kq8t32 [adjust the q & t numbers so q = # of CPU cores and t = 256/q]), one should minimize syscall overhead by using fio with io_uring. [Just conjecture, but I believe it was exactly this scenario (max-IOPS testing) that inspired the author of fio to design/implement io_uring. Kudos to Jens Axboe!]
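
For what it's worth, a max-IOPS job in that spirit might look like the following (a sketch, assuming an 8-core host and a placeholder device path; q=8 to match the core count, t=256/8=32):

    # r4kq8t32-style max-IOPS run using fio's io_uring engine.
    fio --name=max-iops --filename=/dev/nvme0n1 --direct=1 \
        --rw=randread --bs=4k --ioengine=io_uring \
        --iodepth=8 --numjobs=32 --group_reporting \
        --time_based --runtime=60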
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
If you have bifurcation, dual M-key x8 PCIe 4.0 adapters are $20 on eBay now; I just bought one. I'll have to get PLX switch cards for a couple of my X9s if I keep them, since they all have x8 PCIe slots. Thanks for the thread!
 

nabsltd

Active Member
Jan 26, 2022
339
207
43
I'll have to get PLX switch cards for a couple of my X9s if I keep them, since they all have x8 PCIe slots.
Most of the X9 line has gotten BIOS updates that support bifurcation. The UI isn't the best (on my X9SRL you basically have to read the value backwards from what you would expect), but it does work.
 
  • Like
Reactions: AveryFreeman

Mithril

Active Member
Sep 13, 2019
354
106
43
Has anyone tested any of these cards (or equivalents) in an "oversubscribed" scenario, such as a 4-drive card in an x4 or x8 slot? I know you'll be capped on total top speed, but I'm curious what latency looks like and how performance compares in an Unraid and/or ZFS setup.
 

AveryFreeman

consummate homelabber
Mar 17, 2017
413
54
28
42
Near Seattle
averyfreeman.com
Most of the X9 line has gotten BIOS updates that support bifurcation. The UI isn't the best (on my X9SRL you basically have to read the value backwards from what you would expect), but it does work.
I appreciate the mention. I believe you on the E5 models, but I don't own any. My boards are all E3s, and none of them have received BIOS updates for several years, except for the X9SPU-F, which stopped working after 2020 unless I set the clock back to an earlier year. I sent them a message and they made an "a" version to fix it (obviously the bare minimum).

I wonder if bifurcation support is an easy enough addition that we could splice the code in with a hex editor. I don't really know much about doing it; I've just used hacked BIOSes on ThinkPads to remove device whitelists, add extra features, etc., so I've seen what the modding community is capable of.

A search of Bios Mods - The Best BIOS Update and Modification Source turns up some Supermicro material (but old, mostly X8) and some references to bifurcation, though only on consumer boards. It might be worth asking there.
 

Mithril

Active Member
Sep 13, 2019
354
106
43
What is the point of having a switch that supports so many lanes when there are only x16 possible through that model's PCIe electrical connector?
Most (dare I say all) NVMe and SATA flash-based SSDs don't get anywhere *close* to the fun "max transfer" speeds for the majority of their real-world operation. The difference between the 99th-percentile "worst case" and the big marketing number can easily be two orders of magnitude, and you'll be somewhere between the two most of the time. The "why" gets deep into the ins and outs of flash itself, file systems, latency, the OS, and the various software you are running (and more).
This is not really all that different from (decent) network switches. As long as the hardware doing the work handles things like multiple devices sending data to the host/upstream port in some sane way, everything should mostly just work. Given the right setup and well-designed hardware and firmware, you can be oversubscribed at a huge ratio and never notice, because your actual bottlenecks are elsewhere anyway.
Also, assuming the same cost per TB (including accessories), it's cheaper to get redundancy with more, smaller disks (less total space lost to parity). The real world of course has lots of tradeoffs, so it's not always that simple :)


The TL;DR is that IF (and this is a big IF) the switch chip added zero downsides in all scenarios, you could *in theory* put 16 NVMe drives (or more) on an x16 slot with little to no real-world effect.
 

Andriy Gapon

New Member
Apr 10, 2017
4
3
3
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on Asmedia ASM2812.
lspci sees the card as an upstream PCIe port with two downstream ports which matches the reality.
What's interesting is that the card works fine in a consumer motherboard, but it is not detected at all with an X9CSM-F motherboard.
I tried both of its x8 slots with the same result.
The card (and the NVMe module on it) is not visible in the OS (lspci, etc.), and I do not see any changes in the BIOS either.

I wonder why that could be and if anything could be done about that.
Any suggestions?
Thank you.
 

Mithril

Active Member
Sep 13, 2019
354
106
43
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on Asmedia ASM2812.
lspci sees the card as an upstream PCIe port with two downstream ports which matches the reality.
What's interesting is that the card works fine in a consumer motherboard, but it is not detected at all with an X9CSM-F motherboard.
I tried both of its x8 slots with the same result.
The card (and the NVMe module on it) is not visible in the OS (lspci, etc.), and I do not see any changes in the BIOS either.

I wonder why that could be and if anything could be done about that.
Any suggestions?
Thank you.
When I've had issues like that (although usually on consumer boards), SOMETIMES taping over the two SMBus contacts on the card's edge connector does the trick. You'll need some Kapton tape or similar; wrap it *slightly* around the bottom edge so it doesn't peel off as you insert the card, and run it far enough up the card that it doesn't get stuck in the slot when you pull the card back out.

If that still doesn't work, make sure you are using the same OS on both boards to reduce variables, check that the slot works at all with another card, and look through the BIOS/UEFI for any possibly conflicting settings.
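
If it still isn't detected after that, a few quick checks from Linux can at least tell you whether the switch's upstream port is enumerating at all (standard tools; nothing here is specific to that particular card):

    # Show the PCIe topology as a tree; an ASM2812-based card should show up as a
    # bridge with two downstream ports even with no working NVMe drive behind it.
    lspci -tv
    # Ask the kernel to re-enumerate the bus in case the card trained late.
    echo 1 | sudo tee /sys/bus/pci/rescan
    # Look for link-training or enumeration complaints.
    sudo dmesg | grep -iE 'pci|nvme'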