Multi-NVMe (M.2, U.2) adapters that do not require bifurcation


andrewbedia

Active Member
Jan 11, 2013
Just wanted to share some information with the wonderful folks on the STH forums. I stumbled on this by accident about nine months ago while digging around on AliExpress. There are adapters sold by two Chinese domestic-market companies that allow multiple NVMe drives on a single slot without requiring bifurcation. This has value for folks who have Ryzen or Intel LGA115x/1200 systems and want more NVMe connectivity than those platforms would otherwise offer, in lieu of moving up to HEDT (Ryzen Threadripper, Haswell-E[P]/Broadwell-E[P]/Skylake-E[P]/etc.) and using a card like the ASUS Hyper M.2 with bifurcation enabled.

These cards use PLX PCIe switch controllers. I have one in my Ryzen 3900X system on an ASUS Prime X470-PRO, hooked to 3x 2TB Intel P3500 in software RAID 0 on Windows 10. It has worked very smoothly for about nine months: I get about 5GB/s write and about 4.5GB/s read using Blackmagic Design's Disk Speed Test. Not too shabby. I couldn't be happier with the results, and the price was not too tough to deal with.

The companies are LinkReal and Ceacent. Others, such as DIE WU, are being added periodically.

Generally speaking, cards that bottleneck down to an x8 interface should be fine for most people using these. Most folks running these systems will probably have an x16 graphics card in the first slot, and plugging one of these adapters into a secondary slot will put both slots in x8 mode anyway. These cards also have some use in older X9-generation servers (and similar) that don't support bifurcation.
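For anyone who wants to sanity-check the lane situation once everything is installed, here's a minimal check (a sketch assuming Linux and the pciutils package; 10b5 is the PLX/Broadcom vendor ID):
Code:
# Confirm what width/speed the switch's upstream link actually trained at
# once a GPU and one of these adapters are sharing the CPU's x16 lanes.
sudo lspci -d 10b5: -vv | grep -E "LnkCap:|LnkSta:"
# On a Gen3 platform, "LnkSta: Speed 8GT/s, Width x8" on the upstream port means
# roughly 7.9 GB/s raw (~6.5-7 GB/s usable) shared by every drive on the card.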

Background information on the PCIe switches used in the cards below:
PEX 8724
PEX 8725
PEX 8747
PEX 8748
PEX 8749
ASM1812 (beware: PCIe Gen2 only)
PM8533

U.2 cards
x8 2-port (PLX PEX 8724)
x8 2-port (PLX PEX 8747)
x8 4-port (PLX PEX 8724, this is the card I have)
x8 4-port (PLX PEX 8724, another PCB design)
x16 4-port (PLX PEX 8747)
x16 8-port (PLX PEX 8749)
x16 8-port (PLX PEX 8748)

M.2 cards
x8 dual carrier (PLX PEX 8724)
x8 dual carrier (PLX PEX 8747)
x8 dual carrier (Asmedia ASM1812)
x16 quad carrier (PLX PEX 8747, low profile/double sided)
x8 quad carrier (PLX PEX 8725, full profile)
x16 quad carrier (PLX PEX 8747, full profile)
x16 quad carrier (PLX PEX 8748, full profile)

OCuLink cards
x8 4-port (PLX PEX 8724; per @vintagehardware, reported working, but some trouble with a Micron 9200 MAX)
x8 4-port (PLX PEX 8724)

SlimSAS (SFF-8654)
x16 4-port (Microchip PM8533, supports 8 SSDs)

These companies carry other items in their stores that are pretty handy but only tangentially relevant to the above use case (e.g. bifurcation-required carriers, non-bifurcation multi-OCuLink cards, non-bifurcation SlimSAS NVMe cards, etc.).

I'm not aware of any non-CDM companies selling cards with these PCIe switches on them, with the exception of the extremely overpriced $300+ RocketRAID quad-M.2 carriers (at that price, just move up to HEDT). There are other far-inferior solutions such as the QNAP cards as well, but there's really no reason to buy those when the above cards exist.

Happy to edit the post and add more information/other products/other companies if other members have input.

Specific to my card:
This shows up as a PCI-to-PCI Bridge in device manager on Windows 10 (PCI\VEN_10B5&DEV_8724). No special drivers were required.
I've attached a few pictures I took when testing it out on an Ubuntu test bench (Ivy Bridge), some pictures of how it shows up in Windows, and some benchmarks. Benchmarks were taken on the Ryzen 3900X system mentioned earlier.
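For reference, a rough sketch of what the same card looks like from the Linux side (the 10b5:8724 ID matches the PCI\VEN_10B5&DEV_8724 hardware ID above; nvme-cli is assumed for the last command):
Code:
lspci -d 10b5:8724   # lists the PEX 8724's upstream and downstream bridge functions
lspci -tv            # tree view: the NVMe controllers should hang off the switch's downstream ports
sudo nvme list       # from nvme-cli: confirms the namespaces behind the switch are visible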
 



andrewbedia

Active Member
Jan 11, 2013
Added a few things:
U.2 x8 4-port (PLX PEX 8724, another PCB design)
m.2 x8 dual carrier (Asmedia ASM1812)
m.2 x16 quad carrier (PLX PEX 8748, full profile)

If you have bought and used one of these products, post a reply and I can edit the original post to mark it as "verified" or something. I assume all of these work based on the chips used, but verification by members here is nice.
 
vintagehardware

Jun 22, 2015
I have the LinkReal x8 4-port OCuLink card with the PEX 8724.
Verified working on a Supermicro X10SRH-CLN4F with two drives connected: an Intel P3600 and a P4600. But the Micron 9200 MAX didn't work (it didn't show up in the VMware device list, at least) and I don't know why.
Edit: Micron 9200, not 9300.
 

andrewbedia

Active Member
Jan 11, 2013
Added OCuLink and SlimSAS. I assume the one I added is the one you have, @vintagehardware?

Also, LinkReal sells as Shenzhen Lianrui on Newegg, if folks don't want to deal with Ali.
 

GregTaylor

New Member
Apr 14, 2021
Thanks for a very informative post. Outside of this, there's not much clear information on the limits of what you can do with these cards, written so novices like me can understand it. One seller on Ali told me that you couldn't use adapters on the Ceacent M.2 cards but didn't explain why.

I'm thinking about trying one of these self-bifurcating x16 controller cards in my 2011 Dell Inspiron 580 (i5-680) along with an x1 video card.

I'd like to have a couple of internal NVMe drives along with a high-speed USB or external drive port. An x4 home for the graphics card would be really nice. It seems like all of that might be possible with a self-bifurcating switched x16 controller card. Right now that x16 slot is the best asset the machine has, and I'd like to maximize its usefulness. I'd consider faster backups and home file-server potential useful. Upgrading from SATA II and USB 2.0 (or just getting better removable storage) are high priorities.

Are there options besides these cards that could be used?

NVMe device speeds will likely be throttled by PCIe 2.0 (and halved again in the x1 slot), but they'd probably still be 10x faster than the SATA II SSD I have now. I think the x1 slot and other devices on the Inspiron run through the H57 chipset, so perhaps one of the LinkReal/Ceacent controller cards could use all 16 lanes to directly access the 16GB of RAM.

I'm guessing that the on-board graphics will be disabled by any card in the x16 slot and the x1 slot is probably the only video option.

What kinds of devices can be attached to these cards? Do they all need to be U.2/M.2 NVMe or SSD drives? It seems like that is the case, especially for the cheaper Ceacent cards and the M.2 cards. You can get lots of cables/adapters for those U.2 ports and attach just about anything to them. It seems like the switches would need to be pretty smart to deal with traffic from other types of devices (USB, graphics cards, ...), but maybe not; I don't understand how they work or their limitations. Any cables/adapters that might work with these cards are likely to be expensive.

Yeah, I know that putting a switched controller in the lone x16 slot along with expensive cables/adapters means spending ten times what the computer is worth... but there's lots of old software that would be difficult or impossible to reinstall on a new machine, and I'm not ready to part with it.
 

andrewbedia

Active Member
Jan 11, 2013
Honestly, I think your money is much better spent getting a SAS-2 controller like an LSI 9211-8i, Dell H310, or IBM H1110. You should be able to get a card for $20 and maybe another $10 for cabling to get 6Gbps speeds. I think NVMe would generally be a waste (even ignoring the money factor), since that CPU isn't up to the task of demanding more than what a SATA drive can deliver. I'd put in something like an i7-860, get the SAS card, and be under $60.

Directly answering your question: in theory, anything PCIe should work with these; they are just cards with PCIe switches on them. In terms of using adapters to go from U.2 to M.2 and such, there should be no issue. The only time the integrated graphics outputs should ever shut off is if a GPU is installed in one of the slots. Speaking generically, using the x16 slot for storage should be no issue.

Unless it's riddled with some obnoxious DRM, I would just migrate your drives to a newer machine. Windows Vista and later generally migrate well between machine types, and you can always take a backup before moving the drive to something newer.

So there are three perspectives. Hopefully that's helpful.
 

GregTaylor

New Member
Apr 14, 2021
Thanks. I'd thought about upgrading SATA but hadn't realized the controllers you mentioned are so cheap. I'll need to buy at least two drives (one internal and the other external), and I had wanted to invest in drive tech with a long-lived future. NVMe devices could find a future home in a nice new workstation that I've been contemplating, while the SATA SSDs would feel like ugly stepchildren there. You pay less than 2x more for NVMe, which seems to have a longer future, so it just seemed like that was the way to go.

Slow USB 2.0 backups and the inability to run Zoom effectively for my job were the primary motivations for reimagining the Inspiron. I bought a $25 i7-870 but haven't installed it yet. If I do, I'll gain 2 cores and 4 threads but lose the i5's integrated graphics, its AES hardware encryption instructions, and some clock speed. Installing the i7 will require an x1 video card running in a roughly 500 MB/s bidirectional PCIe (1.0) lane. If Andrew is right about an x16 controller card not shutting off integrated graphics, then I could keep the i5 without the x1 video card I recently purchased; I'll try it when I decide on a card. The i5-680 is the fastest LGA1156 processor with integrated graphics; none of the i7s have it.

I was thinking that since the CPU and GPU are on physically separate dies in the i5-680, the easiest and fastest way to avoid memory conflicts with the x16 slot would be to just shut off the integrated graphics when another card is present. Otherwise you have three separate entities accessing RAM outside the chipset, which seems more complicated to manage and would be a lot slower. Later processors actually integrated the CPU and GPU on one die and probably fixed this problem. I don't know if installing a controller card in my machine will shut down the integrated graphics, but I will see if Andrew is right on this. If he is, I might have wasted $75 on a processor and an x1 video card.

Even ancient video cards are expensive. After an extensive search, I found a 2014-era 1GB DDR3 Zotac GT 730 x1 3-port card for under $50. The ASUS 2GB GDDR5 GT 710 x1 4-HDMI-port card debuted at $60 last summer but costs two or three times that today. The higher-performing x16 variants of the same cards can be found at half the price or less. There must be lots of folks who don't want to waste an x16 slot on video when they can run four monitors off the x1 slot.

Over my long history with personal computers, I've tried to purchase the fastest I/O and the most RAM I could afford; they were always the bottleneck. I saved money for that by getting processors that were 20% slower and a third of the price. For the most part that strategy worked: I'm still using a 10-year-old computer. Going to NVMe seemed like a no-brainer.

So Andrew's concept of a "processor-bound" computer is something I need to get my mind around. He's probably right. If so, getting the CPU to offload as much work as possible to the GPU(s) and to any intelligence built into the controller cards seems critical. Right now, my GPUs do next to nothing when my CPU is stressed.

Right now, I badly need faster backups so they can be done automatically and without hours-long waits. The SAS/SATA III solution would work and is definitely cheaper in the short run. But my gut keeps telling me to get the fastest I/O I can afford, and the vision of backing up internally between M.2 drives and externally on the 20Gbps superhighway is really enticing. I thought the CPU could offload some of the backup work to the self-bifurcating cards and I'd get to go fast.

All the cool kids drive on the 20Gbps minimum speed limit highway and they get all the dates. I'm on the dirt road with my Yugo. Andrew shows the way to pavement. I still want on the superhighway.

And yeah, I'm worried a bit about the Digital Rights Management issues that may arise when I migrate, especially with the Microsoft software I use for work that now requires subscriptions. I have software running on Windows 10, purchased 10-20 years ago, that I rarely use but consider critical. I spent a lot of time researching family history 20 years ago and recording the information in a stand-alone program unattached to the cloud. I've used it 5-6 times over the past 20 years, but in those rare instances, I want it to work. It wouldn't run well when migrated from Win 3.1(?) to Windows 7 but works great under Windows 10. There are at least a dozen other programs with similar issues, and I don't know if I could reinstall all of them on a fresh version of Windows on a new machine or if there is an easy way to find out. So, I do have a bit of migration hesitancy.

I haven't gotten my mind around renting software rather than owning it either.

Thanks again for your thoughts.
 

digity

Member
Jun 3, 2017
So far I've been using a StarTech.com adapter and an off-brand single-port U.2 to PCIe 3.0 x4 adapter (the U.2 drive mounts on the PCIe card itself). I notice a drive will sometimes be detected and mounted initially on a consumer-grade motherboard, but won't be detected or mounted ever again across reboots. The adapter is always in a 3.0 x16 slot. However, a drive is detected and mounts consistently on server-grade motherboards, where the adapter is always in a 2.0 or 3.0 x8 slot. I imagine my U.2 adapter cards don't have a PLX switch and don't need bifurcation, since they are single-port.

Will the PLX-based cards mentioned in the OP exhibit the same behavior (i.e., work in server mobos but not consumer mobos)? (A quick diagnostic sketch follows the motherboard lists below.)


Consumer mobos tried (worked once or never):
  1. ASRock H370M-ITX/ac LGA1151 Mini ITX
  2. ASUS Z97-A LGA1150 ATX
  3. ASUS Z9PE-D8 WS Dual LGA2011-0 E-ATX
  4. Gigabyte GA-H97M-D3H LGA1150 Micro ATX
  5. Gigabyte GA-X79-UP4 LGA2011-0 ATX
Server mobos tried (works consistently):
  1. ASUS Z9PR-D12 Dual LGA2011-0 E-ATX
  2. Supermicro X9SCM LGA1155 Micro ATX
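A hedged troubleshooting sketch for the disappearing-drive case above, assuming a Linux environment on the consumer board (the bus address is a placeholder to fill in from the first command):
Code:
sudo lspci -nn | grep -i "non-volatile"          # does the U.2 SSD enumerate on the bus at all?
sudo dmesg | grep -iE "nvme|AER|link"            # look for link-training or PCIe error messages
sudo lspci -vv -s <ssd_bus_address> | grep -E "LnkCap:|LnkSta:"   # did the link train, and at what width/speed?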
 

willis936

New Member
Oct 26, 2021
x16 quad carrier (PLX PEX 8748, full profile)
This is a PLX PEX 8724 card, and it doesn't have enough lanes to support full x16 -> x4/x4/x4/x4 switching. It's also quite cheap at $130. If it's too good to be true, then it is.

The 8724 has 24 lanes. I don't know if the ANM24PE16 is configured as x16 -> x2/x2/x2/x2 or x8 -> x4/x4/x4/x4. I think it is the former, because the photos show traces to all 16 lanes on the PCIe connector.
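If anyone with the card in hand wants to settle this, one way (a sketch, Linux assumed) is to dump the capability versus trained width of every port on the switch:
Code:
sudo lspci -d 10b5: -vv | grep -E "^[0-9a-f]{2}:|LnkCap:|LnkSta:"
# The upstream port's LnkCap shows whether the host side is wired for x8 or x16;
# each downstream port's LnkCap shows whether the M.2 sockets are given x2 or x4.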
 

UhClem

just another Bozo on the bus
Jun 26, 2012
Maybe this is the correctly described product link: [Link]
(the SPECIFICATIONS tab on that product page says PLX 8748, and ANM24PE16 is used consistently [vs. ANM24PE08 on the OP's link])

As for the "too good to be true", these Ceacent cards might be an exception. I bought an ANU28PE16 [Link] and it's A-OK [full spec performance].
 

TonyP

New Member
Jun 14, 2017
Can you report on what performance you see from the ANU28PE16 card, e.g. with 4 NVMe drives installed?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
Can you report on what performance you see from the ANU28PE16 card, e.g. with 4 NVMe drives installed?
I thought I did ... "[full spec performance]" ... :) i.e., PCIe Gen3 x16 (14 GB/s) ...

Read:
Code:
~/bin [ 1022 ] # nvmt -t 3 4 5 6
nvme3 = 3286.2 Md/sec
nvme4 = 3275.6 Md/sec
nvme5 = 3246.1 Md/sec
nvme6 = 3292.6 Md/sec
================
Total = 13100.5 Md/sec

~/bin [ 1023 ] # nvmt -t 3 4 5 6 7
nvme3 = 2826.6 Md/sec
nvme4 = 2842.3 Md/sec
nvme5 = 2849.1 Md/sec
nvme6 = 2850.2 Md/sec
nvme7 = 2847.8 Md/sec
================
Total = 14216.0 Md/sec
Write:
Code:
~/bin [ 1024 ] # nvmt -W -t 3 4 5 6
nvme3 = 3189.8 Md/sec
nvme4 = 3265.2 Md/sec
nvme5 = 3257.4 Md/sec
nvme6 = 3205.3 Md/sec
================
Total = 12917.7 Md/sec

~/bin [ 1025 ] # nvmt -W -t 3 4 5 6 7
nvme3 = 2763.7 Md/sec
nvme4 = 2767.7 Md/sec
nvme5 = 2730.2 Md/sec
nvme6 = 2764.8 Md/sec
nvme7 = 2758.3 Md/sec
================
Total = 13784.7 Md/sec
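nvmt looks like a custom wrapper, so for anyone wanting to reproduce a comparable concurrent sequential-read test with a stock tool, a rough fio equivalent might be the following (a sketch; the nvme3n1-nvme6n1 names are taken from the output above, and the write variant of this destroys data on those drives):
Code:
# One fio job per drive, all running concurrently: 30 s of 1 MiB sequential reads.
sudo fio --ioengine=libaio --direct=1 --rw=read --bs=1M --iodepth=32 \
    --runtime=30 --time_based \
    --name=d3 --filename=/dev/nvme3n1 \
    --name=d4 --filename=/dev/nvme4n1 \
    --name=d5 --filename=/dev/nvme5n1 \
    --name=d6 --filename=/dev/nvme6n1
# fio prints per-drive bandwidth; summing them gives the card total, like nvmt's Total line.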
 

TonyP

New Member
Jun 14, 2017
I don't know this nvmt command... is it doing concurrent transfers?
And why go to 5 drives - that is clearly not all on the same card/PLX?

So, if you create a ZFS stripe (or any other type of stripe), how fast can you transfer to 4 drives? On a system with just one of these cards, and thus 4 NVMe drives...
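A hedged sketch of the striped-pool test being asked about (the pool/device names are examples drawn from the posts above, and creating the pool wipes those drives):
Code:
# Four bare top-level vdevs = a plain stripe across all four SSDs on the card.
sudo zpool create -o ashift=12 scratch /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
sudo zfs set recordsize=1M scratch
# Then point fio (as in the sketch above) at a file on the pool instead of the
# raw devices, e.g. --filename=/scratch/bench.dat --size=64G --rw=write.
sudo zpool destroy scratch   # clean up when done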
 

UhClem

just another Bozo on the bus
Jun 26, 2012
435
247
43
NH, USA
Of course it's concurrent.
The card is an HBA using a PEX 8748, configured as 8x x4 NVMe/PCIe/SFF-8643.

As stated/documented: full spec performance. The card will NOT be the bottleneck.
 

Dango

New Member
May 15, 2021
This is a PLX8724 card and it doesn't have enough lanes to support full x16 -> x4x4x4x4 switching. It's also quite cheap at $130. If it's too good to be true, then it is.

The 8724 has 24 lanes. Idk if the ANM24PE16 is configured to be x16 -> x2x2x2x2 or x8 -> x4x4x4x4. I think it is the former because the photos show traces to all 16 lanes on the PCIe connector.
Just got the card today and tested it with 4 OEM Samsung drives. It maxed out what the drives can do.
In the iDRAC inventory, it shows the chip has 48 lanes, so the quad x4 ports all get full bandwidth. I guess it would work in x8/x4/x1 slots as well, with reduced total bandwidth of course.

It works and it is cheap. Totally worth the money.