Multi-NVMe (M.2, U.2) adapters that do not require bifurcation


ca3y6

Well-Known Member
Apr 3, 2021
You can use a parcel forwarding service like forward2me. The backplanes are fairly cheap, so importing them won't cost you much in taxes.
 
  • Like
Reactions: tubeit


nexox

Well-Known Member
May 3, 2023
Will have to consider... at these prices I assume there are more backplane options
I've looked at eBay backplane options for quite a while and so far haven't come up with anything better for U.2 than the Intel; everything else either uses proprietary connectors, fits a proprietary chassis, or is some combination of rare and expensive. I'm tempted to grab another kit from the UK seller since they're in stock again, even though I haven't installed the last one I bought yet.
 
  • Like
Reactions: tubeit

TRACKER

Active Member
Jan 14, 2019
I saw that one but it's MUCH more expensive than what @ca3y6 proposes. Like 3x the cost.

And make sure it's the NVMe one. The one I saw a couple of weeks back was SATA/SAS. The NVMe one was out of stock IIRC.
It is NVMe (even though only SAS/SATA is mentioned) because I bought one and it works flawlessly with my P4610 1.6TB U.2 drives (via OCuLink)
 
  • Like
Reactions: luckylinux

luckylinux

Well-Known Member
Mar 18, 2012
Thanks, got one more option now.
Although with shipping and import duties (thanks, Brexit) it doesn't cost much less than this chassis with the backplane in it: Intel R2000 19" server chassis with 8-bay 12G SFF backplane & rails // Rr2000 | eBay
Will have to consider... at these prices I assume there are more backplane options (for example the backplane I have in my G292-Z20)
Make sure you plan your way through: it may use "standard" components in terms of, e.g., the PSU and risers, but do your own research before pulling the trigger.

At best you could slide in a Supermicro ~400-600W PSU with cables in that format (1U / redundant), if it fits.

At worst you need to buy a proprietary PDU (power distribution unit) and 1-2 PSUs with "golden finger" type connectors (they plug into the PDU, which handles the redundancy).
 

luckylinux

Well-Known Member
Mar 18, 2012
It is NVMe (even though only SAS/SATA is mentioned) because I bought one and it works flawlessly with my P4610 1.6TB U.2 drives (via OCuLink)
Uhm, OK, bummer. I passed on it and got the raw backplane from the US... 20 USD a piece plus shipping to Europe. Not too bad if you order 4 pcs. :D
 

tubeit

New Member
Aug 1, 2021
Make sure you plan your way through: it may use "standard" components in terms of, e.g., the PSU and risers, but do your own research before pulling the trigger.

At best you could slide in a Supermicro ~400-600W PSU with cables in that format (1U / redundant), if it fits.

At worst you need to buy a proprietary PDU (power distribution unit) and 1-2 PSUs with "golden finger" type connectors (they plug into the PDU, which handles the redundancy).
Definitely; I'm not buying more stuff soon unless there's a clear plan. I'm still considering my options and whether it's worth doing anything at all.

I'd love to find a cheap server that can house at least 10-12 NVMe/SAS/SATA drives, has a decently recent CPU (like Zen 2-3) with support for 2 or 3 PCIe cards, and idles at ~150W. But apart from some 1st/2nd gen Xeon Scalable servers, there's not much interesting on the low end from what I can see.

Story time

I thought the G292-Z20 could be what I needed, but honestly the idle consumption is too high for a machine that isn't doing much. Also, only the mezzanine cards support PCIe bifurcation, so my M.2 drives sit in the stream of hot air coming off the CPU, which is part of the reason I'm migrating to U.2/SATA in the front backplane.
I've 3D-printed an intake and an extractor so all the central fans can run slower, and this thing still screams quite a lot (the CPU also runs hot: 55-60°C at 90-100W idle).
Overall I'm not impressed with this chassis. I think it's good for people who load it with GPUs and don't care about power consumption, but it's definitely not my endgame.
 

kapone

Well-Known Member
May 23, 2015
I'd love to find a cheap server that can house at least 10-12 NVMe/SAS/SATA drives, has a decently recent CPU (like Zen 2-3) with support for 2 or 3 PCIe cards, and idles at ~150W. But apart from some 1st/2nd gen Xeon Scalable servers, there's not much interesting on the low end from what I can see.
- cheap...
- at least 10-12 NVMe/SAS/SATA
- recent CPU (like Zen 2-3)
- 2 or 3 PCIe cards

Er... that's most likely not going to happen. :) I think you may have to "settle" for an LGA3647-based system, or even older. I wouldn't worry about the idle power consumption of any system from the last 10-12 years; they're pretty similar. In fact the newer systems consume more power at idle... yes, they're more powerful, but unless you're utilizing them fully, it's just wasted compute.
 
  • Haha
Reactions: luckylinux

tubeit

New Member
Aug 1, 2021
- cheap...
- at least 10-12 NVMe/SAS/SATA
- recent CPU (like Zen 2-3)
- 2 or 3 PCIe cards

Er... that's most likely not going to happen. :) I think you may have to "settle" for an LGA3647-based system, or even older. I wouldn't worry about the idle power consumption of any system from the last 10-12 years; they're pretty similar. In fact the newer systems consume more power at idle... yes, they're more powerful, but unless you're utilizing them fully, it's just wasted compute.
I don't want to spam the thread with my "problems", but to me it seems like the G292-Z20 ticks pretty much all the boxes. The main problem is the power consumption, which for some reason is ~100W higher than other similar systems.
I agree that I don't need Zen 4 or more performance, but I think Zen 2 is the sweet spot for a homelab (AFAIK it idles lower than Naples and the generations after it).
 

schlangz

New Member
Sep 3, 2025
Hello my friends, I'm new to bifurcation, but I want to add more NVMe drives to my PC.

What I want to do is install all my Steam games at the same time, because I'm tired of uninstalling games in order to install and play other ones.
I don't really care if the drives have full performance or not, because all I do is play games; basically only one drive will be active at a time anyway, and I never copy files between them.

I have a crappy mainboard (MPG X570 GAMING EDGE WIFI) which does not support bifurcation.
From what I understand, by default it supports x4 in the second x16 slot, which is connected to the chipset.

I'm from Germany and I was wondering if I could use this card?

It says it requires x8, and the PCIe Lanes Configuration in my BIOS offers an x8+x8 mode.
My GPU is a 3070 Ti, which is an x8 card from what I understood.

Thank you.
 

beatle

Member
Mar 23, 2017
If your BIOS supports x8 + x8, that card should work.

If you're not already, consider pooling your drives' capacity together with Drivepool or Storage Spaces. This will let you get the most mileage out of the space, since you won't have to work out which drive a game fits on as long as your overall capacity is big enough. I pool 4 smaller drives on my desktop.
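A toy sketch of the concept (the drive letters are hypothetical, and this is just the idea; Drivepool/Storage Spaces do the placement transparently at the filesystem level):

Code:
# Toy illustration of why pooling helps: the pool places each new game on
# whichever member drive has room, so you only track total free space.
import shutil

POOL_MEMBERS = ["D:\\", "E:\\", "F:\\"]  # hypothetical pool member drives

def pick_drive(game_size_bytes: int) -> str | None:
    """Return the emptiest member drive that can fit the game, else None."""
    by_free = sorted(POOL_MEMBERS,
                     key=lambda d: shutil.disk_usage(d).free,
                     reverse=True)
    for drive in by_free:
        if shutil.disk_usage(drive).free >= game_size_bytes:
            return drive
    return None  # no single member fits it, even if the pool total would

print(pick_drive(120 * 10**9))  # where would a ~120 GB install land?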
 

Deleted member 24947

Guest
Configuration in my BIOS offers an x8+x8 mode
For what it's worth, that x8+x8 is bifurcation.

Are you implying that you'd put the M.2 card in the x16 PCIe slot? How would you connect the GPU? The M.2 card should work in the x4 slot, which leaves your GPU in the x16 slot.

There are adapters like this one:
If your motherboard supports x8/x4/x4 then you can add two M.2 drives. However, it may only be practical if you use a vertical GPU mount with a PCIe riser cable.
 

beatle

Member
Mar 23, 2017
I'm thinking his motherboard allows dividing the PCIe lanes equally between the two slots instead of x16 and x4. Bifurcation would be getting multiple sets of lanes in one slot: x8/x8 or x4/x4/x4/x4 in an x16 slot, for example.

The card he references has an onboard controller that then talks to the drives, which is why it works without bifurcation. The card itself is limited to x8 overall, so even though you have four NVMe drives, you could hammer them all and only get x8 of bandwidth shared between them. Since it has a controller, it's also a lot more expensive than a card that requires bifurcation. Seeing the price now makes me want to say, "Just spend the money on a different motherboard that supports bifurcation and/or more NVMe slots natively."
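Back-of-envelope numbers for that sharing, assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane (real-world figures will be a bit lower):

Code:
# Rough math for a PCIe 3.0 x8 switch card holding four x4 NVMe drives.
# ~985 MB/s usable per Gen3 lane after 128b/130b encoding overhead.
GEN3_LANE_MBPS = 985

uplink_mbps = 8 * GEN3_LANE_MBPS      # card-to-host link: ~7880 MB/s
drive_link_mbps = 4 * GEN3_LANE_MBPS  # each drive's own link: ~3940 MB/s
drives = 4

# One busy drive can use its full x4 link; four busy drives split the uplink.
print(f"one drive active:  ~{drive_link_mbps} MB/s")
print(f"all four hammered: ~{uplink_mbps // drives} MB/s each")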
 

alaricljs

Active Member
Jun 16, 2023
MSI docs suck. The manual for that motherboard makes no claim of bifurcation support of any kind; x8 on the first slot is only a byproduct of installing an APU.

Bifurcation in general does not differentiate between same-slot and multi-slot configs. Typically desktop motherboards do multi-slot and servers do same-slot.
 

Deleted member 24947

Guest
I'm thinking his motherboard allows dividing the PCIe lanes equally between the two slots instead of x16 and x4.
Nah, there isn't an even split of 20 lanes. If the second slot is x4 it's always going to be separate from the primary x16 slot. If the 2nd slot shared lanes it would be described as x16/x0 or x8/x8.

MSI's specs page sucks for not having all the details, but the manual has a block diagram which shows the x4 is off the PCH, not the CPU.
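For context, a rough sketch of the standard AM4 (Zen 2/3) lane budget on X570; the exact wiring is board-specific, so treat this as an assumption to check against the block diagram:

Code:
# Typical AM4 Matisse/Vermeer CPU lane budget (24 lanes total on the CPU,
# 20 of them user-facing, matching the "20 lanes" mentioned above):
cpu_lanes = {
    "primary x16 slot (bifurcatable on boards that expose it)": 16,
    "CPU-attached M.2 slot": 4,
    "downlink to X570 chipset": 4,
}
# The second "x16" slot hangs off the chipset, behind that shared x4
# downlink, alongside SATA, USB, and any chipset M.2 slots.
print(sum(cpu_lanes.values()), "CPU lanes total")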

Seeing the price now makes me want to say
Yeah, I didn’t see the price at first. I don’t know if €179 is a good price in Deutschland, but it seems reasonable for a PCIe 3.0 switch card.

But I agree, it may not be worth the price. My thought would be to spend the money on a larger NVMe drive. €179 gets you close to the price of a 4TB drive. If he’s got 1 or 2 TB drives, going to 4TB will make a difference.

Also, there are cheap adapters that will mount one M.2 drive into that x4 slot. I see some priced as low as €5.
 

schlangz

New Member
Sep 3, 2025
Thank you for all the responses. I'm old and my hardware knowledge gets really weak for everything that came after AGP, so I'm very unfamiliar with PCI Express in general.

I'll try to answer the questions one by one:
- I thought I could keep the GPU in the upper x16 slot and put the PCIe switch card in the second x16 slot? Even though the 2nd x16 slot is only mechanically x16 and can only have x4 or x8 lanes?
- At this time I don't want to buy a new mainboard, because when I do it will be a new generation with AM5, and at this point I don't want to replace the entire mainboard and buy a new CPU and memory
- I just bought 2x 4TB SSDs which I'm going to put in the board's two NVMe slots; my two older drives I will put in the PCIe card, so the card will have two empty slots which I may or may not fill in the future

Attached is a screenshot of the PCIe Lanes configuration:

IMG_3234.jpg
 

nexox

Well-Known Member
May 3, 2023
Attached is a screenshot of the PCIe Lanes configuration:
That's all for one slot. If you have a low-profile GPU you could use that dual M.2 passive riser under it with x8/x4/x4 bifurcation, but that all applies to the main x16 slot; the other one is connected to the chipset and is electrically x4 with no option to adjust it. You can put the active adapter you linked in the x4 slot just fine, but it may not be the most cost-effective option, depending on how large your old drives are: a passive adapter for a single M.2 drive is cheap, and you could drop in another 4TB drive.
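If you ever want to confirm what a link actually negotiated (from a Linux live USB, say), the kernel exposes it in sysfs; a minimal sketch, assuming the standard sysfs paths:

Code:
# Print negotiated PCIe link width/speed for every PCI device that
# exposes the standard sysfs attributes (Linux only).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # not every device exposes link attributes
    print(f"{dev.name}: x{width} @ {speed}")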
 

schlangz

New Member
Sep 3, 2025
Urgh. So basically what you're saying is that it's better to invest the money for the PCIe adapter into another drive and just get rid of the old ones...

I don't even understand what the purpose is of having a lane configuration for a single x16 slot, nor the purpose of slapping a second x16 slot on the board if it's electrically x4 no matter what?