Multi-NVMe (M.2, U.2) adapters that do not require bifurcation

Mithril

Active Member
Sep 13, 2019
317
96
28
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on the ASMedia ASM2812.
lspci sees the card as an upstream PCIe port with two downstream ports, which matches reality.
What's interesting is that the card works fine in a consumer motherboard, but it is not detected at all in an X9SCM-F motherboard.
I tried both of its x8 slots with the same result.
The card (and the NVMe module on it) is not visible in the OS (lspci, etc.), and I do not see any changes in the BIOS either.

I wonder why that could be and if anything could be done about that.
Any suggestions?
Thank you.
When I've had issues like that (although usually in consumer boards), SOMETIMES taping the two SMBus pins does the trick. You'll need some Kapton tape or similar; have it wrap *slightly* around the bottom edge to keep it from peeling off as you insert the card, and have it go up the card enough so it doesn't get stuck in the slot when pulling it out.

If that still doesn't work, make sure you are using the same OS on both boards to reduce variables, check that the slot works at all, and look in the BIOS/UEFI for any possible conflicting settings.
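Either way, it's worth confirming whether the switch on the card enumerates at all, independent of the drives. A minimal check from Linux, assuming lspci is available, looks something like:

Code:
# Show the PCIe tree; a switch card should appear as one upstream
# bridge plus one downstream bridge per M.2 slot, even with no SSD fitted
lspci -tv
# Or list just the bridges
lspci -nn | grep -i bridge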
 

Andriy Gapon

New Member
Apr 10, 2017
4
1
3
When I've had issues like that (although usually in consumer boards), SOMETIMES taping the two SMBus pins does the trick. You'll need some Kapton tape or similar; have it wrap *slightly* around the bottom edge to keep it from peeling off as you insert the card, and have it go up the card enough so it doesn't get stuck in the slot when pulling it out.

If that still doesn't work, make sure you are using the same OS on both boards to reduce variables, check that the slot works at all, and look in the BIOS/UEFI for any possible conflicting settings.
The slots definitely work, as I tested them with a different card (a 9207-8i). The OS is the same too.
I could not find anything that I would want to try changing in the BIOS settings.
I'll check out the SMBus trick.
Thanks!
 

Andriy Gapon

New Member
Apr 10, 2017
4
1
3
After much trial and error, what helped was setting
PCI Express Port - Gen X [Gen3]
in the BIOS settings under Advanced -> Chipset Configuration -> Integrated IO Configuration.
The original/default value was Auto.

Hope that may be useful for others.
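For anyone landing here later: once the card is detected, the negotiated link generation and width can be double-checked from Linux. The device address below is purely an example; find yours with lspci first.

Code:
# Find the ASMedia switch's upstream port (1b21 is ASMedia's vendor ID)
lspci -d 1b21:
# Check what the slot actually negotiated (example address)
cat /sys/bus/pci/devices/0000:02:00.0/current_link_speed
cat /sys/bus/pci/devices/0000:02:00.0/current_link_width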
 

tiebird

New Member
Aug 20, 2022
5
0
1
Hey guys,

Just bought what I think is a Linkreal LRNV9349-8I through Amazon, to be used in combination with 3x Intel D7 P5620 U.2 6400 GB PCI Express 4.0 SSDs.

My system boots with the card and one of the SSDs attached; I tested this with all three individually. But as soon as I connect a second SSD, the system won't boot anymore. In my BIOS I set the port to PCIe Gen3 x16.

Do you guys have any idea how I can fix this? If I understood correctly, it should be possible to connect four devices at the same time, because this would result in 24 of the card's 32 lanes being used. Help would be appreciated :)
 

mrpasc

Member
Jan 8, 2022
88
46
18
Munich, Germany
In general, those server-grade U.2 NVMe SSDs have very high power consumption. Your P5620s need 20W each, so did you check that your PSU can deliver 60W on the rail you use to power the SSDs?
 

tiebird

New Member
Aug 20, 2022
5
0
1
The PSU is 850 W with 94% efficiency, powering a Threadripper 1920X with an RTX 2080 and 3 M.2 NVMe SSDs. So my guess is that power isn't the issue. I do see the BIOS loading screen, but it stops there.

I think I already found the reason:

- Card was in a PCIe Gen3 x8 slot instead of an x16
- SSD is PCIe Gen4 x4, which results in PCIe Gen3 x8

Probably faulty assumptions from my side:

- Expected the SSD to automatically fall back to a lower number of lanes
- Thought the number of lanes on the motherboard would only limit "speed" and not how many disks you can connect

Either way, I will try 2 disks on 2 different rails and see if this helps.

Thanks for the input!
 

tiebird

New Member
Aug 20, 2022
5
0
1
Just tested the setup with 2 NVMe disks on different rails, but it still hangs on boot. Above 4G Decoding is enabled.
I read that you can connect 8 NVMe disks; not really sure how that works... unless they are all PCIe Gen3 x2, but I don't see those in shops anymore.

Planning to upgrade my motherboard in the coming months, so I will probably use the Asus Hyper M.2 v2 with bifurcation for my next build.
This should solve all my problems; I was just hoping that I could already use the drives.
Only paid 450 euro incl. VAT for each drive, so the price was very good!
 

UhClem

Active Member
Jun 26, 2012
297
152
43
NH, USA
- Card was in a PCIe Gen3 x8 slot instead of an x16
IF your card was working properly, the ONLY effect (of x8 vs x16) would be on the maximum/total/combined transfer rate (~7GB/s vs ~14GB/s). Neither the number of drives simultaneously accessible (8), nor the max speed of any single drive (~3.5 GB/s), should change.
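Those figures are just the per-lane math; as a rough sketch (actual throughput lands a bit lower after protocol overhead):

Code:
# PCIe Gen3: 8 GT/s with 128b/130b encoding ≈ 0.985 GB/s per lane
awk 'BEGIN { lane = 8 * 128/130 / 8;
  printf "x8: %.1f GB/s raw, x16: %.1f GB/s raw\n", 8*lane, 16*lane }'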

But, it sounds like, on your card, the PCIe switch chip itself is mis-configured. (I recall seeing a similar report for an almost-clone card to yours [same PEX8749, same layout, only the heatsink differed].)

A necessary (but not, on its own, sufficient) condition to support my suspicion:
Try your card in an x16 slot (you never ID'd your mobo, but with a 60-lane CPU, I would hope you have at least one x16 slot). If everything works OK, we're (possibly) on the right track.

Let us know ...
 

tiebird

New Member
Aug 20, 2022
5
0
1
Motherboard is an Asus Zenith Extreme (X399). The biggest issue is that the first x16 slot is blocked by the CPU cooler. The 2nd is in use for the graphics card. I tried putting the graphics card into the x8 slot, but because of its thickness, the other x16 slot is not accessible.
Looking for a PCIe 4.0 x16 riser now that supports bifurcation. This would give me multiple options.
 

Mithril

Active Member
Sep 13, 2019
317
96
28
Have you tried taping the 2 SMBus communication pins? There can be weird issues (even on some server boards) when cards try to use the SMBus. Several of my SAS and 10/40Gb cards need those pins taped on either all or some of my motherboards. I suggest Kapton or similar tape; fold it *just* under and around the edge, and leave it long up the side of the card to keep it from sticking/peeling off going in or out of the slot. I'm not sure if the card is acting as an SMBus relay/switch, but I *have* had U.2 SSDs prevent boot when using the "single x4 PCIe to U.2" card adaptors.

The SFF-8643 connector style shouldn't/doesn't carry power; the cable/adaptor to the U.2 drives should have a place to plug in power. Make sure all of your adaptors/cables are working, and that all of the drives work one at a time.
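With nvme-cli and smartmontools installed, checking the drives one at a time from Linux can look something like this (the device name is just an example):

Code:
# Show every NVMe controller/namespace the kernel currently sees
sudo nvme list
# Basic health read-out for one drive (example device name)
sudo smartctl -a /dev/nvme0n1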
 

tiebird

New Member
Aug 20, 2022
5
0
1
Thanks for all the help, I finally found the solution!

Needed to disable Fast Boot in the BIOS as well as the legacy (CSM) part of the BIOS.
Took a while to find the correct settings; hopefully this will help somebody else.
 

Branko

Member
Mar 21, 2022
32
14
8
Most of the cards have disappeared from AliExpress; did someone find new/better ones?
Personally, I'm interested in one that would take two M.2 or U.2 SSDs.
 

RolloZ170

Well-Known Member
Apr 24, 2016
2,228
554
113
55
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on the ASMedia ASM2812.
Very similar-looking cards exist with either the ASM1812 or the ASM2812. The same-looking ASM1812 card requires bifurcation, beware!
[Attached image: ASM1812 card]
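A quick way to tell the two kinds apart once a card is in hand (a rough check, not a guarantee): a switch-based card enumerates as PCIe bridges even with no SSDs installed, while a plain bifurcation adapter shows nothing of its own:

Code:
# On a switch card, ASMedia (vendor ID 1b21) bridge devices show up here
lspci -d 1b21: -nn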
 

cptcrunch

New Member
Dec 14, 2021
16
22
3
Kentucky
What is the thought on running a quad M.2 PCIe x16 card in TrueNAS for iSCSI traffic to a bunch of VMware hosts? Will the cards be able to handle sustained high-traffic workloads for years?

I was thinking of purchasing 4 x 1TB NVMe or 4 x 2TB NVMe drives and using 3 in RAID with a hot spare, but I'm weighing that against an H730 and 4 x 2TB SAS SSDs for good speed and reliability.
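For what it's worth, "3 in RAID with a hot spare" on TrueNAS would typically map to something like a raidz1 vdev plus a spare; a rough sketch with hypothetical device names:

Code:
# 3-disk raidz1 pool with the 4th NVMe drive attached as a hot spare
zpool create tank raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
  spare /dev/nvme3n1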