Multi-NVMe (m.2, u.2) adapters that do not require bifurcation

Mithril

Active Member
Sep 13, 2019
317
96
28
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on the ASMedia ASM2812.
lspci sees the card as an upstream PCIe port with two downstream ports which matches the reality.
What's interesting is that the card works fine in a consumer motherboard, but it is not detected at all with an X9CSM-F motherboard.
I tried both of its x8 slots with the same result.
The card (and the NVMe module on it) is not visible in the OS (lspci, etc.), and I do not see any changes in the BIOS either.

I wonder why that could be and if anything could be done about that.
Any suggestions?
Thank you.
When I've had issues like that (although usually in consumer boards) SOMETIMES taping the two SMBus connectors does the trick. You'll need some kapton tape or similar; wrap it *slightly* around the bottom edge to keep it from peeling off as you insert the card, and run it far enough up the card that it doesn't get stuck in the slot when pulling the card out.

If that still doesn't work, make sure you are using the same OS on both boards to reduce variables, check that the slot works at all, and look in the BIOS/UEFI for any possibly conflicting settings.
 

Andriy Gapon

New Member
Apr 10, 2017
4
2
3
When I've had issues like that (although usually in consumer boards) SOMETIMES taping the two SMBus connectors does the trick. You'll need some kapton tape or similar; wrap it *slightly* around the bottom edge to keep it from peeling off as you insert the card, and run it far enough up the card that it doesn't get stuck in the slot when pulling the card out.

If that still doesn't work, make sure you are using the same OS on both boards to reduce variables, check that the slot works at all, and look in the BIOS/UEFI for any possibly conflicting settings.
The slots definitely work, as I tested them with a different card (9207-8i). The OS is the same too.
I could not find anything that I would want to try changing in the BIOS settings.
I'll check out the SMBus trick.
Thanks!
 

Andriy Gapon

New Member
Apr 10, 2017
4
2
3
After much trial and error, what helped was setting
PCI Express Port - Gen X [Gen3]
in BIOS settings under Advanced -> Chipset Configuration -> Integrated IO Configuration.
The original/default value was Auto.

Hope that may be useful for others.
 

tiebird

New Member
Aug 20, 2022
5
0
1
Hey guys,

Just bought what I think is a Linkreal LRNV9349-8I through Amazon, to be used in combination with 3x Intel D7 P5620 U.2 6400 GB PCI Express 4.0 SSDs.

My system boots with the card and 1 of the SSDs attached; I tested this with all 3 individually. But as soon as I connect a second SSD, the system won't boot anymore. In my BIOS I set the port to PCIe Gen3 x16.

Do you guys have any idea how I can fix this? If I understood correctly, it should be possible to connect 4 devices at the same time, since that would use 24 of the card's 32 lanes. Help would be appreciated :)
 

mrpasc

Active Member
Jan 8, 2022
120
61
28
Munich, Germany
In general, those server-grade U.2 NVMe SSDs have very high power consumption. Your P5620 needs 20 W each, so did you check that your PSU can deliver 60 W on the rail you use to power the SSDs?
 

tiebird

New Member
Aug 20, 2022
5
0
1
The PSU is 850 W with 94% efficiency, powering a Threadripper 1920X with an RTX 2080 and 3 M.2 NVMe SSDs. So my guess is that power isn't the issue. I do see the BIOS loading screen, but it stops there.

I think I already found the reason:

- Card was in a PCIe Gen3 x8 slot instead of an x16
- SSD is PCIe Gen4 x4, which results in PCIe Gen3 x8

Probably faulty assumptions from my side:

- Expected the SSD to automatically fall back to a lower number of lanes
- Thought the number of lanes on the motherboard would only be a limiting "speed" factor and not how many disks you can connect

Either way, I will try 2 disks on 2 different rails and see if this helps.

Thanks for the input!
 

tiebird

New Member
Aug 20, 2022
5
0
1
Just tested the setup with 2 NVMe disks on different rails, but it still hangs on boot. Above 4G Decoding is enabled.
I read that you can connect 8 NVMe disks; not really sure how that works... unless they are all PCIe Gen3 x2, but I don't see those in shops anymore.

Planning to upgrade my motherboard in the coming months, so I will probably use the Asus Hyper V2 with bifurcation for my next build.
This should solve all my problems; I was just hoping that I could already use the drives now.
Only paid 450 euro incl. VAT for each drive, so the price was very good!
 

UhClem

Active Member
Jun 26, 2012
315
169
43
NH, USA
- Card was in a PCIe Gen3 x8 slot instead of an x16
IF your card was working properly, the ONLY effect (of x8 vs x16) would be on the maximum/total/combined transfer rate (~7GB/s vs ~14GB/s). Neither the number of drives simultaneously accessible (8), nor the max speed of any single drive (~3.5 GB/s), should change.

But, it sounds like, on your card, the PCIe switch chip itself is mis-configured. (I recall seeing a similar report for an almost-clone card to yours [same PEX8749, same layout, only the heatsink differed].)

A necessary, but not sufficient on its own, condition to support my suspicion, is:
Try your card in an x16 slot (you never ID'd your mobo, but with a 60-lane CPU, I would hope you have at least one x16 slot). If everything works OK, we're (possibly) on the right track.

Let us know ...
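Those bandwidth figures can be reproduced with a quick back-of-the-envelope calculation. The ~12% protocol overhead factor below is my own assumption to bridge raw link rate and the practical numbers quoted above; real overhead varies with payload size:

```python
# PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding.
raw_per_lane = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s per lane, raw

# Assumed ~12% loss to TLP headers / flow control (estimate, not measured).
overhead = 0.88

for lanes in (4, 8, 16):
    raw = lanes * raw_per_lane
    usable = raw * overhead
    print(f"x{lanes}: raw {raw:.1f} GB/s, usable ~{usable:.1f} GB/s")
```

This yields roughly 3.5 GB/s usable for an x4 drive, ~7 GB/s for x8 and ~14 GB/s for x16, matching the figures in the post.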
 

tiebird

New Member
Aug 20, 2022
5
0
1
Motherboard is an Asus Zenith Extreme X299. The biggest issue is that the first x16 slot is blocked by the CPU cooler. The second is in use for the graphics card. I tried putting the graphics card into the x8 slot, but because of its thickness the other x16 slot is not accessible.
Looking for a PCIe 4.0 x16 riser now that supports bifurcation. This would give me multiple options.
 

Mithril

Active Member
Sep 13, 2019
317
96
28
Have you tried taping the 2 SMBus communication pins? There can be weird issues (even on some server boards) when cards try to use the SMBus. Several of my SAS and 10/40GbE cards need those pins taped on either all or some of my motherboards. I suggest kapton or similar tape; fold it *just* under and around the edge, and leave it long up the side of the card to keep it from peeling off going in or out of the slot. I'm not sure if the card is acting as an SMBus relay/switch, but I *have* had U.2 SSDs prevent boot when using the "single x4 PCIe to U.2" adapter cards.

The SFF-8643 connector shouldn't/doesn't carry power; the cable/adapter to the U.2 drives should have a place to plug in power. Make sure all of your adapters/cables are working, and that all of the drives work one at a time.
 

tiebird

New Member
Aug 20, 2022
5
0
1
Thanks for all the help, I finally found the solution!

I needed to disable Fast Boot in the BIOS, as well as the legacy boot portion of the BIOS.
Took a while to find the correct settings, hopefully this will help somebody else.
 

Branko

Member
Mar 21, 2022
32
14
8
Most of the cards disappeared from aliexpress, did someone find new/better ones?
Personally, I'm interested in one that would take two M.2 or U.2 SSDs.
 

RolloZ170

Well-Known Member
Apr 24, 2016
2,663
678
113
55
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on Asmedia ASM2812.
Beware: there are very similar-looking cards based on the ASM1812 (rather than the ASM2812) that do require bifurcation!
[attached image: ASM1812 card]
 

cptcrunch

Member
Dec 14, 2021
19
25
13
Kentucky
What are your thoughts on running a quad M.2 PCIe x16 card in TrueNAS for iSCSI traffic to a bunch of VMware hosts? Will the card be able to handle sustained high-traffic workloads for years?

I was thinking of purchasing 4x 1 TB NVMe or 4x 2 TB NVMe and using 3 in RAID with a hot spare, but I'm weighing that against an H730 and 4x 2 TB SAS SSDs for good speed and reliability.
 

nabsltd

Active Member
Jan 26, 2022
203
119
43
What are your thoughts on running a quad M.2 PCIe x16 card in TrueNAS for iSCSI traffic to a bunch of VMware hosts?
Unless you have faster than 10Gbit Ethernet, you likely won't see a lot of advantage over a properly configured (mirrored, with SLOG and plenty of free RAM) set of spinning disks. If you have 40Gbit (or faster), you could see some gains at peak use.

But it's really, really unlikely that you are going to hammer your storage with writes 24/7 at even 10 Gbps. I don't know your use case, but in general it would take dozens of very active VMs to do this. For me, anything that manages to need that much write bandwidth only needs it for long enough to write out 10-20 GB and then goes back to essentially idle on the disk. And since I don't create 10-20 "new" GB very often, it's not a big deal.

Lastly, the folks over at TrueNAS will tell you that even NVMe drives should be mirrored rather than parity-striped (raidz) if you are serving block data (like iSCSI). Striping will also lead to a lot of write amplification, so if you really need the 4-8 GB/sec combined speed of these drives 24/7, you'd also need very high endurance drives. Even at the lowest speed (4 GB/sec), spread over 3 drives, that's over 80 drive writes per day if you do it 24/7. Even at 1/10 of that speed, it's 8 DWPD. But 1/10 of the speed is only 400 MB/sec, which a spinning disk array can easily handle.

I suspect what you really want is some sort of tiered storage, where the NVMe are the ingest for writes, which is then copied to spinning disk. Reads would populate the NVMe as cache based on your criteria. This would allow you to handle high burst writes, which is probably what you really want, while still having plenty of total storage.
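The endurance arithmetic above can be checked with a short script. The drive capacities are taken from the earlier post (1 TB or 2 TB options); the exact DWPD depends on which capacity you buy, and the thread's "over 80" and "8 DWPD" figures fall within this range:

```python
SECONDS_PER_DAY = 86_400

def dwpd(total_gb_per_s: float, num_drives: int, drive_tb: float) -> float:
    """Drive writes per day for a sustained write load spread evenly across drives."""
    per_drive_tb_per_day = total_gb_per_s / num_drives * SECONDS_PER_DAY / 1000
    return per_drive_tb_per_day / drive_tb

# 4 GB/s sustained across 3 data drives, 24/7
print(dwpd(4, 3, 1.0))    # 1 TB drives: ~115 DWPD
print(dwpd(4, 3, 2.0))    # 2 TB drives: ~58 DWPD
print(dwpd(0.4, 3, 2.0))  # at 1/10 the speed, 2 TB drives: ~5.8 DWPD
```

Either way, the conclusion holds: sustained writes at anything near the drives' combined speed would require extreme-endurance parts.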
 

peter_sch

New Member
Oct 11, 2022
3
1
3
Thanks for the wealth of interesting and useful information posted here! After reading through the thread, I purchased the least expensive PCIe x16 -> 4x M.2 NVMe card with a PLX switch I could find on Aliexpress:

Ceacent ANM24PE16

I am using it to upgrade a 2010 Mac Pro 5,1. The NVMe SSD was previously attached by way of a single passive PCIe-to-M.2 adapter, which peaked at around 1.7 GB/sec (sequential).

When I removed the drive from the old adapter and put it into the ANM24PE16, I was quite surprised to see that the transfer rate of the same SSD increased to 2 GB/sec, which is about the maximum you can expect to get out of four PCIe 2.0 lanes. I am wondering how this can be. Most people here are concerned that the switch chip would introduce additional latency, making the drive slower, but I am actually observing the opposite. Why can the same drive become faster when the data goes through the PLX-equipped adapter card as opposed to the directly connected passive adapter?

I measured it multiple times, and I know for sure that it cannot be a drive cache issue since the drive is an ultra-cheap cacheless model based on the SM2263XT chip ("Walram W2000").

At any rate, I think it is a good card. The tiny fan is extremely noisy, but since the heatsink is massive, I will simply disconnect it as there is enough airflow from the Mac Pro's slot fan. I will do more testing, especially simultaneous transfers with more drives attached to the card.
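For reference, the 2 GB/sec figure is exactly the raw ceiling of four PCIe 2.0 lanes; a minimal sketch of the arithmetic:

```python
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding (2 of every 10 bits are overhead)
gen2_per_lane = 5e9 * (8 / 10) / 8 / 1e9   # 0.5 GB/s per lane
print(4 * gen2_per_lane)                    # x4 link: 2.0 GB/s raw
```

So the observed 2 GB/sec means the drive is saturating the Gen2 x4 host link through the switch.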
 

RolloZ170

Well-Known Member
Apr 24, 2016
2,663
678
113
55
I measured it multiple times, and I know for sure that it cannot be a drive cache issue since the drive is an ultra-cheap cacheless model based on the SM2263XT chip ("Walram W2000").
Dynamic Buffer Pool: The PEX 8748 employs a dynamic buffer pool for Flow Control (FC) management