Multi-NVMe (M.2, U.2) adapters that do not require bifurcation


Andriy Gapon

New Member
Apr 10, 2017
When I've had issues like that (although usually on consumer boards) SOMETIMES taping the two SMBus contacts does the trick. You'll need some kapton tape or similar; have it wrap *slightly* around the bottom edge to keep it from peeling off as you insert the card, and have it go up the card far enough that it doesn't get stuck in the slot when pulling the card out.

If that still doesn't work, make sure you are using the same OS on both boards to reduce variables, check that the slot works at all, and look in the BIOS/UEFI for any possibly conflicting settings.
The slots definitely work as I tested them with a different card (9207-8i). The OS is the same too.
I could not find anything that I would want to try changing in the BIOS settings.
I'll check out the SMBus trick.
Thanks!
 

Andriy Gapon

New Member
Apr 10, 2017
After much trial and error, what helped was setting
PCI Express Port - Gen X [Gen3]
in BIOS settings under Advanced -> Chipset Configuration -> Integrated IO Configuration.
The original/default value was Auto.

Hope that may be useful for others.
 

tiebird

New Member
Aug 20, 2022
Hey guys,

Just bought what I think is a Linkreal LRNV9349-8I through Amazon, to be used in combination with 3x Intel D7 P5620 U.2 6400 GB PCI Express 4.0 SSDs.

My system boots with the card and 1 of the SSDs attached; I tested this with all 3 individually. But as soon as I connect a second SSD, the system won't boot anymore. In my BIOS I set the port to PCIe Gen3 x16.

Do you guys have any idea how I can fix this? If I understood correctly, it should be possible to connect 4 devices at the same time, because this would result in 24 of 32 lanes being used. Help would be appreciated :)
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
In general, those server-grade U.2 NVMe SSDs have very high power consumption. Your P5620 needs 20W each, so did you check that your PSU can deliver 60W on the rail you use to power the SSDs?
 

tiebird

New Member
Aug 20, 2022
The PSU is 850 W with 94% efficiency; the system is a Threadripper 1920X with an RTX 2080 and 3 M.2 NVMe SSDs. So my guess is that power isn't the issue. I do see the BIOS loading screen, but it stops there.

I think I already found the reason:

- Card was on a PCIe Gen3 x8 port instead of an x16
- SSD is PCIe Gen4 x4, which results in PCIe Gen3 x8

Probably faulty assumptions from my side:

- Expected the SSD to automatically fall back to a lower number of lanes
- Thought the number of lanes on the motherboard would only be a limiting "speed" factor and not limit how many disks you can connect

Either way, I will try 2 disks on 2 different rails and see if this helps.

Thanks for the input!
 

tiebird

New Member
Aug 20, 2022
Just tested the setup with 2 NVMe disks on different rails, but it still hangs on boot. Above 4G decoding is enabled.
Read that you can connect 8 NVMe disks; not really sure how that works... unless they are all PCIe Gen3 x2, but I don't see these in shops anymore.

Planning to upgrade my motherboard in the coming months, so I will probably use the Asus Hyper M.2 V2 with bifurcation for my next build.
This should solve all my problems; I was just hoping that I could already use the drives.
Only paid 450 euro (VAT incl.) per drive, so the price was very good!
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
- Card was on a PCIe Gen3 x8 port instead of an x16
IF your card was working properly, the ONLY effect (of x8 vs x16) would be on the maximum/total/combined transfer rate (~7GB/s vs ~14GB/s). Neither the number of drives simultaneously accessible (8), nor the max speed of any single drive (~3.5 GB/s), should change.
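For context, a rough back-of-the-envelope of where those ceilings come from (PCIe 3.0 carries ~985 MB/s of raw payload per lane after 128b/130b encoding; real-world throughput lands a bit lower once packet overhead is taken out):
Code:
# PCIe 3.0: 8 GT/s per lane * 128/130 encoding / 8 bits = ~985 MB/s raw per lane
echo "8 * 985" | bc     # x8 link:  ~7880 MB/s raw, roughly 7 GB/s in practice
echo "16 * 985" | bc    # x16 link: ~15760 MB/s raw, roughly 14 GB/s in practice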

But it sounds like, on your card, the PCIe switch chip itself is misconfigured. (I recall seeing a similar report for an almost-clone of your card [same PEX8749, same layout, only the heatsink differed].)

A necessary, but not sufficient on its own, condition to support my suspicion, is:
Try your card in an x16 slot (you never ID'd your mobo, but with a 60-lane CPU, I would hope you have at least one x16 slot). If everything works OK, we're (possibly) on the right track.
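Once it's in the x16 slot, you can confirm what the switch actually negotiated from Linux (a rough sketch; the grep patterns and bus addresses will vary by system):
Code:
# LnkCap = what each port is capable of, LnkSta = what it actually trained to
sudo lspci -vv | grep -E 'PLX|Non-Volatile|LnkCap:|LnkSta:'
# The switch's upstream LnkSta should read something like "Speed 8GT/s, Width x16"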

Let us know ...
 

tiebird

New Member
Aug 20, 2022
Motherboard is an Asus Zenith Extreme (X399). The biggest issue is that the first x16 slot is blocked by the CPU cooler. The 2nd is in use for the graphics card. I tried putting the graphics card into the x8 slot, but because of its thickness, the other x16 slot is not accessible.
Looking for a PCIe 4.0 x16 riser now that supports bifurcation. This would give me multiple options.
 

Mithril

Active Member
Sep 13, 2019
Have you tried taping the 2 SMBus communication pins? There can be weird issues (even on some server boards) when cards try to use the SMBus. Several of my SAS and 10/40Gb cards need those pins taped on either all or some of my motherboards. I suggest kapton or similar tape; fold it *just* under and around the edge, and leave it long up the side of the card to keep it from sticking or peeling off going in or out of the slot. I'm not sure if the card is acting as an SMBus relay/switch, but I *have* had U.2 SSDs prevent boot when using the "single 4x PCIe to U.2" card adaptors.

The SFF-8643 connector shouldn't/doesn't carry power; the cable/adaptor to the U.2 drives should have a place to plug in power. Make sure all of your adaptors/cables are working, and that all of the drives work one at a time.
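While checking the drives one at a time, it can help to confirm each one actually enumerates before worrying about anything else (a quick sketch, assuming the nvme-cli package is installed):
Code:
sudo lspci | grep -i 'non-volatile'   # the drive should appear as an NVMe controller on the bus
sudo nvme list                        # from nvme-cli: lists every NVMe namespace the kernel sees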
 

tiebird

New Member
Aug 20, 2022
Thanks for all the help, I finally found the solution!

Needed to disable Fast Boot in the BIOS, as well as the legacy boot part of the BIOS.
Took a while to find the correct settings; hopefully this will help somebody else.
 

Branko

Member
Mar 21, 2022
Most of the cards have disappeared from AliExpress; did someone find new/better ones?
Personally, I'm interested in one that would take two M.2 or U.2 SSDs.
 

RolloZ170

Well-Known Member
Apr 24, 2016
Hope that this topic is not dead yet.
I recently bought a Chinese card that supports two NVMe M.2 modules (link). Apparently, it is based on Asmedia ASM2812.
Beware: there are very similar-looking cards based on the ASM1812 (rather than the ASM2812) that do require bifurcation.
 

cptcrunch

Member
Dec 14, 2021
Kentucky
What are your thoughts on running a quad M.2 PCIe x16 card in TrueNAS for iSCSI traffic to a bunch of VMware hosts? Will the cards be able to handle sustained high-traffic workloads for years?

I was thinking of purchasing 4 x 1TB NVMe or 4 x 2TB NVMe and using 3 in RAID with a hot spare, but I'm weighing that against an H730 and 4 x 2TB SAS SSDs for good speed and reliability.
 

nabsltd

Well-Known Member
Jan 26, 2022
What are your thoughts on running a quad M.2 PCIe x16 card in TrueNAS for iSCSI traffic to a bunch of VMware hosts?
Unless you have faster than 10Gbit Ethernet, you likely won't see a lot of advantage over a properly configured (mirrored, with SLOG and plenty of free RAM) set of spinning disks. If you have 40Gbit (or faster), you could see some gains at peak use.

But it's really, really unlikely that you are going to hammer your storage with writes 24/7 at even 10Gbps. I don't know your use case, but in general it would take dozens of very active VMs to do this. For me, anything that manages to need that much write bandwidth only needs it for long enough to write out 10-20 GB and then goes back to essentially idle on the disk. And since I don't create 10-20 "new" GB very often, it's not a big deal.

Last, the folks over at TrueNAS will tell you that even NVMe drives should be mirrored instead of parity-striped (raidz) if you are serving block data (like iSCSI). Striping will also lead to a lot of write amplification, so if you really need the 4-8 GB/sec combined speed of these drives 24/7, you'd also need very high-endurance drives. Even at the lowest speed (4 GB/sec), spread over 3 drives, that's over 80 drive writes per day if you do it 24/7. Even at 1/10 of that speed, it's 8 DWPD. But 1/10 of the speed is only 400 MB/sec, which a spinning disk array can easily handle.
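As a rough sketch of that arithmetic (assuming the 4 x 1TB option, i.e. about 4 TB of raw flash absorbing the writes; the exact DWPD shifts with whichever capacity you actually buy):
Code:
# 4 GB/s of sustained writes, 24 h/day, into ~4 TB (4000 GB) of raw flash
echo "4 * 86400 / 4000" | bc              # ~86 drive writes per day
echo "scale=1; 0.4 * 86400 / 4000" | bc   # ~8.6 DWPD at one tenth of that rate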

I suspect what you really want is some sort of tiered storage, where the NVMe are the ingest for writes, which is then copied to spinning disk. Reads would populate the NVMe as cache based on your criteria. This would allow you to handle high burst writes, which is probably what you really want, while still having plenty of total storage.
 

peter_sch

New Member
Oct 11, 2022
Thanks for the wealth of interesting and useful information posted here! After reading through the thread, I purchased the least expensive PCIe x16 -> 4x M.2 NVMe card with a PLX switch I could find on Aliexpress:

Ceacent ANM24PE16

I am using it to upgrade a 2010 Mac Pro 5,1. The NVMe SSD was previously attached by way of a single passive PCIe-to-M.2 adapter, which peaked at around 1.7 GB/sec (sequential).

When I removed the drive from the old adapter and put it into the ANM24PE16, I was quite surprised to see that the transfer rate of the same SSD increased to 2 GB/sec, which is about the maximum you would expect to get out of four PCIe 2.0 lanes. I am wondering how this can be? Most people here are concerned that the switch chip would introduce additional latency, making the drive slower, but I am actually observing the opposite. Why does the same drive become faster when the data goes through the PLX-equipped adapter card rather than the directly connected passive adapter?

I measured it multiple times, and I know for sure that it cannot be a drive cache issue since the drive is an ultra-cheap cacheless model based on the SM2263XT chip ("Walram W2000").

At any rate, I think it is a good card. The tiny fan is extremely noisy, but since the heatsink is massive, I will simply disconnect it as there is enough airflow from the Mac Pro's slot fan. I will do more testing, especially simultaneous transfers with more drives attached to the card.
 

RolloZ170

Well-Known Member
Apr 24, 2016
I measured it multiple times, and I know for sure that it cannot be a drive cache issue since the drive is an ultra-cheap cacheless model based on the SM2263XT chip ("Walram W2000").
Dynamic Buffer Pool: The PEX 8748 employs a dynamic buffer pool for Flow Control (FC) management.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Thanks for the wealth of interesting and useful information posted here! After reading through the thread, I purchased the least expensive PCIe x16 -> 4x M.2 NVMe card with a PLX switch I could find on Aliexpress:

Ceacent ANM24PE16

I am using it to upgrade a 2010 Mac Pro 5,1. The NVMe SSD was previously attached by way of a single passive PCIe-to-M.2 adapter, which peaked at around 1.7 GB/sec (sequential).

When I removed the drive from the old adapter and put it into the ANM24PE16, I was quite surprised to see that the transfer rate of the same SSD increased to 2 GB/sec, which is about the maximum you would expect to get out of four PCIe 2.0 lanes. I am wondering how this can be? ... I measured it multiple times ...
That card is an excellent value, delivering full performance as a no-frills NVMe HBA. I have (3 of) its non-identical twin, the ANU28PE16 (8x U.2).

As for your performance result, and your analysis ...
[a critique (not a criticism):] When you get a measured throughput result (which you are confident of) that matches the theoretical maximum, something definitely is wrong--and you should re-examine the assumptions you've made.

While your x16 slot is definitely Gen2, the card uses a Gen3 switch chip (PEX8748), which does negotiate a Gen2 upstream link (to the host), but it negotiates Gen3 downstream link(s) to the M.2 NVMe target(s).

A few months ago, I had a chance to test this out with an old HP 6300 Pro:
Code:
sda     111.8G  KINGSTON SA400S37120G
nvme0n1 477G    ADATA SX8200PNP
nvme1n1 953.9G  ADATA SX8200PNP
nvme2n1 931.5G  WDS100T3X0C-00SJG0

nvme0 = 3367.4 MB/sec

(2x - tested concurrently):
nvme0 = 3389.1 MB/sec
nvme1 = 3289.0 MB/sec

(3x - tested concurrently):
nvme0 = 2422.0 MB/sec
nvme1 = 2286.6 MB/sec
nvme2 = 2412.4 MB/sec
Note the 3x totals ~7100 MB/sec which is the real-world max for Gen2 x16.
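A quick sanity check of those ceilings (PCIe 2.0 uses 8b/10b encoding, so each 5 GT/s lane carries ~500 MB/s of raw payload; ~7100 MB/sec is what is left of a Gen2 x16 link after packet overhead):
Code:
# PCIe 2.0: 5 GT/s per lane * 8/10 encoding / 8 bits = ~500 MB/s raw per lane
echo "16 * 500" | bc    # Gen2 x16 upstream: ~8000 MB/s raw, ~7100 MB/s observed
echo "4 * 500" | bc     # Gen2 x4 (a drive on a passive adapter in a Gen2 slot): ~2000 MB/s raw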

The lspci -vv output (mucho elided):
Code:
...
01:00.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
 ...            LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L1, Exit Latency L1 <4us
 ... (====>>)   LnkSta: Speed 5GT/s (downgraded), Width x16 (ok)
 ...
02:09.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
 ...            LnkCap: Port #9, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
 ...            LnkSta: Speed 8GT/s (ok), Width x4 (ok)
 ...
02:0a.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
 ...            LnkCap: Port #10, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
 ...            LnkSta: Speed 8GT/s (ok), Width x4 (ok)
 ...
02:0b.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
 ...            LnkCap: Port #11, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
 ...            LnkSta: Speed 8GT/s (ok), Width x4 (ok)
 ...
02:10.0 PCI bridge: PLX Technology, Inc. PEX 8748 48-Lane, 12-Port PCI Express Gen 3 (8 GT/s) Switch, 27 x 27mm FCBGA (rev ca) (prog-if 00 [Normal decode])
 ...            LnkCap: Port #16, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <4us
 ...            LnkSta: Speed 2.5GT/s (downgraded), Width x0 (downgraded)
 ...
 ...
04:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03) (prog-if 02 [NVM Express])
 ...            LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <8us
 ...            LnkSta: Speed 8GT/s (ok), Width x4 (ok)
 ...
05:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03) (prog-if 02 [NVM Express])
 ...            LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <8us
 ...            LnkSta: Speed 8GT/s (ok), Width x4 (ok)
 ...
06:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express])
 ...            LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <8us
 ...            LnkSta: Speed 8GT/s (ok), Width x4 (ok)
 ...
Note the upstream x16 downgrade (====>>)

PCIe switches are cool!