Multi-NVMe (M.2, U.2) adapters that do not require bifurcation


TRACKER

Active Member
Jan 14, 2019
In idle it's around 20-22 W total (4-5 W for the PLX and around 4 W per SSD); under load it goes up to around 30 W (around 8 W for the PLX and 5-6 W per SSD).
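To put those figures into a reusable form, here is a minimal power-budget sketch (my own arithmetic, using the per-component numbers reported above as assumptions):

```python
# Rough power-budget sketch using the figures reported above as assumptions
# (not measurements of any specific card): PLX switch ~4-5 W idle / ~8 W load,
# each NVMe SSD ~4 W idle / ~5-6 W load.

def card_power(n_ssds, switch_w, ssd_w):
    """Total draw for a switch card with n_ssds drives attached."""
    return switch_w + n_ssds * ssd_w

idle_low, idle_high = card_power(4, 4, 4), card_power(4, 5, 4.5)
load_low, load_high = card_power(4, 8, 5), card_power(4, 8, 6)

print(f"idle: ~{idle_low:.0f}-{idle_high:.0f} W")   # ~20-23 W
print(f"load: ~{load_low:.0f}-{load_high:.0f} W")   # ~28-32 W
```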
 
Reactions: unphased

unphased

Active Member
Jun 9, 2022
That's great info! Awesome. Yeah, I think I speak for many when I say that whether adding a PLX card costs 5 W or 20 W of extra power draw is a big factor in designing a system! I'd definitely think twice if it were 20 W. It's not enough to make a big difference if you're doing mad-scientist stuff, but if you're even a bit conscientious you have to start weighing it.

But as you state, it should only be 5 to 8 watts, which makes me worry a whole lot less about it.
 

FlashBasedFox

New Member
Jan 5, 2024
I wanted to contribute a bit to this discussion since I found (and bought) a U.2 quad M.2 NVMe carrier. It's apparently made by Viking Enterprise Solutions, and while they don't have a webpage dedicated to it, they do have a link up that leads to a PDF product brief. I'm not sure what controller it uses, but as far as I'm aware it splits the four PCIe Gen3 lanes that the U.2 connection gets and gives one lane to each drive on board.
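If it really does give each M.2 slot a single Gen3 lane (that is my reading of the description above, not something confirmed by the product brief), the per-drive ceiling works out roughly like this:

```python
# Per-drive bandwidth ceiling if the carrier statically assigns one Gen3 lane
# to each M.2 slot (an assumption based on the post above, not a confirmed spec).
# PCIe Gen3 runs 8 GT/s with 128b/130b encoding, i.e. ~0.985 GB/s per lane.

GEN3_GBPS_PER_LANE = 8 * 128 / 130 / 8   # GB/s per lane, before protocol overhead

lanes_per_drive = 1
print(f"~{lanes_per_drive * GEN3_GBPS_PER_LANE:.2f} GB/s ceiling per drive")  # ~0.98 GB/s
```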

Viking Enterprises U20040 PDF Product Brief

Currently waiting on a U.2 to PCIe x4 adapter in the mail to test it, will report back when I get it.
 

cestovatel

New Member
Jan 29, 2024
old reply to old post:

Thanks for the wealth of interesting and useful information posted here! After reading through the thread, I purchased the least expensive PCIe x16 -> 4x M.2 NVMe card with a PLX switch I could find on Aliexpress:

Ceacent ANM24PE16

I am using it to upgrade a 2010 Mac Pro 5,1. The NVMe SSD was previously attached via a single passive PCIe-to-M.2 adapter, which peaked at around 1.7 GB/sec (sequential).

When I removed the drive from the old adapter and put it into the ANM24PE16, I was quite surprised to see that the transfer rate of the same SSD increased to 2 GB/sec, which is about the maximum you can expect to get out of four PCIe 2.0 lanes. I am wondering how this can be. Most people here are concerned that the switch chip would introduce additional latency, making the drive slower, but I am actually observing the opposite. Why does the same drive become faster when the data goes through the PLX-equipped adapter card rather than the directly connected passive adapter?
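For reference, the lane math behind the 1.7 GB/s and 2 GB/s figures (a quick sketch using standard PCIe encoding overheads; real-world throughput lands a little below these ceilings):

```python
# Theoretical PCIe link ceilings by generation and width, to put the 1.7 GB/s
# and 2 GB/s figures above in context; actual throughput sits a bit lower once
# protocol overhead is included.

per_lane_gbps = {
    1: 2.5 * 8 / 10 / 8,      # Gen1: 2.5 GT/s, 8b/10b    -> 0.25 GB/s per lane
    2: 5.0 * 8 / 10 / 8,      # Gen2: 5 GT/s,   8b/10b    -> 0.50 GB/s per lane
    3: 8.0 * 128 / 130 / 8,   # Gen3: 8 GT/s,   128b/130b -> ~0.985 GB/s per lane
}

for gen, gbps in per_lane_gbps.items():
    print(f"Gen{gen} x4: ~{4 * gbps:.2f} GB/s")
# Gen2 x4 works out to ~2.00 GB/s, roughly where the ANM24PE16 result tops out;
# Gen3 x4 would be ~3.94 GB/s.
```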

I measured it multiple times, and I know for sure that it cannot be a drive cache issue since the drive is an ultra-cheap cacheless model based on the SM2263XT chip ("Walram W2000").

At any rate, I think it is a good card. The tiny fan is extremely noisy, but since the heatsink is massive, I will simply disconnect it as there is enough airflow from the Mac Pro's slot fan. I will do more testing, especially simultaneous transfers with more drives attached to the card.
You're lucky to get that kind of speed out of the ANM24PE16. I just tested the Axagon PCEM2-ND ( https://forums.servethehome.com/ind...furcation.31172/page-3#lg=post-353379&slide=0 ), a low-cost card with an ASMedia chip (which allegedly only runs at PCIe Gen2 x4 in reality), and my results on my old Supermicro boards (X9SRH-7F and the ancient C2SBA+ II) were a bit of a disappointment. Speed with a Samsung 970 Evo Plus NVMe SSD was only about 2.2-2.5x better than with my Samsung 860 Pro SSD connected directly to onboard SATA (3 and 2 respectively). I don't know whether it could be due to the lack of a dynamic buffer pool. I also tested it while copying roughly 70+ GB of data back and forth and still saw no significant change in the (still poor) speed :(, which I would have expected to show some effect from the (missing) buffer.
 

FlashBasedFox

New Member
Jan 5, 2024
I've got a quick question. I've got a QM2-4P-284 (4 NVMe SSDs @ PCIe Gen2 x8). I'm aware there is a switch chip on board the AIC, so my question is: the card is meant to split the 8 lanes among the 4 drives, 2 lanes per drive. If I run it in a slot that is only electrically wired as PCIe x4, will each drive run at x1, or will only 2 drives show up in Windows? Thanks in advance!
 

alaricljs

Active Member
Jun 16, 2023
It's a switch w/ lane aggregation... so any active device gets 1/<active devices> of the bandwidth, not an electrical separation of lanes. And the total bandwidth is however many lanes are available on the host side of the switch.
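A minimal sketch of that sharing model, applied to the QM2-4P-284 case above (assuming a Gen2 x4 electrical host link and an even split among busy drives, which is a simplification rather than a measurement):

```python
# Sharing model for the QM2-4P-284 scenario above: a Gen2 x8 switch card
# dropped into a Gen2 x4 electrical slot. Assumptions: all four drives still
# enumerate, and drives that are busy at the same time split the upstream
# bandwidth roughly evenly (a simplification, not a measurement).

GEN2_GBPS_PER_LANE = 5.0 * 8 / 10 / 8   # ~0.5 GB/s per lane

def per_drive_bw(host_lanes, active_drives):
    """Approximate bandwidth each busy drive sees behind the switch."""
    return host_lanes * GEN2_GBPS_PER_LANE / active_drives

for n in (1, 2, 4):
    print(f"x4 host link, {n} active drive(s): ~{per_drive_bw(4, n):.2f} GB/s each")
# One busy drive gets the whole ~2 GB/s link; four busy drives see ~0.5 GB/s each.
```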
 
Reactions: FlashBasedFox

FlashBasedFox

New Member
Jan 5, 2024
It's a switch w/ lane aggregation... so any active device gets 1/<active devices> of the bandwidth, not an electrical separation of lanes. And the total bandwidth is however many lanes are available on the host side of the switch.
Thanks for the swift reply. I'm still new to this niche technology and I'm trying to learn so I appreciate you humoring my question.
 

nirurin

New Member
Feb 21, 2024
I stumbled onto this thread while looking for a solution for adding NVMe drives to my non-bifurcation B760I board, very handy!

However, I noticed that a lot of the devices listed were from a few years ago, so I wanted to check whether there is a currently recommended set of options.

Looking for a low-profile (half-height) card with the best performance available for either 2 or 4 NVMe drives (I haven't decided whether I'll go for 2 or 4 yet, so I figured I'd find both and decide then).

Anything reliable and budget-friendly around these days?

Edit -

This seems to be one of the only ones that will still ship to the UK... most others are out of stock now(?). I'm failing to find newer variants, so I wonder if they're just not being made anymore or something.
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
Well, most of those cards are PCIe 3.0, while 5.0 is starting to hit the mass market. There is simply not enough demand (and PCIe switch chips for PCIe 4.0 and 5.0 are much more demanding due to the higher bandwidth, thus more expensive). Today you can get capacities up to 30 TB from a single U.3 drive, so not enough people want to stripe / "RAID" their NVMe drives for bandwidth and/or capacity.
 

nirurin

New Member
Feb 21, 2024
Well, most of those cards are PCIe 3.0, while 5.0 is starting to hit the mass market. There is simply not enough demand (and PCIe switch chips for PCIe 4.0 and 5.0 are much more demanding due to the higher bandwidth, thus more expensive). Today you can get capacities up to 30 TB from a single U.3 drive, so not enough people want to stripe / "RAID" their NVMe drives for bandwidth and/or capacity.

Makes sense, but for a home user like me, I suspect 'low quality' NVMe drives like the 4 TB P3 will still be the best performance-per-dollar option to cram into a NAS, haha.

The PCIe card I found that adds 4 NVMe drives to an x16 slot is probably worth it; I'll just have to save up to cram it full of 16 TB of storage, haha.
 

pimposh

hardware pimp
Nov 19, 2022
Makes sense, but for a home user like me, I suspect 'low quality' NVMe drives like the 4 TB P3 will still be the best performance-per-dollar option to cram into a NAS, haha.
For a few minutes of sustained sequential writes, probably. Otherwise spinners give similar performance, apart from random I/O.
 
Reactions: nexox

nirurin

New Member
Feb 21, 2024
Well sure, if you only send terabyte-sized files, or send terabytes of data at a time.

Not a frequent activity for most people.

And I only chose the P3+ because it's the absolute cheapest; spend a little more and you avoid that problem entirely (if you're someone who needs to send terabytes at a time). But if you are, you're probably using enterprise equipment anyway. A niche within a niche.

Not really on topic for this thread anyway.
 

Mithril

Active Member
Sep 13, 2019
Well, most of those cards are PCIe 3.0, while 5.0 is starting to hit the mass market. There is simply not enough demand (and PCIe switch chips for PCIe 4.0 and 5.0 are much more demanding due to the higher bandwidth, thus more expensive). Today you can get capacities up to 30 TB from a single U.3 drive, so not enough people want to stripe / "RAID" their NVMe drives for bandwidth and/or capacity.
On a "$ per TB" basis it can still make sense, especially if you are doing some level of parity, to use more drives of around 4 TB than fewer of around 15/16 TB. Not to mention used enterprise SSDs, where the price delta per TB is quite large.
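As a purely hypothetical illustration of that trade-off (placeholder prices, single-drive parity; the formula is the point, not the numbers):

```python
# Hypothetical "$ per usable TB" comparison for more-small-drives vs
# fewer-big-drives under single-drive parity. The prices are made-up
# placeholders (same $/raw TB for both sizes) purely to show the arithmetic.

def cost_per_usable_tb(n_drives, tb_each, price_each, parity_drives=1):
    usable_tb = (n_drives - parity_drives) * tb_each
    return n_drives * price_each / usable_tb

print(f"8 x 4 TB:  ~${cost_per_usable_tb(8, 4, 240):.0f} per usable TB")   # ~$69
print(f"3 x 16 TB: ~${cost_per_usable_tb(3, 16, 960):.0f} per usable TB")  # ~$90
# Cheaper used enterprise drives widen the gap further, as noted above.
```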
 
Reactions: JustinClift

cfunk

New Member
Mar 15, 2024
Hi, I'm new and came here for this very thread. Thanks for the heads-up.

I am looking at different cards that do not require bifurcation for my servers at home, to make the most of my PCIe 3.0 lanes.

So, to put it in context: the ASM1812 (beware, PCIe Gen2) card has two slots. If I plug it into an x8 slot, it means it will take x4 and x4 lanes (one set for each disk).

Does this mean each disk will work at full speed? There are also cards for x8 PCIe slots that have 4 disk slots. To my understanding, those cards will not run the disks at full speed, because each disk takes x4 PCIe lanes and there are 4 disk slots on an x8 PCIe slot?

Is my reasoning correct?

Thanks
 

alaricljs

Active Member
Jun 16, 2023
That's just a chip and it has multiple config options. You need to find a card and see what choices were implemented.
 

pimposh

hardware pimp
Nov 19, 2022
Does this mean each disk will work at full speed? There are also cards for x8 PCIe slots that have 4 disk slots. To my understanding, those cards will not run the disks at full speed, because each disk takes x4 PCIe lanes and there are 4 disk slots on an x8 PCIe slot?

Is my reasoning correct?

Thanks
Well, if you make two mirror pools and don't use them at the same time, then you're still closer to the full 2x x4.
Since switching, as the name suggests, switches lanes; it doesn't statically redistribute them.
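A toy model of the difference being described here, under assumed link widths rather than anything measured on an ASM1812 card:

```python
# Toy contrast between statically splitting a host link (bifurcation) and
# switching it, assuming for illustration a Gen2 x4 host link (~2 GB/s)
# feeding two x4 NVMe drives.

GEN2_GBPS_PER_LANE = 0.5
HOST_LANES = 4

def static_split(drives):
    """Bifurcation-style: each drive is pinned to its share of the lanes."""
    return HOST_LANES // drives * GEN2_GBPS_PER_LANE

def switched(drives_busy_now):
    """Switch-style: whichever drives are busy share the whole host link."""
    return HOST_LANES * GEN2_GBPS_PER_LANE / max(drives_busy_now, 1)

print(f"static x2/x2, at any time:  ~{static_split(2):.1f} GB/s per drive")  # ~1.0
print(f"switched, one drive busy:   ~{switched(1):.1f} GB/s")                # ~2.0
print(f"switched, both drives busy: ~{switched(2):.1f} GB/s each")           # ~1.0
```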
 

nexox

Well-Known Member
May 3, 2023
Of course, if your switch chip runs PCIe 2.x then nothing will be at full speed, because the very few SSDs from that generation tend to require an x8 slot, and chances are any drive you're actually looking at is at least 3.0.