Bifurcation x4 into two times x2


flamm

New Member
Oct 18, 2022
10
1
3
I have two Optane NVMe drives that I use as log devices for TrueNAS. The drives both use only PCIe x2 connectivity.
Is there a way to attach two of these drives to a single PCIe x4 slot on the motherboard? Maybe there are simple adapters (although I do not think the mainboard supports bifurcation down to x2), or do I need an HBA? I could not find anything.
At the moment, each drive blocks a PCIe x4 slot without using half of the lanes, which is pretty wasteful.

Any idea is appreciated! Thank you!

Slot and drives are PCIe 3.0. These are the Optane modules, in case it matters:
 

zir_blazer

Active Member
Dec 5, 2016
364
132
43
For which Motherboard and platform? You need something that can actually bifurcate to 2x/2x. Intel Chipsets (NOT Processors) usually can, but this is not exposed.
 
  • Like
Reactions: flamm

flamm

New Member
Oct 18, 2022
10
1
3
Thank you for helping out.
It's an Asus WS X299 Pro/SE with an Intel i9-7900X.
 

flamm

New Member
Oct 18, 2022
10
1
3
It seems the X299 chipset does support it. However, I cannot find an option to enable bifurcation to x2x2 in the BIOS. Does this mean I need to find a RAID card / HBA that does it for me? I would like to avoid that, as TrueNAS likes to have direct access to the drives.
 

flamm

New Member
Oct 18, 2022
10
1
3
is the x4 slot open?
if yes: there are x8 adapters with plx chips and m.2 sockets :D
Yes, it is physically x16, but electrically x4. That is an interesting idea. Would that mean that the PLX is not switching, but just "redirecting" the lanes 1,2 and 5,6 used by the Optane modules to lanes 1-4 of the slot? Would that mean that both drives can be used at the same time at their respective full speed?
As far as I understand, the PLX does not mask the drives like a hardware raid card would do, right? I need the drives to be accessible to the operating system.
 

i386

Well-Known Member
Mar 18, 2016
4,599
1,744
113
35
Germany
No, the PLX switch will do "switching".
"Speed" is the wrong word; you will get the full bandwidth of x2 lanes per SSD (depending on the add-on card/switch configuration), with increased latency.
 
  • Like
Reactions: flamm

mrpasc

Well-Known Member
Jan 8, 2022
599
360
63
Munich, Germany
Most of those cards are x16 and their switches split to x4x4x4x4. But there are x8 cards with 4 M.2 sockets which then split to x2x2x2x2. Check out the Supermicro PCI-E 3.0 carrier card for up to four NVMe M.2 SSDs, the AOC-SHG3-4M2P. I used that one for four Optane M10 32GB in an x8 slot, so it should also split an x4 into x2x2. But they are pricey…
 
  • Like
Reactions: flamm

UhClem

just another Bozo on the bus
Jun 26, 2012
490
298
63
NH, USA
I have two Optane NVMe drives that I use as log devices for TrueNAS. The drives both use only PCIe x2 connectivity.
Is there a way to attach two of these drives to a single PCIe x4 slot on the motherboard?
Check this [Link] ($46). It uses an Asmedia 2812 switch (PCIe gen3); configured as x4 upstream/host, two x4 downstream/targets.
[Attached image: asm2812.jpg]
Although Asmedia does not specify/document the chip's latency, I would guess it is in the range of 100-250 ns; that would mean about 1.5-4% (worst-case) added latency for your Optane.
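As a rough sketch of how a percentage like that falls out (assuming the ~8 µs read latency used later in this thread; both figures are only estimates):

```python
# Relative impact of the guessed switch latency on an Optane M10 read.
# Assumptions: ~8 us device read latency (the figure used later in this thread);
# the 100-250 ns switch latency is itself only an estimate.
drive_read_latency_ns = 8_000
for switch_latency_ns in (100, 250):
    overhead = switch_latency_ns / drive_read_latency_ns
    print(f"{switch_latency_ns} ns switch -> {overhead:.1%} added latency")
# This lands in the low single-digit percent range; for the ~30 us sync writes a
# SLOG actually performs, the relative overhead is several times smaller still.
```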
 

itronin

Well-Known Member
Nov 24, 2018
1,343
890
113
Denver, Colorado
Dumb question: if you have an x8 open and it supports x4x4, why not just use a low-cost M.2 bifurcation adapter (like the Supermicro, typically ~40 USD used)? Your x2 M.2 drives will sync up without needing a more expensive PLX solution(?). It's a bit of a waste of x4 lanes, but to me it seems like the simplest and lowest-cost solution if you don't mind burning the lanes - which you are in essence doing anyway with a PLX, an x8 slot, and dual M.2 drives that are x2. You are saving a slot though, and that is usually goodness in my book.

Just one other FWIW, and I don't mean to rain on your parade. Maybe you are aware, but maybe not, so I'll mention it: if you have heavy writes, you may well kill those 16GB Optane modules quickly; they have a pretty low TBW. I seem to recall some early-adopter and intrepid Chia miners using the same modules and eating through them in a couple of months...
 
  • Like
Reactions: flamm

mrpasc

Well-Known Member
Jan 8, 2022
599
360
63
Munich, Germany
The OP wants to get the most out of an x4 port. He never mentioned an x8.
I was using the x8 with 4 of the M10s, but not as a SLOG/ZIL; I was running a metadata-only special vDev on them. They are great for that use case due to their excellent small-file reads.
 
  • Like
Reactions: flamm

Markess

Well-Known Member
May 19, 2018
1,207
829
113
Northern California
Thank you for helping out.
It's an Asus WS X299 Pro/SE with an Intel i9-7900X.

If you aren't familiar with them, Asus makes a series of expansion cards (Hyper M.2 series) that hold up to four M.2 drives in an x4x4x4x4 configuration. These are simple pass-through cards, so no PLX, which makes them relatively inexpensive.

You DO NOT have to use an Asus Hyper M.2 expansion card to mount multiple drives on your Asus motherboard. For two drives, the Supermicro expansion card that @itronin mentions should work just fine. But the Asus compatibility list for using their product with their motherboards is very useful for understanding your Asus motherboard's bifurcation capabilities when using expansion cards of this type. See this page: [Motherboard] Compatibility of PCIE bifurcation between Hyper M.2 series Cards and Add-On Graphic Cards | Official Support | ASUS USA

For your motherboard with a 44 lane CPU installed, your M.2 drive capacity via bifurcation is listed as:

PCIeX16_1: 4 Drives
PCIeX16_2: 4 Drives
PCIeX16_3: 2 Drives
PCIeX16_4: 1 Drive

Asus assumes splitting the slots into x4x4 etc., since that's what their product can do. You'd need to check your motherboard's BIOS to see if it can do x2x2 etc. But, according to the compatibility list for their products, you would need to use slot #1, 2, or 3. It appears that #4 can't be bifurcated.
 
  • Like
Reactions: flamm

zir_blazer

Active Member
Dec 5, 2016
364
132
43
It seems the X299 chipset does support it. However, I cannot find an option to enable bifurcation to x2x2 in the BIOS. Does this mean I need to find a RAID card / HBA that does it for me? I would like to avoid that, as TrueNAS likes to have direct access to the drives.
I have seen people doing BIOS modding that occasionally showcases the available options, and mentions of 2x/2x on Chipset PCIe Ports are rather common, but no one seems to expose this by default. Besides, adapters for 2x/2x in a PCIe x4 card should be pretty much nonexistent. No other card will work, because you NEED the PCIe x4 slot wired so that lanes 0-1 go to one Port/Device and lanes 2-3 to the other.
 
  • Like
Reactions: flamm

flamm

New Member
Oct 18, 2022
10
1
3
Thank you to everyone who contributed! Reading your posts was very informative to me.
AOC-SHG3-4M2P. I used that one for 4 Optane M10 32GB in a x8
Great and specific suggestion! A bit expensive for me at the moment, but essentially what I need, with the option of using up to 4 drives.
Check this [Link] ($46). It uses an Asmedia 2812 switch (PCIe gen3); configured as x4 upstream/host, two x4 downstream/targets.
Great find and specific suggestion, and affordable. I have ordered one and will try it out.
Although Asmedia does not specify/document the chip's latency, I would guess it is in the range of 100-250 ns; that would mean about 1.5-4% (worst-case) for your Optane
Thank you! If I understand it right, this should not be an issue for a log device on TrueNAS. The added latency should not be noticeable.
If you have heavy writes you may well kill those 16GB optane modules quickly. they have a pretty low TBW.
The Optane M10 16GB has a rating of 365 TBW, compared to 300 TBW for a 970 Evo 500GB. And for being substantially cheaper, I think I can stick with them as long as they are available. The pool is used as a file server with forced sync writes, so there is not much going on, but the files are critical and I must not lose them.
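For a rough feel of what 365 TBW buys a SLOG, here is a sketch; the daily write volume below is a made-up placeholder, so substitute the number TrueNAS reporting actually shows:

```python
# Rough lifetime estimate from the endurance rating.
TBW_TB = 365                 # rated endurance of the Optane M10 16GB, in TB written
daily_writes_gb = 50         # hypothetical average sync-write volume per day

days = TBW_TB * 1000 / daily_writes_gb
print(f"~{days:,.0f} days (~{days / 365:.0f} years) to reach the rating")
# Even at ten times that write volume the module would last roughly two years,
# which is why the rating mostly matters for Chia-style sustained writes.
```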
 

UhClem

just another Bozo on the bus
Jun 26, 2012
490
298
63
NH, USA
Although Asmedia does not specify/document the chip's latency, I would guess it is in the range of 100-250 ns; that would mean about 1.5-4% (worst-case) for your Optane
Thank you! If I understand it right, this should not be an issue for a log device on TrueNAS. The added latency should not be noticeable.
(I have ordered one and will try it out.)
Correct. In fact, as a ZFS Log_device (if I understand it right, it is write-only [until sh*t happens]), it will be even less noticed--since the write latency of the M10 (16GB) is 30 microsecs (vs 8 for read; used as worst-case, above). Also, you can/should use the x4 slot (phys & elec--chipset connected), since your bandwidth needs are tiny (300 MB/s); keep your x4elec/x16phys [CPU slot] free for more demanding use.

If you're interested in doing a simple timing test, we can measure (deductively) the latency of the ASM2812 when you get the card.
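(A minimal sketch of such a test, under a few assumptions: Linux with fio installed, a recent fio that reports clat_ns in its JSON output, and /dev/nvme0n1 as a placeholder for the Optane. Run it once with the drive attached directly and once behind the ASM2812 card; the difference in mean latency approximates the switch's added round trip.)

```python
# Measure QD1 4 KiB random-read completion latency on one NVMe device with fio.
# Read-only workload, so it is safe to point at the raw block device.
import json
import subprocess

def mean_read_latency_ns(device: str) -> float:
    out = subprocess.run(
        ["fio", "--name=lat", f"--filename={device}", "--rw=randread",
         "--bs=4k", "--iodepth=1", "--direct=1",
         "--time_based", "--runtime=30", "--output-format=json"],
        capture_output=True, text=True, check=True)
    data = json.loads(out.stdout)
    return data["jobs"][0]["read"]["clat_ns"]["mean"]

# Compare the direct-attached and behind-the-switch numbers by hand:
print(f"{mean_read_latency_ns('/dev/nvme0n1') / 1000:.1f} us mean read latency")
```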
The Optane M10 16GB has a rating of 365 TBW
It is curious/interesting that all 3 Optane M10's (16, 32, 64GB) have the same (365 TBW) Endurance Rating ... ??? [unlike other Optanes, and NVMe SSDs in general].
 
  • Like
Reactions: flamm

flamm

New Member
Oct 18, 2022
10
1
3
if I understand it right, it is write-only [until sh*t happens]
I can confirm that: TrueNAS reports zero reads on the Optane modules during file transfers, only writes.
It is curious/interesting that all 3 Optane M10's (16, 32, 64GB) have the same (365 TBW) Endurance Rating ... ??? [unlike other Optanes, and NVMe SSDs in general].
That is indeed interesting and does not make much sense to me. It would mean that the endurance of the individual 3D XPoint chips in the higher-capacity models is lower, which I doubt. And at least the 16GB model cannot distribute the writes somehow, because only one of the two places for the chips is populated. Or the ratings are wrong, which I hope they are not.
If you're interested in doing a simple timing test, we can measure (deductively) the latency of the ASM2812 when you get the card.
I would indeed be interested. How could I test the latency?
It will take a couple of weeks until it arrives, though.
 

flamm

New Member
Oct 18, 2022
10
1
3
I have one additional question:
In addition to the new PCIe switch card for the two optane M10s, which I am waiting for now, I have a 10Gbit PCIe 3.0 x4 NIC (1 port).
On the mainboard, two PCIe 3.0 x4 slots are available. One is linked directly to the CPU, the other goes to the PCH. No additional slot is free.
The PCH is linked to the CPU via DMI x4, which is equivalent to PCIe 3.0 x4 with a throughput of 31.5 Gbit/s, if I understand correctly.
How would you distribute the two cards? Which one gets the direct CPU link?
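For reference, a rough check of that DMI headroom against what each card could demand at most (a sketch that ignores any other traffic going through the PCH):

```python
# DMI 3.0 uplink vs. the two candidate cards, best-case numbers only.
lane_gbps = 8.0 * 128 / 130   # PCIe 3.0 / DMI 3.0 usable rate per lane (Gbit/s)
dmi_gbps = 4 * lane_gbps      # ~31.5 Gbit/s shared by everything behind the PCH

nic_gbps = 10.0               # the 10GbE NIC at line rate
slog_gbps = 2 * 2 * lane_gbps # two Optanes at x2 each, theoretical maximum

print(f"DMI uplink:      {dmi_gbps:.1f} Gbit/s")
print(f"NIC worst case:  {nic_gbps:.1f} Gbit/s")
print(f"SLOG worst case: {slog_gbps:.1f} Gbit/s")
# Either card fits behind the PCH on its own; in practice the SLOG only has to
# absorb what arrives over the network, so both stay well below the limit.
```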
 

mrpasc

Well-Known Member
Jan 8, 2022
599
360
63
Munich, Germany
In theory one would attach the Optane M10s to the CPU and the NIC to the PCH. But if you want to do something like GlusterFS or Ceph, then vice versa, as then you want the lowest latency for the NIC.
In a real homelabber's environment it doesn't matter. You might be able to bench/measure differences, but you will not "feel" them in reality.
 
  • Like
Reactions: flamm