Multi-NVMe (M.2, U.2) adapters that do not require bifurcation


unphased

Active Member
Jun 9, 2022
148
26
28
It seems reasonable for converting one slot into two M.2s, effectively adding just one more slot if that's all you need. But for around $100 you can get PLX-based cards that give you 4 or 8 M.2/U.2 slots of expansion, which is a lot more flexible and capable.
 

slidermike

Active Member
May 7, 2023
116
45
28
The only PLX 4+ M.2 NVMe cards I saw were more expensive than the PEX version (bifurcation motherboard required), but I assume that's because of the cost of the PLX controller on the board. Anything with more than 4 slots should be expected to be even more expensive.
I too would appreciate an AliExpress or other link to a true 4-port M.2 NVMe PLX PCIe card. I would even settle for Gen 3 PCIe speed; better temps and lower power draw.
The cheapest price I can find for a 4-port, PCIe Gen 3 PLX NVMe riser card:
$115 + $28 shipping
 
Last edited:

unphased

Active Member
Jun 9, 2022
148
26
28
Where can I find this 8 M.2 card for $100 more? Link please.
Here is an example. https://www.aliexpress.us/item/3256805484706547.html

However, it will take a lot of adapter shenanigans to get to M.2; it may be more straightforward to adapt that SFF-8654 to U.2 instead.

For my personal needs I opted for a 4x M.2 card (https://www.aliexpress.us/item/3256801152814110.html) instead of this, since all the SFF-8654 to M.2 or PCIe adaptation I would need wasn't realistically going to fit in my rig, even if I affixed the adapters to case panels or something; my case is stuffed to the gills. I also did not see myself needing more than 4 slots (to gain 3 more slots), and I think I made the right call. I actually have a ton of SATA drives running off an LSI HBA plugged into a PCIe x4 riser in one of these M.2 slots, and it's been working. It has really allowed me to push the X570 farther into server territory than it has any right to go, with 128GB of ECC UDIMMs, 12 HDDs, two 3090s, 3 NVMe drives (it could be 4, but I had one that kept dropping out; it seems reliable in a separate rig, and I need it there anyway), and a dual-port 40Gbit Mellanox NIC.

Yes, these things are PCIe 3.0. We'll see how many years it takes for PCIe 4.0 gear to trickle down to sane prices. I would love to see some cool hardware that breaks 4 PCIe 5.0 lanes into 16 PCIe 3.0 lanes to use with random older hardware; I even think there should be a market for that, but there seems to be a chicken-and-egg problem preventing it from becoming a reality.
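For reference, the lane math behind that wish, as a small Python sketch (the per-lane figures are the commonly quoted one-direction approximations, not measured values):

```python
# Rough per-lane PCIe throughput in GB/s (commonly quoted, one direction, after encoding overhead).
PER_LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# 4 lanes of Gen5 carry about the same bandwidth as 16 lanes of Gen3,
# which is what would make a Gen5 x4 -> Gen3 x16 switch attractive for older hardware.
print(link_bandwidth("5.0", 4))   # ~15.75 GB/s
print(link_bandwidth("3.0", 16))  # ~15.75 GB/s
```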
 
Last edited:
  • Like
Reactions: abq and KingKaido

KingKaido

New Member
Nov 24, 2023
18
15
3
Hey, first post, but I have been following this wonderful thread for a while :)

Sorry to divert the topic a bit, but seeing as, based on this thread, Ceacent is somewhat trusted for their PLX switch PCIe cards (I am planning to get their 4x NVMe PCIe 3.0 x16 card very soon), what do you think about their LSI HBA cards?

I wanted to draw your attention to their Ceacent-made HBAs and then their 'Broadcom'-based ones; would you trust that the Broadcom ones are indeed original/real, or are they just the same card priced higher?

There are 2 HBAs I'm trying to decide between: the LSI 9305-24i and the LSI 9500-16i.

For the LSI 9500-16i:
the Ceacent-made one is $175,
whereas the original Broadcom one is $248,
and then another Broadcom one (an actual RAID card) is $727.

For the LSI 9305-24i:
the original Broadcom one is $296,
and one from another somewhat trusted store (not Ceacent) is $378.


I don't mind paying more for the 'original Broadcom' version for the peace of mind, but would you trust that it actually is original and made to the same spec as a retail/OEM card?

Also, does it make sense for the 9305-24i to be more expensive than the 9500-16i, seeing as the latter is PCIe 4.0 and newer? Or is the fact that the 24i has 6x MiniSAS HD ports (versus 2x SFF-8654 ports) more desirable and flexible?

I am drawn more to the 9500-16i because a single SFF-8654 cable can be broken out into 8 SATA/SAS cables or 2 U.2/M.2 cables. If my theory is correct, you could run 8 HDDs in RAID 10/5/6 (or the ZFS equivalents) and then have 2 NVMe drives in RAID 1/0 as a superfast pool or cache, all via one HBA card in a PCIe 4.0 x8 (15.754 GB/s) slot. In my use case, a TrueNAS VM in Proxmox, that is the only card I would need to pass through for everything, which is very exciting.
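As a rough, hedged sanity check of that theory, here is a back-of-envelope budget in Python; the per-drive speeds (~250 MB/s per HDD, ~3.5 GB/s per NVMe) are illustrative assumptions, not 9500-16i figures:

```python
# Back-of-envelope budget for 8 HDDs + 2 NVMe behind one PCIe 4.0 x8 HBA (~15.75 GB/s).
# The per-device speeds are illustrative assumptions, not datasheet numbers.
hdd_gbps = 0.25    # assumed sequential speed of a SATA HDD
nvme_gbps = 3.5    # assumed speed of an NVMe drive at roughly PCIe 3.0 x4 link rates
slot_budget = 15.754

demand = 8 * hdd_gbps + 2 * nvme_gbps   # 9.0 GB/s
print(demand, demand < slot_budget)     # fits within the x8 Gen4 slot under these assumptions
```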

So, going back to the original question: would you trust the Ceacent Broadcom LSI HBAs as genuine? Or has anyone bought the Ceacent-made HBAs and had a good experience with them?
Thanks :)
 

KingKaido

New Member
Nov 24, 2023
18
15
3
It has really allowed me to push the X570 farther into server territory than it has any right to go, with 128GB of ECC UDIMMs, 12 HDDs, two 3090s, 3 NVMe drives (it could be 4, but I had one that kept dropping out; it seems reliable in a separate rig, and I need it there anyway), and a dual-port 40Gbit Mellanox NIC.
That is a really solid setup, and yeah, I think consumer AMD and Intel (W680 for ECC support) is where it's at these days; EPYC or Threadripper is just too expensive these days, or the idle power would be a lot higher compared to Zen 3/4 and Alder Lake/Raptor Lake systems, plus these new CPUs are crazy fast. The only annoying thing is the 128GB RAM limit due to 4 slots and a max of 32GB ECC RAM sticks.
 

unphased

Active Member
Jun 9, 2022
148
26
28
EPYC or Threadripper is just too expensive these days, or the idle power would be a lot higher compared to Zen 3/4 and Alder Lake/Raptor Lake systems, plus these new CPUs are crazy fast. The only annoying thing is the 128GB RAM limit due to 4 slots and a max of 32GB ECC RAM sticks.
Yeah, I also have a 5800X3D SFF gaming workstation rig for which I will grab an $88 2x32GB kit (the Crucial one) tomorrow so I can take it with me on my trip, and that upgrade is really going to let me stretch its workstation legs. 32GB DIMMs being cheap as dirt is really coming in handy; these are supposed to clock easily up to at least 3600MHz as well.

Something like a 96-core Threadripper is really compelling for consolidation. You could really run a business off a single machine like that. I did some rough math: my jank-ass home datacenter, pooling all the hardware I've accumulated over the last few years (5 PCs), can't even reach half of one of those 96-core monsters, even with an optimistic comparison of compute horsepower! But I guess combined I still have more horsepower than a modern 32-core Threadripper, at least, lol. Too rich for me at $100 a core when I'm over here collecting old cores at $3 per core and modern fast CPUs at $30 per core. Once I have a workload that makes money, though, I can justify building one of those sexy things.
 

KingKaido

New Member
Nov 24, 2023
18
15
3
Yeah, I also have a 5800X3D SFF gaming workstation rig for which I will grab an $88 2x32GB kit (the Crucial one) tomorrow so I can take it with me on my trip, and that upgrade is really going to let me stretch its workstation legs. 32GB DIMMs being cheap as dirt is really coming in handy; these are supposed to clock easily up to at least 3600MHz as well.

Something like a 96-core Threadripper is really compelling for consolidation. You could really run a business off a single machine like that. I did some rough math: my jank-ass home datacenter, pooling all the hardware I've accumulated over the last few years (5 PCs), can't even reach half of one of those 96-core monsters, even with an optimistic comparison of compute horsepower! But I guess combined I still have more horsepower than a modern 32-core Threadripper, at least, lol. Too rich for me at $100 a core when I'm over here collecting old cores at $3 per core and modern fast CPUs at $30 per core. Once I have a workload that makes money, though, I can justify building one of those sexy things.
Haha, yeah, I agree: once the budget allows for it, Threadripper is the dream. I was even trying to justify the 24-core Threadripper 7000 just to get on the platform and have access to the 80+ PCIe lanes... and then you realise the initial CPU + motherboard + RAM cost will easily set you back $3k+ :(. Being realistic, I still need to max out my Intel W680 system first; I still have two PCIe 5.0 x8 slots (x8/x8) to fully use. But it is fun to dream...
 

unphased

Active Member
Jun 9, 2022
148
26
28
Haha, yeah, I agree: once the budget allows for it, Threadripper is the dream. I was even trying to justify the 24-core Threadripper 7000 just to get on the platform and have access to the 80+ PCIe lanes... and then you realise the initial CPU + motherboard + RAM cost will easily set you back $3k+ :(. Being realistic, I still need to max out my Intel W680 system first; I still have two PCIe 5.0 x8 slots (x8/x8) to fully use. But it is fun to dream...
Yeah I'm "behind the times" on pcie 4 and ddr4 over here because I invested rather heavily in zen 3. historically i tend to go in bursts because the intensity of my computer hardware obsession comes and goes like waves. Recent AI developments are the start of some changes that are probably going to shake that trend up for me though. Not sure what the world will look like 5 years from now.
 

KingKaido

New Member
Nov 24, 2023
18
15
3
Also, does it make sense for the 9305-24i to be more expensive than the 9500-16i, seeing as the latter is PCIe 4.0 and newer? Or is the fact that the 24i has 6x MiniSAS HD ports (versus 2x SFF-8654 ports) more desirable and flexible?

I am drawn more to the 9500-16i because a single SFF-8654 cable can be broken out into 8 SATA/SAS cables or 2 U.2/M.2 cables. If my theory is correct, you could run 8 HDDs in RAID 10/5/6 (or the ZFS equivalents) and then have 2 NVMe drives in RAID 1/0 as a superfast pool or cache, all via one HBA card in a PCIe 4.0 x8 (15.754 GB/s) slot. In my use case, a TrueNAS VM in Proxmox, that is the only card I would need to pass through for everything, which is very exciting.
I have learnt a lot about bifurcation and PCIe switching/PLX switch cards in the last few days and thought I'd share the information.

Firstly, in response to my original post, it makes sense why PCIe 3.0 x8 cards (9207-8i, 9300-16i, 9305-16/24i, etc.) are all most people need in terms of bandwidth:
Version | x1         | x2         | x4          | x8          | x16
3.0     | 0.985 GB/s | 1.969 GB/s | 3.938 GB/s  | 7.877 GB/s  | 15.754 GB/s
4.0     | 1.969 GB/s | 3.938 GB/s | 7.877 GB/s  | 15.754 GB/s | 31.508 GB/s
5.0     | 3.938 GB/s | 7.877 GB/s | 15.754 GB/s | 31.508 GB/s | 63.015 GB/s
Also, PCIe 2.0 x8 is 3.938 GB/s, for the older-gen HBAs.

I forgot that HBAs are just 'dumb' cards that split bandwidth equally. With a PCIe 3.0 x8 card (~8 GB/s of bandwidth) and SAS expanders/backplanes, you can easily saturate 24 drives, because 8000/24 = 333.33 MB/s of bandwidth allocated to each drive, which is higher than the ~250 MB/s peak of modern SATA HDDs (unless you start to use dual-actuator or 10K drives, but those are not as common). If you plan to use a maximum of 16 HDDs, a PCIe 2.0 x8 card is also enough, because 4000/16 = 250 MB/s, which is roughly the peak of modern hard drives. Things change once you start to mix HDDs and SATA 6Gb/s SSDs: if, for example, you are running 8 HDDs and 4 SATA SSDs via an HBA in a PCIe 2.0 x8 slot, then 4000/12 = 333.33 MB/s, so while the 8 HDDs can fully use their 333.33 MB/s share, the SSDs will be bottlenecked; they can easily reach 550 MB/s, but because HBAs just split the bandwidth equally, they'll be limited to 333.33 MB/s. That's why a PCIe 3.0 x8 HBA would be better: the 12 drives would each get a maximum bandwidth allocation of 666.66 MB/s (an annoying sequence of numbers), and you'd be able to saturate both types of drive fully.

I must add that HBAs split the bandwidth equally across their ports. Take a 9305-16i, which is a PCIe 3.0 x8 card with 4 MiniSAS HD ports, so each port has a maximum theoretical bandwidth of 2 GB/s. You could allocate 2 ports (4 GB/s) to a SAS expander for HDDs and run 16 HDDs at 250 MB/s per drive, and the other 2 ports to 8 SSDs at a maximum of 500 MB/s per SSD (which is very close to SATA 6Gb/s speeds).
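A minimal Python sketch of the even-split arithmetic above, assuming the simplified model where host bandwidth is divided evenly per port and then per drive (a real HBA schedules traffic dynamically):

```python
def per_drive_bandwidth(host_gbps: float, ports: int, drives_per_port: int) -> float:
    """Per-drive share in MB/s under a naive even split of host bandwidth per port."""
    per_port_gbps = host_gbps / ports
    return per_port_gbps * 1000 / drives_per_port

# 9305-16i-style card in a PCIe 3.0 x8 slot (~8 GB/s), 4 MiniSAS HD ports (2 GB/s each):
# 2 ports feed a SAS expander with 16 HDDs (8 per port), 2 ports feed 8 SSDs (4 per port).
print(per_drive_bandwidth(8.0, ports=4, drives_per_port=8))  # 250.0 MB/s per HDD
print(per_drive_bandwidth(8.0, ports=4, drives_per_port=4))  # 500.0 MB/s per SSD
```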

[see the edit below for a newer opinion on HBAs & SAS Expanders \/]

So, going back to my original post: the 9500-16i is PCIe 4.0 x8, so ~16 GB/s split between 2 x8 SFF-8654 SlimSAS ports, i.e. 8 GB/s each. If you were to use one of the SFF-8654 ports for 2 U.2 / 2 M.2 drives, then because the bandwidth is split equally, each U.2/M.2 NVMe will run at PCIe 4.0 x2 (effectively PCIe 3.0 x4), i.e. 4 GB/s, which is annoying for RAID 1/mirror purposes because the maximum sequential read speeds are held back (unless you are fine with that speed). On the other hand, using an x8 SFF-8654 SlimSAS to 2x MiniSAS HD SFF-8643 cable and an Adaptec AEC-82885T SAS expander (which is pretty cool because it supports both internal and external expansion), you can easily connect 24-36 HDDs or 14-16 SATA SSDs, or mix and match, with the massive 8 GB/s of bandwidth available. Again, HBAs split the bandwidth equally per port, so you'd have to bear that in mind.

So, back to the bifurcation dilemmas we have. A PCIe switch card / PLX switch card can dynamically allocate the bandwidth where it's actually needed, but the problem is that those cards are expensive. For example, Highpoint's lineup of PCIe 4.0 PLX switch cards for M.2/U.2 is expensive because they are filling a niche (I'll explain why it's a niche later): the Rocket 1580, which can support 8 U.2 drives, costs $600; the Rocket 1508, which supports 8 M.2 drives, costs $800; and the Rocket 1504, which supports 4 M.2 drives, costs $640. These are very expensive because they use PCIe 4.0, which is still new for PLX chips. They are especially cool because you can put one in an x8 slot and still be fine bandwidth-wise, depending on your use case. The reason you can find a lot of PCIe 3.0 PLX cards is that it's an old chip that has been produced for years now and is therefore cheap; also, PCIe 3.0 is still fast if you are limited by 10Gb/25Gb network speeds. You can actually get external NVMe enclosures for U.2 drives (which have PCIe switching built in), but they are very expensive and PCIe 3.0, and the C-Payne PCIe 4.0 PLX switch devices are again very expensive. Still, PLX switching is very cool because the bandwidth is managed dynamically and, in theory, you can use any device in those x16 slots (although it comes with a latency cost).

The reason cards that don't require bifurcation are so niche these days is where server/workstation platforms have been going: EPYC, Threadripper, and Xeon have had 64-128 lanes for a couple of generations now (and have therefore killed the need to expand I/O past its normal hardware limit), and because older EPYC or Threadripper parts go relatively cheaply on marketplaces, it just makes sense to get a 2nd-gen EPYC and a decent motherboard, and now you have 128 PCIe 4.0 lanes; paired with cheaper bifurcation cards/splitters/adapters, it's very appealing. I was listening to the MLID podcast with a former AMD product manager, and he basically said the market/demand dictates a lot of things, and the reason consumer AMD and Intel won't go past 30+ PCIe lanes these days is that most consumers just don't need it (it's definitely possible, though); if they do, they just need to move up to TR or EPYC, which has a crazy amount of I/O potential.

But anyway, if you read all of this, I appreciate you :)

[Jan/24 Edit:] Some corrections based on new information:

The '8i' or '16i' in an HBA's name refers to how many SAS 12Gb or SATA 3/6Gb lanes it can handle, so the maximum bandwidth for an 8i card (e.g. the 9207-8i) is 8 x 12Gb = 9600 MB/s, split across 2 ports, which is 4800 MB/s per port on the HBA (the per-port maximum is useful when using a SAS expander). Similarly, a 16i card would be 16 x 12Gb = 19200 MB/s. That said, the actual maximum throughput usable by the CPU/system is limited by the PCIe link speed, e.g. 3.0 x8 = 8000 MB/s. So when using just an HBA by itself, say a 16i card connecting 12 HDDs and 4 SSDs on a 3.0 x8 (8 GB/s) card, you should still be okay, because the HBA allocates each connection up to SAS 12Gb or SATA 3/6Gb of bandwidth, but you are limited overall by the PCIe link bandwidth.

The best way to think about HBAs and SAS expanders is like a network switch trying to access the internet: high-speed uplinks (4i, 8i, 16i, or 24i), ultimately limited by the WAN speed (the PCIe link bandwidth). If you take a 9207-8i, which has 2x 4i ports, and you are only using SATA 6Gb/s drives (8 x 6Gb/s = 4800 MB/s), then if you allocate both ports to a SAS expander, you can connect 19 HDDs (250 MB/s each) that can all be saturated fully, all the time. You can also connect 24 or 48 drives if you don't mind the 200 MB/s or 100 MB/s of bandwidth allocated to each drive; or, if the drives don't all need the bandwidth at the same time due to mixed workloads, they can all run together without bottlenecks.
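The same idea as a hedged sketch, using the thread's rule of thumb of ~100 MB/s per Gb/s for 8b/10b-encoded SAS/SATA links; the usable ceiling is whichever is lower, the lane total or the PCIe link:

```python
def hba_ceiling_mbps(lanes: int, gbps_per_lane: float, pcie_mbps: float) -> float:
    """Usable ceiling: the lesser of total SAS/SATA lane bandwidth and the PCIe link.
    Uses the thread's rule of thumb of ~100 MB/s per Gb/s (8b/10b-encoded links)."""
    return min(lanes * gbps_per_lane * 100, pcie_mbps)

# 9207-8i: 8 lanes negotiating SATA 6Gb/s, card in a PCIe 3.0 x8 slot (~8000 MB/s):
ceiling = hba_ceiling_mbps(8, 6.0, 8000)   # 4800 MB/s -> limited by the SATA lanes, not the slot
print(ceiling)
print(ceiling // 250)                      # ~19 HDDs at 250 MB/s can all run flat out
print(ceiling / 24, ceiling / 48)          # 200 MB/s or 100 MB/s each across 24 or 48 drives
```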

This Reddit thread has some key information in it.
 
Last edited:

odditory

Moderator
Dec 23, 2010
384
70
28
That is a really solid setup, and yeah, I think consumer AMD and Intel (W680 for ECC support) is where it's at these days; EPYC or Threadripper is just too expensive these days, or the idle power would be a lot higher compared to Zen 3/4 and Alder Lake/Raptor Lake systems, plus these new CPUs are crazy fast. The only annoying thing is the 128GB RAM limit due to 4 slots and a max of 32GB ECC RAM sticks.
I've been unexcited by both Intel W790 and AMD TR7000/WRX90 and their high early adopter costs, mostly because Gen5 PCIe storage devices aren't there yet to take full advantage and will take a few more years for throughput to mature.

The middle ground option for me last year was a TR 5955WX ($1000) + an eBay ASUS SAGE WRX80 ($350) + a $65 cooler + 256GB (8x32GB) of DDR4-3600 G.Skill sticks I had lying around, and it's no-compromises PCIe lanes galore (7 slots of x16 PCIe 4.0). I have it packed with 4x $50 ASUS Hyper M.2 Quad PCIe 4.0 cards (4x 8TB NVMe drives per card), an LSI 9500-8e, an HBA for 4x 15.36TB U.2, a Mellanox 2x40Gb NIC, and a GPU (fed off one of the M.2 slots). It's been rock solid.

Hey, first post but i have been following this wonderful thread for a while :)
Excellent, contributory posts right out of the gate. Glad you registered and hope you post a lot more! ;)
 
  • Like
Reactions: KingKaido

KingKaido

New Member
Nov 24, 2023
18
15
3
I've been unexcited by both Intel W790 and AMD TR7000/WRX90 and their high early adopter costs, mostly because Gen5 PCIe storage devices aren't there yet to take full advantage and will take a few more years for throughput to mature.

The middle ground option for me last year was a TR 5955WX ($1000) + an eBay ASUS SAGE WRX80 ($350) + a $65 cooler + 256GB (8x32GB) of DDR4-3600 G.Skill sticks I had lying around, and it's no-compromises PCIe lanes galore (7 slots of x16 PCIe 4.0). I have it packed with 4x $50 ASUS Hyper M.2 Quad PCIe 4.0 cards (4x 8TB NVMe drives per card), an LSI 9500-8e, an HBA for 4x 15.36TB U.2, a Mellanox 2x40Gb NIC, and a GPU (fed off one of the M.2 slots). It's been rock solid.
Wow! That is an amazing system; you have everything, haha: amazing ST and MT performance, plenty of storage that is fast and silent, and you even have a GPU. Literally the dream. What is your idle power like with all the things connected? Also, how do you connect the 4x U.2 externally? Do you use one of those Highpoint U.2 enclosures or something else?

And yeah, I agree: W790 and TR 7000 definitely don't make sense right now unless you need the performance for business purposes.

My next server/homelab system (hopefully sometime next year) is built around either an EPYC Rome or Milan F-series CPU like the 7F72 or 74F3, which are 24C/48T high-frequency chips, which should offer a lot of performance paired with an ASRock Rack ROMED8-2T motherboard; with the benefit of it using DDR4, filling up the 8 DIMM slots with 32GB or 64GB modules shouldn't be too expensive altogether.

But recently I've been looking at the EPYC Siena 8004 platform, which uses Zen 4c cores and is built around power efficiency (helpful in a 24/7 server/homelab) and stable clocks (around 2.5-3 GHz all-core). It has access to 96 PCIe 5.0 lanes, 6-channel memory, and DDR5 RAM, and based on the proposed pricing (especially for the 8-24 core parts) it looks very interesting.

I'm thinking of pairing it with an ASRock Rack SIENAD8-2LT2 motherboard or a Gigabyte ME03-CE1 motherboard (which has the benefit of more x16 slots and 12 DIMM slots). It shouldn't be a costly upgrade, but getting DDR5 RDIMMs will be the expensive part, as they're bleeding edge right now... My only other hesitations with this platform are the possibility of CPU upgrades as Zen 5c or even 6c eventually comes out (but normally EPYC sockets last 2 cycles max), and performance vs. efficiency compared to Milan or Rome, but we won't know that until general availability, Jan-Feb next year.

The system will be paired with 25/40/100Gb Mellanox cards and a lot of M.2, or hopefully the 30TB QLC U.2 drives from Solidigm get cheaper and more available, and it'll be a super solid and efficient server/homelab for years to come. Then all I'd need after that is an Nvidia GPU for the server, or an Intel N100/N305 mini PC for low-power Plex/transcoding duties.

If I wanted to build a desktop system (one that can be turned off whenever, saving money on energy), I could just use a consumer AMD or Intel CPU (for super high ST and decent MT performance) and a motherboard with PCIe 5.0 x8/x8/x4 slots: a nice GPU for GPU-accelerated software or gaming, a 25Gb/40Gb NIC to connect to the server, and the option to add whatever PCIe card you may need, like a capture card or extra NVMe M.2/U.2, plus it'll have 2-3 onboard M.2 slots. Or use a laptop with a USB4/TB3 to 10Gb/25Gb network adapter. It'll be the ultimate setup!
 

unphased

Active Member
Jun 9, 2022
148
26
28
Hmm so does this mean that running my x8 3.0 HBA (LSI 9201 or some such? It's Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05) in Linux) on x4 lanes is going to limit its bandwidth? It's clear it shouldn't impact HDDs at all.

Let's suppose I run this card off an x1 slot, which at 3.0 signaling can provide roughly 1GB/s. If I plug 4 drives into it, will each drive get 250MB/s, or will they each get around 120MB/s? (The HBA card can host 8 drives.)

The question is mostly academic, since 8 ports split across x4 3.0 lanes still provides 500MB/s each, which still mostly saturates a SATA SSD. But the question becomes relevant once we're talking about a 16-port HBA behind x4 lanes.

Also, I'm already connecting this HBA behind a PLX card, haha. Works great.
 

KingKaido

New Member
Nov 24, 2023
18
15
3
Hmm so does this mean that running my x8 3.0 HBA (LSI 9201 or some such? It's Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05) in Linux) on x4 lanes is going to limit its bandwidth? It's clear it shouldn't impact HDDs at all.

Let's suppose I run this card off an x1 slot, which at 3.0 signaling can provide roughly 1GB/s. If I plug 4 drives into it, will each drive get 250MB/s, or will they each get around 120MB/s? (The HBA card can host 8 drives.)

The question is mostly academic, since 8 ports split across x4 3.0 lanes still provides 500MB/s each, which still mostly saturates a SATA SSD. But the question becomes relevant once we're talking about a 16-port HBA behind x4 lanes.

Also, I'm already connecting this HBA behind a PLX card, haha. Works great.
It's an interesting one, but the way I interpreted it based on the manual (I definitely could be wrong): for example, if you have an 8i card (which has 8 lanes of SATA/SAS) split across 2 ports on the card (x4 each), the bandwidth is split between the 2 ports. So if you were to put it in a PCIe 3.0 x4 slot, which is 4GB/s, then for 4 hard drives, like you said, via 2 ports that's 2GB/s each, and /4 is 500 MB/s, so you'll be fine. In the x1 example (also, I don't know if an x1 slot provides enough power to run the card), you'd be able to run 4 HDDs at 250MB/s only if you put 2 HDDs on each port, due to the way it's allocated; if you ran all 4 via 1 port on the HBA, you'd be limited to 125MB/s, so it's best to split them over 2 cables.
Hopefully that makes sense
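Under that simple per-port split model (a sketch of the assumption used above, not necessarily how the controller actually schedules traffic), the x1 numbers work out like this:

```python
# The simple split model from the posts above: slot bandwidth divided evenly per HBA port,
# then per drive on that port (a real controller schedules traffic dynamically).
def per_drive_mbps(slot_mbps: float, drives_on_port: int, card_ports: int = 2) -> float:
    return (slot_mbps / card_ports) / drives_on_port

# PCIe 3.0 x1 slot (~1000 MB/s) feeding an 8i HBA with 4 HDDs attached:
print(per_drive_mbps(1000, drives_on_port=2))  # 250.0 MB/s each when split 2+2 across both ports
print(per_drive_mbps(1000, drives_on_port=4))  # 125.0 MB/s each when all 4 share one port
```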
 
Last edited:
  • Like
Reactions: unphased

KingKaido

New Member
Nov 24, 2023
18
15
3
This might not be the most elegant switch available, but look at this little thing :)
That is pretty cool! It has a built-in PCIe switch to handle 4 M.2 drives. The only minor downside is the PCIe 3.0 x4 speed, but 32Gb/s (4GB/s) is good enough, especially for 10/25Gb networks.

Edit: if my theory is correct, using an LSI 9500-16i HBA with one of the SFF-8654 ports going to a 2x U.2 cable, and then to 2 of those U.2-to-4x-M.2 adapters mentioned above, you could have 8 NVMe drives connected to the HBA (using only one of its 2 ports), with up to 64Gb/s (8GB/s, 2x PCIe 3.0 x4) of bandwidth between them in a RAID or ZFS RAID configuration... very, very interesting :)

Edit 2: thinking about it again, it may not work like that ^, because the HBA might negotiate the 2 connections at PCIe 3.0, so it'll be half that in total (2GB/s allocated to each U.2 connector) and a max of 4GB/s between the 2 U.2 drives. If only they made a 4.0 version of the adapter.



Edit 3: I can see the appeal of just using your motherboard's M.2 slots to connect to a U.2-to-4x-NVMe adapter, especially using something like this \/, and getting the full PCIe 3.0 x4 bandwidth (~32Gb/s, ~4GB/s).
 
Last edited:
  • Like
Reactions: unphased

fringfrong

New Member
Aug 28, 2016
9
2
3
34
Does anyone have any info on the additional latency these adapters introduce? My application is very latency-sensitive. I'm considering using something like the Highpoint R1580 (PCIe 4.0 x16) to drive 8x Optane 905p (PCIe 3.0 x4), but I'm wondering whether the added latency would erase their benefit over modern SSDs.
 

KingKaido

New Member
Nov 24, 2023
18
15
3
Does anyone have any info on the additional latency these adapters introduce? My application is very latency-sensitive. I'm considering using something like the Highpoint R1580 (PCIe 4.0 x16) to drive 8x Optane 905p (PCIe 3.0 x4), but I'm wondering whether the added latency would erase their benefit over modern SSDs.
I don't think it'll impact latency noticeably, but I could be misinformed. Also, instead of getting the Highpoint card, you could get a similar PCIe 3.0 version of the card for a lot cheaper (the CEACENT CNS44PE16, with a built-in PLX8748 switch), seeing as you'll be using PCIe 3.0 devices.
 
  • Like
Reactions: unphased

fringfrong

New Member
Aug 28, 2016
9
2
3
34
I don't think it'll impact latency noticeably, but I could be misinformed. Also, instead of getting the Highpoint card, you could get a similar PCIe 3.0 version of the card for a lot cheaper (the CEACENT CNS44PE16, with a built-in PLX8748 switch), seeing as you'll be using PCIe 3.0 devices.
I was considering the 4.0 card for upgradability, and I think the 3.0 uplink (15.5GB/s) can bottleneck the drives in sequential reads (8x 2.6GB/s = 20.8GB/s), but maybe it's not worth the extra cost in this case. Thanks.
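A quick check of that bottleneck math, taking ~2.6 GB/s per 905p as in the post above:

```python
# Rough bottleneck check using the figures from the post above.
drives = 8
optane_905p_seq_read = 2.6    # GB/s per drive (approximate)
gen3_x16_uplink = 15.754      # GB/s
gen4_x16_uplink = 31.508      # GB/s

aggregate = drives * optane_905p_seq_read          # 20.8 GB/s
print(aggregate > gen3_x16_uplink)  # True  -> a Gen3 x16 switch caps aggregate sequential reads
print(aggregate > gen4_x16_uplink)  # False -> a Gen4 x16 card leaves headroom
```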
 

KingKaido

New Member
Nov 24, 2023
18
15
3
With the help of this forum I found these 2 interesting products: the QNAP QDA-U2MP 2x M.2 to U.2 adapter and the OWC U.2 Shuttle 4x M.2 to U.2 adapter, the latter being very similar to the Viking Enterprise U20040 adapter mentioned above, but with the benefit that OWC sells into the consumer/prosumer/professional market, so it will be readily available and have support, warranty, etc.; it costs roughly $120-140. They both look like they are using PCIe switch chips: the QNAP one mentions an ASMedia 2812 for the 2 M.2 drives, and the OWC one mentions the ASMedia ASM2812X in the specs section. They are also both PCIe 3.0 x4, but we've established that for a lot of use cases the 32Gb/s max theoretical speed far exceeds 10Gb/25Gb network connections, and you can reach 40/50Gb with RAID 0 / ZFS striping.

Now for the exciting theoretical part... I wonder if the OWC U.2 Shuttle pairs well with the Ceacent PCIe 3.0 x16 to 8x U.2 card (they both use PCIe switches, so the PLX card would only 'see' 8 U.2 connections for its PCIe switching, and the OWC Shuttle would break each U.2 connection into 4 M.2). Using SFF-8654 8i to 2x U.2 cables, you could in theory connect 8 OWC U.2 Shuttles in 8 3.5" drive bays, giving you access to 32 M.2 drives. If you pair it with 4TB M.2 drives (8TB M.2 are wildly expensive) as opposed to U.2 drives, you lose the option of PLP for high sustained writes and for not losing data if power cuts out randomly (but that can be combated with a UPS and/or redundant PSUs), with the benefit of buying new drives and having a 3-5 year warranty on each drive, all in your name (and maybe quicker warranty replacements).

So, 4TB x 32 drives: that's up to 128TB of raw capacity, or if you do a 4-drive RAID5/Z1 per U.2 Shuttle and stripe them (RAID50), that's 96TB of fast NVMe storage (I think up to 12-15GB/s sequential reads if set up correctly). If you catch a nice deal on NVMe drives, I bet it would be cheaper than 96-128TB of U.2 drives. The only downside would be a double latency penalty from the 2 PCIe switches, but I wonder how that would manifest in reality.
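The capacity math as a short sketch (it ignores TB vs. TiB and filesystem overhead, and assumes one parity drive per 4-drive group):

```python
# Capacity math for the hypothetical 8-shuttle build described above.
shuttles = 8
m2_per_shuttle = 4
tb_per_m2 = 4

raw_tb = shuttles * m2_per_shuttle * tb_per_m2            # 128 TB raw
# One parity drive per 4-drive RAIDZ1/RAID5 group, groups striped together (RAID50-style):
usable_tb = shuttles * (m2_per_shuttle - 1) * tb_per_m2   # 96 TB usable
print(raw_tb, usable_tb)
```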

And for the interesting part: since it uses 3.5" drive bays, all you need is a case that can hold those 8, and now you have a near-silent homelab/server with 128TB of raw SSD storage. The loudest parts of the computer would be the fans (put some Noctuas in there), maybe some coil whine if you have a GPU (which can be dampened by a case with thick panels), and the PSU (if you use a high-wattage one, the fan may never turn on).

Very interesting and exciting, especially as all it takes is one PCIe 3.0 x16 slot for it all to work (if the theory is correct), or, worst case, using M.2 to U.2 cables.
 
Last edited:
  • Like
Reactions: dmitry.n.medvedev