Looking to populate older C222 board with NVMe using PCIe adapter - NVMe with PCIe Gen4 x4 to PCIe Gen3 x8 adapter?


Alfa147x

Active Member
Feb 7, 2014
192
39
28
Hi all

I'm trying to extend the life of my X10SLL-F and want to replace the GPU with a blazing-fast 2TB NVMe drive. This is my VM host, and the NVMe would host a NAS cache too.

To fully utilize the slot, I want to find something that maxes out the PCIe 3.0 x8 (in an x16 slot) on the motherboard.

My questions:
  • Can I adapt a Samsung 980 Pro 2TB SSD from PCIe Gen4 x4 to the mobo's PCIe Gen3 x8?
    • If it downgrades to PCIe Gen3, does it run at Gen3 x8 or Gen3 x4?
  • What options do I have to max out a Gen3 x8 slot?

---
1. PCIe Gen3 x8 has a maximum theoretical bandwidth of about 7880 MB/s.
2. The Samsung 980 Pro is a PCIe Gen4 x4 SSD rated at up to 7000 MB/s read and 5000 MB/s write.
3. M.2 NVMe SSDs like the 980 Pro use at most 4 PCIe lanes; a PCIe Gen3 x4 link has a maximum theoretical bandwidth of about 3940 MB/s.
4. In a Gen3 slot the 980 Pro therefore negotiates a Gen3 x4 link and tops out around 3940 MB/s, well below its rated Gen4 speeds, no matter how wide the slot is.
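The figures above fall out of the per-lane rates; a quick sketch (the per-lane numbers are the commonly quoted effective rates after encoding overhead, not exact line rates):

```python
# Approximate effective per-lane PCIe bandwidth in MB/s (after encoding overhead).
PER_LANE_MBPS = {
    "Gen3": 985,   # 8 GT/s, 128b/130b encoding
    "Gen4": 1969,  # 16 GT/s, 128b/130b encoding
}

def link_bandwidth(gen: str, lanes: int) -> int:
    """Theoretical max throughput of a PCIe link in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

print(link_bandwidth("Gen3", 8))  # 7880 MB/s: the x8 slot's ceiling
print(link_bandwidth("Gen3", 4))  # 3940 MB/s: what an x4 NVMe drive sees in that slot
print(link_bandwidth("Gen4", 4))  # 7876 MB/s: the 980 Pro's native link
```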
 
Last edited:

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,091
1,507
113
Sequential reads/writes really aren't as important as you might imagine. I'd be surprised if there's any noticeable difference between PCIe 3.0 and PCIe 4.0 for what you're doing. I wouldn't bother chasing benchmarks and just use a single SSD as-is.

If you insist, Intel P3608 or a bifurcating M.2 adapter.
 
  • Like
Reactions: Alfa147x

Tech Junky

Active Member
Oct 26, 2023
351
120
43
Simple answer is no.

Gen whatever won't let you make an x4 device into an x8 electrically.

If you're running AMD, you can get a dual-socket card to put M.2 drives into and split the slot into 2 drives for expansion. Your other option is a card with a PLX controller that runs up to 4 drives, but those are expensive at $600 just for the card.

I can think of other options depending on what you want to achieve, though. If it's speed, you'll need a platform rebuild and upgrade; if it's capacity, there are U.2 drives that beat M.2 in terms of $/TB once you hit higher capacities, around 8TB.
 
  • Like
Reactions: Alfa147x

Alfa147x

Active Member
Feb 7, 2014
192
39
28
Some research:

1. Intel Optane SSD 905P

- Max size: 1.5TB
- Price: $700 USD
- Half Height compatible: Yes, with AIC form factor
- Price per TB: Approximately $466
- Max Read / Max Write: Up to 2600 MB/s / 2200 MB/s
- PCIe form factor and speed: PCIe 3.0 x4
- Random Read/Write: Up to 575,000 IOPS / 555,000 IOPS
- Power Consumption: 5.8W at idle, up to 13.4W during read test and 17.5W during write test

2. QNAP QM2 Dual M.2 22110/2280 PCIe Gen3 x8 NVMe SSD Expansion Card with two Crucial P3 Plus 2TB NVMe SSDs
- Price of Adapter: $200
- Price of SSDs (2x Crucial P3 Plus 2TB): $85 * 2 = $170
- Total Price: $370
- Half Height compatible: Yes; low-profile and full-height brackets are bundled
- Price per TB: $370/4TB = Approximately $92.5
- Max Read / Max Write (per SSD): Up to 3400 MB/s / 3000 MB/s
(Note: These speeds are per SSD and don't take into account the potential loss by the adapter.)
- PCIe form factor and speed (Native for Crucial P3 SSD): PCIe 3.0 x4
- Random Read/Write (Crucial P3 SSD): Up to 430,000 IOPS / 500,000 IOPS
- Power Consumption (Crucial P3 SSD): 0.43W when idle, averages at 2.3W, max up to 3.4W

3. Dell Samsung 1.6TB PM1725b PCIe Gen3 x8 NVMe SSD
- Price: $179
- Half Height compatible: Yes, with the correct bracket
- Price per TB: Approximately $112
- Max Read / Max Write: Up to 6400 MB/s / 2900 MB/s
- PCIe form factor and speed: PCIe 3.0 x8
- Random Read/Write: Up to 1,000,000 IOPS / 180,000 IOPS
- Power Consumption: Under 23W during active usage.
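The $/TB figures above are easy to sanity-check; a quick sketch (prices and capacities copied from the three options listed):

```python
# (price in USD, capacity in TB) from the comparison above.
options = {
    "Intel Optane 905P": (700, 1.5),
    "QNAP QM2 + 2x Crucial P3 Plus": (370, 4.0),
    "Dell/Samsung PM1725b": (179, 1.6),
}

for name, (price, tb) in options.items():
    print(f"{name}: ${price / tb:.1f}/TB")
```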
 

Alfa147x

Active Member
Feb 7, 2014
192
39
28
Can I use the QNAP QM2 Dual M.2 (QM2-2P-384) on a non-QNAP device?

And can I use PLX cards with my X10SLL-F?
 
Last edited:

Alfa147x

Active Member
Feb 7, 2014
192
39
28
Sequential reads/writes really aren't as important as you might imagine. I'd be surprised if there's any noticeable difference between PCIe 3.0 and PCIe 4.0 for what you're doing. I wouldn't bother chasing benchmarks and just use a single SSD as-is.

If you insist, Intel P3608 or a bifurcating M.2 adapter.
How do you feel about using 2x 2TB M.2 NVMe drives in an M.2 bifurcating adapter?
Any favorite half-height bifurcating adapters out there?
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
Can I use the QNAP
I used one of their AC Wi-Fi cards as an AP when I went DIY and it worked fine. As long as the mobo allows bifurcation, you should be able to find a dual card for about $20. Most of them come with both brackets.

PLX is mainly useful when you're running Intel, since Intel doesn't bifurcate on consumer chipsets. The controller is basically a switch that lets the drives share the same slot.
 

nexox

Well-Known Member
May 3, 2023
674
278
63
Assuming that board supports bifurcation, you can also get an adapter with two ports for cables that connect to U.2 drives. They use more power and take up a bit more space, but there are a lot of good deals to be found, and they're all enterprise grade with PLP and loads of endurance. The adapter and a couple of decent cables should total around $50.
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
The adapter and a couple decent cables should probably total around $50.
I just did this recently: the M.2 adapter was $20 and the 50cm cable was another $30, but I went with OCuLink for full performance with a Gen4 drive that hits 6.5GB/s. There are also PCB adapters, around $70, that you mount the drives directly to, holding 1/2/4 drives. The other reason to go U.2 is that capacity and price per TB get better once you hit a certain level: 8TB U.2 drives run about $400 where the M.2 runs $800, there aren't any M.2 options beyond 8TB, and current U.2 options go up to 30TB if you have the budget for them.
 

nexox

Well-Known Member
May 3, 2023
674
278
63
Right, the $50 number is for two ports of PCIe 3.0; 4.0 U.2/U.3 drives don't currently make a lot of sense to me in terms of price to performance.
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
@nexox

It's a bit of a mess trying to squeeze them into a consumer platform. I somehow broke 2x Micron 7450 drives while playing around with things: the first lasted only a couple of hours, and the second made it about a week of use. I ended up coughing up more $ and switched to Kioxia and haven't had any issues since. On the other hand, someone on another board I was talking with got 2 drives around the same time and had no issues with Micron.

The adapters and cables are a mess, as you have 3 options (SAS 12G, SAS 24G, or OCuLink) and the speeds associated with them will determine how well the switchover to U.2 drives feels. OCuLink, though, will handle up to 8GB/s, which is a single Gen4 drive's max. When digging into it I did find a dual x8 card that runs about $50 and dual-output cables for ~$50/ea, but that's more for a Gen4 x16 slot.

The other confusing thing is that you can mount both U.2 and U.3 drives on a U.2 adapter, but only a U.3 drive on a U.3 adapter.

Then there are the seller descriptions, which can lead down a rabbit hole. IIRC the M.2 adapter I went with said Gen3 but works fine as a Gen4 link to the drive in terms of speed. Another issue is that drive speeds are all over the place, from low Gen3 to high Gen4, with asymmetrical R/W numbers.

Temps on the Kioxia drive sit around 40-45C with it inside the case without any real active cooling attached. The person with the Microns has rigged up an active cooling mount to force air across the drives and still has higher-than-desired temps to deal with.
 

nexox

Well-Known Member
May 3, 2023
674
278
63
The variety of different U.2/U.3 cables is definitely out of control, and I've heard not-great things about how well 4.0 runs over some cables, while the lower operating frequency of 3.0 tends to be a lot more forgiving. I don't have much experience with Micron drives, but aside from the very specific case of the 7300 providing PLP in an M.2 2280 form factor, I don't think I'd bother with anything outside their 9300/9400 products.
 

Alfa147x

Active Member
Feb 7, 2014
192
39
28
Okay, sweet. Thanks for the food for thought. The added complication doesn't make the U.2/U.3 option seem worthwhile, as it doesn't bring my $/TB down. I'm using a half-depth 2U chassis, so I don't have a ton of room for cables or to mount a drive.
Currently leaning towards a 4x NVMe card, which will give me room to expand.

This is also looking promising:
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,091
1,507
113
How do you feel about using 2x 2TB M.2 NVMe drives in an M.2 bifurcating adapter?
Any favorite half-height bifurcating adapters out there?
Personally, I think all the options you have mentioned so far are overkill and will be wasted money.

You have a 10 year old system that caps out at 32GB of RAM, which means it's pretty limited as to how much you can run on it. Just get yourself a single inexpensive SSD if you want to get a bit more life out of it.
 
  • Like
Reactions: Alfa147x

Alfa147x

Active Member
Feb 7, 2014
192
39
28
Personally, I think all the options you have mentioned so far are overkill and will be wasted money.

You have a 10 year old system that caps out at 32GB of RAM, which means it's pretty limited as to how much you can run on it. Just get yourself a single inexpensive SSD if you want to get a bit more life out of it.
I'm just exploring options and pricing them out. If I can, I'd go with dual-slot NVMe in a half-height adapter, but a cheap 1TB drive and a simple single-slot NVMe-to-PCIe adapter is not off the table.
 

Tech Junky

Active Member
Oct 26, 2023
351
120
43
not off the table.
So, $200 is your budget?

2U still has plenty of space for a single cable if you went with a U.2 drive that has capacity. Surely you can find a spot for a single 2.5" drive cage inside your pizza box.

2x 1TB M.2's is kind of silly when you can get a 4TB M.2 for under $200. Don't forget to count the number of PCIe lanes you have available, and check the board's specs to see whether adding a card auto-splits the lane count for the top slot.
 

nexox

Well-Known Member
May 3, 2023
674
278
63
Currently leaning towards a 4x NVMe card which will give me room to expand.
Since the slot is x8 electrical, you'll need a card with a PCIe switch to support four NVMe drives; that's considerably more expensive (something like 8-10x) than a dual-slot card using motherboard bifurcation. I haven't looked specifically for such a thing, but it seems like it would be difficult to fit a switch chip plus four 22110 drives on a low-profile card.
 

Alfa147x

Active Member
Feb 7, 2014
192
39
28
So, $200 is your budget?
Wide budget of $200-$400, but aiming for 4TB+ of space at approx $50-70/TB (depending on many factors: speed, durability, power consumption).

Uses:
  • ESXi VM boot drive host
  • Synology/Xpenology read/write cache via a virtual ESXi flash controller
    • Would prefer 2 drives so I can have the caches backed by redundant drives
    • I'm trying to cache the slow communication with the DAS, a Lenovo SA120

Notes:
The X10SLL-F does not support PCIe bifurcation, per a 1-year-old post on Reddit:
I unfortunately found out this board does not allow bifurcation.

I think I've settled on a 1x NVMe PCIe slot adapter, as my storage needs are currently closer to 2TB, but 4TB+ buys me some unplanned future growth. I hope to run this VM host hardware for another 3-5 years before upgrading.
 
Last edited:

Tech Junky

Active Member
Oct 26, 2023
351
120
43
Ok... well, the adage of "you get what you pay for" applies.

$400 gets closer to a robust system for what you want to do.
A 4TB Lexar NVMe for under $180 hits your target price for space pretty easily.
While not redundant, it will provide longevity, and you can always add another drive for RAID 1 mirroring later.

DAS is handy for expanding storage, but spinners will be the bottleneck, and/or the aging DAS if it doesn't do 10Gbps.

Switching to AMD will unlock bifurcation even on the cheap end, and this will save you on cards/adapters vs needing to go PLX. The PLX would cost you the same as rebuilding to AMD in most cases. AMD can be built for less than half the PLX cost if you get creative and don't need to be bleeding edge in terms of CPU.

Another thing to consider would be Thunderbolt, as you can get used cards on Amazon for $60 with 2 ports and 100W PD using the Gigabyte cards. This ups your game in terms of speed for peripherals on the cheap.

I would strongly advise going AMD, though, if this is the sort of thing you want to play with. It will be cheaper in the long run in terms of expanding capabilities. As long as you keep the top x16 open for drive adapters, you can get away with most any idea you come up with. x16 drive adapters that can be bifurcated are only $50 vs the PLX at $600+. With the proper card you also don't take the speed hit of funneling data through a PLX.

Power consumption varies depending on the load on the system. For instance, my setup tops out at ~350W but idles under 100W. If your primary concern is the electric bill, you can put the CPU into eco mode to drop it from 170W to ~65W and only shave 3-5% off the performance. Less power, less heat, less cooling, less consumption.
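Back-of-the-envelope on those eco-mode numbers (the electricity rate here is my assumption, $0.15/kWh; plug in your own):

```python
# Rough annual electricity cost at a sustained draw.
RATE_USD_PER_KWH = 0.15  # assumed rate; varies by region

def annual_cost(watts: float) -> float:
    """Cost in USD of drawing `watts` continuously for a year."""
    return watts / 1000 * 24 * 365 * RATE_USD_PER_KWH

# CPU package alone, per the 170W -> 65W eco-mode figures above:
print(f"stock: ${annual_cost(170):.0f}/yr, eco: ${annual_cost(65):.0f}/yr")
# prints "stock: $223/yr, eco: $85/yr" -> ~$138/yr saved for a 3-5% perf hit
```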
 
  • Like
Reactions: Alfa147x

Alfa147x

Active Member
Feb 7, 2014
192
39
28
Ok... well, the adage of "you get what you pay for" applies.

$400 gets closer to a robust system for what you want to do.
A 4TB Lexar NVMe for under $180 hits your target price for space pretty easily.
While not redundant, it will provide longevity, and you can always add another drive for RAID 1 mirroring later.

DAS is handy for expanding storage, but spinners will be the bottleneck, and/or the aging DAS if it doesn't do 10Gbps.

Switching to AMD will unlock bifurcation even on the cheap end, and this will save you on cards/adapters vs needing to go PLX. The PLX would cost you the same as rebuilding to AMD in most cases. AMD can be built for less than half the PLX cost if you get creative and don't need to be bleeding edge in terms of CPU.

Another thing to consider would be Thunderbolt, as you can get used cards on Amazon for $60 with 2 ports and 100W PD using the Gigabyte cards. This ups your game in terms of speed for peripherals on the cheap.

I would strongly advise going AMD, though, if this is the sort of thing you want to play with. It will be cheaper in the long run in terms of expanding capabilities. As long as you keep the top x16 open for drive adapters, you can get away with most any idea you come up with. x16 drive adapters that can be bifurcated are only $50 vs the PLX at $600+. With the proper card you also don't take the speed hit of funneling data through a PLX.

Power consumption varies depending on the load on the system. For instance, my setup tops out at ~350W but idles under 100W. If your primary concern is the electric bill, you can put the CPU into eco mode to drop it from 170W to ~65W and only shave 3-5% off the performance. Less power, less heat, less cooling, less consumption.
Thanks for the advice. I've just learned that ESXi 8 does not support my Haswell CPU :( so my plan to avoid upgrading for another 5 years needs reconsideration. I love PCIe lanes, so I will be looking to maximize the PCIe lanes/$ ratio.