PCIe lane math - Transition to SSD/NVMe NAS - how to best use existing hardware


thimplicity

Member
Jan 11, 2022
Hi,

I am currently running Proxmox on the following hardware:
  • Motherboard: ASRock Rack EPC612D4U
  • CPU: Intel Xeon E5-2640 v3 (20M cache, 2.60 GHz)
  • PCIe add-on card for NVMe drives: Supermicro AOC-SLG3-2M2 (holds up to two NVMe SSDs). So far I use this card with one NVMe drive for VMs and containers.
  • Boot Drive: 250GB SSD (I forgot the brand) connected via SATA. I was not able to boot from the NVMe for some reason.
  • Software: On top of Proxmox I am running TrueNAS (passing through the onboard HBA incl. HDDs) as well as multiple VMs and containers.

The CPU provides 40 PCIe lanes, but the motherboard has some limitations when combining things:
  • SLOT7: PCIe3.0 x16, auto switch to x8 when SLOT6 is occupied
  • SLOT5: PCIe3.0 x16, auto switch to x8 when the LSI3008 is populated
  • SLOT6: PCIe3.0 x8
  • The motherboard also supports bifurcation on all slots down to 4x/4x or 4x/4x/4x/4x
  • One slot is filled with the mentioned NVMe add-on card, and I am planning to add a dual 10GbE network card as well.

As my need for space is not huge, I would like to move towards NVMe drives going forward. I do not have enough drives to saturate 10GbE bandwidth with a large enough HDD RAID, so moving to NVMe drives for speed/throughput (apart from size and energy efficiency) is the main driver.

The initial idea is to fill up the remaining PCIe lanes with NVMe cards and drives. If I add the 10GbE card and keep the HBA running (for now), I should have 4 × x4 left (either all in SLOT7 if I leave SLOT6 empty, or two and two if I use dual NVMe cards instead of one quad NVMe card) - so four NVMe drives. If I get to a point where I no longer need the HBA, this would open up lanes for two more NVMe drives. A quick sanity check of that math is below.
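Here is a minimal sketch of the lane budget, with my assumptions in the comments (slot widths as I read them from the manual; the allocation is just my plan, not verified on the board):

```python
# Back-of-envelope lane budget for the EPC612D4U (slot widths per the
# manual as I read it; the allocation below is just my plan).
slots = [
    # (slot, width, planned use)
    ("SLOT5", 8, "dual 10GbE NIC (x8 because the onboard LSI3008 is populated)"),
    ("SLOT6", 8, "dual-M.2 card bifurcated 4x/4x -> 2 NVMe drives"),
    ("SLOT7", 8, "dual-M.2 card bifurcated 4x/4x -> 2 NVMe drives"),
]

nvme_lanes = sum(width for _, width, use in slots if "NVMe" in use)
for name, width, use in slots:
    print(f"{name} x{width}: {use}")
print(f"NVMe lanes: {nvme_lanes} -> {nvme_lanes // 4} drives at x4 each")
# Alternative: leave SLOT6 empty so SLOT7 stays x16 and put one quad
# M.2 card there instead - same four drives either way.
```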

Does this idea make sense? Is there a way to utilize the HBA SAS controller with NVMe drives? Will I run into problems with passing through the NVMes to TrueNAS?

I would appreciate some thoughts on defining the best setup to utilize the existing hardware. I would like to avoid moving to a new motherboard or CPU if I can.

Thanks in advance for any ideas.
 

Zer0_C00L

Member
Jun 18, 2023
I'd suggest putting the NVMe drives in the x16 slot, since x16-to-4x4 bifurcation cards are cheap and easy. I'd also suggest a single x8 SAS HBA and the fastest NIC you can justify. From memory there are some InfiniBand cards or something that are cheap and can do QSFP+ @ 40Gb (look into these, they're 'funny' and a bit of work, but apparently insane).
 

ziggygt

Member
Jul 23, 2019
I have no experience with bifurcation or this card, but from what I read, it seems like a card like this would fill the bill: JEYI Quad NVMe PCIe 4.0 Expansion Card, supports 4 NVMe M.2 2280 SSDs (eBay listing).

I have a TP-Link 10Gb network switch, and transfers between one system with a local Sabrent 1TB Rocket Q4 NVMe PCIe 4.0 M.2 2280 drive and another system seem limited by the network, as expected.

I have been thinking about the quad card for a potential future expansion, but I realize this is beyond my current budget.
 


nexox

Well-Known Member
May 3, 2023
The lane math works out. Keep in mind you can also use a PCIe switch card to get four NVMe drives on an x8 slot at the expense of some bandwidth (and money).

I would also suggest only buying the network card you need now; faster ones are getting cheaper all the time, and 10G NICs (the SFP+ variety, anyway) are cheap enough that you'll almost certainly come out ahead by waiting and buying twice. Plenty of 40G cards are also rather cheap, but then to run 10G you need an adapter to SFP+ that costs as much as an entire 10G card, so it's hard to see the sense in that.

That said, if your main goal is saturating 10G without burning too much energy, consider SAS SSDs. I don't know what HBA you have now, but it doesn't take a whole lot; even four 3Gb/s SAS-1 ports can provide the speed you want, and the SSDs can be a pretty good deal if you are careful to avoid the ones with unconventional block sizes. If you still want to do the NVMe thing, then U.2 drives might be a better choice than M.2, except the cables are still rather expensive.
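Rough math, if you want to convince yourself (nominal line rates only; SAS-1 uses 8b/10b encoding, so a 3Gb/s port moves about 300MB/s):

```python
# Four SAS-1 ports vs 10GbE, nominal line rates only.
sas1_port_MBps = 3000 * 0.8 / 8   # 3Gb/s line rate, 8b/10b -> ~300 MB/s
four_ports = 4 * sas1_port_MBps   # ~1200 MB/s aggregate

tengig_MBps = 10_000 / 8          # ~1250 MB/s raw, a bit less after TCP overhead
print(f"4x SAS-1: ~{four_ports:.0f} MB/s vs 10GbE: ~{tengig_MBps:.0f} MB/s")
```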
 

thimplicity

Member
Jan 11, 2022
That said, if your main goal is saturating 10G without burning too much energy, consider SAS SSDs. I don't know what HBA you have now, but it doesn't take a whole lot; even four 3Gb/s SAS-1 ports can provide the speed you want, and the SSDs can be a pretty good deal if you are careful to avoid the ones with unconventional block sizes. If you still want to do the NVMe thing, then U.2 drives might be a better choice than M.2, except the cables are still rather expensive.
Thanks, I have an LSI3008 HBA onboard that I currently pass through to TrueNAS in Proxmox.
 

nexox

Well-Known Member
May 3, 2023
I have an LSI3008 HBA onboard
Well, that's certainly fast enough, though the wording of the motherboard specs makes it sound like you may not be able to get SLOT5 to run at x16: "populated" sounds like it means "when the LSI chip is soldered to the board," but I suppose it could mean "when there are devices connected to the LSI chip." You would need to experiment or check the manual to confirm; a quick way to check from a running system is below.
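If you'd rather check a live system than decode the manual, the kernel exposes the negotiated link width and speed in sysfs; something like this sketch works (the device address is a placeholder, find the real one with lspci):

```python
# Print the negotiated PCIe link width/speed for a device via sysfs.
# The BDF address below is a placeholder - find yours with `lspci`.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:02:00.0")
width = (dev / "current_link_width").read_text().strip()
speed = (dev / "current_link_speed").read_text().strip()
print(f"Negotiated link: x{width} @ {speed}")
```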

That said, depending on the total capacity you need, NVMe drives like the WD SN630 may come out cheaper than large SAS3 SSDs.
 

ziggygt

Member
Jul 23, 2019
I am posting this as a PSA: I had to look up what U.2 was, and it looks amazing.

M.2 vs U.2 - Velocity Micro Blog

These cards/drives make it look like it would be easy to build a huge data store:
PCIe x16 to 4-port U.2 NVMe SFF-8643/8639 expansion card (Amazon listing)

Will a card like that open up all the x16 lanes without the bifurcation that a quad M.2 card needs?
The U.2 drives can get pretty big, and they have a hefty price tag.
Local storage access would be blazing fast, but even a 10Gb network will limit the speed, right? That seems to be the limiter; rough numbers below.
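Quick math on that (nominal rates, ignoring protocol overhead), which suggests the network is indeed the bottleneck:

```python
# 10GbE vs one Gen3 x4 NVMe drive, nominal rates only.
network_MBps = 10_000 / 8          # 10Gb/s -> ~1250 MB/s
pcie3_lane_MBps = 985              # PCIe 3.0: ~985 MB/s per lane after encoding
nvme_x4_MBps = 4 * pcie3_lane_MBps # ~3940 MB/s for a Gen3 x4 drive

print(f"10GbE: ~{network_MBps:.0f} MB/s; Gen3 x4 NVMe: ~{nvme_x4_MBps} MB/s")
# Even a single Gen3 drive can outrun the 10Gb network by roughly 3x.
```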
 

i386

Well-Known Member
Mar 18, 2016
Will a card like that open up all the x16 lanes without the bifurcation that a quad M.2 card needs?
You will still need bifurcation support on the mainboard and the x16 slot.
Also, the card is passive: it has no retimers or redrivers, which you will need for "long" cabled PCIe 4.0 (or newer) 2.5"/EDSFF SSDs.
 

nexox

Well-Known Member
May 3, 2023
If you're just going for network storage, PCIe 3.0 drives will be plenty fast to saturate 10G. The other benefit of U.2 drives, not mentioned in that article, is that they're pretty much exclusively intended for enterprise use, so they mostly have power loss protection and quite high endurance (often a few PB of writes); the NAND on even well-used drives will last basically forever in home/hobby use.
 

adman_c

Active Member
Feb 14, 2016
If you're just going for network storage, PCIe 3.0 drives will be plenty fast to saturate 10G. The other benefit of U.2 drives, not mentioned in that article, is that they're pretty much exclusively intended for enterprise use, so they mostly have power loss protection and quite high endurance (often a few PB of writes); the NAND on even well-used drives will last basically forever in home/hobby use.
And currently, Gen3 enterprise U.2 drives (used or new) are less expensive than their SAS/SATA enterprise SSD counterparts. New 7.68TB Gen3 drives are available for a bit less than $400, which is less per TB than everything but the very cheapest QLC consumer crap. As @nexox said, they're basically indestructible for home use, at least from an endurance standpoint: 7.68TB U.2 drives routinely have greater than 10PB of rated endurance. Downsides are that U.2 drives use substantially more power than SAS/SATA SSDs and run concomitantly hotter. Quick math below.
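The per-TB and endurance arithmetic, using the numbers above (the 100GB/day write rate is just a generous home-NAS guess):

```python
# $/TB and endurance, using the example numbers above.
price_usd, capacity_tb = 400, 7.68
print(f"~${price_usd / capacity_tb:.0f}/TB")        # ~$52/TB

endurance_pb = 10          # ~10 PB rated writes, typical for these drives
daily_write_gb = 100       # generous guess for a home NAS
years = endurance_pb * 1_000_000 / daily_write_gb / 365
print(f"At {daily_write_gb} GB/day, {endurance_pb} PB lasts ~{years:.0f} years")
```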
 

adman_c

Active Member
Feb 14, 2016
Does the brand for U.2 drives matter or are they all "good enough"?
They're probably all "good enough" for home use, since they're made for enterprise use. With Gen3 drives there are small performance differences between brands, but they're all going to be really good at sustained performance compared to anything consumer grade. Some may have slightly lower idle power usage, but they'll all use between 15-20W when writing and around 8W when reading. I'd decide whether you want new or used and then just get whatever has the lowest cost per TB.
 

i386

Well-Known Member
Mar 18, 2016
Does the brand for U.2 drives matter or are they all "good enough"?
For me, the brand matters: so far only "retail" versions of Intel/Solidigm and Micron SSDs offer free firmware access.
Other brands, like Samsung and WD (formerly HGST), provide firmware only to OEMs, which in turn requires you to have support contracts with the OEMs for firmware access.
 

thimplicity

Member
Jan 11, 2022
For me, the brand matters: so far only "retail" versions of Intel/Solidigm and Micron SSDs offer free firmware access.
Other brands, like Samsung and WD (formerly HGST), provide firmware only to OEMs, which in turn requires you to have support contracts with the OEMs for firmware access.
Great insight, thanks!