Maxing NVMe drive connectivity + cheap data protection ideas


TrumanHW

Active Member
Sep 16, 2018
I was thinking about either:

RAID-0 across some NVMe drives, rsync'd to a spinning-rust ZFS volume to protect the NVMe data
-- or --
giving the NVMe array its own redundancy (mirrors or RAIDZ1).

Or is spinning rust as an rsync target a sufficient safety net on its own?
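For what it's worth, here's a minimal sketch of the rsync-to-ZFS idea, just shelling out to rsync and zfs from Python. The mount point /mnt/nvme, the target /tank/nvme-backup, and the dataset tank/nvme-backup are made-up names for illustration; it would need root (or delegated zfs permissions) for the snapshot step.

```python
#!/usr/bin/env python3
"""Sketch: mirror an NVMe RAID-0 volume onto a ZFS dataset, then snapshot it.

All paths and dataset names below are hypothetical placeholders.
"""
import subprocess
from datetime import datetime

SRC = "/mnt/nvme/"            # hypothetical mount point of the NVMe RAID-0
DEST = "/tank/nvme-backup/"   # hypothetical mountpoint of the ZFS dataset
DATASET = "tank/nvme-backup"  # hypothetical ZFS dataset to snapshot

def backup() -> None:
    # -a archive, -H hard links, -A ACLs, -X xattrs; --delete mirrors deletions
    subprocess.run(["rsync", "-aHAX", "--delete", SRC, DEST], check=True)
    # Snapshot the copy so a bad rsync run can't clobber older backups
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@rsync-{stamp}"], check=True)

if __name__ == "__main__":
    backup()
```

The snapshot is what makes this more than a second copy: rsync --delete will happily replicate an accidental deletion, while snapshots keep point-in-time versions on the spinning rust.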

I'm using the HighPoint 7201.
- But I'd sure love to find an x32 controller, or a PCIe 4.0 upgrade path, that would let me double the drive count.

Though that's probably wishful thinking. Obviously I'm hoping PCIe 4.0 will simply 'double the lanes', but I doubt it will let a PCIe 3.0 controller (the HighPoint) and PCIe 3.0 SSDs behave like x16 through an x8 slot..? Will there actually be any benefit for PCIe 3.0 devices once PCIe 4.0 arrives (or via an upgrade, as with AMD)?

Are NVMe drives basically all hitting the limit of their x4 PCIe links?

What about using Epyc procs, or a pair of E5 v4 or Scalable (LGA3647) processors? Are there any boards that wire U.2 ports directly to each CPU's own PCIe lanes, or will I always need a controller between them?

Can I use something like a T630 or T640 with 8 (or even 16) NVMe drives?
Dual E5 v3 provides 80 CPU lanes; with 8 drives (x4 each), that leaves 48.
Dual E5 v3 provides 80 CPU lanes; with 16 drives, that leaves 16.

Dual Scalable (LGA3647) provides 96 CPU lanes; with 8 drives, that leaves 64.
Dual Scalable (LGA3647) provides 96 CPU lanes; with 16 drives, that leaves 32.

One SFP28 NIC uses another x4 ... if those are even CPU lanes ... with onboard graphics I'd think it'd work. (Rough lane math below.)
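Quick sanity check of the leftover-lane math above (lane counts per CPU are from the spec sheets; the x4 NIC is subtracted too, assuming it sits on CPU lanes):

```python
# Leftover CPU PCIe lanes for the dual-socket configs above.
LANES_PER_DRIVE = 4  # U.2 NVMe drives are x4
NIC_LANES = 4        # one SFP28 NIC at x4, assuming it uses CPU lanes

platforms = {
    "Dual E5 v3 (2 x 40 lanes)": 80,
    "Dual Scalable LGA3647 (2 x 48 lanes)": 96,
}

for name, total in platforms.items():
    for drives in (8, 16):
        left = total - drives * LANES_PER_DRIVE - NIC_LANES
        print(f"{name}: {drives} drives + NIC -> {left} lanes left")
```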

Many systems run on a TOTAL of 16 CPU lanes...

Why doesn't Dell explicitly offer U.2 NVMe drives for their 16-bay SFF chassis?

What about the Epyc 7000 series? Can't I use that for up to 32 drives??

An HPE ProLiant DL385 Gen10 with an Epyc 7301: even a SINGLE socket has 128 real PCIe lanes,
plus the motherboard's pseudo lanes.

And that box has two sockets; even with just one proc it seems MORE than adequate for 24 U.2 NVMe drives @ x4 each.
24 drives is still only 96 lanes;

That'd leave 32 lanes, more than ANY consumer system of the past 30 years.


Assuming an x16 HBA card (four x4 drives) is the limit for NVMe devices, that would require 6 (!!) HBA cards ... ?

Is there a better way?
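Sanity-checking that figure, assuming each x16 card carries four x4 drives with no onboard switch (like the HighPoint):

```python
import math

DRIVES = 24
LANES_PER_DRIVE = 4       # x4 per U.2 drive
DRIVES_PER_X16_HBA = 4    # an x16 card carries four x4 drives without a switch
EPYC_LANES = 128          # single-socket Epyc 7001-series

lanes_needed = DRIVES * LANES_PER_DRIVE               # 96
hbas_needed = math.ceil(DRIVES / DRIVES_PER_X16_HBA)  # 6
print(f"{lanes_needed} lanes for drives, {hbas_needed} x16 HBA cards, "
      f"{EPYC_LANES - lanes_needed} CPU lanes left over")
```

A card with an onboard PCIe switch changes the math: more drives per x16 slot, but the uplink becomes oversubscribed.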

If your goal were to have ≥4 drives' worth of parity in ZFS, while keeping the system ultra fast and still very reliable without needlessly wasting capacity, what configurations would you suggest?

And what are your thoughts on using rsync as additional protection, to minimize the number of drives that need to be given over to parity?
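For the "how much goes to parity" trade-off, here's some rough usable-capacity math for a few common ZFS layouts (8 x 2 TB drives is just an example; this ignores metadata, padding, and free-space headroom):

```python
# Rough usable-capacity comparison for a few ZFS layouts (example: 8 x 2 TB drives).
DRIVES, SIZE_TB = 8, 2.0
RAW_TB = DRIVES * SIZE_TB

layouts = {
    "4 x 2-way mirrors": DRIVES * SIZE_TB / 2,              # half of every mirror is redundancy
    "2 x RAIDZ1 (4-drive vdevs)": (DRIVES - 2) * SIZE_TB,   # one parity drive per vdev
    "1 x RAIDZ2 (8-drive vdev)": (DRIVES - 2) * SIZE_TB,    # two parity drives
    "1 x RAIDZ3 (8-drive vdev)": (DRIVES - 3) * SIZE_TB,    # three parity drives
}

for name, usable in layouts.items():
    pct = 100 * (RAW_TB - usable) / RAW_TB
    print(f"{name}: ~{usable:.0f} TB usable of {RAW_TB:.0f} TB raw ({pct:.0f}% to redundancy)")
```

As a rule of thumb, mirrors burn the most capacity but resilver fastest; RAIDZ2/Z3 give more usable space per drive of redundancy at the cost of rebuild time and random-I/O performance.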

Thanks! Truman
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I count over a dozen questions... I'd urge you to do more research and then ask the specific questions you're unsure about.

Here are some tidbits that will help.

- An NVMe expander/switch (like a SAS expander) will give you greater drive density at the cost of peak performance.

- Systems don't limit PCIe lanes; the CPU does. So check the CPU. 16 PCIe lanes is an E3, not an E5, and definitely not a dual E5...

- You won't get max NVMe performance from an E3; aside from the lack of PCIe lanes, the CPU itself will bottleneck at some point.
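If you want to see what link each NVMe drive actually negotiated, something like this works on a Linux host (it just reads standard PCIe attributes out of sysfs; the exact speed strings vary by kernel):

```python
# Print negotiated vs. maximum PCIe link for each NVMe controller (Linux sysfs).
import glob
import os

def read_attr(pci_dir: str, attr: str) -> str:
    """Read one sysfs attribute, or return '?' if it isn't available."""
    try:
        with open(os.path.join(pci_dir, attr)) as f:
            return f.read().strip()
    except OSError:
        return "?"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci = os.path.join(ctrl, "device")  # symlink to the underlying PCI device
    print(f"{os.path.basename(ctrl)}: "
          f"x{read_attr(pci, 'current_link_width')} @ {read_attr(pci, 'current_link_speed')} "
          f"(max x{read_attr(pci, 'max_link_width')} @ {read_attr(pci, 'max_link_speed')})")
```

If a drive shows x2 when it should be x4, or 5.0 GT/s when it should be 8.0 GT/s, you've found a slot or bifurcation problem before it ever shows up in benchmarks.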