Epyc ZFS Server - NVMe vs. SATA/SAS


Think

Member
Jul 5, 2017
I'm a big fan of AMD's new processors. My Threadripper workstation is great, and I've been toying around with the idea of building an Epyc server that would run storage (appr. 15 TB needed) and my VMs here (mail server, web server, Plex, a bit of minecraft for the nippers... ;))

Liking the horsepower and PCIe lanes of a single Epyc, I like the Tyan TN70A chassis, which comes in two flavors: 8 NVMe plus 16 SATA bays, or 24 NVMe bays. Just as on the current server, I would probably use two ZFS pools: a large one for the bulk of the data (15 TB; on the current server, 8 mirrored HDDs with an Intel NVMe SSD as log device) and a smaller one for things that can use more performance (on the current server, 4 SATA SSDs in 2 striped mirrors).
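For reference, here is a minimal sketch of how that two-pool layout maps onto zpool commands; pool and device names are placeholders, not my actual configuration:

```
# Bulk pool: 4 mirror vdevs striped, plus an NVMe SLOG (names are placeholders)
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh \
  log nvme0n1

# Fast pool: 2 striped mirrors of SATA SSDs
zpool create fast \
  mirror sdi sdj \
  mirror sdk sdl
```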

For the performance piece, I was thinking about Optane 900P, either 2 striped or 4 in a striped mirror. Where it gets more interesting, I think, is the larger storage (zpool sketches for each option follow the list):
  • NVMe: 3 Micron 9200 Pro 7.68 TB in raidz; appr. 75 W operating, 21 W idle; USD 10.5k.
  • SATA SSD: 6 Intel S4500 3.8 TB in raidz2; appr. 33 W operating, 7 W idle; USD 10.9k.
  • SATA SSD cheap: 2x 5 Crucial MX500 2 TB in raidz each; USD 4.5k (or 5.5k for 2 raidz2).
  • SATA HDD: 16 Seagate Enterprise 2 TB 2.5" HDDs; 84 W operating, 62 W idle; USD 4k.
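As zpool sketches, the four candidates would look roughly like this (pool and device names are again placeholders):

```
# Option 1: 3x Micron 9200 Pro in a single raidz vdev
zpool create bulk raidz nvme0n1 nvme1n1 nvme2n1

# Option 2: 6x Intel S4500 in a single raidz2 vdev
zpool create bulk raidz2 sda sdb sdc sdd sde sdf

# Option 3: 2x 5 Crucial MX500, two raidz vdevs striped
zpool create bulk \
  raidz sda sdb sdc sdd sde \
  raidz sdf sdg sdh sdi sdj

# Option 4: 16 HDDs as 8 mirror vdevs striped
zpool create bulk \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh \
  mirror sdi sdj  mirror sdk sdl  mirror sdm sdn  mirror sdo sdp
```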
A wild guess on IOPS, using the rule of thumb that a raidz vdev delivers roughly the random IOPS of a single member drive and that vdevs stripe (a fio sanity check is sketched after the list):
  • 1 NVMe raidz vdev: IOPS of one SSD in the vdev, i.e. 750k read/150k write
  • 1 SATA SSD raidz2 vdev: IOPS of one SSD in the vdev, i.e. 72k read/33k write
  • 2 SATA SSD raidz vdevs: IOPS of two SSDs, i.e. 190k read/180k write
  • 8 mirrored vdevs with SATA HDDs: appr. 800 IOPS
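Once a pool exists, a fio run along these lines (illustrative parameters, not a tuned benchmark) would sanity-check the random-IOPS guesses:

```
# Limit ARC caching on the test dataset first, or reads will mostly hit RAM:
#   zfs set primarycache=metadata bulk/fiotest
# 4k random reads for 60 s against a file on the mounted pool (path is a placeholder)
fio --name=randread --directory=/bulk/fiotest --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --size=8G \
    --runtime=60 --time_based --group_reporting
```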
Is my thinking roughly ok? If so:
  • While half the price, the HDD pool (the only option that would actually need a 2U chassis with 24 2.5" bays) delivers just a fraction of the SSDs' performance.
  • At enterprise reliability levels, SATA SSD pricing is comparable to NVMe. IOPS, however, are lower for SATA. Bandwidth should also work out higher for NVMe on paper, but I'm not sure that will be significant in real life.
  • Cheaper SATA SSDs have a nice price/performance ratio; their DWPD rating, however, is just a fraction of enterprise level.
Any thoughts?
 

Nizmo

Member
Jan 24, 2018
High-end NVMe drives run at 5 GB/s and above transfer speeds. Something like that will beat any RAID config except a RAID 0, which for production is out of the question.

Cost-effectiveness-wise, IMO, go with NVMe and be future-proof. Most enterprise NVMe has the reliability and DWPD ratings, and high-end NVMe uses PCIe x8.

*I just picked up a 4 TB Intel DC P3608 for about $1,800 brand new on eBay. Each VM host has its own NVMe local storage. At 5,500 MB/s, my 10 Gb network is the bottleneck now.
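For context on that bottleneck, a quick back-of-the-envelope check of line rates:

```
# 10 GbE moves at most 10,000 Mb/s / 8 = 1,250 MB/s, well under the
# ~5,500 MB/s the P3608 can read sequentially, so the NIC saturates
# long before the drive does.
echo "$((10000 / 8)) MB/s network vs. 5500 MB/s NVMe"
```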
 