ZFS 20 x 8TB Mirrored pool (40 drives)


abstractalgebra

Active Member
Dec 3, 2013
MA, USA
We are considering a ZFS storage server for our VMware vSphere 6.7 environment.
Dell's SAN solutions do not appear to be price competitive. I'm open to other enterprise-supported solutions.
ZFS provides minimal vendor lock-in, gives us a lower cost to expand by adding 24-drive or 60-drive shelves, and lower yearly support costs.


Draft plans are for an HA environment with dual head nodes:
40 x 8TB NL-SAS drives
20 mirrored vdevs of 2 x 8TB each
Raw storage: ~291 TiB
ZFS usable: ~140 TiB (keeping 20% free space => ~111 TiB effective)

ZIL/Slog: 16GB NVDIMM (ix) or 2 x 400GB SAS SSD (Nexenta)
Thinking that a mirrored P4800X would be a massive upgrade and tempting for the Nexenta option.

* Presenting to 12 VMware hosts via NFS
* Network from SAN: 4 x 10GbE per controller (considering 4 x 40GbE)
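
For reference, a minimal sketch of how this layout would be created (pool name "tank", the sdX disk names, and the NFS dataset are all placeholders, not our final naming):

Code:
# build the vdev list for 20 two-way mirrors (sdX names are placeholders)
VDEVS=""
for i in $(seq 0 2 38); do
  VDEVS="$VDEVS mirror sd$i sd$((i+1))"
done
zpool create tank $VDEVS

# dataset for the VMware datastore, exported via NFS
zfs create tank/vmware
zfs set sharenfs=on tank/vmware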

Any experience with TrueNAS (iXsystems) or NexentaStor v5? (support, speed, bugs, ...)
Better configurations or alternatives I should consider?
 

gea

Well-Known Member
Dec 31, 2010
DE
I can't comment on ix or Nexenta.
But for the Nexenta option you should prefer a DIMM- or Optane-based Slog (much faster than SAS SSDs). You do not need to mirror it (the NVDIMM option isn't mirrored either). A single Optane is perfect: on a failure, logging reverts to the slower on-pool ZIL automatically. Only a crash combined with a simultaneous Slog failure leads to data loss.
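
As a sketch (pool name and device name are placeholders), attaching and handling a single non-mirrored Slog looks like this:

Code:
# attach a single Optane as Slog
zpool add tank log nvme0n1

# on a Slog failure the pool stays online and logging reverts to the
# on-pool ZIL; zpool status shows the log device as FAULTED
zpool status tank

# remove the failed log device once replaced
zpool remove tank nvme0n1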

For a dual-head cluster with RSF-1 you may need a dual-path SAS Slog so the pool can fail over together with the Slog; otherwise a failover ends with a missing Slog (a possible setup, but one that results in reduced performance after failover).
 

abstractalgebra

Active Member
Dec 3, 2013
MA, USA
Thank you; only needing a single P4800X sounds great. Do you have any guesses on the performance of a setup like this? I can't find much online. Are there other, better ways to do this?
I wish the new SSD tiering features the ZFS developers are talking about were finished.

HPE Nimble looks interesting, but I expect it to come at a premium.
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
On a well-designed ZFS system, all small random writes go to the RAM-based write cache (10% of RAM, 4GB max per default on Open-ZFS), which converts them into large, fast sequential writes (where even a disk-based pool can deliver >> 1 GByte/s). This is why you need the Slog: to protect the write cache.
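
As an illustration, on ZFS on Linux this write-cache limit is the zfs_dirty_data_max module tunable (on illumos-based systems like NexentaStor the equivalent is set via /etc/system); the 8 GiB value below is just an example, not a recommendation:

Code:
# inspect the current write-cache (dirty data) limit, in bytes
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# raise it to 8 GiB (example value; size it to your RAM and workload)
echo $((8 * 1024**3)) > /sys/module/zfs/parameters/zfs_dirty_data_max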

Most random reads (80% and more) are delivered from the RAM-based read cache (ARC) or its L2ARC extension on SSD/NVMe. So tiering is mostly neither needed nor wanted: on a system with a high write workload, you would have the write workload and the tiering workload at the same time.
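
A sketch of extending the read cache and watching hit rates (the device name is a placeholder; the arcstat tool ships with Open-ZFS):

Code:
# add an SSD/NVMe as L2ARC read-cache extension
zpool add tank cache nvme1n1

# watch ARC/L2ARC hit rates, refreshed every second
arcstat 1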

I have made some performance tests with different amounts of RAM and an Optane as Slog, see https://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf

btw
Disk-based pools are due to be replaced by SSD-only pools over time; in the medium term, SSDs will become cheaper than disks.

From your pool you can expect a maximum of around 20 x 150 MB/s = 3 GB/s non-sync write on average, and theoretically twice that for reads. With sync write enabled and the Optane, expect less than 1 GB/s sync write. Illumos-based systems with Open-ZFS like Nexenta are fast; only Solaris with genuine ZFS was faster in my tests.
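
If you want to verify such numbers on your own hardware, a simple fio run on the pool separates non-sync from sync write behaviour (the path and sizes are illustrative):

Code:
# sequential non-sync write throughput
fio --name=seqwrite --rw=write --bs=1M --size=10G \
    --directory=/tank/vmware --end_fsync=1

# sync writes (O_SYNC per write), similar to what ESXi issues over NFS
fio --name=syncwrite --rw=write --bs=64k --size=2G \
    --directory=/tank/vmware --sync=1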