Trying to determine ZFS design of vdevs

Zack Hehmann

Feb 6, 2016
Hello everyone!!

I ended up purchasing (10) 2TB Hitachi drives from eBay when they were $30 each. Of the 10, 2 had a bunch of bad blocks, leaving 8 that I can use. I'm wanting to build an all-in-one Napp-it server on ESXi 6. This is going to be for my home environment with 30 VMs max, of which about 10-15 will be on all the time. I'm going to have a 2012 R2 DC, a 2012 R2 file server, several desktop operating systems (Windows, Mac, and Linux), Steam dedicated game servers, a UML VM, a GNS3 VM, a torrent client, Plex, and maybe a light database for Kodi or an Arduino project.

I already have a Xeon E3-1240 v2 with 32GB ECC, an X9SCM-F, and a Dell H310 flashed to IT mode. I would like to maximize all three (capacity, redundancy, performance) with ZFS, but I know that isn't possible. With (8) 2TB drives I'm thinking of making 2 vdevs. I will buy a few more drives if I have to, but I would like to stick with 8 if I can, because otherwise I would have to buy another HBA. I do have (7-9) 3TB Hitachi 7K4000 drives I can use until I replace them with 2TB drives instead. I bought the 3TB drives a few years back for ZFS but never did anything with them; I will eventually use them, but at a later date.

Option 1
vdev 1 Striping of (3) 2TB drives
vdev 2 RAIDZ-2 of (5) 2TB drives

Mirror vdev 1 and 2
That gives me a total of 6TB usable? I read somewhere that having multiple vdevs will increase the IOPS of the pool...? Is this correct? Would I get better performance than an 8-drive RAIDZ vdev? Is option 2 better?

Option 2
vdev 1 RAIDZ of (4) 2TB drives
vdev 2 RAIDZ of (4) 2TB drives

Mirror vdev 1 and 2
That gives me a total of 8TB usable?

Option 3
vdev 1 Striping of (4) 2TB drives
vdev 2 RAIDZ of (4) 2TB and (1) 3TB drive

Mirror vdev 1 and 2
That gives me a total of 8TB usable?

I'm not really sure how many IOPS I should shoot for, and I know that depends on the type of data I'm reading and writing. I did give some idea of what kind of VMs I'm going to have. I plan on giving the Napp-it VM 16GB of RAM. I don't think I will use an SSD for L2ARC or for a ZIL. I will probably use an SSD in the ESXi host for cache.

Let me know what you guys think. Thanks!!!


Well-Known Member
Mar 30, 2012
With Option 2, each RAIDZ vdev loses a disk to parity, so each is 6TB. Mirrored, you'd get 6TB total. But then you're losing 5 of the 8 drives to redundancy, which is too much.

I'd either run RAIDZ2 across all 8 drives and get 12TB usable,

or just mirror/stripe the drives (more or less a RAID 10 equivalent) and get 8TB usable with faster rebuilds.

Either way I'd rely on RAM, ZIL, and L2ARC to help with performance. SSDs are dirt cheap, so just use them if possible, and E3s can't take that much RAM anyway. 32GB RAM in the system, then add a small ZIL and some L2ARC, and you're set.

BTW - I just saw these guides on STH while looking around. They seem unfinished, but for a hardware guide you could use the content from the FreeNAS section, as it looks up to date: Top Hardware Components for FreeNAS NAS Servers

@Patrick where did those guides come from? I see a pfSense guide too.


Well-Known Member
Dec 31, 2010
Two principles:

You cannot mirror vdevs; they are always striped. A vdev itself can be a mirror or a raid-z.

Performance of VM storage is mostly IO limited.
A single raid-Z gives you the iops of one disk on read/write, while overall iops scale with the number of vdevs. With mirrors, write iops scale with the number of mirrors as well, but read performance is 2 x the number of mirrors. So your first goal must be as many vdevs as possible.

The result is that there is only one setup that makes sense with 8 disks:
4 x mirror, which gives you around 600 iops on write and 1200 iops on read (assuming 150 iops per disk).
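The arithmetic behind those numbers can be sketched as a quick back-of-the-envelope calculation (the ~150 IOPS per spinning disk figure is a rule of thumb, not a measurement):

```shell
# Rough IOPS estimate for 8 disks arranged as 4 x 2-way mirror vdevs.
# Assumes ~150 IOPS per 7200rpm disk; real numbers vary with workload.
PER_DISK=150
VDEVS=4
DISKS_PER_MIRROR=2

echo "mirror write IOPS: $(( VDEVS * PER_DISK ))"                    # each write hits both sides of a mirror, so writes scale per vdev
echo "mirror read IOPS:  $(( VDEVS * DISKS_PER_MIRROR * PER_DISK ))" # reads can be served by either side
echo "single raidz IOPS: $(( 1 * PER_DISK ))"                        # one raidz vdev behaves like roughly one disk
```

The same math shows why an 8-disk raidz pool (one vdev) delivers roughly one disk's worth of random IO.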

Another option would be
- a raid-z2 from 8 disks for general use, filer and backup
- a second pool from an SSD mirror for VMs

This is what I would prefer.
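As a sketch, that two-pool layout might look like the following (pool and device names here are invented for illustration; substitute your own):

```shell
# General-purpose pool: one 8-disk raidz2 vdev for filer and backup duty
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# Separate VM pool: a mirror of two SSDs to carry the iops-heavy work
zpool create vmpool mirror c2t0d0 c2t1d0
```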


Active Member
Mar 10, 2016
If you are going to store VM images on the array, do mirrors. It feels "wasteful", but it's much faster and far more flexible. And seriously, you're using $30 drives, get over the "waste". :) I would also highly recommend this config for either 10Gb networking or many Plex clients. My 2x raidz2 stuttered occasionally with 4 clients streaming higher-bandwidth files; all the seeks killed performance. My 10x mirrors don't break a sweat. The streaming stuff can be helped a lot with L2ARC *IF* you enable the L2ARC for sequential reads, which is disabled by default. Keep in mind that L2ARC can actually hurt performance, so check on it if you use one.
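For reference, the knob being described is the `l2arc_noprefetch` tunable, which controls whether streaming/prefetched reads are allowed into the L2ARC. The exact syntax depends on the platform; these are sketches, not tested napp-it settings:

```shell
# Allow sequential/prefetched reads into L2ARC (the default skips them)

# FreeBSD / FreeNAS:
sysctl vfs.zfs.l2arc_noprefetch=0

# ZFS on Linux:
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

# illumos/OmniOS (napp-it's usual base): add to /etc/system and reboot:
#   set zfs:l2arc_noprefetch=0
```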

For VMs, you probably want sync writes enabled. You may want an SLOG (ZIL) device for performance. There are some guides out there for determining the sync write delays to help you decide if an SLOG is worth it.
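In ZFS terms that advice looks something like this (the pool, dataset, and device names are illustrative):

```shell
# Force synchronous semantics on the VM dataset so guest flushes are honored
zfs set sync=always tank/vms

# Optionally move the ZIL onto fast, power-loss-protected SSDs (mirrored SLOG)
zpool add tank log mirror c2t0d0 c2t1d0
```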

What gea mentioned, running an SSD mirror for the VM storage, is ideal. It moves the heavy IOPS use to SSDs which are much more capable for that. Then, if you get sufficient performance, raidz2 for the main array.

You have 8x 2TB... so your options are along these lines..

4x mirrors - 8TB usable
raidz1 - 14TB usable (not recommended)
raidz2 - 12TB usable
raidz3 - 10TB usable
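Those usable-capacity numbers follow from the simple rule that mirrors keep half the raw space while raidzN loses N disks to parity (ignoring ZFS metadata overhead, which shaves off a bit more in practice):

```shell
DISKS=8
SIZE_TB=2

echo "4x mirrors: $(( DISKS / 2 * SIZE_TB )) TB"   # half the raw space
echo "raidz1:     $(( (DISKS - 1) * SIZE_TB )) TB" # 1 disk of parity
echo "raidz2:     $(( (DISKS - 2) * SIZE_TB )) TB" # 2 disks of parity
echo "raidz3:     $(( (DISKS - 3) * SIZE_TB )) TB" # 3 disks of parity
```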

I don't like raidz1 at this level for primary storage, particularly with older disks. The risk of a second failure goes up over time, so you could end up losing the whole thing while rebuilding. If you have full backups and don't mind the possibility of downtime, it can work. At that point, though, you might as well consider a plain stripe (raid0).

Multiple raidz2/3 vdevs use enough parity drives that you're better off with mirrors for 8-drive setups.

For home use, I like mirrors. You can upgrade 2 drives and get more space, or add 2 at a time. With parity arrays, you need to upgrade every drive in the vdev to get more space. It's a lot easier to get 2 drives past "she who must be obeyed" than 8. :) With careful shopping, like $30/2TB, you keep costs down and get more flexibility and performance.
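The two-drives-at-a-time upgrade path looks roughly like this (pool and device names are made up):

```shell
# Grow an existing mirror: replace each side with a bigger disk, one at a time
zpool set autoexpand=on tank
zpool replace tank c1t0d0 c3t0d0   # wait for the resilver to finish
zpool replace tank c1t1d0 c3t1d0   # pool grows once both sides are replaced

# Or add capacity by attaching a brand-new mirror vdev
zpool add tank mirror c3t2d0 c3t3d0
```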


Active Member
Apr 12, 2016
With ZFS, the first and only thing that should come out of your mouth is... IT DEPENDS.... hehe.. here is why

ZFS has lots of features... that does not mean you should use them all....

How often does the data change... i.e. how often are you backing up, and can it suffer a couple of hours / days without snaps.. etc.

Does the system need to remain up during an issue? Business uses raidz not for keeping DATA alive in a drive failure (that's what backups are for); they do it so the database continues to run while degraded.. hardly necessary for serving movies in my opinion... you see what I am getting at...

In MY particular case... and there is no right or wrong, and yours is likely different.. I just stripe my active pool drives and back up to a raidz every week or so... one that is off 99% of the time.. here is WHY:

It's mostly a media server.. if I lose a week or two.. who cares...

For my online pool.. I buy big disks (i.e. expensive) so I spin fewer drives 24/7 (cost).
I add them in pairs (not mirrors) so I get 100% utilization.. what do I care if a drive goes down.. replace it / restore from my raidz backup that has self healing.

This way I only buy drives when I need them.. and they get CHEAPER as time goes on...

Now for years I have been using 2TB drives.. I am getting close to filling 6... so I bought 2 more and 2 8TB drives... moved everything over to the new 16TB stripe.. and added the old 8 2TB drives as a raidz to my existing backup pool.. yes, you can expand a pool one raidz vdev at a time...
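Expanding a pool one raidz vdev at a time is just a `zpool add` (pool and device names here are made up):

```shell
# Append a second raidz vdev to an existing backup pool;
# new writes are then striped across both vdevs
zpool add backup raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0
```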

So you see, I get everything you are looking for... 100% utilization and speed on my online pool... striped raidz on a protected offline pool so viruses and ransomware are thwarted...

And it allows me to reuse my older drives, which are still very serviceable.. I have most of my Hitachi 2TB drives with over 40,000 hrs and only 2 with 20 or so bad blocks.. which takes the SMART life down only a point or two.. no big deal....

I kinda chuckle when I see folks setting up a Plex server, for instance, using a raidz2 with half their drive space wasted and no backup... slow as shit and not very redundant against data loss to boot.

Just because ZFS can do it.. doesn't mean YOU NEED it.... hehe...

Just one OPINION.. again, YMMV