Choosing hardware/software for home lab use case



New Member
Jan 1, 2014
Hi there,

Long-time reader and ZFS follower. I've been reading ZFS threads for nearly two years now, trying to get a feel for the different kinds of software and setups that are possible. I'm looking for a bit of guidance on appropriate software and hardware choices before I start spending money; I'm hoping to do a ZFS build this summer to replace an aging Q6600 server that currently acts as a datastore with 5x4TB WD Red drives under StableBit DrivePool on Windows 8.1.

I consider myself an intermediate level user - I feel pretty comfortable on the Windows/Linux command line. Not much Unix/networking experience.

I do some C#/.NET development, though, so I'm looking to grow my skills and branch out as I become a better developer.

I would like to have a ZFS datastore that can

- Host my 6TB of user data (I've been recording University of Michigan football/basketball/hockey games in HD since '07 — Go Blue!)
- Host 2 Windows Server VMs - One for a Domain Controller, the other for SQL Server 2014
- One Linux Server VM to dabble in
- Some spare storage for Time Machine backups, for the mac users in my household

Here's where I'm having a bit of trouble -

- So, a hexa-core Xeon with 32GB ECC would most likely be appropriate, yes?
- Can't decide between FreeNAS, napp-it, or Proxmox. Any suggestions for the use case I've described thus far?
- How does pool architecture work for installing the OS? What would you recommend for setting up the various datastore/OS pools?
- I use Backblaze for the 6TB I have stored; I'd like to use iSCSI to facilitate the continued use of Backblaze. Is this possible with this setup?
- Would an SSD cache for the pool containing the OSs be advisable?

If you guys have any Paypal/bitcoin addresses you'd like to share for your advice, please include them in your comments. I feel like some guidance here would probably save me from data loss/headaches - your wisdom is appreciated!


Well-Known Member
Mar 18, 2015
With your requirements, I personally would suggest more than 32GB of RAM.
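A rough budget shows why. Every number below is a guess for this workload, not a measurement:

```shell
# Back-of-the-envelope RAM budget in GB (all figures are guesses)
ZFS_ARC=12     # ~1GB per TB of pool is a common, very rough rule of thumb
WIN_DC=4       # domain controller
WIN_SQL=12     # SQL Server likes RAM
LINUX_VM=2
HOST=2         # hypervisor/host overhead
TOTAL=$(( ZFS_ARC + WIN_DC + WIN_SQL + LINUX_VM + HOST ))
echo "${TOTAL}GB"   # prints "32GB" -- the whole 32GB already spoken for, zero headroom
```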


Well-Known Member
Mar 30, 2012
That's a really good question. So you are looking for an all-in-one. I might actually suggest ESXi here: it's very good for AIOs, and personally I think VMware does better than KVM at Windows virtualization. You could also do Hyper-V, run everything else as a guest, and pass through the hard drives. The nice thing is that for what you're doing, you have a lot of options.

I think there are fundamental questions you need answered:

- Do you want the ZFS hosted on bare metal or virtualized?
  • If bare metal, then you are doing ZoL, SmartOS (or a similar illumos option) with napp-it, or a FreeBSD-based option like FreeNAS or NAS4Free. I would probably go with a CentOS or Ubuntu server install, install KVM and manage that way if I were being brutally honest.
  • If virtualized, ESXi and Hyper-V are good options
- What are you most comfortable with or what do you want to learn to use for virtualization?

- Realistically, how big is this system going to get? These days you can oversubscribe your RAM in ESXi, Hyper-V, or KVM (it works well with Linux guests; Windows guests are flakier than the others).

At first read I thought you were so close to just being able to get one of those Xeon D boxes reviewed on the main site and be done with it; you have just one too many drives (it only takes 4). You could even do 4x 3.5" high-capacity hard drives, then get a cheap Fusion-io card (if you're using Hyper-V) for L2ARC/ZIL, and 16GB DIMMs are easy to get. You would learn more doing this.

I don't know if napp-it on ZoL is still being developed, but there was a version. If you do go Xeon D, I'd get 2x 16GB now, then add either 8s or 16s later as prices fall and you use the system more.
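If you went the bare-metal ZoL route, the rough shape of it would be something like the below. Disk IDs, sizes, and guest names are placeholders to show the idea, not a tested recipe:

```shell
# Data pool: RAIDZ2 across seven 4TB drives (use your real /dev/disk/by-id names)
zpool create -o ashift=12 tank raidz2 \
  ata-WDC_WD40EFRX_1 ata-WDC_WD40EFRX_2 ata-WDC_WD40EFRX_3 \
  ata-WDC_WD40EFRX_4 ata-WDC_WD40EFRX_5 ata-WDC_WD40EFRX_6 \
  ata-WDC_WD40EFRX_7

# A filesystem for the media archive, plus zvols to use as VM disks
zfs create -o compression=lz4 tank/media
zfs create -V 100G tank/vm-dc    # Windows domain controller
zfs create -V 200G tank/vm-sql   # SQL Server 2014

# Hand a zvol to a KVM guest with virt-install (part of the libvirt tooling)
virt-install --name sql2014 --memory 8192 --vcpus 4 \
  --disk path=/dev/zvol/tank/vm-sql \
  --cdrom /isos/win2012r2.iso --os-variant win2k12r2
```

The nice side effect is the Windows VMs ride on zvols inside the same pool, so the pool's redundancy covers them too.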


New Member
May 19, 2015
"I would probably go with a CentOS or Ubuntu server install, install KVM and manage that way if I were being brutally honest."

So in the model you describe, I would install Ubuntu on bare metal, install KVM, install napp-it inside a KVM guest, present the disks to the napp-it appliance through PCI passthrough, and then install the additional operating systems as KVM guests on datastores the napp-it appliance serves back? I thought I recalled reading at one point that ZFS doesn't like PCI passthrough, though I could certainly be wrong.

(Sorry if I explained any of this incorrectly, this is where I'm having a bit of trouble)


New Member
Jan 1, 2014
The virtualization part is a bit scary. From what I've read, ESXi can't take snapshots? Is that true of Hyper-V as well?

The purpose of the VMs is to serve as a domain controller/SQL server/development prototyping. I'll be making quite a few mistakes along the way, so I'll be leaning on snapshots pretty heavily.
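For context, this is the kind of workflow I'm hoping for if the VM disks end up as ZFS datasets. The dataset and snapshot names are made up:

```shell
# Snapshot the SQL VM's disk before trying something risky
zfs snapshot tank/vm-sql@before-sp1

# ...break things...

# See what restore points exist, then roll back (with the VM powered off)
zfs list -t snapshot -r tank/vm-sql
zfs rollback tank/vm-sql@before-sp1
```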

The datastore grows by about 300-400GB a year. I'll be adding two additional 4TB drives for 7x4TB total in RAIDZ2. I don't mind spending the money on 2x480GB SSDs for the operating-system VMs if it reduces the setup/shared-resource complexity. I'm just looking for something that is stable, performs decently, and doesn't get into Rube Goldberg levels of configuration complexity.
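For what it's worth, my back-of-the-envelope math on the 7x4TB RAIDZ2 layout:

```shell
# RAIDZ2 gives up two drives' worth of space to parity
DRIVES=7; SIZE_TB=4; PARITY=2
USABLE_TB=$(( (DRIVES - PARITY) * SIZE_TB ))
echo "${USABLE_TB}TB usable before filesystem overhead"   # prints "20TB usable ..."
```

With 6TB used and 300-400GB/year of growth, that should be headroom for a very long time.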

(btw, the akarpo reply is mine as well, the board software is getting a bit derpy -