Homeserver virtualization


C1pher

New Member
Mar 13, 2015
Hi all,

I'm currently running my homeserver with samba, nfs, nginx, ftp, ... all in one debian 7 system. Since I need to extend the storage soon (14TB with ~200GB free) I thought about moving every service into its own vm.

My question is, what is best practice for the fileserver vm: do I expose the raw discs to the vm and let the vm handle the RAID and filesystem (still unsure what, but most likely ZFS), or should the host handle all matters regarding storage and just expose one big virtual drive to the vm?
 

NeverDie

Active Member
Jan 28, 2015
C1pher said:
From what I've read, the second option may be possible if running Oracle Solaris 11.3 for ZFS and using the integrated hypervisor, but I'm a bit unsure. I don't know whether anyone here has actually tried that, but I'd be interested to hear from anyone who has.

Regarding the first option, there isn't a universal simple answer, and there are a lot of threads discussing the topic. The answer will depend on which hypervisor you choose as well as possibly which file system. If using ZFS, you'll find the full range of mixed opinions on whether it should be done at all using some of the common hypervisors and/or whether it should be done on anything other than an "experimental" setup.
 

dswartz

Active Member
Jul 14, 2011
If you want to stick with debian, have you looked at proxmox? It's a kvm-based hypervisor built on top of debian. AFAIK, you can run a script of some sort to install proxmox on your existing debian NAS without blowing it away, although it might be cleaner to do a fresh proxmox install, and then set up the NAS stuff from scratch. Note that the latest proxmox does in fact support zfs for VM storage.
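For reference, the in-place conversion dswartz describes looked roughly like this in the Debian 7 / Proxmox VE 3.x era. This is a sketch only: the exact repository line and metapackage name are release-specific, so check the Proxmox wiki for your version before running anything.

```shell
# Sketch: converting an existing Debian 7 (wheezy) box to Proxmox VE 3.x.
# Repo and metapackage names vary by release -- verify against the Proxmox wiki.
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve    # metapackage name is release-specific
# Reboot into the PVE kernel; the web GUI then listens on port 8006.
```

A fresh install from the Proxmox ISO avoids any leftover state from the old system, which is why it is often the cleaner route.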
 

NeverDie

Active Member
Jan 28, 2015
dswartz said:
Have you tried it? The last thread I read (actually one that I started hoping it was a viable approach) seemed to end in pessimism about going that route, except for possibly doing very simple things: https://forums.servethehome.com/index.php?threads/proxmox-3-4-now-integrated-with-zfs.4894/
 

dswartz

Active Member
Jul 14, 2011
Tried what? Proxmox on top of debian? Converting a proxmox box into NAS? I did use proxmox a couple of years ago - I switched for reasons having nothing to do with it being a crappy hypervisor...
 

PigLover

Moderator
Jan 26, 2011
dswartz said:
I was one of the vocal ones in that prior thread. I think the conclusion was (1) Proxmox is sorta OK for a single VM host; if you are not trying to manage multiple hosts, do HA, or do complex VM networking, it's OK, and (2) the principals at Proxmox are class-A jerks. Because of (2) I don't recommend Proxmox, because they don't deserve to get any business. But for simple things it's still sorta OK (especially if you have no intention of paying them :)).
 

dswartz

Active Member
Jul 14, 2011
No argument at all. For something simple (like a NAS that also runs a couple of guests), it's fine...
 

spyrule

Active Member
If your other services are using the NAS storage, personally I'd pass through my HDD controller, run the NAS as a VM, and then run the other VMs as needed. The primary benefit is that should your hardware fail but the drives survive, you can rebuild your NAS with little complication. Trying to recover one or more virtual drives spread across multiple HDDs gets very complicated; I remember reading an article where someone laid out the math, and it was literally twice the chance of no recovery (and that was with a 4-HDD setup; the more drives, the higher the chance).
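The general shape of that argument can be sanity-checked with a back-of-envelope calculation. This is a sketch assuming independent drive failures and an illustrative 3% annual failure rate per drive; both the independence assumption and the 3% figure are mine, not from the article spyrule mentions.

```shell
# P(losing at least one drive in a year) for n drives, assuming
# independent failures at rate p each. If virtual disks are striped
# across all n drives with no redundancy, any single loss can make
# recovery impossible.
awk 'BEGIN {
  p = 0.03                       # assumed annual failure rate per drive
  for (n = 1; n <= 4; n++)
    printf "n=%d  P(any failure)=%.4f\n", n, 1 - (1 - p)^n
}'
# For n=4 this gives ~0.1147, nearly 4x the single-drive 0.03 --
# the flavor of the argument, even if not the exact factor of two.
```

The exact multiplier depends on the real failure rate and on how the virtual disks are laid out, but the direction is clear: more drives without redundancy means a strictly worse chance of full recovery.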
 

cperalt1

Active Member
Feb 23, 2015
In the prior thread mentioned I suggested running SmartOS with Project FiFo for a GUI. This gives you illumos for native ZFS, zones (similar to jails) for running your services (SMB/FTP/nginx), and KVM for full VMs. One of the newest things in SmartOS is the lx brand, which lets you run Linux binaries on top of the illumos kernel, so your apps run at bare-metal speed without the penalty of going through KVM.
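As a sketch of what that looks like in practice, an lx-branded zone on SmartOS is created from a JSON manifest. The image UUID below is a placeholder (run `imgadm avail` to list real ones), and the exact set of supported properties depends on the SmartOS release.

```shell
# Sketch: creating an lx-branded zone on SmartOS (placeholder image UUID).
imgadm import <lx-image-uuid>        # fetch an lx dataset, e.g. an Ubuntu image
vmadm create <<'EOF'
{
  "brand": "lx",
  "alias": "nginx-zone",
  "image_uuid": "<lx-image-uuid>",
  "kernel_version": "3.13.0",
  "max_physical_memory": 512,
  "nics": [{ "nic_tag": "admin", "ip": "dhcp" }]
}
EOF
vmadm list                           # the new zone runs Linux binaries natively
```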
 

OBasel

Active Member
Dec 28, 2010
What are people using to manage kvm images with a GUI for something like this?
 

chinesestunna

Active Member
Jan 23, 2015
C1pher said:
I was in your exact situation 4 years ago and had to make the same decisions: I had Debian 6 running an 8x2TB array via mdadm and ran VMware Server (not ESXi) for VMs. I decided to go with a bare-metal hypervisor, and here are my experiences:
  1. First pick a platform: Hyper-V, ESXi, KVM, Proxmox, etc. I went with ESXi because of the large user base and thus support base. That's not to say the others don't have a decent number of users, but many companies use ESX/vSphere, which gives a lot of people production experience and can really help you out in a bind.
  2. Regarding array management, there are a few ways of passing storage from the hypervisor to the VM. While some people are OK with passing RDMs (raw device mappings), most of my research and experience shows that passing the entire controller via VT-d (IOMMU on AMD) is the best/easiest approach. This lets the NAS VM see the entire controller and all disks as if it were a native install, and you get more direct access to management/power/SMART features. In some cases, such as ZFS, controller passthrough (VT-d) seems to be required.
  3. If you do go with a bare-metal (Type 1) hypervisor, make sure you have good hardware support. The VT-d support mentioned above requires the proper CPU/motherboard combo. Some consumer boards have it and some consumer CPUs have it, but it takes both for it to work properly. I spent a month with Gigabyte support getting beta BIOS builds to enable the feature on my X58 board; even though the platform supports it, that doesn't mean the BIOS implementation will. Server boards and CPUs almost always support it.
  4. If you go ZFS, most of my reading indicates that ECC RAM is a must if you care about the data, so keep that in mind. Also, you can't reshape vdevs; that was one of the reasons I didn't go with it for my current build. If you have, say, a 6-drive RAIDZ2 (the ZFS equivalent of RAID6) and need more space, you can't just grow that array to an 8-drive RAIDZ2; you'd instead need to add a new vdev with the desired redundancy. Keep in mind too that if any vdev fails completely, the entire zpool is lost.
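The vdev limitation in point 4 can be illustrated with standard `zpool` commands. The device names here are hypothetical, and this reflects classic ZFS behavior (no in-place widening of a raidz vdev).

```shell
# Sketch: growing a ZFS pool (hypothetical device names).
zpool create tank raidz2 sda sdb sdc sdd sde sdf   # 6-drive RAIDZ2 vdev
# You cannot widen that vdev to 8 drives. Instead, you add a second vdev
# with its own redundancy; the pool then stripes across both:
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl
zpool status tank
# Caveat: losing either vdev entirely loses the whole pool.
```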
I guess some questions are:
  1. What's your current hardware setup? This could influence people's recommendations.
  2. Are you planning to change any hardware? If so what's future planned setup? This could be influenced by your storage/VM needs.
Good luck!
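A quick way to verify the hardware prerequisites from point 3 on a Linux box, before committing to a platform. This is a sketch; exact kernel messages vary by vendor, kernel version, and firmware.

```shell
# Sketch: checking VT-d / AMD-Vi support on a Linux host.
grep -Eq 'vmx|svm' /proc/cpuinfo && echo "CPU virtualization extensions present"
# The IOMMU must also be enabled in firmware and on the kernel command
# line (intel_iommu=on or amd_iommu=on); if active, dmesg shows DMAR or
# AMD-Vi initialization lines:
dmesg | grep -iE 'dmar|iommu|amd-vi'
# Populated IOMMU groups confirm devices can be assigned to VMs:
ls /sys/kernel/iommu_groups/
```

Both the CPU and the motherboard/BIOS must cooperate, which matches chinesestunna's experience with the X58 board.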
 