OMV for VMs?


IamSpartacus

Well-Known Member
Mar 14, 2016
I've been reading up on OMV as an alternative to UnRAID, and while it seems OMV + SnapRAID + MergerFS is a viable option for my bulk storage, I'm intrigued by the notion of running my VMs on a ZFS pool in OMV (ZoL). Has anyone done this who can speak to the performance vs., say, running it on FreeNAS? I'm using NFS for accessing my current VM datastores in FreeNAS.
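For reference, the MergerFS half of that combo is just a union mount over the individual data disks, with SnapRAID adding scheduled parity on top. A minimal sketch, assuming three data disks and one parity disk mounted under /mnt (all paths are placeholders):

Code:
# /etc/fstab - pool three data disks into one mount with mergerfs
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other,use_ino,category.create=epmfs  0  0

# /etc/snapraid.conf - parity over the same disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3

A periodic "snapraid sync" then updates the parity.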

Also, has anyone actually run VMs themselves in OMV? I assume it's possible with KVM, but I haven't seen much mention of this anywhere.
 

ttabbal

Active Member
Mar 10, 2016
I can't speak to OMV, but on Proxmox I do run VMs from a ZFS datastore. I use striped mirrors; performance is about what you would expect from that config, and I see no reason raidz would be different. There is a little less overhead vs. running against NFS. And if you use containers instead of VMs, you get a little less overhead there as well.
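For anyone following along, "striped mirrors" just means a pool built from multiple mirror vdevs, RAID10-style. A minimal sketch; device names are placeholders:

Code:
# two mirrored pairs striped together
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zpool status tank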
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I can't speak to OMV, but on Proxmox I do run VMs from a ZFS datastore. I use striped mirrors; performance is about what you would expect from that config, and I see no reason raidz would be different. There is a little less overhead vs. running against NFS. And if you use containers instead of VMs, you get a little less overhead there as well.
I wish Proxmox would support Docker containers natively. That would probably push me over the edge from ESXi towards Proxmox for my home VM cluster.
 

ttabbal

Active Member
Mar 10, 2016
I wish Proxmox would support Docker containers natively. That would probably push me over the edge from ESXi towards Proxmox for my home VM cluster.

You can install Docker on it; STH even did a writeup on it. It's not exactly "natively" though, and I ran into a problem booting up afterward. Docker creates a bunch of ZFS filesystems for its containers, and it seems ZoL tries to mount everything it finds, even though those datasets are marked not to auto-mount. This stalls the boot. To prevent it I had to modify one of the ZoL scripts. I found some bug reports where people documented it, but apparently they don't want to fix it upstream.
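If you hit the same stall, one way to see what ZoL is trying to mount is to list the Docker-created datasets along with their mount properties; a sketch, assuming Docker's ZFS storage lives under rpool/docker (an assumed name):

Code:
# show Docker's datasets and whether they are supposed to auto-mount
zfs list -r -o name,canmount,mountpoint rpool/docker

Datasets marked canmount=noauto should be skipped at boot; the bug described above is the ZoL mount script ignoring that flag.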

Honestly, I'm still trying to figure out why I'd want Docker over a more "traditional" container. I'm sure there is a reason, but I haven't found it, other than the ease of running existing images.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
You can install Docker on it; STH even did a writeup on it. It's not exactly "natively" though, and I ran into a problem booting up afterward. Docker creates a bunch of ZFS filesystems for its containers, and it seems ZoL tries to mount everything it finds, even though those datasets are marked not to auto-mount. This stalls the boot. To prevent it I had to modify one of the ZoL scripts. I found some bug reports where people documented it, but apparently they don't want to fix it upstream.

Honestly, I'm still trying to figure out why I'd want Docker over a more "traditional" container. I'm sure there is a reason, but I haven't found it, other than the ease of running existing images.
The problem is that without failover support for containers (not possible with LXC either), it's pretty pointless to me. I run 90% of my home services in Docker containers, including my surveillance recording software, so I need them up at all times.
 

vl1969

Active Member
Feb 5, 2014
OMV has no support for KVM virtualization. There is a VirtualBox plugin you can install; it sets up VirtualBox and gives you a web UI for it.
Also, as far as I can tell, OMV does not support ZFS natively. There is a plugin to help you set it up and somewhat manage it, but I have never tried it myself.

That said, Proxmox + ZFS (which is supported natively) + a TurnKey File Server LXC container (available from the Proxmox templates) may be something to your liking, depending on what your needs are.
The TK file server gives you a web UI for management (a custom Webmin interface) and provides you with Samba shares, plus ownCloud or something like that.

Just create a ZFS dataset (subvolume) on Proxmox (unfortunately a CLI-only process), create the container, and bind mount the dataset into it; a sketch follows below.

Everything else you take care of from inside the container.
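A minimal sketch of those three steps, assuming a pool named tank and container ID 101 (both placeholders):

Code:
# 1. create a dataset for the shared data (CLI only, as noted)
zfs create tank/share
# 2. create the container from the TurnKey File Server template (web UI or pct),
# 3. then bind mount the dataset into it at /srv/share
pct set 101 -mp0 /tank/share,mp=/srv/share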
 

vl1969

Active Member
Feb 5, 2014
Hold on a sec, how do you plan to do failover with OMV?
I am not sure it is a supported feature.

I didn't even know that unRAID supports a failover configuration.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Hold on a sec, how do you plan to do failover with OMV?
I am not sure it is a supported feature.

I didn't even know that unRAID supports a failover configuration.
I wouldn't be using OMV or any other OS itself to do failover. I was referring to using exported NFS shares from OMV to present a shared datastore to my ESXi hosts, the way I currently do with FreeNAS. As long as that storage is up, the VMs can fail over between ESXi hosts. As for LXC, I really don't have any interest in converting all my services over from Docker to LXC.
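For context, presenting storage this way is a standard NFS export on the NAS plus a mount on each ESXi host; a sketch with placeholder paths and addresses:

Code:
# /etc/exports on the storage box
/export/vmstore 192.168.1.0/24(rw,no_root_squash,sync)

# on each ESXi host, add it as a shared datastore
esxcli storage nfs add -H 192.168.1.10 -s /export/vmstore -v vmstore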
 

Jeggs101

Well-Known Member
Dec 29, 2010
Why can't you failover using Proxmox and Docker? I'm lost. You can export a ZFS share, map Docker volumes to the share, and set policies in Swarm to keep a service up. This is a basic use case.
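A sketch of that idea, assuming an NFS export at 192.168.1.10:/export/appdata and a generic image (all names are placeholders). Swarm's default restart policy reschedules the task on a surviving node, and the data follows because the volume points at shared storage:

Code:
docker service create --name recorder --replicas 1 \
  --mount type=volume,source=appdata,destination=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.10,volume-opt=device=:/export/appdata \
  some/image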
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Why can't you failover using Proxmox and Docker? I'm lost. You can export a ZFS share, map Docker volumes to the share, and set policies in Swarm to keep a service up. This is a basic use case.
I'm not sure I'm following you exactly. In the configuration you're describing, where would the actual container reside?
 

vl1969

Active Member
Feb 5, 2014
Hold on a sec, I am not sure I understand your setup.
So, you have an ESXi setup AND shared storage using FreeNAS, right? 2 separate machines.

Now you want to replace the shared storage machine, and you also want to do virtualization on that machine, in addition to ESXi? Why?

I mean, if you want a robust setup with failover and everything, you can do that with ESXi, Proxmox, and other hypervisors, but it requires a bit of planning and maybe additional software.

For example:
VMware ESXi has a vSAN option. It allows you to build an HA cluster, with failover and such, using local storage on each host instead of external shared storage. BUT it requires a 3-host setup and a paid license from VMware.
You can also use Veeam backup to do async replication between hosts; that is a bit cheaper and may work with 2 nodes.

There is also a product from StarWind that works with ESXi and MS Hyper-V: 2 nodes minimum and up to 4 TB of shared storage free.
Any other solution will set you back financially, as almost all require 3-host setups. Even Proxmox needs 3 hosts for a true HA setup.

OMV supports Docker and has a plugin for it, so I'd say it should be a good fit for you.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Hold on a sec, I am not sure I understand your setup.
So, you have an ESXi setup AND shared storage using FreeNAS, right? 2 separate machines.

Now you want to replace the shared storage machine, and you also want to do virtualization on that machine, in addition to ESXi? Why?

I mean, if you want a robust setup with failover and everything, you can do that with ESXi, Proxmox, and other hypervisors, but it requires a bit of planning and maybe additional software.

For example:
VMware ESXi has a vSAN option. It allows you to build an HA cluster, with failover and such, using local storage on each host instead of external shared storage. BUT it requires a 3-host setup and a paid license from VMware.
You can also use Veeam backup to do async replication between hosts; that is a bit cheaper and may work with 2 nodes.

There is also a product from StarWind that works with ESXi and MS Hyper-V: 2 nodes minimum and up to 4 TB of shared storage free.
Any other solution will set you back financially, as almost all require 3-host setups. Even Proxmox needs 3 hosts for a true HA setup.

OMV supports Docker and has a plugin for it, so I'd say it should be a good fit for you.
My current setup has 4 physical nodes as follows:

Node 1: FreeNAS All Flash Storage (VM Shared Datastore)
Node 2: UnRAID Bulk Array
Node 3: ESXi host #1
Node 4: ESXi host #2

I run 2 VMs on UnRAID in addition to what's on my ESXi cluster. One is a second Domain Controller, so I still have DHCP, DNS, and mapped drives if I have to take my FreeNAS box offline. The other is a Windows 10 VM acting as a backup Veeam box, so I can restore VMs in case my regular Veeam box on the FreeNAS datastore is offline.

I used to run VMware vSAN across my 4 nodes, but I found it wasn't very flexible. I run vSAN at work, and it's great for setups you're not tinkering with often, but my home network doesn't fall into that category. For example, vSAN really doesn't like it when a node is unexpectedly down for more than an hour. It also doesn't like any hardware that isn't explicitly defined as supported.
 

vl1969

Active Member
Feb 5, 2014
My current setup has 4 physical nodes as follows:

Node 1: FreeNAS All Flash Storage (VM Shared Datastore)
Node 2: UnRAID Bulk Array
Node 3: ESXi host #1
Node 4: ESXi host #2

I run 2 VMs on UnRAID in addition to what's on my ESXi cluster. One is a second Domain Controller, so I still have DHCP, DNS, and mapped drives if I have to take my FreeNAS box offline. The other is a Windows 10 VM acting as a backup Veeam box, so I can restore VMs in case my regular Veeam box on the FreeNAS datastore is offline.

I used to run VMware vSAN across my 4 nodes, but I found it wasn't very flexible. I run vSAN at work, and it's great for setups you're not tinkering with often, but my home network doesn't fall into that category. For example, vSAN really doesn't like it when a node is unexpectedly down for more than an hour. It also doesn't like any hardware that isn't explicitly defined as supported.
Gotcha,

so it is safe to say you have a 2-node ESXi cluster + a FreeNAS machine (essentially a standalone host) + a second standalone host with UnRAID for bulk storage.

Do you mind me asking what your use case is for all of this hardware? I mean, how do you use all of it, and what are your requirements? Also, what is wrong with FreeNAS :) ?

Mind you, I do not use FreeNAS myself. I prefer Linux-based setups, since I know Linux better and do not want to learn BSD.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Gotcha,

so it is safe to say you have a 2-node ESXi cluster + a FreeNAS machine (essentially a standalone host) + a second standalone host with UnRAID for bulk storage.

Do you mind me asking what your use case is for all of this hardware? I mean, how do you use all of it, and what are your requirements? Also, what is wrong with FreeNAS :) ?

Mind you, I do not use FreeNAS myself. I prefer Linux-based setups, since I know Linux better and do not want to learn BSD.
First off, nothing is wrong with FreeNAS. It has been very stable for my VM storage. However, I'm looking to consolidate my storage to downsize the amount of hardware I'm running/supporting. That is what has led me to look at options for combining my bulk array and flash storage under the same "roof," so to speak. I don't want to run my bulk drives as a striped array for a multitude of reasons.

Bulk array is used for the following:
  • 60% media streaming
  • 10% surveillance video
  • 10% VM snapshots
  • 10% Software
  • 10% Personal Data (Documents, Pictures, etc.)

The biggest use of my network is definitely media streaming; I often have 6-10 streams going every night, sometimes more. I also run 4 video cameras in my home, and I keep a large number of test VMs on my ESXi cluster for testing/learning new technologies to apply at work or at home.
 

vl1969

Active Member
Feb 5, 2014
Well, since FreeNAS does not support clustering, you have limited choices here.
I am not sure what the best way to do this is; it would help to know what your end goal is.
Are you trying to eliminate the FreeNAS and UnRAID hosts completely?
Or is your plan to normalize the whole infrastructure into a homogeneous setup, as in 1 ESXi cluster and 1 storage setup?

Nothing will be that easy to implement, though.
For ZFS I would go with Proxmox, since it supports it natively.
One big issue, for me, is that fine-grained management is CLI only.
It also lacks Docker support.

OMV supports Docker, but ZFS is not native and may still require a lot of CLI interaction to manage.
OMV does not natively support KVM, and any attempt to get a management UI for it has proven difficult to install and set up, so the only VM choice is VirtualBox, and even that has not been trouble free, as plugins are not always ported quickly enough between versions.

I have been planning my home setup for a while now.
I only have 1 host, so it is a bit difficult to envision the best setup; my needs are mostly what your UnRAID provides, but UnRAID did not work out for me.
So my plan is to run Proxmox using ZFS.
Then, using the CLI (or I may load Webmin alongside Proxmox to help with management; not a recommended setup, but used carefully it will work fine), I'd set up my data ZFS pool the way I want it. I'm not sure how your hardware is configured or how many drives you have, but you can use an SSD as cache for the ZFS pool(s).
Then load up an OMV VM and pass the ZFS pool through into it, or, as I said in my other post, load a TurnKey File Server container and bind mount the pool(s) into it for more universal sharing (TKFS provides Samba and ownCloud-style sharing in one).
In addition, you can use ZFS's built-in NFS sharing to point your shared storage for ESXi at the same pool.
If you have 2 small SSDs, do a ZFS RAID-1 system setup (Proxmox supports that natively) for good uptime.
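A sketch of the two ZFS pieces mentioned above, the built-in NFS sharing and an SSD cache, with placeholder pool and device names:

Code:
# share a dataset over NFS straight from ZFS for the ESXi datastore
# (export options can be tightened; "on" uses the defaults)
zfs set sharenfs=on tank/vmstore
# add an SSD as an L2ARC read cache for the pool
zpool add tank cache /dev/sdx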

The other option, if your UnRAID and FreeNAS hosts are good enough and similar in config, is a 3-node Proxmox HA cluster with storage replication: you load up Proxmox on 2 of your nodes and run 1 VM in ESXi.
The 2 Proxmox nodes form the cluster, and the VM node acts as a witness, so it doesn't need many resources.
This way you will have an HA setup for your storage that is also capable of running VMs/containers,
and you can then have a nice VM with Docker set up running on that, passing through the storage as needed.


PS: if I can grab 2 servers from my office (we are closing, and they might sell them to me cheap), I might attempt to build out this setup myself. :)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Proxmox is off the table for me. I've looked at it multiple times, but there are certain features in VMware that I just can't live without, and I don't think the $180-a-year VMUG subscription is outrageous considering it allows me to keep my skill set up to date.

For now I'm just looking to eliminate one physical storage node by combining my flash VM storage and bulk storage into one physical box. I'll be keeping the two-node ESXi cluster as is because I want/need the flexibility it offers. If UnRAID ran nicely in a VM (I've tested this extensively and it's unreliable), I'd just run UnRAID and FreeNAS both as VMs on the same physical box.

Honestly, I'm leaning towards just moving my flash storage back into my ESXi nodes and creating local datastores. With vMotion no longer requiring shared storage, I can still migrate my VMs between hosts on the local datastores.
 

vl1969

Active Member
Feb 5, 2014
Why can't you add a controller to your ESXi box and just pass it through to a VM?
That way you can build out a FreeNAS VM on the cluster, pass the drives through (via the whole controller) into it, and use that as shared storage via NFS as you do now. It should be faster, as it is on the same box and the same vSwitch.
If you have enough SSDs, split them evenly between the 2 nodes and replicate the data between them. The only thing that will not work is automatic migration, as you cannot migrate a passed-through controller; but if you mirror the data, you can just spin up the VM as needed.
I think...

Unfortunately, unRAID is not good as a VM. It never was and never will be, as it requires a USB key to run properly.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Why can't you add a controller to your ESXi box and just pass it through to a VM?
That way you can build out a FreeNAS VM on the cluster, pass the drives through (via the whole controller) into it, and use that as shared storage via NFS as you do now. It should be faster, as it is on the same box and the same vSwitch.
If you have enough SSDs, split them evenly between the 2 nodes and replicate the data between them. The only thing that will not work is automatic migration, as you cannot migrate a passed-through controller; but if you mirror the data, you can just spin up the VM as needed.
I think...
I'm not sure what you're suggesting here. You're saying to put FreeNAS (running as a VM) on one of my two ESXi hosts? What happens when I have to take that host down, or it goes down on its own? I'd prefer to be able to fully migrate all my VMs between hosts for maintenance.

Unfortunately, unRAID is not good as a VM. It never was and never will be, as it requires a USB key to run properly.
That part works fine in ESXi (I can pass through the USB drive to the VM), but I just get very unreliable performance when it's running as a VM. UnRAID would regularly lock up on me when running in a VM.