ProxMox vs OMV vs UnRaid vs Debian/Centos w/ Docker + KVM vs Rancher

Eric Faden

Member
Dec 5, 2016
98
6
8
41
Hey All,

So I'm finally ordering the hardware for my server, but now comes the decision of what to run on it. For now I'm planning to run Plex/SickRage/etc., but ideally I'd also like to have an OSX/Windows/etc. VM available in the future.

I have read tons and tons of articles and still can't sort through what to use. I'm comfortable in Linux, so having a GUI isn't entirely necessary. I was also leaning towards SnapRAID w/ MergerFS or Unraid, since I plan to add more storage in the future and like the ability to modify the pools moving forward.

So the question comes down to what to use... My current thought is to use Debian w/ SnapRAID + MergerFS + KVM + Docker (or RancherOS running inside a KVM VM).

Although ProxMox/Unraid/OMV with plugins also seem like a reasonable idea....

Anyone have any tips?
 

Eric Faden

Member
Dec 5, 2016
98
6
8
41
And not to mention ZFS w/ FreeNAS (although ZFS can't add drives...)
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
It's a fairly broad question... The typical Plex stuff is easy enough. You can run them on top of almost any Linux setup using Docker or another container technology. FreeNAS jails work pretty well there as well. My current hardware started on FreeNAS, but I moved away from it as I couldn't get Crashplan to work reliably under it. At the time, bhyve couldn't boot Linux on my hardware, so I decided to just use Linux and went to Proxmox/ZFS/containers. That's a bonus for ZFS: it's quite portable between operating systems. You seem aware of the downsides, so I won't go into those.
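
For reference, moving a pool between FreeNAS and Linux is roughly just an export/import, assuming the feature flags line up (pool name below is made up):

    # on the old FreeNAS box
    zpool export tank
    # on the new Linux box with ZFS on Linux installed
    # (-f may be needed if the pool wasn't cleanly exported)
    zpool import tank
    zfs list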

All the container stuff works well in Proxmox as well, though maintenance is a little more involved than with a Docker setup. It's not much more; applying updates just seems a little less simple. I run the usual suspects in lightweight LXC containers and am happy with it. If you go with Docker, those are available as Docker images as well. I've never used Docker, so I can't really compare them, but I think the end result is pretty similar. Proxmox has some other features that might be interesting, like Ceph and clustering, but I don't use them. Docker seems to have more options for pre-made containers.

If I were to go the Docker direction, I think I would try a basic Debian or other Linux with whatever file/RAID system I chose, and run Rancher in a KVM VM. One downside there is that disk I/O takes a hit as it has to pass over a VirtIO channel, but the Linux drivers are pretty good and I expect the impact would be small. You could also install Docker directly onto the base Linux install. That would give you a setup much like Proxmox, just using Docker containers instead of LXC.

I think the first thing to do is pick a disk management technology, as that limits your selections for the higher layers. Then perhaps the container/virtualization stuff. If you have time, try a couple of options and just be OK with formatting the boot drives to experiment.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
You can add disks with ZFS. You just miss traditional online capacity expansion.
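
For example (pool and device names are placeholders), you grow a pool by adding another vdev; what you can't do is widen an existing raidz vdev one disk at a time:

    # add a new mirror vdev to an existing pool -- total capacity grows right away
    zpool add tank mirror /dev/sdc /dev/sdd
    # there is no equivalent command to turn a 4-disk raidz1 into a 5-disk raidz1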
 
  • Like
Reactions: brown0611

Eric Faden

Member
Dec 5, 2016
98
6
8
41
@Patrick True.

@ttabbal
True. It came out broader than I was thinking. But the issue is that there are just so many ways to go with this.

I considered FreeNAS with jails as well as OMV with plugins. Both seem like they would work for most of my purposes, but I figured I would rather go with a general-purpose OS for flexibility moving forward.

Right now my storage choices are down to MergerFS+SnapRAID, ZFS, or Unraid. Unraid is a pretty easy choice for the rest if I go that way since it is all baked in....

If I do MergerFS+SnapRAID I'll likely go Debian server host with Docker+KVM for the rest.....

If I go ZFS I'll likely go OMV or FreeNAS....


I'm leaning towards MergerFS+SnapRAID. I realize its downsides, but the bulk of the storage on here won't be mission critical (movies, TV, etc.). Mission-critical stuff will be stored on a separate array. This box will also use SSDs on a different FS for the VMs/Dockers, etc.

Have you run RancherOS in a KVM?... Since I'll be running things like Plex, etc., I will need VirtIO or some other way to get the /mnt/Movies etc. directories inside of Rancher and then into Plex, Sonarr, etc.

Is VirtIO the way to do that?... or NFS shares? or ...?
 
  • Like
Reactions: dawsonkm

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
I haven't run RancherOS. I just took a quick look at its documentation. Accessing the main filesystem data can likely be done in a few ways. The easiest is probably NFS. There are some performance implications in doing that, but for a media storage area it's likely fine.
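
A rough sketch of the NFS route (IPs and paths are made up, it assumes the VM has an NFS client available, and the Plex image is just one of the common ones):

    # on the host: /etc/exports, then run `exportfs -ra`
    /mnt/storage  192.168.1.0/24(ro,no_subtree_check)

    # inside the RancherOS/Docker VM
    sudo mkdir -p /mnt/storage
    sudo mount -t nfs 192.168.1.10:/mnt/storage /mnt/storage

    # hand the media to the Plex container as a read-only bind mount
    docker run -d --name plex -v /mnt/storage/Movies:/data/movies:ro plexinc/pms-docker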
 

markarr

Active Member
Oct 31, 2013
421
122
43
For SnapRAID and MergerFS you could look at what @rubylaser has written up on it. He has a couple of posts on using Debian as a base and running everything as containers from there. When you create a container you map paths from outside the container to inside it, so you don't have to deal with sharing. I used his setup, and running Plex, Sonarr, etc. as containers on Ubuntu works great for me.
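
For example, the pattern looks roughly like this (paths and image are just illustrative, not necessarily exactly what's in his writeups):

    # map the mergerfs pool and a config dir straight into the container
    docker run -d --name sonarr \
      -p 8989:8989 \
      -v /opt/appdata/sonarr:/config \
      -v /mnt/storage/TV:/tv \
      -v /mnt/storage/Downloads:/downloads \
      linuxserver/sonarr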
 

Eric Faden

Member
Dec 5, 2016
98
6
8
41
@ttabbal I'm curious about it... The downside, as @markarr points out, is sharing the filesystem in and out... If I use Docker directly I don't have to deal with that, and I can just save VMs for other things that don't need access to host shares.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
For SnapRAID and MergerFS you could look at what @rubylaser has written up on it. He has a couple of posts on using Debian as a base and running everything as containers from there. When you create a container you map paths from outside the container to inside it, so you don't have to deal with sharing. I used his setup, and running Plex, Sonarr, etc. as containers on Ubuntu works great for me.
Thanks for the kind words. This setup has worked well for me for a long while now. It's fantastic for bulk home media.
 

mbello

New Member
Nov 14, 2016
17
8
3
42
If your setup should also have learning value for you (are you an IT pro?), then I would say stay away from SnapRAID and MergerFS, because these technologies are not something anyone would want to use in production for anything serious. It also sounds quite cumbersome to maintain.

Imagine that every time an HDD fails, your system will fail and malfunction until a replacement HDD arrives. If one of your HDDs starts to lose data, you will not know, and SnapRAID will probably not know it either. Why not use a proper RAID technology?

Usually, SnapRAID and MergerFS do not even get mentioned in professional discussions of open source storage systems. These days, from what I hear, it is either a safe filesystem like ZFS (with its replication options if you need them) or distributed storage like Ceph (very complex to set up and maintain). At the enterprise level there are proprietary options as well.

For you I would say Proxmox is the way to go. If your budget is high, you could even keep a dedicated FreeNAS box and then run your VMs from network storage (iSCSI, PXE, etc.).

I would stay away from Docker in your case. Docker is great if you want to easily package your application and scale it out, but Docker instances are stateless; if you are running only one instance of a piece of software, use either KVM or LXC for virtualization.

Also, choose a Linux distro and stick with it (and its derivatives) for the host and VMs. I would say the decision for a reliable server distro is really among Debian, Ubuntu Server, CentOS and SUSE. I usually go with Ubuntu Server.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
@mbello
Thanks for the info. Not an IT pro.... not any more.... a Physician.

@rubylaser
Thanks... just looking through your site now. Do you also run VMs on that platform?...
I used to, but I have since broken my VMs back out onto a separate Proxmox host. It is certainly capable of running VMs. Just don't use a mergerfs pool as your VM storage (I use ZFSonLinux for VM storage).
 

Eric Faden

Member
Dec 5, 2016
98
6
8
41
I used to, but I have since broken my VMs back out onto a separate Proxmox host. It is certainly capable of running VMs. Just don't use a mergerfs pool as your VM storage (I use ZFSonLinux for VM storage).
@rubylaser
I am only planning to have one box. It is going to have a few HDDs and a pair of SSDs. I was planning to run Debian and use the SSDs for VMs (maybe ZFS) and the HDDs with SnapRAID/MergerFS for storage for movies, etc.
 

nephri

Active Member
Sep 23, 2015
541
106
43
45
Paris, France
In a similar situation I used:
- a FreeNAS box running NFS and iSCSI services to expose the disks
- a box with Proxmox; I use KVM to create VMs for the different services (Plex, Emby, wiki, ownCloud, DNS, ...)
- each VM has its disk configured via iSCSI (or NFS) on the FreeNAS server

I always feel more comfortable with KVM than with LXC or Docker.
I don't know why (maybe because I found that setting up CoreOS is quite a pain...).

So I don't have much experience with all the options you are studying, but I can say that Proxmox is an awesome tool and FreeNAS is really fun as well.

PS: on the VM running Plex, the media mount point is an NFS mount to FreeNAS. That lets me manage the snapshot and backup strategy directly from FreeNAS.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
@rubylaser
I am only planning to have one box. It is going to have a few HDDs and a pair of SSDs. I was planning to run Debian and use the SSDs for VMs (maybe ZFS) and the HDDs with SnapRAID/MergerFS for storage for movies, etc.
This will work fine on one box as well. You would just need to install SnapRAID and MergerFS on the Proxmox host. This is super easy to do :)
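
At its core it's just two config files. A very rough sketch (mount points and disk names are placeholders, not my exact config):

    # /etc/fstab -- pool the data disks with mergerfs
    /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other  0 0

    # /etc/snapraid.conf -- parity on its own disk, plus the data disks
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2
    data d3 /mnt/disk3

From there, a scheduled snapraid sync keeps the parity up to date.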
 
  • Like
Reactions: Continuum

markarr

Active Member
Oct 31, 2013
421
122
43
Imagine that every time an HDD fails, your system will fail and malfunction until a replacement HDD arrives. If one of your HDDs starts to lose data, you will not know, and SnapRAID will probably not know it either. Why not use a proper RAID technology?

Usually, SnapRAID and MergerFS do not even get mentioned in professional discussions of open source storage systems. These days, from what I hear, it is either a safe filesystem like ZFS (with its replication options if you need them) or distributed storage like Ceph (very complex to set up and maintain). At the enterprise level there are proprietary options as well.
With SnapRAID, if a drive fails you only lose what is on that drive; the system doesn't fail. If a hard drive starts to fail, every sync checksums anything that has changed and will fix bit rot.

You're right that SnapRAID does not get mentioned in those discussions, because of what it was designed for. It is meant for bulk static files that don't change often; it is not meant for critical storage or VMs the way ZFS is. In a hybrid situation like the poster wants, it works great: you can add a drive to the system, modify the two config files, and the space is available to use.
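
To be concrete, adding a disk comes down to something like this (disk names made up), followed by a snapraid sync:

    # /etc/fstab -- add the new disk to the mergerfs source list
    /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4  /mnt/storage  fuse.mergerfs  defaults,allow_other  0 0

    # /etc/snapraid.conf -- register it with SnapRAID as well
    data d4 /mnt/disk4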

I would stay away from Docker in your case. Docker is great if you want to easily package your application and scale it out, but Docker instances are stateless; if you are running only one instance of a piece of software, use either KVM or LXC for virtualization.
Docker actually works pretty well for the poster's use case. Each program is isolated and can be worked on independently, you don't have anywhere near the overhead of VMs, and the apps he is talking about are mostly stateless anyway. To update an app you run "docker restart plex" and it will update Plex for you.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
@rubylaser What's your performance like with SnapRAID + MergerFS? I'm considering moving one of my UnRAID arrays to SnapRAID as a temporary test and wonder about read/write performance to the array, since I have 10Gb networking. Currently I have a RAID0 cache pool on both of my arrays, so I can come pretty close to saturating the 10Gb link between the two.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
@rubylaser What's your performance like with SnapRAID + MergerFS? I'm considering moving one of my UnRAID arrays to SnapRAID as a temporary test and wonder about read/write performance to the array, since I have 10Gb networking. Currently I have a RAID0 cache pool on both of my arrays, so I can come pretty close to saturating the 10Gb link between the two.
MergerFS has no built-in caching mechanism, so it's only as fast as the underlying disks. A user a while back wrote up a simple script, run via a cron job, that would periodically sweep data onto the pool, but I have never used it. I asked the developer about a year ago for an option to support a cache pool or SSD cache, but he mentioned that the added complexity was not worth the effort.
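
I don't have that script handy, but the general idea would be a cron'd rsync that empties a fast "landing" disk into the pool, something along these lines (paths are just examples):

    #!/bin/bash
    # crude "mover": push anything that has landed on the SSD onto the
    # mergerfs pool, then remove the source copies and empty directories
    CACHE=/mnt/ssd-cache
    POOL=/mnt/storage
    rsync -a --remove-source-files "$CACHE"/ "$POOL"/
    find "$CACHE" -mindepth 1 -type d -empty -delete

Run it nightly from cron if you want to try the idea; it's a sketch, not a tested mover.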
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
MergerFS has no built-in caching mechanism, so it's only as fast as the underlying disks. A user a while back wrote up a simple script, run via a cron job, that would periodically sweep data onto the pool, but I have never used it. I asked the developer about a year ago for an option to support a cache pool or SSD cache, but he mentioned that the added complexity was not worth the effort.
So there is no way of setting up a cache pool with SnapRAID + MergerFS such that any data transferred to those MergerFS shares goes to the cache pool (SSDs) first, before being moved to the SnapRAID-protected array (spinners)? Similar to how UnRAID allows for the use of cache pools?