ProxMox vs OMV vs UnRaid vs Debian/Centos w/ Docker + KVM vs Rancher

Discussion in 'Linux Admins, Storage and Virtualization' started by Eric Faden, Dec 29, 2016.

  1. Eric Faden

    Eric Faden Member

    Joined:
    Dec 5, 2016
    Messages:
    91
    Likes Received:
    5
    Hey All,

    So I'm finally ordering the hardware for my server, and now comes the decision of what to run on it. Currently I'm planning to run Plex/SickRage/etc., but ideally I'd also like to have an OS X/Windows/etc. VM available in the future.

    I have read tons and tons of articles and still can't sort through what to use. I'm comfortable in Linux, so having a GUI isn't strictly necessary. I was leaning towards SnapRAID w/ MergerFS or Unraid, since I plan to add more storage in the future and like the ability to modify the pools going forward.

    So the question comes down to what to use... My current thought is to use Debian w/ SnapRAID + MergerFS + KVM + Docker (or Docker via RancherOS running inside a KVM VM).

    Although ProxMox/Unraid/OMV with plugins also seem like a reasonable idea....

    Anyone have any tips?
     
    #1
  2. Eric Faden

    And not to mention ZFS w/ FreeNAS (although ZFS can't add drives...)
     
    #2
  3. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    575
    Likes Received:
    161
    It's a fairly broad question... The typical Plex stuff is easy enough; you can run it on top of almost any Linux setup using Docker or another container technology, and FreeNAS jails work pretty well there as well. My current hardware started on FreeNAS, but I moved off it because I couldn't get CrashPlan to work reliably under it. At the time, bhyve couldn't boot Linux on my hardware, so I decided to just use Linux and went to Proxmox/ZFS/containers. That's a bonus of ZFS: the pool is quite portable between operating systems. You seem aware of the downsides, so I won't go into those.

    All the container stuff works well in Proxmox as well, though maintenance is a little more involved than a Docker setup. It's not much more, just applying the updates seems a little less simple. I run the usual suspects in lightweight LXC containers and am happy with it. If you go with Docker, those are available as Docker images as well. I've never used Docker, so I can't really compare them. I think the end result is pretty similar though. Proxmox has some other features that might be interesting like Ceph and clustering, but I don't use them. Docker seems to have more options for pre-made containers.

    If I were to go the Docker direction, I think I would try a basic Debian or other Linux with whatever file/RAID system I chose, and run Rancher in a KVM VM. One downside there is that disk I/O takes a hit, since it has to pass over a VirtIO channel, but the Linux drivers are pretty good and I expect the impact would be small. You could also install Docker directly onto the base Linux install. That would give you a setup much like Proxmox, just using Docker containers instead of LXC.

    I think the first thing to do is pick a disk management technology, as that limits your options for the higher layers. Then pick the container/virtualization stack. If you have time, try a couple of options and just be OK with reformatting the boot drives to experiment.
     
    #3
  4. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    10,006
    Likes Received:
    3,312
    You can add disks with ZFS. You just miss traditional online capacity expansion.
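
    For illustration, a hedged sketch of what that distinction looks like (pool name `tank` and device paths are hypothetical, not from this thread): ZFS grows by adding whole vdevs, not by widening an existing raidz vdev one disk at a time.

```
# Adding a whole new vdev grows the pool immediately (device names are examples):
zpool add tank mirror /dev/sdc /dev/sdd

# What you can't do (the "traditional online capacity expansion" Patrick means):
# add one more disk to an existing raidz vdev to widen it.
zpool status tank   # shows the new vdev alongside the old ones
```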
     
    #4
  5. Eric Faden

    @Patrick True.

    @ttabbal
    True. It came out broader than I was thinking. But the issue is that there are just so many ways to go with this.

    I considered FreeNAS with jails as well as OMV with plugins. Both seem like they would work for most of my purposes, but figured I would rather go with a main OS for flexibility moving forward.

    Right now my storage choices are down to MergerFS+SnapRAID, ZFS, or Unraid. Unraid is a pretty easy choice for the rest if I go that way since it is all baked in....

    If I do MergerFS+SnapRAID I'll likely go Debian server host with Docker+KVM for the rest.....

    If I go ZFS I'll likely go OMV or FreeNAS....


    I'm leaning towards MergerFS+SnapRAID. I realize its downsides, but the bulk of the storage on here won't be mission critical (movies, TV, etc.). Mission-critical stuff will be stored on a separate array. This box will also use SSDs on a different filesystem for the VMs/Dockers, etc.

    Have you run RancherOS in a KVM VM?... Since I'll be running things like Plex, I'll need VirtIO or some other way to get the /mnt/Movies etc. directories inside Rancher and then into Plex, Sonarr, etc.

    Is VirtIO the way to do that?... or NFS shares? or ...?
     
    #5
  6. ttabbal

    I haven't run RancherOS; I just took a quick look at its documentation. Accessing the main filesystem's data can likely be done a few ways. The easiest is probably NFS. There are some performance implications in doing that, but for a media storage area it's likely fine.
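
    As a rough sketch of the NFS route (all paths, IPs, and the export options are assumptions, not from the thread): export the media directory from the storage host, then mount it inside the VM.

```
# /etc/exports on the storage host -- export the media path read-only to the LAN:
/mnt/Movies  192.168.1.0/24(ro,no_subtree_check)

# reload exports after editing:
exportfs -ra

# inside the VM, mount it (the host IP is an example):
mount -t nfs 192.168.1.10:/mnt/Movies /mnt/Movies
```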
     
    #6
  7. markarr

    markarr Active Member

    Joined:
    Oct 31, 2013
    Messages:
    351
    Likes Received:
    96
    For SnapRAID and MergerFS you could look at what @rubylaser has written up on it. He has a couple of posts on using Debian as a base and running everything as containers from there. When you create a container you map host paths inside it, so you don't have to deal with sharing. I used his setup, and running Plex, Sonarr, etc. as containers on Ubuntu works great for me.
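
    A minimal sketch of that path mapping, assuming Plex's plexinc/pms-docker image and hypothetical host paths (`-v` binds a host directory to a path inside the container, so no network sharing is involved):

```
# host paths below are examples; adjust to your own layout
docker run -d --name plex \
  --net=host \
  -v /mnt/storage/Movies:/data/movies:ro \
  -v /opt/appdata/plex:/config \
  plexinc/pms-docker
```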
     
    #7
  8. Eric Faden

    @ttabbal I'm curious about it... the downside, as @markarr points out, is sharing the filesystem in and out... If I use Docker directly I don't have to deal with that, and can save VMs for other things that don't need access to host shares.
     
    #8
  9. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    821
    Likes Received:
    214
    Thanks for the kind words. This setup has worked well for me for a long while now. It's fantastic for bulk home media.
     
    #9
  10. mbello

    mbello New Member

    Joined:
    Nov 14, 2016
    Messages:
    11
    Likes Received:
    6
    If your setup should also have learning value to you (are you an IT pro?), then I would say stay away from SnapRAID and MergerFS, because they are not something anyone would want to use in production for anything serious, and they sound quite cumbersome to maintain.

    Imagine that every time an HDD fails, your system will malfunction until a replacement HDD arrives. If one of your HDDs starts to lose data, you will not know, and SnapRAID will probably not know either. Why not use a proper RAID technology?

    Usually SnapRAID and MergerFS do not even get mentioned in professional discussions of open-source storage systems. These days, from what I hear, it is either a safe filesystem like ZFS (with its replication options, if you need them) or distributed storage like Ceph (very complex to set up and maintain). At the enterprise level there are proprietary options as well.

    For you I would say Proxmox is the way to go. If your budget is high, you could even keep a dedicated FreeNAS box and run your VMs from network storage (iSCSI, PXE, etc.).

    I would stay away from Docker in your case. Docker is great if you want to easily package your application and scale it out, but Docker instances are stateless; if you are running only one instance of a piece of software, use either KVM or LXC for virtualization.

    Also, choose a Linux distro and stick with it (and its derivatives) for both host and VMs. I would say the decision is really among Debian, Ubuntu Server, CentOS, and SUSE for a reliable server distro. I usually go with Ubuntu Server.
     
    #10
  11. Eric Faden

    @mbello
    Thanks for the info. Not an IT pro.... not any more.... a Physician.

    @rubylaser
    Thanks... just looking through your site now. Do you also run VMs on that platform?...
     
    #11
  12. rubylaser

    I used to, but I have since broken my VMs back out onto a separate Proxmox host. It is certainly capable of running VMs; just don't use a MergerFS pool as your VM storage (I use ZFS on Linux for VM storage).
     
    #12
  13. Eric Faden

    @rubylaser
    I am only planning to have one box. It is going to have a few HDDs and a pair of SSDs. I was planning to run Debian, use the SSDs for VMs (maybe on ZFS), and then use the HDDs with SnapRAID/MergerFS for storing movies etc.
     
    #13
  14. nephri

    nephri Active Member

    Joined:
    Sep 23, 2015
    Messages:
    407
    Likes Received:
    56
    In a similar situation I used:
    - a FreeNAS box running NFS and iSCSI services to expose the disks
    - a box with Proxmox; I use KVM to create a VM for each service (Plex, Emby, wiki, ownCloud, DNS, ...)
    - each VM has its disk configured through iSCSI (or NFS) against the FreeNAS server

    I always feel more comfortable with KVM than with LXC or Docker. I don't know why (maybe because I found that setting up a CoreOS is quite a pain...).

    So I don't have much experience with all the options you're studying, but I can say that Proxmox is an awesome tool, and FreeNAS is really fun as well.

    PS: on the VM running Plex, the media mount point is an NFS mount from FreeNAS. That lets me manage the snapshot and backup strategy directly from FreeNAS.
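
    If it helps, a hedged example of making that mount permanent on the Plex VM (server name, dataset path, and options are hypothetical):

```
# /etc/fstab on the Plex VM:
freenas.local:/mnt/tank/media  /mnt/media  nfs  ro,vers=3  0  0
```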
     
    #14
  15. rubylaser

    This will work fine on one box as well. You would just need to install SnapRAID and MergerFS on the Proxmox host. This is super easy to do :)
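
    A hedged sketch of what that installation looks like on a Debian-based Proxmox host (mount points and disk names are examples; check current package availability, as SnapRAID has at times needed to be built from source):

```
# install the tools (package names as in Debian repos)
apt-get install snapraid mergerfs

# /etc/fstab -- pool the data disks with a mergerfs glob:
/mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=epmfs  0  0

# /etc/snapraid.conf -- minimal fragment:
parity  /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```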
     
    #15
  16. markarr

    With SnapRAID, if a drive fails you only lose what is on that drive; the system doesn't fail. If a hard drive starts to fail, every sync checksums anything that has changed, and a scrub can detect and fix bit rot.

    You're right that SnapRAID does not get mentioned in those discussions, but that's down to what it was designed for. It is meant for bulk static files that don't change often; it is not meant for critical storage or VMs the way ZFS is. In a hybrid situation like the poster wants, it works great: you can add a drive to the system, modify the two config files, and the space is available to use.

    Docker actually works pretty well for the poster's use case. Each program is isolated and can be worked on independently, you don't have anywhere near the overhead of VMs, and the apps he is talking about are mostly stateless anyway. Updating an app can be as simple as pulling a fresh image and restarting the container.
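
    For reference, the day-to-day SnapRAID workflow sketched as commands (disk names are examples; this follows the general form of the SnapRAID manual, so double-check against your version):

```
snapraid sync        # recompute parity after files are added or changed
snapraid scrub       # verify checksums and surface silent corruption
snapraid -d d2 fix   # example: restore the contents of a failed data disk

# adding a drive: mount it under the pool glob (e.g. /mnt/disk3),
# add a "data d3 /mnt/disk3" line to snapraid.conf, then run another sync
```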
     
    #16
  17. Eric Faden

    Nice. I may try that.
     
    #17
  18. IamSpartacus

    IamSpartacus Active Member

    Joined:
    Mar 14, 2016
    Messages:
    863
    Likes Received:
    165
    @rubylaser What's your performance like with SnapRAID + MergerFS? I'm considering moving one of my UnRAID arrays to SnapRAID as a temporary test, and I wonder about read/write performance to the array since I have 10Gb networking. Currently I have a RAID0 cache pool on both my arrays, so I can come pretty close to saturating the 10Gb link between the two.
     
    #18
  19. rubylaser

    MergerFS has no built-in caching mechanism, so it's only as fast as the underlying disks. A user a while back wrote a simple script, run via cron job, that would periodically sweep data onto the pool, but I have never used it. I asked the developer about a year ago for an option to support a cache pool or SSD cache, but he felt the added complexity wasn't worth the effort.
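
    For anyone curious, a minimal sketch of such a sweep script, assuming hypothetical cache and pool mount points (this is not the script that user wrote, just the general idea): it moves files untouched for over an hour from the SSD cache into the pool, preserving each file's relative path.

```shell
#!/bin/sh
# Hypothetical "mover": sweep settled files from an SSD cache directory
# into the MergerFS pool mount, recreating the directory tree as it goes.
mover() {
    cache="$1"   # e.g. /mnt/ssd-cache (example path)
    pool="$2"    # e.g. /mnt/storage   (example path)
    ( cd "$cache" || exit 1
      # files untouched for more than 60 minutes are considered settled
      find . -type f -mmin +60 | while IFS= read -r f; do
          mkdir -p "$pool/$(dirname "$f")"   # mirror the directory tree
          mv "$f" "$pool/$f"                 # move the file onto the pool
      done )
}

# Example cron entry (hypothetical install path):
# 0 * * * * /usr/local/bin/mover.sh /mnt/ssd-cache /mnt/storage
```

    New writes would be pointed at the cache path; anything that has settled gets swept onto the SnapRAID-protected pool on the next cron run.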
     
    #19
  20. IamSpartacus

    So there is no way of setting up a cache pool with SnapRAID + MergerFS so that any data transferred to the MergerFS shares goes to the cache pool (SSDs) first, before being moved to the SnapRAID-protected array (spinners)? Similar to how UnRAID allows the use of cache pools?
     
    #20