
ProxMox vs OMV vs UnRaid vs Debian/Centos w/ Docker + KVM vs Rancher

Discussion in 'Linux Admins, Storage and Virtualization' started by Eric Faden, Dec 29, 2016.

  1. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    778
    Likes Received:
    196
    Not with any built-in tools. Unraid is built on top of md, so it is a block device. As a result, it can have a caching layer put in front of it more easily than something FUSE-based like MergerFS.

    This doesn't mean that a crafty person couldn't write a mechanism to periodically sweep completed files from one location to another, though.
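    For the curious, here is a minimal sketch of such a sweep in bash. The function name, the paths, and the 60-minute "completed" threshold are all placeholders for illustration, not anything unRAID-specific:

    ```shell
    #!/bin/bash
    # Sketch of a periodic cache sweep: move files that haven't been
    # modified in 60+ minutes (assumed "completed") from a fast cache
    # mount into the slow pool, preserving relative paths.
    # sweep_cache and both paths are made-up names for illustration.
    sweep_cache() {
        cache=$1
        pool=$2
        # -mmin +60: only files not modified within the last hour
        find "$cache" -type f -mmin +60 -print0 |
            while IFS= read -r -d '' f; do
                rel=${f#"$cache"/}
                mkdir -p "$pool/$(dirname "$rel")"
                mv "$f" "$pool/$rel"   # mv copies across filesystems
            done
    }

    # Example (e.g. run from cron): sweep_cache /mnt/cache /mnt/pool
    ```

    A real mover would also want to skip files that are still open and handle name collisions; this just shows the shape of the idea.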
     
    #21
  2. vl1969

    vl1969 Active Member

    Joined:
    Feb 5, 2014
    Messages:
    289
    Likes Received:
    26
    I don't think unRAID is built on top of md. It uses modified md code that has been rewritten by LimeTech as a custom module, which functions similarly to SnapRAID plus a union filesystem like mergerfs, but with the additional benefit of being a real-time, parity-protected, RAID-like setup.
    I ran an unRAID server for 3+ years before moving off it.
    unRAID has a caching mechanism built into the system. If you have the drive(s), you just turn on the cache in settings and point the system at the drives to be used for cache.
    Other than that, it functions just like SnapRAID + mergerFS (or any other drive-pooling system), except with real-time protection of the data.

    Here are a few key points:

    1.
    UnRaid -- a single pool of mixed drives to be shared by the system
    MergerFS(or other similar) -- same

    2.
    UnRaid -- a single pool of mixed drives protected via a parity-based, RAID-like setup, where you can pull individual drives (except the parity drive) and read the data on them in any other system capable of mounting them.
    Yet the data is protected via a parity mechanism, like a real RAID 5/6, automatically and on the fly.
    In essence, an unRAID server works like SnapRaid + MergerFS (or similar) + real-time data validation and protection mimicking a real RAID setup.

    SnapRaid -- same as unRAID above, but not real time. You can pool drives of different sizes into a RAID-like setup where data is protected using a parity mechanism, but the actual checks and balances are done at scheduled intervals rather than in real time as the data is written to the pool.
    SnapRaid is also transparent to the user, as each protected drive can be used independently, unlike a real RAID setup where devices are pooled together into a single virtual device.


    3.
    Unraid -- offers all of the benefits of the real RAID system described above, plus the ability to use drives of different sizes, array expansion on the fly, and the ability to use a write cache for speed. However, when using cache drives,
    data is not protected until it is moved to the data pool.
    Yes, the cache will speed up your writes, but if something happens to the cache drive mid-use, your data is gone.
    The most common (default) setup in unRAID is to run the cache sync script on a schedule rather than in real time, so until the script runs, your data is in limbo.
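    For reference, that scheduled pattern is just a cron job; a hypothetical example (the script path and times are made up):

    ```shell
    # Hypothetical crontab entry: run a cache-to-pool mover nightly at
    # 3:40 AM -- on a schedule, not in real time, as described above.
    40 3 * * * /usr/local/bin/mover.sh >> /var/log/mover.log 2>&1
    ```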
     
    #22
  3. trapexit

    trapexit New Member

    Joined:
    Feb 18, 2016
    Messages:
    14
    Likes Received:
    5
    Author of mergerfs here...

    To address the question of speed... it's really difficult to say. It's highly dependent on 1) Kernel version 2) libfuse version 3) CPU 4) underlying drives 5) how many files are being read/written.

    I've been meaning to put together some benchmarks and try out different combinations of the above, but I've yet to get around to it. Given that mergerfs' use case is primarily write-once, read-many situations with large amounts of data (meaning large, slow spinning disks), raw performance hasn't been a big issue. While a caching layer would of course make things completely transparent, if the final storage layer is slower than the cache and the cache is slower than the network, you'll probably quickly be limited by the slowest component. It's primarily useful for bursts of data, but is that *really* an issue? (As mentioned, a script was created (and a better one could be whipped together easily enough) which can manage similar cache behavior if set up correctly, but I've not found one necessary.)

    I'm all for improving raw performance, but given that tests with RAM disks show 1GB/s+ transfers through mergerfs on my Core i7-3770S system, I'm not yet convinced mergerfs itself is the bottleneck in some people's systems. Unfortunately, it's not clear what is.

    My suggestion in terms of speed is to just try it out. mergerfs is trivial to install and test out. It's simply an overlay/passthrough and does not impact the underlying setup.
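    A sketch of what such a trial looks like, assuming the mergerfs package is already installed and two data disks are mounted at the hypothetical paths below:

    ```shell
    # Pool two existing, already-populated disks under one mountpoint.
    # category.create=mfs sends new files to the branch with the most
    # free space; the data already on each disk is left untouched.
    mkdir -p /mnt/pool
    mergerfs -o defaults,allow_other,use_ino,category.create=mfs \
        /mnt/disk1:/mnt/disk2 /mnt/pool

    # Backing out is just an unmount; nothing on the disks changes.
    fusermount -u /mnt/pool
    ```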
     
    #23
  4. IamSpartacus

    IamSpartacus Active Member

    Joined:
    Mar 14, 2016
    Messages:
    739
    Likes Received:
    121
    I myself don't have an issue with the speed of mergerfs. I've done many tests with it, as I'm using it to link all my dockers on my Ubuntu servers to my UnRAID NFS shares (2 identical sets on separate servers). It works great, and I've gotten solid speed results (300-500MB/s) regularly.

    I'm actually just looking for a way to move away from UnRAID while still keeping the caching capabilities of it so that my replications between UnRAID servers and other transfers I make to those servers from my personal workstation are still high performance.
     
    #24
  5. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    778
    Likes Received:
    196
    This is a great comparison. UnRAID creates an array that is a block device that uses a modified md underneath (note all of the md-specific values that you can set). Also, for users of UnRAID who want caching, one can mitigate the risk of losing files as the result of a cache disk failure by setting up a caching pool. I don't use UnRAID, but this is my understanding of how it works :)
     
    #25
  6. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    778
    Likes Received:
    196
    Thanks for the comments @trapexit! I have not had any issue with MergerFS' performance for my use case either (large bulk media), but I can see how someone with a 10GbE (or faster) network in their home would get excited at the possibility of saturating that uplink on writes if a caching layer existed (writing to a pool of SSDs or NVMe devices). Thanks for all of your continued work on what has become by far my favorite disk pool solution :)
     
    #26
  7. vl1969

    vl1969 Active Member

    Joined:
    Feb 5, 2014
    Messages:
    289
    Likes Received:
    26
    Yes, in unRAID version 6 and up they added the cache pool option. Until version 6 you only had one cache drive, and only on paid licenses. The free license gave you 3 drives only: 1 parity + 2 data.
    Plus allowed 6 drives, in a 1P+4D config or a 1P+1C+4D config,
    and Premium was up to 24 drives total.
    When they moved to version 6.2 or .3, they started using BTRFS and added support for non-array drives,
    which you did not have earlier; you had to jump through hoops to use a drive outside of the array before that.
    Now, on paid licenses, you can have unlimited drives on the system,
    with a 12-drive array + up to a 4-drive cache pool for Plus, and a 30-ish-drive array + up to a 24-drive cache pool for Premium.
    But the system also recognizes unlimited drives outside of the array, so you can have and manage unprotected storage as well as protected via the WebUI. It needs a plugin, but that is easily installed.
     
    #27
  8. trapexit

    trapexit New Member

    Joined:
    Feb 18, 2016
    Messages:
    14
    Likes Received:
    5
    Glad to hear. I've noticed its popularity grow quite a bit, and the use cases continue to grow. While we're discussing speed... others are using mergerfs with rclone to combine multiple cloud services for their media.

    In newer versions of FUSE and the user library there is a writeback cache which could help write performance. I could try to do the caching myself, but it's a tough problem and I don't want to accidentally lose my users' data. I'm currently still targeting the older library, so I can't enable it, AFAIK, but I'm considering dual-licensing mergerfs so I can include the newer library in my codebase and more easily distribute it. That is... after I port mergerfs to the new library and compare performance. If things are largely the same, I'll not bother.
     
    #28
    rubylaser likes this.
  9. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    778
    Likes Received:
    196
    Mergerfs works great for me, and I agree, I don't want to lose any of my data as a result of a caching layer. What can I say, you have created a very flexible tool that can fill a lot of different roles :) Thanks again for your work!

    I would be interested in leveraging newer libfuse versions if they offer any real gains and keep the underlying data intact.
     
    #29
  10. JayG30

    JayG30 Active Member

    Joined:
    Feb 23, 2015
    Messages:
    223
    Likes Received:
    46
    I was looking at RancherOS (I had not heard of it before). It seems they have the ability to run KVM VMs wrapped inside a Docker container, which is neat. What types of storage are supported by RancherOS? I'm seeing some references to EXT4 and ZFS support. With ZFS, that sounds like a really nice combo for a multitude of use cases.
     
    #30