Docker Users? Thoughts on NFS mounted config volumes.


JimPhreak

Active Member
Oct 10, 2013
553
55
28
Currently I run a lot of services on my network in dockers on an UnRAID storage server. I'm in the process of building myself a small all-SSD storage server to host files I want fast network access to, including my docker config files.

I will be upgrading my network to 10GbE this summer, but in the meantime (as soon as the server is built) I'd like to move my appdata files off of my UnRAID cache pool and over to my SSD storage server, to be accessed via NFS shares over the network (1Gbps for now). This way I can centralize my appdata for use amongst multiple docker systems (never at the same time), so if I want or need to take my UnRAID array down for any reason I can still keep all my dockers running on another system running docker (probably Ubuntu).
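For anyone picturing it, the idea is basically this (the hostname and paths below are just placeholders, not my final layout):

Code:
# /etc/fstab on whichever box is running docker: mount the appdata share exported by the SSD server
ssdserver:/mnt/ssd/appdata  /mnt/appdata  nfs  defaults,_netdev  0  0
Each container's /config volume would then point at /mnt/appdata/<app> instead of the cache pool.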

My question for anyone who is doing or has attempted this: have you run into any kind of performance issues with your appdata folders being accessed via NFS over a 1Gbps connection as opposed to locally off of your cache pool? I'm mainly concerned with Plex since I have over 200GB of metadata and thumbnail preview images, but I run the following dockers right now in case anyone has specific experiences to share:

  • Apache
  • CouchPotato
  • Madsonic
  • NZBGet
  • Plex Media Server
  • PlexPy
  • Plex Requests
  • Sonarr
  • UniFi
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
I would assume you want to move your app data over to the Docker host (the server with the SSDs). For example, the Docker host would have all of the Plex metadata stored locally and you would share an NFS mounted volume to the container for your bulk media.

If this is the case, I would expect using the library to be even faster than what you are used to, but there may be a small delay when you play a file while it transfers over gigabit.

Most of your other containers will be very fast; the exception is the "media acquisition apps", only because they will need to transfer finished files over the network to UnRAID vs. a local transfer. Luckily, UnRAID isn't super fast anyway, so it's likely you aren't seeing significantly faster transfers to your spinning disks locally either.
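In other words, something along these lines for Plex (the image name is the Linuxserver.io one; paths are just examples):

Code:
# config/metadata on the Docker host's local SSDs, only the bulk media NFS-mounted from UnRAID
# /mnt/unraid_media = an NFS mount of the UnRAID media share
docker run -d --name=plex \
  -v /opt/appdata/plex:/config \
  -v /mnt/unraid_media:/media:ro \
  linuxserver/plex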
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
The entire goal of this project is to have maximum uptime of my dockers. I want my dockers (mainly Plex) to be fully operational even when my UnRAID server is offline. I have a backup server housed off-site that is connected to my network over a 100Mbps site-to-site VPN connection so I still have the ability to serve all my media to Plex even during those times. However, since my dockers all currently reside on UnRAID, they become unavailable whenever my UnRAID server is down.

Now my original thought for solving this problem was to rsync all my docker appdata daily between UnRAID and my PC (i5-4690K at 4.5GHz), where I'd be able to bring the same docker containers up in an Ubuntu VM with NFS-mounted volumes pointing at my backup server. To my remote Plex users nothing would look different. However, since my Plex appdata alone is almost 200GB it doesn't make a lot of sense to keep that data in two places. Furthermore, I'd have to make sure the data was fully rsynced before I brought the dockers up in Ubuntu in order to ensure a seamless user experience (current view state/watched status).
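The nightly sync itself would have been simple enough, something like this (paths made up for illustration):

Code:
# mirror appdata from the UnRAID cache pool to the PC; -a preserves perms/times, --delete keeps it an exact mirror
rsync -a --delete /mnt/cache/appdata/ desktop:/opt/appdata/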

So this led me to wanting to keep my Docker appdata on a centralized server that would be accessible regardless of the state of my UnRAID server. Thus I could run dockers from any system (UnRAID, Ubuntu VM, etc.), each using the same exact appdata (through NFS mounted shares).

I'm hesitant to move my Docker containers off of UnRAID completely because I like the UnRAID interface for managing them and it's also a well supported platform in terms of the docker community (mainly Linuxserver.io).
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
My first question would be: why is your UnRAID box going down often enough for this to be an issue? I would expect months of uptime without a reboot.

Secondly, I run all linuxserver.io containers too (other than CrashPlan) on Ubuntu. I use Plex, PlexPy, UniFi, Sonarr, NZBGet, and Muximux, so I know these all work great. You don't need to run them on UnRAID for them to work. Docker containers are super easy to set up and manage even without a GUI; I auto-launch all my containers with Upstart (see the example below). Also, there are a bunch of web-based Docker management tools if you really need a GUI (you probably don't if you are willing to spend an hour learning how to start/restart/stop/view logs from the command line).
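An Upstart job for a container can be as simple as something like this (the container name is just an example, and the container has to exist already from a one-time docker run/create):

Code:
# /etc/init/docker-plex.conf -- start the existing "plex" container at boot and respawn it if it dies
description "Plex Media Server container"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a plex
end script
pre-stop script
  /usr/bin/docker stop plex
end script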

Another idea that I'd need to think about a little further is that you could use a pooling solution like mergerfs to pool both your local UnRAID data and your remote VPN data and present them as one pool to your Plex container. That way, as long as one of those two synced directories is up, your Plex container should have access to its bulk media without you needing to manually intervene to fire up Docker in a VM and remote-mount the share over NFS (this assumes the site-to-site VPN is always up).
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
My UnRAID box isn't going down that often on its own. However, I recently had a drive failure, and since my disks (including parity) are 8TB, a data rebuild takes roughly 24 hours in UnRAID. During that time, while my array is technically still up and available, having multiple 1080p streams going is all but impossible and only lengthens the rebuild. Furthermore, I'd prefer to get the rebuild done ASAP with no other services running until it's complete, just for my own peace of mind. The same goes for when I run monthly parity checks: while the array is up and functioning, streaming video during the check just lengthens it because of the additional disk thrashing.

I'm not against running my docker containers without a web GUI. I recently tested bringing the same Linuxserver.io dockers up in Ubuntu via the command line, and as you say it's not difficult. I like a web GUI for the convenience of being able to quickly restart/edit them with just a few clicks, but I'm not opposed to the command line.

How exactly does mergerfs work? It sounds interesting, but I'm not quite sure I understand how the data would be presented to Plex in that scenario. And yes, my VPN is always up, but currently I don't have both servers "synced" in real time, so to speak; I run nightly backups (mirrors) to the backup server to keep them in sync.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Mergerfs is a FUSE-based pooling solution. Again, this is just an idea that I would want to test first because I have not used mergerfs in the way I've described here, but you could NFS mount both your remote system's filesystem and your UnRAID filesystem on the Docker host. On that same host, you could pool these two filesystems together with mergerfs (this would present them as one filesystem and would not show the duplicates), and you would share that pooled volume to your Plex container. It might be worth asking trapexit, the developer of mergerfs, how this would work and whether there is a way to tell the pool to prefer the local copy, if available, over the remote filesystem.
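The two NFS mounts on the Docker host would just be ordinary fstab entries, something like this (hostnames and export paths are made up):

Code:
# /etc/fstab on the Docker host -- one mount for the local UnRAID share, one for the remote backup over the VPN
unraid:/mnt/user/media     /mnt/local_nfs   nfs  defaults,_netdev  0  0
backupbox:/mnt/user/media  /mnt/remote_nfs  nfs  defaults,_netdev  0  0
mergerfs would then pool /mnt/local_nfs and /mnt/remote_nfs into the single volume you hand to the container.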
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Very interesting. I will read up on this and reach out to trapexit because that does sound promising. However yes, I would certainly need a way to prefer the local copy over the remote VPN copy.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
I just posted an issue on his GitHub. Please feel free to add any info you may need.

Enhancement read Priority · Issue #221 · trapexit/mergerfs · GitHub
Sweet!

This conversation has caused me to reconsider all my options for how best to achieve maximum docker uptime on my network. I thought it would be a good idea to list the current nodes on my network so others can weigh in on how best to utilize this hardware for my needs, in case a reconfiguration is in my best interest.

Current setup is as follows:

UnRAID server (runs all my dockers with a few VMs)
Backup UnRAID server (offsite via VPN)
PC
  • Intel i5-4690k @ 4.5Ghz
  • ASRock H97M-ITX/ac
  • 16GB DDR3 RAM
  • Zotac GTX 970
  • 120GB SSD
  • 2 x 1TB WD Blacks
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
That's some very nice hardware. It would help to know what the hard drive setup is in the backup server. Does that have 8x8TB disks as well, so it's a 1:1 backup of the bulk data?
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Correct. Same bulk drive setup.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Do you really need unraid? I mean you could rock Ubuntu and ZFS with ZFS send and receive.
Do I NEED to use UnRAID? No. However, there are a number of features I'm not ready to give up, such as individual disk spin-down (only the disks actively being read spin up) and only losing the data on failed disks if failures exceed parity. There's also the fact that all my data disks are SMR drives, which I don't believe work well with ZFS.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Yeah, so I definitely want to keep UnRAID for my bulk media storage. The question remains, though: how best to configure the rest of my setup? I'm still trying to determine which server I want to dedicate to transcoding (and thus where my docker containers will live).
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
I already did, if you look at the GitHub issue. I didn't realize this, but mergerfs already provides read priority based on the order of the mounts (the first is preferred over the second). So, if you had your local and remote NFS mounts at /mnt/local_nfs and /mnt/remote_nfs, you would just need an /etc/fstab line like this to pool them together and have mergerfs prefer the local version first. This would present both volumes pooled together (without duplicates) at /storage. I would suggest you mount the pool at the same path you currently use in your Plex container; that way none of your metadata should need to be updated. Again, I haven't tried this, so I would set up a small test in a couple of VMs before you spend a ton of time on this, but conceptually it should work well.

Code:
/mnt/local_nfs:/mnt/remote_nfs  /storage  fuse.mergerfs defaults,allow_other,minfreespace=20G,fsname=mergerfsPool  0       0
Here's his email address if you need it: trapexit@spawn.link (it's on his GitHub page too).
@rubylaser I thought I'd respond to you here so I don't hijack that thread with questions about my own setup.

If mergerfs does work as intended for my situation, would it basically be as simple as configuring a server just like in that Linuxserver.io blog post, with the only difference being that my media would be stored on NFS-mounted volumes instead of locally mounted ones? If so, my main concern is future expandability, as I want to build a server that will be able to handle 10G network transfers come this summer/fall, especially as a VM datastore. What are the pitfalls of using mergerfs in this scenario?
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Yes, that is exactly what you would do. The main issue with mergerfs (and UnRAID) is that they are only as fast as a single disk. If your underlying disks are all SSDs, I'm sure it would be very fast, but still only as fast as the underlying disk or share.

The other pitfall with FUSE-based solutions is that they typically have more overhead than kernel-based solutions and aren't quite as fast. For bulk media, though, it should be at least as fast as your UnRAID server is over NFS.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Well, that's the thing: I'm not really considering replacing my UnRAID servers with a mergerfs + SnapRAID solution at this time. Even if I could do that while importing my current data array, it's not my top priority. My main priority is building out this centralized, all-SSD datastore where I will be storing/running my docker containers, appdata, and vdisks. So performance is of paramount concern for me, because when I upgrade my network to 10G I don't want to have to rebuild my entire server's OS/filesystem because it has become the bottleneck of my 10G network. From your responses, it sounds like I may need to be looking at other options in place of mergerfs.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
I thought you were looking for a way to give your Dockerized Plex access to your UnRAID volume (slow) and a remote VPN share (slow). I never suggested using mergerfs for anything other than mounting the bulk media for that container.

Your future SSD array should be local on your Docker host; you would only use mergerfs to pool those two slower volumes to share with your Plex container. Maybe I misunderstood your desired outcome.

I never suggested replacing UnRAID with SnapRAID. I suggested pooling the two slow NFS shares into one volume with mergerfs to share with your Plex container. I'm not sure you are understanding what I'm describing, as none of this would ever require rebuilding or changing your OS in the future.
 