Ideal Plex Setup with Proxmox/LXC/Docker/KVM?


Eric Faden

Member
Dec 5, 2016
Hey All,

So I have a computer I used to run Plex and a few other things on. I had it set up under Debian with SnapRAID + MergerFS + Docker, and used Docker Compose to run all of my services. I decided that I wanted to virtualize that box so I could run more stuff on it for some other projects (ntopng, GNS3, etc.). The machine has a couple of SSDs plus an LSI HBA with 4 drives I use for Plex (a 4TB working drive and a 3-drive SnapRAID + MergerFS array). I originally built it with this article in mind - The Perfect Media Server 2016

My plan was to install Proxmox on the host and then run a Debian VM inside of it, pass the LSI HBA to the VM, and run everything exactly as I had it set up before... just virtualized. Then I thought... is this really the best way?

I can think of a bunch of ways to set this up....

1) Proxmox Host
Debian VM w/ LSI Passthrough, Snapraid, Merger, Docker (Plex, etc)...

2) Proxmox Host w/ LSI
Debian VM w/ Drives Passed Through, Snapraid, MergerFS, Docker (Plex, etc)

3) Proxmox Host w/ LSI, Snapraid, Merger
Debian LXC w/ Docker (Plex, etc)

4) ???

I suppose what I am asking is: what is the best way to configure all of this? Or what do I need to consider?


Right now the Docker containers use shared volumes to pass files around, plus direct filesystem mounts for things like the media folders, which are just mounted directly in Debian.
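Roughly like this, for the Plex piece (paths and image here are illustrative, not my exact file):

    # docker-compose.yml (sketch; /mnt/storage is the mergerfs pool on the host)
    version: "2"
    services:
      plex:
        image: plexinc/pms-docker
        network_mode: host
        volumes:
          - /opt/plex/config:/config      # app config lives on an SSD
          - /mnt/storage/media:/data      # media from the pooled data drives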


Any guidance?
 

zer0sum

Well-Known Member
Mar 8, 2013
Just for reference, I do the following on an ESXi host:

XPEnology NAS with a Dell PERC H310 passed through and 4 x 3TB drives.
This is basically a Synology NAS running virtualized. It is super easy to manage and can run almost anything you can think of thanks to the app store and Docker containers.
I use the native AD, mail, backup, cloud sync, surveillance camera, and VPN apps, as well as full Docker containers for sickrage/radarr/sabnzbd/headphones/plex.

Then the rest of the ESXi host is used for a variety of VMs as needed :)
 

ttabbal

Active Member
Mar 10, 2016
SO many options. I went simple and haven't regretted it. Proxmox natively handling the ZFS storage, bind mounted into a container running Debian for Plex. I manage NFS/SMB sharing direct from the Proxmox host, it works fine and my setup is pretty simple so I don't mind it being mingled on the host.
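The bind mount is a one-liner per container, something like this (the container ID and paths are just examples):

    # expose a host ZFS dataset to LXC container 100 at /mnt/media
    pct set 100 -mp0 /tank/media,mp=/mnt/media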

I haven't tried Docker yet, I'm thinking of setting up a KVM instance to try it out though.
 

Eric Faden

Member
Dec 5, 2016
I thought about installing SnapRAID and MergerFS on Proxmox, but wasn't thrilled about putting them on the hypervisor itself for stability reasons. I like the idea of SnapRAID/MergerFS over ZFS because this is mostly longer-term, large-file archival storage.
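If I did put it on the host, I assume the config would look about like my current bare-metal one (a sketch; the disk names and 2-data + 1-parity layout are just examples):

    # /etc/snapraid.conf
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2

    # /etc/fstab - pool the data disks with mergerfs
    /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,use_ino 0 0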
 

Kybber

Active Member
May 27, 2016
My setup:

Proxmox hypervisor: SnapRAID/MergerFS for my media (4 spinners), ZFS for everything else (another 4 spinners); 4 spinners currently unused, and 4 empty bays.
LXC container for Plex: bind-mounts the media from MergerFS (example below).
LXC container for Turnkey Linux File Server: bind-mounts any dir I want to share via NFS or Samba.
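The bind mounts are just mount-point lines in each container's config, something like this (container IDs and paths are examples, not my exact ones):

    # /etc/pve/lxc/101.conf (Plex)
    mp0: /mnt/storage/media,mp=/mnt/media
    # /etc/pve/lxc/102.conf (Turnkey file server; one line per shared dir)
    mp0: /mnt/storage/media,mp=/srv/media
    mp1: /tank/documents,mp=/srv/documents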
 

apnar

Member
Mar 5, 2011
After trying most of the various combinations, I realized almost everything I wanted to run would work well in Docker with bind mounts, and it was a real pain to hack Docker onto Proxmox. So I ended up with a very basic Ubuntu install on bare metal running ZoL and Docker. The few things I can't run in Docker (OS X and Windows) run fine in KVM. Overall very happy with the setup.
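The whole storage/app layer boils down to a few commands, along these lines (pool name, devices, and paths are examples):

    # one-time: ZoL pool of striped mirrors
    zpool create tank mirror sda sdb mirror sdc sdd
    # Plex in Docker with plain bind mounts
    docker run -d --name plex --net=host \
      -v /tank/appdata/plex:/config \
      -v /tank/media:/data \
      plexinc/pms-docker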
 

Eric Faden

Member
Dec 5, 2016
I thought about that. I just like the Proxmox interface....

Sent from my Pixel XL using Tapatalk
 

kroem

Active Member
Aug 16, 2014
Has anyone tried out the performance difference for transcoding with KVM (host CPU setting?) versus LXC?
 

ttabbal

Active Member
Mar 10, 2016
I haven't tested the difference personally. The LXC will be faster, as it's more efficient; there's just less overhead. How much of a difference is the question. KVM is pretty efficient on the CPU side; it's I/O that might hurt a little. The usual setup would have the data drives shared to the KVM instance over a networking protocol like NFS or CIFS. Even though it's a local connection, traffic has to travel the host kernel network interface, then get passed into the KVM kernel network interface, and finally out to Plex. If you are using VirtIO drivers, that process is pretty efficient as well, so it may not be noticeable.

I just find LXC easier to manage, and containers start up a lot faster since you don't have to emulate a BIOS and boot an OS. The added efficiency and not having to set up network sharing for every little thing is also nice.
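If you want numbers, run the same forced transcode in each guest type and compare wall-clock time (the sample file and encoder settings here are arbitrary):

    # CPU transcode benchmark; run identically in the LXC and the KVM guest
    time ffmpeg -i sample.mkv -c:v libx264 -preset veryfast -f null /dev/null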
 

moblaw

Member
Jun 23, 2017
I'd love to get some comparison charts of that transcoding performance; in other words, what's the CPU performance bare metal vs. KVM vs. LXC vs. Hyper-V?

Because as it is right now, I get horrible CPU-Z performance in my Hyper-V VMs, which correlates with poor transcoding.
 

dlasher

New Member
Dec 9, 2016
ttabbal said: "SO many options. I went simple and haven't regretted it. [...]"
That's close to what I've done since the Proxmox 1.x days. Ran OpenVZ, have since migrated to LXC. Originally ran Areca RAID6 cards, then moved to ZFS on Linux.

Proxmox manages all drives via ZFS.
  1. proxmox boots from zfs-mirrored 500G drives
  2. (12) drives in RaidZ3, 2 enterprise SSD for slog/l2arc
  3. container for plex - bind mounts for media. CPU limit = half the cores on the box, but CPU shares set low to avoid killing other VMs (settings below)
  4. container for mythtv - bind mounts for media
  5. container for itunes-server - bind mounts for media
  6. container for samba (easier to manage users/etc this way) - bind mounts
  7. container for dns/dhcp/etc
  8. container for data collection (zabbix/cacti/etc)

It's taken several years, but it's the perfect setup. Simple for the win.
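For the Plex container in item 3, the limits are just container options, something like this (the CT ID is an example, and the core count assumes a 16-core box):

    # cap plex at 8 of 16 cores, and lower its share weight (default is 1024)
    pct set 101 -cpulimit 8 -cpuunits 512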
 

kroem

Active Member
Aug 16, 2014
dlasher said: "That's close to what I've done since the Proxmox 1.x days. [...] It's taken several years, but it's the perfect setup. Simple for the win."
Nice. Quite similar to my setup. BUT I'm using SMB to share storage - bind mounts sound much cleaner. Any pros/cons?
 

vl1969

Active Member
Feb 5, 2014
dlasher said: "That's close to what I've done since the Proxmox 1.x days. [...] It's taken several years, but it's the perfect setup. Simple for the win."
Nice setup. Do you mind if I pick your brain for the setup details?

I've been planning a similar setup for a while, but I have some issues that have basically stopped me from moving forward.

My issue is that I have a bunch of mixed drives right now: 3 or 4 3TB, 3 or 4 2TB, and a couple of 1TB for data, and I am not sure how to use them in this kind of setup. I ran an OMV setup for a bit, but I want better virtualization, and as it stands now Proxmox looks like a very good option for me.

I have two 120GB SSDs that I want to use for the OS (Proxmox) in a ZFS RAID-1 setup. For data, I want Proxmox to pull double duty as hypervisor and NAS, just like you have it set up. For my setup, what would be a good option for a ZFS data pool where I can use my disks efficiently yet have reasonable data protection and uptime? I have been looking all over for help.

The best info I've gotten so far is from the "level1techs" forum: two RAIDZ2 vdevs in one pool, one with my 3TB disks and one with my 2TB disks. But that means I don't use my 1TB disks, and so far no one has told me whether I should put all of my disks in a single vdev or split them into separate vdevs for efficiency and ease of upgrading later on.

Would you mind elaborating on your pool setup a little?

Thanks, Vl
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
vl1969 said: "Nice setup. Do you mind if I pick your brain for the setup details? [...]"
I think you'd get better *and more* responses here if you started a new thread specifically asking about ZFS setup based on the drives you have available.

- RAIDZ2 with 3 or 4 drives would work, but IMHO that's too few drives. If you have 3 drives, use a triple mirror (3 drives mirrored to each other).

- If you want VM performance, run multiple mirrored vdevs, or if you have enough drives you can run a pool of RAIDZ vdevs too.

- When you install Proxmox, select your 2 (and only your 2) SSDs and make the mirror/RAID1 the 'install' location for Proxmox.

I would sell all your 1TB and 2TB drives and go with all 3TB if that's an option, or sell them all and go with all 4TB to give you some room for the future. If you want to keep it simple/easy, go all 3TB or all 4TB and use 6x 3TB or 6x 4TB in RAIDZ2. You could add an affordable SLOG device to increase VM performance and a good L2ARC SSD for cache; I'd use the RAIDZ2 as-is, then add SLOG and cache drives if needed (rough examples below).
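Roughly what those layouts look like (pool name and device names are placeholders):

    # triple mirror from 3 drives
    zpool create tank mirror sdb sdc sdd
    # pool of mirrored vdevs for VM performance
    zpool create tank mirror sdb sdc mirror sdd sde
    # 6 drives in RAIDZ2
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg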
 

vl1969

Active Member
Feb 5, 2014
T_Minus said: "I think you'd get better *and more* responses here if you started a new thread [...]"

Thanks.

I would sell all my 1TB and 2TB drives, but they are old and I'm simply not sure how reliable they are.
I had some bad luck myself last year with a couple of 3TB drives that I bought from a forum member (not this forum, and I'm sure he was not at fault). The drives had great SMART reports and ran for almost a year; then one day they just died, one after the other, taking almost 3TB of data with them. I was running a 3x3TB Btrfs RAID-1 on raw devices and could not recover anything. The data was just gone.
I would not want to sell my drives knowing that this is a possibility. From then on I have bought only new drives with a warranty. So I would like to use my existing drives, knowing that I'll replace them as they fail, but arranged in a way that protects me from losing more data.
BTW: I did recover most of the data from my other backups and extra copies I had lying around.

All I am asking is: what would be a reasonably safe and robust setup given my mixed drives?
I have been testing some configs using VMs under Hyper-V; I have access to an extra PC at work that runs Win10 with Hyper-V on it for testing, so I just tried things out there. Just asking for opinions here.
Thanks
 

ttabbal

Active Member
Mar 10, 2016
Since you only have a few drives of each size, it might be better to go with a striped mirror (RAID10-style) setup. It's more performant and easier to expand, but does lose more capacity to redundancy. Another option is RAIDZ1. With 3-4 drives, it makes no sense to use double parity when you lose the same amount to parity as you would with mirrors and get less performance.

My personal rule is RAIDZ2 at 6 devices if you want RAIDZ; that's just where it starts making sense to me. Unless perhaps you are using really large drives, but even then, with 4 or fewer drives, mirrors have the same overhead and more performance (under most workloads).

Note that you can use different sized drives in a raidz or mirror, but the usable space will be based on the smaller of the drives in the vdev. A 3TB+4TB mirror == 3TB usable. A 4/4/4/3 raidz1 == a 3/3/3/3 raidz1.

You should run badblocks and SMART long tests on all drives before trusting data to them. I also like to make a big n-way mirror, fill it, then scrub a couple times to test ZFS workloads on it. It takes a few days to fully test drives, but it's worth it to prevent data loss and/or emergency replacements later.
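Per drive, that burn-in looks something like this (destructive, so only on drives with nothing on them):

    # full write/read pattern test (destroys data), then a long SMART self-test
    badblocks -wsv /dev/sdX
    smartctl -t long /dev/sdX
    # once the test time elapses, check the result
    smartctl -a /dev/sdX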
 

ttabbal

Active Member
Mar 10, 2016
kroem said: "Nice. Quite similar to my setup. BUT I'm using SMB to share storage - bind mounts sound much cleaner. Any pros/cons?"

Bind mounts are directly available, and perform just like the underlying filesystem. SMB adds overhead from the SMB and network stacks. For local VM sharing, the difference is probably a few percent CPU honestly, but why add overhead you don't need?

Note that this only works for containers (LXC); you can't bind mount into VMs.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
For the sake of providing other options, this is what my setup is for Plex.

I have a standalone ESXi host that is used as a shared storage server. It runs the following two VMs:

FreeNAS 11 - different RAID10 vdevs (4 SSDs each) presented to the ESXi cluster via NFS
UnRAID - 8 x 8TB Seagate SMR drives + dual 500GB SSDs in a RAID0 Btrfs cache pool (all connected to a passed-through LSI 2116)

I then have 3 ESXi hosts in a cluster that use the NFS shares on FreeNAS as shared storage for my VMs. Two of the VMs on this cluster are Ubuntu Server 16.04 servers. Both of those servers have all my UnRAID shares mounted via NFS. I then run Plex in a docker on UbuntuSvr01 and all my other services on UbuntuSvr02.
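The share mounts on the Ubuntu servers are plain NFS fstab entries, along these lines (hostname and paths are illustrative):

    # /etc/fstab on UbuntuSvr01/02 - mount an UnRAID user share
    unraid:/mnt/user/media  /mnt/unraid/media  nfs  defaults  0 0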