Storage Strategy for Large Plex Libraries


ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
I personally can't understand why one would use ZFS for a Plex media server storage array over something like MergerFS/SnapRAID. I use the latter for my 200+TB media array and it's wonderful. 22 data disks, 2 parity disks (can use up to 6). Non-striped means that even if you lose more drives than you have parity for, you only lose the data on those bad drives. Performance-wise, there is no need for a striped array to handle a massive media streaming load.
I partially agree with you, specifically on not really needing something as high-performance as ZFS to host simple media files, which will be mostly sequential. The highest bitrate I currently have in my collection is about 80 Mbps, so that is well under what a single disk can handle. Even with a few streams going, MergerFS + SnapRAID will probably be fine. The only issues I have with such a large array on MergerFS + SnapRAID are that I feel iffy about parity checks on such a large pool, and about the eventual instances of bitrot down the line.

While I don’t have an issue with manually configuring MergerFS + SnapRAID, and I run a few smaller test setups, there is value in having a dashboard just to quickly look at everything. If there were a decent solution with a dashboard, I might get on board.

Have you updated your old build threads? I don’t recall any mention of your 200 TB NAS. I’d love to learn more about what you did, the way you shared details of your other boxen. :)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
I partially agree with you, specifically on not really needing something as high-performance as ZFS to host simple media files, which will be mostly sequential. The highest bitrate I currently have in my collection is about 80 Mbps, so that is well under what a single disk can handle. Even with a few streams going, MergerFS + SnapRAID will probably be fine. The only issues I have with such a large array on MergerFS + SnapRAID are that I feel iffy about parity checks on such a large pool, and about the eventual instances of bitrot down the line.

While I don’t have an issue with manually configuring MergerFS + SnapRAID, and I run a few smaller test setups, there is value in having a dashboard just to quickly look at everything. If there were a decent solution with a dashboard, I might get on board.

Have you updated your old build threads? I don’t recall any mention of your 200 TB NAS. I’d love to learn more about what you did, the way you shared details of your other boxen. :)
Snapraid does checksums, so bitrot should not be a concern. I do a monthly scrub of the entire array, plus a daily sync and scrub that covers 5% of the array, touching only files that haven't been scrubbed in the past 10 days. But more importantly, this is media. If a file gets corrupted, it's pretty trivial to replace it. Never had it happen though.
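
Roughly, that schedule maps straight onto SnapRAID's own sync and scrub commands. Below is a minimal sketch of the idea (not the exact script in use); it assumes snapraid is on the PATH and that cron or a systemd timer fires it once a day:

Code:
#!/usr/bin/env python3
# Minimal sketch of a daily SnapRAID maintenance job (illustrative only).
import datetime
import subprocess

def run(*args):
    # Run a snapraid subcommand and fail loudly if it errors out.
    subprocess.run(["snapraid", *args], check=True)

# Daily: update parity for anything new since the last run.
run("sync")

if datetime.date.today().day == 1:
    # Monthly: scrub the entire array instead of just a slice.
    run("scrub", "-p", "full")
else:
    # Daily: scrub 5% of the array, limited to blocks not scrubbed in the last 10 days.
    run("scrub", "-p", "5", "-o", "10")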

As for a GUI, yes, that's a personal choice. If you're very into doing things in a GUI, then yeah, your options are limited. I use a Grafana/Telegraf/InfluxDB stack for nice dashboards/monitoring. Other than that, there's really nothing I do via the CLI that would be easier in a GUI.

I haven't posted any build updates in a while as I'm not really on here that much anymore (mostly on Discord). But yes, I've made many changes. Maybe I can find the time to post a build update, but if you want the TL;DR I can give you the specs/setup details in here.
 

Rand__

Well-Known Member
Mar 6, 2014
Yes, dual-socket CPUs run in single-socket boards (and actually vice versa too, IIRC).
Power for an E5-16xx v4 is below 100W, more like 60 to 70W.
Add maybe 20-50W for connectivity, depending on details.

45 drives @ 5-10W, 12W max, is 540W at most.

You should be far from 1kW, even with a bunch of fans.
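
As a quick sanity check on those numbers (every per-component figure below is a rough assumption from the estimate above, not a measurement):

Code:
# Rough worst-case power budget (all figures are assumptions).
cpu_w = 100            # E5-16xx v4 ceiling; typically more like 60-70W
connectivity_w = 50    # HBAs/NICs, upper end of the 20-50W guess
drives_w = 45 * 12     # 45 spinners at a 12W worst case each
fans_misc_w = 100      # fans, RAM, and everything else, generously padded

total_w = cpu_w + connectivity_w + drives_w + fans_misc_w
print(f"Worst case: {total_w} W")  # 790 W, still comfortably under 1 kW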
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
Snapraid does checksums, so bitrot should not be a concern. I do a monthly scrub of the entire array, plus a daily sync and scrub that covers 5% of the array, touching only files that haven't been scrubbed in the past 10 days. But more importantly, this is media. If a file gets corrupted, it's pretty trivial to replace it. Never had it happen though.

As for a GUI, yes, that's a personal choice. If you're very into doing things in a GUI, then yeah, your options are limited. I use a Grafana/Telegraf/InfluxDB stack for nice dashboards/monitoring. Other than that, there's really nothing I do via the CLI that would be easier in a GUI.

I haven't posted any build updates in a while as I'm not really on here that much anymore. But yes, I've made many changes. Maybe I can find the time to post a build update, but if you want the TL;DR I can give you the specs/setup details in here.
Ah, I learned something new regarding SnapRAID. That’s good to know.

I do use Grafana and Telegraf to make a simple “at a glance” dashboard for my various stacks, but it’s not detailed. I mean, yes, I could do the CLI, but I mostly keep CLI use to simple or one-time things to avoid typing commands incorrectly when tired. My area of work is also not CLI- or engineer-heavy; I’m on the design and management side, so I usually only need to understand things conceptually. Lame excuse, but there you have it :(

Yes, I’d love for you to share here! I’ve enjoyed all your previous build reports.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
IIRC, E5-2xxx CPUs can be used in SP mobos? That would open up some possibilities on core count, to possibly make up for an older Broadwell-EP being much slower (lower IPC) than more modern CPUs. Also, after a cursory look, there don’t seem to be a ton of Supermicro socket 2011-3 UP mobo variants. The IO looks great though…

I’m not sure what sort of CPU horsepower is needed for a 36-disk ZFS array, or for a 45-disk JBOD attached on top of that. All transcoding would be done off the server on another box.
Yes. I've got a desktop/workstation set up here with a Supermicro single-CPU motherboard and an E5-2697 v3.
My TrueNAS Scale setup is actually an E5-2670 v3 in a single-CPU motherboard too.

IMO you don't need anything high-performance for media storage if you're not transcoding. The E5 v3 setup gives you plenty of PCIe lanes and cheap RAM, so you can run other stuff on there if you want and expand easily.
 

ecosse

Active Member
Jul 2, 2013
I personally can't understand why one would use ZFS for a Plex media server storage array over something like MergerFS/SnapRAID.
Totally agree. The only "gap" I've ever worried about is the time between moving data to the media server and the time it takes to calculate parity. Of course you can copy the data and just delete the source once parity is calculated, but it isn't as integrated as a RAID setup. Each to their own though :)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Ah, I learned something new regarding SnapRAID. That’s good to know.

I do use Grafana and Telegraf to make a simple “at a glance” dashboard for my various stacks, but it’s not detailed. I mean, yes, I could do the CLI, but I mostly keep CLI use to simple or one-time things to avoid typing commands incorrectly when tired. My area of work is also not CLI- or engineer-heavy; I’m on the design and management side, so I usually only need to understand things conceptually. Lame excuse, but there you have it :(

Yes, I’d love for you to share here! I’ve enjoyed all your previous build reports.

I'm currently running everything on the below system (2 physical boxes). I've gone back and forth a few times running Plex on a separate Xeon E-2246G system with remote storage on this one, but I keep winding up back on a single AIO server as it really simplifies things for me in the end. As you can see from my previous builds, I've been through running clusters and such at home, and that was fun and all for learning, but now that I have a family (2 young kids) and a house to work on, the less time I spend maintaining my home network the better.

Box01
  • AMD EPYC 7443p
  • SuperMicro H12SSL-i Motherboard
  • 8 x 32GB DDR4-3200 RDIMMs
  • Dual Broadcom 9400-16e HBAs
  • Dual 40GbE NIC
  • Nvidia 1660 GPU (Shared among 4-5 containers)
  • 2 x Intel P3605 1.6TB U.2 NVMe drives (vm_datastore01)
  • 4 x Intel S4600 480GB SATA SSDs (vm_datastore02)
  • Circotech RM-4442 4U rackmount chassis (10 x 5.25" drive bays)
Box02
  • QCT JB4242 4U SAS3 Disk Shelf
  • 24 x 10TB HGST SAS3 HDDs (mergerfs/snapraid pool)
  • 8 x 3.2TB HGST SAS3 SSDs (ZFS Raid10 cache pool)

I'm running Proxmox 7.x on this system. I treat this box like a vanilla Debian Linux box (which Proxmox essentially is, just with some added packages for the hypervisor). Thus I run mergerfs, snapraid, docker, docker compose, etc. all natively on the Proxmox host, not inside a VM. It's been pretty much flawless since I switched to this setup in 2020.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Totally agree. The only "gap" I've ever worried about is the time between moving data to the media server and the time it takes to calculate parity. Of course you can copy the data and just delete the source once parity is calculated, but it isn't as integrated as a RAID setup. Each to their own though :)
Yeah, I mean, when talking about media (i.e. nothing priceless), that's never been a concern that I've lost a second of sleep over. All new media/files get written to my cache pool of 8 SSDs. A mover script runs daily each morning around 4am, and as soon as that's done, I run a snapraid sync script. The sync doesn't take too long to complete. I also mirror all this data to Gdrive via rclone, which runs before the mover, as an added option for easy replacement.
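
In rough terms the nightly chain looks something like the sketch below (just an illustration of the order of operations, not the actual scripts; the paths, the "gdrive:media" rclone remote, and the mover command are placeholders):

Code:
#!/usr/bin/env python3
# Sketch of the nightly chain: cloud mirror -> mover -> parity sync.
# Paths, the rclone remote name, and mover.sh are placeholders, not the real setup.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# 1. Mirror new media to the cloud while it's still sitting on the cache pool.
run(["rclone", "sync", "/mnt/cache/media", "gdrive:media"])

# 2. Mover: flush cached media down to the mergerfs pool of spinners.
run(["/usr/local/bin/mover.sh"])

# 3. As soon as the mover finishes, bring SnapRAID parity up to date.
run(["snapraid", "sync"])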
 

ecosse

Active Member
Jul 2, 2013
I also mirror all this data to Gdrive via rclone, which runs before the mover, as an added option for easy replacement.
What does the G-drive storage cost - is this an education-type gig? I've looked at the cost of cloud storage but always found it too expensive for my budget.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
IIRC, E5-2xxx CPUs can be used in SP mobos? That would open up some possibilities on core count, to possibly make up for an older Broadwell-EP being much slower (lower IPC) than more modern CPUs. Also, after a cursory look, there don’t seem to be a ton of Supermicro socket 2011-3 UP mobo variants. The IO looks great though…

I’m not sure what sort of CPU horsepower is needed for a 36-disk ZFS array, or for a 45-disk JBOD attached on top of that. All transcoding would be done off the server on another box.

Yes. I have 4 x X10SRL-F with E5-2680 v4's and a full boat of RAM in CSE-836 chassis. Works great, and I am a fan of the X10SRL-F for its flexibility with lanes, and it supports PCIe bifurcation. Not sure if I mentioned it in this thread (or it was another thread) - those 1200W SQ PSUs are louder than the 920SQs and to my ear have a bit of a high-pitched whine. Are you sure you need the 1200s, even with 36 x 15W plus everything else you will put in there?
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
I'm currently running everything on the below system (2 physical boxes). I've gone back and forth a few times running Plex on a separate Xeon E-2246G system with remote storage on this one, but I keep winding up back on a single AIO server as it really simplifies things for me in the end. As you can see from my previous builds, I've been through running clusters and such at home, and that was fun and all for learning, but now that I have a family (2 young kids) and a house to work on, the less time I spend maintaining my home network the better.

Box01
  • AMD EPYC 7443p
  • SuperMicro H12SSL-i Motherboard
  • 8 x 32GB DDR4-3200 RDIMMs
  • Dual Broadcom 9400-16e HBAs
  • Dual 40GbE NIC
  • Nvidia 1660 GPU (Shared among 4-5 containers)
  • 2 x Intel P3605 1.6TB U.2 NVMe drives (vm_datastore01)
  • 4 x Intel S4600 480GB SATA SSDs (vm_datastore02)
  • Circotech RM-4442 4U rackmount chassis (10 x 5.25" drive bays)
Box02
  • QCT JB4242 4U SAS3 Disk Shelf
  • 24 x 10TB HGST SAS3 HDDs (mergerfs/snapraid pool)
  • 8 x 3.2TB HGST SAS3 SSDs (ZFS Raid10 cache pool)

I'm running Proxmox 7.x on this system. I treat this box like a vanilla Debian Linux box (which Proxmox essentially is, just with some added packages for the hypervisor). Thus I run mergerfs, snapraid, docker, docker compose, etc. all natively on the Proxmox host, not inside a VM. It's been pretty much flawless since I switched to this setup in 2020.
I can relate. I've had much less time for the last 10 years or so. Family takes up much of my time, so the less I have to deal with management, the better. Partly the Plex server is to keep everyone occupied and happy, so I have more time to myself :p

For VMs I decided to take the TMM route, on ESXi instead of Proxmox (which I tinkered around with for a while). In the future I’d still like to have the VM store on a separate server though… but at that point might as well build a consolidated box like you did. Funny I was a big proponent of virtualization and containerization about 15 years ago at the various consulting jobs I was a part of, yet with TMM I started moving in the direction of multiple small machines again. I probably will move back to consolidation as this small fleet of TMMs I have is already creating a cabling mess, and taking up precious network ports.

May I ask why your cache pool is so big? My staged files don’t stay in staging for that long. They’re usually moved off within hours.

There are some with the opinion that Docker and other services should not be run on Proxmox bare metal, or even in LXCs for security escalation concerns. This is a topic I struggled with when tinkering around with Proxmox, going back and forth with bare metal services, LXC services, or services in a VM. On ESXi there isn’t that option, so I guess problem solved for now.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
Yes. I've got a desktop/workstation set up here with a Supermicro single-CPU motherboard and an E5-2697 v3.
My TrueNAS Scale setup is actually an E5-2670 v3 in a single-CPU motherboard too.

IMO you don't need anything high-performance for media storage if you're not transcoding. The E5 v3 setup gives you plenty of PCIe lanes and cheap RAM, so you can run other stuff on there if you want and expand easily.
Did some reading on Haswell-EP vs. Broadwell-EP today in my off time. My understanding is there isn’t really a difference, aside from slightly more efficient AVX2 and the ability to use DDR4-2400 on Broadwell-EP.

There seems to be a large gulf in pricing between DDR4-2400 RDIMMs (and below) and the speeds above that. I mean, to be completely honest, I’d rather have newer hardware, but the cost of DDR4-3200 UDIMMs/RDIMMs in larger capacities is shocking. Even more so if the intention is to eventually fill all memory banks.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
Google Workspace Enterprise Standard.
One of my original ideas was to keep everything in G-drive, except for staging files, using MergerFS + SnapRAID on commodity hardware. I still think it’s viable with a fast enough upstream connection, but now I feel that having a full copy of the library on hand may also be a good idea.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
Yes. I have 4 x X10SRL-F with E5-2680 v4's and a full boat of RAM in CSE-836 chassis. Works great, and I am a fan of the X10SRL-F for its flexibility with lanes, and it supports PCIe bifurcation. Not sure if I mentioned it in this thread (or it was another thread) - those 1200W SQ PSUs are louder than the 920SQs and to my ear have a bit of a high-pitched whine. Are you sure you need the 1200s, even with 36 x 15W plus everything else you will put in there?
Oh wow, good to know, thank you! I was budgeting 20W max per disk, assuming 7.2k RPM, though I probably will go with 5.4k RPM. Perhaps my estimate was too high.

The X10SRL-F seemed like the best board for me as well. It’s good to know that the E5-1000 and -2000 series are interchangeable on UP/DP motherboards. I had expected Intel not to allow a lower-series SKU to be used with an upmarket platform. TBH, I’m not that familiar with enterprise motherboards as my job role is usually higher level. Would you happen to know whether, on Xeon motherboards, the PCIe slots are all connected to the CPU, or whether most are connected to the PCH? This could potentially affect NIC, HBA, and bifurcation card performance.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
I just skimmed through the newer posts; is this now for an enterprise datacenter* or still a Plex server at home?

*I saw VM & LXC
I started out the thread leaning towards a simple MergerFS + SnapRAID box and storing the library in the cloud. Then the usual suspects set about convincing me to just build a new TrueNAS box and get on with it :eek:

The TL;DR is that I'm planning storage for a Plex library that will grow quite large. I have been ripping tons of Blu-rays from my physical collection and remuxing the files.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Oh wow, good to know, thank you! I was budgeting 20W max per disk, assuming 7.2k RPM, though I probably will go with 5.4k RPM. Perhaps my estimate was too high.
Even with large-capacity enterprise drives you're not likely to see 20W per disk. My 16TB EXOS X16s max out (per spec) at around 10W active and 5W idle.
My system with 16 of those, 2 SATA SSDs, 2 SATADOMs, a P2000 GPU, a P620 GPU, a dual-port CX3 10GbE NIC, an LSI 9400-16i, 256GB of RAM, and an E5-2680 v4 idles around 225W and peaks around 400W.

Would you happen to know whether, on Xeon motherboards, the PCIe slots are all connected to the CPU, or whether most are connected to the PCH? This could potentially affect NIC, HBA, and bifurcation card performance.
All 40 CPU PCIe lanes are available to the slots. There is a single PCIe 2.0 x4 slot from the PCH. Block diagram attached. PCIe slot bifurcation is supported. I have another system where I am pulling 4 x PCIe x4 to U.2 using low-cost dual (x8 to 2 x x4) adapter cards.
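
For a sense of how 40 CPU lanes could be carved up on a UP board like this, here is a purely illustrative allocation (the card widths are assumptions, and this is not the actual layout of either system mentioned above):

Code:
# Purely illustrative lane budget for a 40-lane UP Xeon board (not an actual layout).
cpu_lanes = 40

allocation = {
    "SAS HBA (x8)": 8,
    "dual-port 10/40GbE NIC (x8)": 8,
    "x8 -> 2x x4 U.2 adapter #1": 8,
    "x8 -> 2x x4 U.2 adapter #2": 8,
}

used = sum(allocation.values())
print(f"CPU lanes used: {used}/{cpu_lanes}, spare: {cpu_lanes - used}")  # 32/40, 8 spare
# The PCH adds one PCIe 2.0 x4 slot on top of that: fine for low-bandwidth
# cards, but not for a fast HBA or NIC.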
 



IamSpartacus

Well-Known Member
Mar 14, 2016
I can relate. I've had much less time for the last 10 years or so. Family takes up much of my time, so the less I have to deal with management, the better. Partly the Plex server is to keep everyone occupied and happy, so I have more time to myself :p

For VMs I decided to take the TMM route, on ESXi instead of Proxmox (which I tinkered around with for a while). In the future I’d still like to have the VM store on a separate server though… but at that point might as well build a consolidated box like you did. Funny I was a big proponent of virtualization and containerization about 15 years ago at the various consulting jobs I was a part of, yet with TMM I started moving in the direction of multiple small machines again. I probably will move back to consolidation as this small fleet of TMMs I have is already creating a cabling mess, and taking up precious network ports.

May I ask why your cache pool is so big? My staged files don’t stay in staging for that long. They’re usually moved off within hours.

There are some with the opinion that Docker and other services should not be run on Proxmox bare metal, or even in LXCs for security escalation concerns. This is a topic I struggled with when tinkering around with Proxmox, going back and forth with bare metal services, LXC services, or services in a VM. On ESXi there isn’t that option, so I guess problem solved for now.
TMM?

My cache pool is big because I use it for more than just a write cache. It houses all my "fast" shares and docker appdata. Furthermore, I keep as much "new" media on the cache as possible so streams come off the cache rather than the spinners. My mover script is set up to run every morning and only moves enough files (oldest files first) to keep the cache below 65% usage.
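
A bare-bones sketch of that kind of oldest-first mover logic (not the actual script; the mount points are placeholders, while the 65% target mirrors the description above):

Code:
#!/usr/bin/env python3
# Sketch: drain the cache pool down to a target fill level by relocating the
# oldest files to the mergerfs pool. Mount points are placeholders.
import os
import shutil

CACHE = "/mnt/cache/media"    # fast SSD pool (placeholder path)
POOL = "/mnt/storage/media"   # mergerfs pool of spinners (placeholder path)
TARGET = 0.65                 # stop once the cache is below 65% full

def cache_usage() -> float:
    usage = shutil.disk_usage(CACHE)
    return usage.used / usage.total

def files_oldest_first(root):
    paths = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            paths.append(os.path.join(dirpath, name))
    return sorted(paths, key=os.path.getmtime)  # oldest modification time first

for src in files_oldest_first(CACHE):
    if cache_usage() < TARGET:
        break
    dst = os.path.join(POOL, os.path.relpath(src, CACHE))
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src, dst)  # keeps the same relative layout under the pool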

I hear you on the security concerns with running stuff natively in Proxmox. If this was a server I was running for business, I wouldn't set it up this way. But my home network is set up in a way that makes my life easier without sacrificing TOO much security. All of my externally available services are accessed through a reverse proxy that sits in a DMZ. That DMZ host can only talk to my server over a designated set of ports. It's not perfect, but I'm not running a financial institution at home.