Storage Strategy for Large Plex Libraries


IamSpartacus

Well-Known Member
Mar 14, 2016
Tiny/mini/micro nodes, stuff like the HP t740 thin clients
Ahh, got it.

Yeah, I went the beefy AIO server route as it allowed me to consolidate pretty much my entire house of "computers" into a single box. No more PCs/workstations in my house, only thin/zero clients that access VMs (both mine and my wife's daily drivers, as well as an emulator game-streaming VM).
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
... I keep as much "new" media on cache to maximize the usage of the cache for streaming over spinners. My mover script is set up to run every morning and only moves enough files (oldest files first) to keep cache below 65% usage.
...
Do Plex and/or mergerfs have mechanisms that allow for files/media to be in your Plex_Library while first residing in your cache and later moving to HDD? (Or do you script that using "temporary" sym_links?)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Do Plex and/or mergerfs have mechanisms that allow for files/media to be in your Plex_Library while first residing in your cache and later moving to HDD? (Or do you script that using "temporary" sym_links?)
Yes, this is exactly what mergerfs is for. I have one mergerfs pool that is cache-enabled and sees all the files on both cache and spinners. This pool is what all my media applications are mapped to. I also have a non-cache-enabled pool that the mover script moves the files to.
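For anyone trying to picture that layout, here is a minimal sketch of the two-pool setup. All paths are hypothetical and the option strings are just common ones from the mergerfs docs, not necessarily what IamSpartacus runs; adjust for your own branches.

Code:
import subprocess

# Hypothetical layout: /mnt/cache is the SSD, /mnt/disk1 and /mnt/disk2 the spinners.

# Pool 1 (cache-enabled): the SSD branch is listed first, and category.create=ff
# ("first found") sends new writes to it. This is the pool Plex and the other
# media apps are mapped to.
subprocess.run([
    "mergerfs",
    "-o", "cache.files=partial,dropcacheonclose=true,category.create=ff",
    "/mnt/cache:/mnt/disk1:/mnt/disk2",
    "/mnt/pool",
], check=True)

# Pool 2 (non-cache): spinners only. The mover script writes into this pool, so
# a demoted file keeps the same relative path and stays visible through pool 1.
subprocess.run([
    "mergerfs",
    "-o", "cache.files=partial,dropcacheonclose=true,category.create=mfs",
    "/mnt/disk1:/mnt/disk2",
    "/mnt/pool-archive",
], check=True)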
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Yes, this is exactly what mergerfs is for. I have one mergerfs pool that is cache-enabled and sees all the files on both cache and spinners. This pool is what all my media applications are mapped to. I also have a non-cache-enabled pool that the mover script moves the files to.
That does raise a bunch of ???s, but I found your earlier post (and your link to the mergerfs author's "solution"). Thanks.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
The X10SRL-F seemed like the best board for me as well. It's good to know that the E5-1xxx and -2xxx series can be interchangeable on UP/DP motherboards. I had expected Intel to not allow a lower series SKU to be used with an upmarket platform.
E5-1xxx merely omit the QPI link, and hence are limited to single-processor boards. They are not any worse than a single equivalent E5-2xxx, and in fact tend to have higher-clocked options.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
Even with large-cap enterprise HDDs you're not likely to see 20W drives. My 16TB EXOS X16s max out (per spec) at around 10W active and 5W idle.
My system, with 16 of those, 2 SATA SSDs, 2 SATADOMs, a P2000 GPU, a P620 GPU, a dual-port CX3 10GbE NIC, an LSI 9400-16i, 256GB of RAM, and an E5-2680 v4, idles around 225W and peaks around 400W.



All 40 CPU PCIe lanes are available to the slots. There is a single PCIe 2.0 x4 slot from the PCH. Block diagram attached. PCIe slot bifurcation is supported. I have another system where I am pulling 4x PCIe x4 to U.2 using low-cost dual (x8 to 2× x4) adapter cards.
Thanks so much for the info! I had it in mind that I'd get an LSI 9305-24i, but as the SM 846/847 only have SAS2 backplanes, and the backplane supports more than 4 drives per cable, I may drop down to a 92xx HBA.

I was quite concerned about PSU load, but this puts my mind at ease. To be completely honest, I tend to over-spec the PSU. I'll also be using EXOS drives, but EXOS X18s.
 

ReturnedSword

Active Member
Jun 15, 2018
Santa Monica, CA
TMM?

My cache pool is big because I use it for more than just a write cache. It houses all my "fast" shares and Docker appdata. Furthermore, I keep as much "new" media on cache to maximize the usage of the cache for streaming over spinners. My mover script is set up to run every morning and only moves enough files (oldest files first) to keep cache below 65% usage.

I hear you on the security concerns with running stuff natively in Proxmox. If this were a server I was running for business, I wouldn't set it up this way. But my home network is set up in a way that makes my life easier without sacrificing TOO much security. All of my externally available services are accessed through a reverse proxy that sits in a DMZ. That DMZ host can only talk to my server over a designated set of ports. It's not perfect, but I'm not running a financial institution at home.
TMM as in decommissioned corporate 1L mini PCs, which have a rather beefy CPU and memory for their size while having a lower power draw.

An ideal solution, for me at least, would be some sort of script that moves currently watched and frequently watched content to cache. I haven't been able to find any ideas for implementing that, though (a rough sketch of one approach follows below). Your implementation introduces me to a pretty good idea, keeping the cache filled if possible. I think I'll end up doing that too.
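For what it's worth, something like that can be scripted against the Plex API. This is only a sketch, assuming the python-plexapi package, a library section named "Movies", hypothetical branch paths, and that Plex sees the pool at /mnt/pool directly (no container path remapping):

Code:
import shutil
from pathlib import Path

from plexapi.server import PlexServer  # pip install plexapi

# Hypothetical values -- adjust for your server and your mergerfs branches.
PLEX_URL, PLEX_TOKEN = "http://localhost:32400", "YOUR_TOKEN"
POOL = Path("/mnt/pool")                             # what Plex is mapped to
SLOW_BRANCHES = [Path("/mnt/disk1"), Path("/mnt/disk2")]
CACHE_BRANCH = Path("/mnt/cache")                    # the SSD branch

plex = PlexServer(PLEX_URL, PLEX_TOKEN)
movies = plex.library.section("Movies")

# Promote the 20 most recently watched movies to the cache branch. Both
# branches sit under the same mergerfs pool, so moving a file between them
# (keeping the relative path) does not change the path Plex sees.
for item in movies.search(sort="lastViewedAt:desc", maxresults=20):
    rel = Path(item.media[0].parts[0].file).relative_to(POOL)
    for branch in SLOW_BRANCHES:
        src = branch / rel
        if src.exists():
            dst = CACHE_BRANCH / rel
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy first so the pool never loses the file
            src.unlink()            # then drop the spinner copy
            break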

Ah, so you go through a bastion for external access. I also run a bastion server, so perhaps my concerns are overblown o_O. I'd still prefer to have a GUI for MergerFS + SnapRAID, though. An idea I thought of today is passing an HBA through to a VM running OMV, which would manage the MergerFS + SnapRAID setup.

Ahh, got it.

Yeah, I went the beefy AIO server route as it allowed me to consolidate pretty much my entire house of "computers" into a single box. No more PCs/workstations in my house, only thin/zero clients that access VMs (both mine and my wife's daily drivers, as well as an emulator game-streaming VM).
How many computers did you consolidate into the EPYC system? I was surprised to read in one of your previous posts that you had set it up for your wife to game over a VM; that's pretty cool! How are you splitting up the GTX 1660? I investigated splitting my P1000, but found out that unless I have a GRID-capable GPU, I would need to partition the GPU's memory rather than dynamically sharing it.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Ah, so you go through a bastion for external access. I also run a bastion server, so perhaps my concerns are overblown o_O. I'd still prefer to have a GUI for MergerFS + SnapRAID, though. An idea I thought of today is passing an HBA through to a VM running OMV, which would manage the MergerFS + SnapRAID setup.
There is so little to setting up MergerFS + SnapRAID that having a GUI for it really doesn't make much sense. You set it up once and never think about it again, TBH.
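To illustrate the point, the whole setup is roughly one config file plus the mergerfs mount. Here is a sketch that writes a minimal snapraid.conf (mount points are hypothetical, and in practice you'd just edit the file by hand, which is rather the point):

Code:
from pathlib import Path

# Minimal snapraid.conf sketch -- hypothetical mount points.
conf = """\
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
exclude *.unrecoverable
exclude /tmp/
"""
Path("/etc/snapraid.conf").write_text(conf)
# After this it's just "snapraid sync" on a schedule, plus an occasional
# "snapraid scrub".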

How many computers did you consolidate into the EPYC system? I was surprised to read in one of your previous posts that you had set it up for your wife to game over a VM; that's pretty cool! How are you splitting up the GTX 1660? I investigated splitting my P1000, but found out that unless I have a GRID-capable GPU, I would need to partition the GPU's memory rather than dynamically sharing it.
The 1660 is not split up for multiple VMs; it's only used by Docker containers. I neglected to mention I also have a Quadro RTX 4000 in the system that provides vGPU to all my VMs. My wife and I don't do any gaming; we just use the GPUs for "gamestreaming" our desktops for things like YouTube, etc. The only gaming done is via my emulator VM to my Nvidia Shield TVs.

Check this video out for some more info about how you can set this all up on Proxmox.

Proxmox GPU Virtualization Tutorial with Custom Profiles thanks to vGPU_Unlock-RS - YouTube
 

bleomycin

Member
Nov 22, 2014
Yes, this is exactly what mergerfs is for. I have one mergerfs pool that is cache-enabled and sees all the files on both cache and spinners. This pool is what all my media applications are mapped to. I also have a non-cache-enabled pool that the mover script moves the files to.
Is this a script you wrote? Do you mind sharing it with us? I'm evaluating my options, looking to ditch a large ZFS pool for many reasons and move to either Unraid with an SSD cache or SnapRAID + mergerfs, but I need a functioning SSD caching system. I can't believe this isn't better documented in 2022. Who is sitting around just fine with the write speeds of a single spinning-rust disk anymore?
 

zunder1990

Active Member
Nov 15, 2012
Is this a script you wrote? Do you mind sharing it with us? I'm evaluating my options, looking to ditch a large ZFS pool for many reasons and move to either Unraid with an SSD cache or SnapRAID + mergerfs, but I need a functioning SSD caching system. I can't believe this isn't better documented in 2022. Who is sitting around just fine with the write speeds of a single spinning-rust disk anymore?
You should check out my post on MooseFS: https://forums.servethehome.com/index.php?threads/i-have-fallen-in-love-with-moosefs.37137/
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Is this a script you wrote? Do you mind sharing it with us? I'm evaluating my options, looking to ditch a large ZFS pool for many reasons and move to either Unraid with an SSD cache or SnapRAID + mergerfs, but I need a functioning SSD caching system. I can't believe this isn't better documented in 2022. Who is sitting around just fine with the write speeds of a single spinning-rust disk anymore?
I use the percentage-full expiring script found in the mergerfs docs.
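For reference, the version in the mergerfs docs is a short shell loop around df, find, and rsync that drains the oldest files until the cache drops below a target percentage. A rough Python equivalent of the same idea (paths are hypothetical; 65% matches the earlier post):

Code:
import shutil
from pathlib import Path

# Hypothetical paths: the SSD branch and the spinner-only pool it drains into.
CACHE = Path("/mnt/cache")
BACKING = Path("/mnt/pool-archive")
TARGET = 0.65  # keep the cache below 65% full

def cache_usage() -> float:
    """Fraction of the cache filesystem currently in use."""
    u = shutil.disk_usage(CACHE)
    return u.used / u.total

# Oldest files first (by access time, like the original's find -printf '%A@').
files = sorted(
    (p for p in CACHE.rglob("*") if p.is_file()),
    key=lambda p: p.stat().st_atime,
)

for f in files:
    if cache_usage() <= TARGET:
        break
    dst = BACKING / f.relative_to(CACHE)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(f), str(dst))  # a cross-device move copies (preserving
                                   # metadata via copy2) and then unlinks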

 