Storage Strategy for Large Plex Libraries


gb00s

Well-Known Member
Jul 25, 2018
1,253
667
113
Poland
I think I’ll have to shelve the cloud as a main store idea.
The recent Hetzner incident should be a lesson. Cloud storage: two HDDs in an array failed almost at the same time, then the rebuild onto a third HDD failed too. Boom, Ceph cluster gone.

Just a warning: the worst-case scenario is never that far away.

So you need backups anyway, cloud or local.
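As a rough back-of-envelope of why a rebuild tripping over another error isn't exotic (the drive size, array width, and URE rate below are assumed datasheet-style figures, not anything from the actual Hetzner setup):

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# re-reading every surviving drive during a rebuild. All numbers are
# assumptions for illustration, not the Hetzner configuration.
import math

def p_ure_during_rebuild(bits_read: float, ure_rate: float) -> float:
    """Probability of >= 1 URE when reading `bits_read` bits."""
    return -math.expm1(bits_read * math.log1p(-ure_rate))

drive_tb = 16            # assumed drive size
surviving_drives = 10    # assumed number of drives that must be read in full
bits = drive_tb * 1e12 * 8 * surviving_drives

for rate in (1e-14, 1e-15):   # typical consumer vs enterprise spec figures
    print(f"URE rate {rate:.0e}: P(rebuild hits a URE) ~ {p_ure_during_rebuild(bits, rate):.1%}")
```

Under those assumed specs a full re-read of a big array has a very real chance of hitting at least one bad sector, which is exactly the moment you find out whether you have a backup.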
 

gb00s

Well-Known Member
Jul 25, 2018
1,253
667
113
Poland
It was confirmed by Hetzner that a Ceph cluster holding snapshots of customers' cloud storage went bust. OK, snapshots only. Pure luck. They have the same setup for the customer cloud storage itself, so the same could have happened there.

Even worse: the snapshots of the customers' cloud storage were in the same DC :rolleyes:
 

oneplane

Well-Known Member
Jul 23, 2021
870
527
93
Perhaps write cold(er) backups to LTO tapes? That way you can have your live cluster/pools plus cheaper (but slower to access) tape-based backups. Density and price are pretty good if you don't buy the newest revisions. The downside is of course manual tape handling, but if you just append snapshots and maybe consolidate yearly or so, it can work pretty well. That said, if you still have the physical media (and it hasn't degraded - this still happens with modern optical media), perhaps a backup isn't really required at all.
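For the "just append snapshots" part, a minimal sketch of how that could look on Linux, assuming a SCSI tape drive at /dev/nst0 and the standard mt/tar tools; the device name and paths are placeholders, not a tested recipe:

```python
# Append today's snapshot as one more tar archive at the end of the tape.
# Device name and snapshot directory are assumptions for illustration only.
import subprocess
from datetime import date

TAPE = "/dev/nst0"                 # non-rewinding device, so tape position is kept
SNAPSHOT_DIR = "/pool/snapshots"   # hypothetical directory holding dated snapshots

# Seek past all previously written data so the new archive is appended.
subprocess.run(["mt", "-f", TAPE, "eod"], check=True)

# Write the snapshot; each run adds another tar file to the tape.
subprocess.run(
    ["tar", "-cvf", TAPE, f"{SNAPSHOT_DIR}/{date.today():%Y-%m-%d}"],
    check=True,
)
```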
 

oneplane

Well-Known Member
Jul 23, 2021
870
527
93
We can always make it worse. Create NFTs of all your data and store it on the blockchain!
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
How about getting an actively cooled rack then?

You could also try TrueNAS Scale once the clustering part becomes part of the GUI (it works now IIRC, but it's manual config work); that might enable scaling over multiple nodes as well.
I've thought about jury-rigging a fully enclosed rack with a portable AC. This may be a solution if I keep a NAS in the garage. Times like this I'm jealous of a friend who lives in a state where they have basements. Aside from the fact that his basement is essentially an additional four-bedroom "home," he has a room dedicated to his homelab down there, where it stays cool year round without needing special cooling.

I'm quite excited for TrueNAS's future clustering support. Something like Ceph but much less complicated to deploy/maintain would be so ideal.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
The recent Hetzner incident should be a lesson. Cloud storage: two HDDs in an array failed almost at the same time, then the rebuild onto a third HDD failed too. Boom, Ceph cluster gone.

Just a warning: the worst-case scenario is never that far away.

So you need backups anyway, cloud or local.
I'm not familiar with Hetzner, as I only have a couple of VPSes through Afterburst. I know they provide some dedicated server services, but aren't most of their services offered via auctions of third-party-owned servers running on commodity to enterprise-grade systems?

I have the original DVDs and Blu-rays, so I haven't been too concerned about backups of the media yet. However, as with many things, I haven't felt the "pain" of re-ripping them all yet. It probably will be a bit more painful now that I've retired the workstation with multiple optical drives that I used for ripping.

It was confirmed by Hetzner that a Ceph cluster holding snapshots of customers' cloud storage went bust. OK, snapshots only. Pure luck. They have the same setup for the customer cloud storage itself, so the same could have happened there.

Even worse: the snapshots of the customers' cloud storage were in the same DC :rolleyes:
But isn't the whole point of Ceph that it is resilient to issues with nodes?
 

Sean Ho

seanho.com
Nov 19, 2019
822
384
63
Vancouver, BC
seanho.com
TrueNAS Scale clustered storage is Gluster and Ceph, nothing more or less. You can already run it if you're willing to set up each node separately. Bluefin brings k8s cluster-wide deployment of apps (not sure if Rook would work).

In this situation my vote is for a plain old NAS. Build one with an -8e HBA and add disk shelves as needed. Don't get too fancy with storage, lest you make a mistake and either kill your array or leave it not as redundant as you thought it was.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Perhaps write cold(er) backups to LTO tapes? That way you can have your live cluster/pools plus cheaper (but slower to access) tape-based backups. Density and price are pretty good if you don't buy the newest revisions. The downside is of course manual tape handling, but if you just append snapshots and maybe consolidate yearly or so, it can work pretty well. That said, if you still have the physical media (and it hasn't degraded - this still happens with modern optical media), perhaps a backup isn't really required at all.
Yes, I've still got the physical discs, which is why I'm not too concerned about backups at the moment per se. The last time I checked, the discs were still in good condition, but I wouldn't count on that, say, 10-15 years from now.

For personal backups, I already follow the 3-2-1 strategy, but that's much easier since they total only about 20 TB and grow fairly slowly.

I haven't touched tapes in a very long time. I recall that back then, say 20+ years ago, I had to load each tape manually. Are auto-loader tape drives cheaper now?
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
How did we go from an 8-bay Synology NAS to a cluster? :D
I intended to ask for a general "idea" discussion, and it immediately escalated to clusters lol.

Since storage clusters, Ceph, cloud and so on were suggested here, I guess. Never mind.
No problem at all. I think clusters are the future. I've been interested in clusters ever since I built a Beowulf cluster back in the day. I've been following Ceph for a while, and recently I've been reading about GlusterFS. Whether it is appropriate for homelabs is a question, as is the cost. I've seen a lot of discussion about Ceph, but it seems like a lot of homelabbers are running quite small Ceph clusters on old hardware "just because." While I am all for doing fun things like that, the constraint for me is that I'm not willing to do that for a long-term system. For example, if my hard disk budget is $5,000 or $10,000, with another $3,000 for a server, I probably would not go with a solution I'm not confident in administering.

Even in the enterprise, and I think many will concur with me, GUIs are very important. Besides helping admins who are less comfortable with the CLI, IMHO they cut down on entry errors at the CLI. I've made my fair share of entry mistakes in the CLI when dead tired, with disastrous results. Once, as a junior analyst years ago, I cost my client $50,000+ an hour in losses for almost a whole day o_O but somehow didn't get fired, because I fixed it and learned from it. Broke the thing, fixed the thing, and wrote the DR on the broken thing :cool:
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
TrueNAS Scale clustered storage is Gluster and Ceph, nothing more or less. You can already run it if you're willing to set up each node separately. Bluefin brings k8s cluster-wide deployment of apps (not sure if Rook would work).

In this situation my vote is for a plain old NAS. Build one with an -8e HBA and add disk shelves as needed. Don't get too fancy with storage, lest you make a mistake and either kill your array or leave it not as redundant as you thought it was.
My understanding is that future cluster support in TrueNAS is supposed to be configurable via the GUI. iX continues to state that it isn't appropriate for production systems, so I do believe guidance from the source carries some weight.

External port HBAs are pretty cheap. If I build a new NAS, I may keep an eye out to pick one up for future expansion. What's your opinion on disk shelves vs JBOD chassis?

Personally, I think one should always go with the more conservative solution for storage, if possible.
 

ecosse

Active Member
Jul 2, 2013
466
113
43
I use Windows + SnapRAID + DrivePool. The great thing about this approach is that you are using a native OS file system, which potentially means recovery is easier, or at least partial recovery is possible after some catastrophic failure conditions. It fits a media use case quite well: incremental changes to files. I use a 12-drive layout (10 data + 2 parity).
The biggest issue is what you alluded to in the original post: unless you delete data, you will always run out of space at some point, irrespective of how you organize it. You could look at the x265 codec to reduce space.
Depending on what you actually store/watch, it's probably cheaper just to take out all the TV subscription packages in your locale. But where would the fun be in that ;)
 

Sean Ho

seanho.com
Nov 19, 2019
822
384
63
Vancouver, BC
seanho.com
I'm not quite sure what you mean by disk shelves vs JBOD chassis; they're basically the same thing -- a box with a PSU powering a bunch of drive bays and some way to connect those drives back to an HBA on the main system. Often there will be an expander, either in the backplane or in I/O modules in the back. Connections to the main system are typically SFF-8088/8644, and there's usually one more connector to daisy-chain additional shelves.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
I use Windows + SnapRAID + DrivePool. The great thing about this approach is that you are using a native OS file system, which potentially means recovery is easier, or at least partial recovery is possible after some catastrophic failure conditions. It fits a media use case quite well: incremental changes to files. I use a 12-drive layout (10 data + 2 parity).
The biggest issue is what you alluded to in the original post: unless you delete data, you will always run out of space at some point, irrespective of how you organize it. You could look at the x265 codec to reduce space.
Depending on what you actually store/watch, it's probably cheaper just to take out all the TV subscription packages in your locale. But where would the fun be in that ;)
Hah, I have subs to Netflix, Disney+, HBO Max, Paramount+, and the SO always complains there’s “nothing to watch,” ofc right before discovering a series and binging it. The Plex is mainly for movies from my physical library.

I might have to look into re-encoding or re-ripping stuff into HEVC, as until recently I didn’t have client devices capable of direct-playing high-bitrate x265. I can possibly save 10-15% per movie.
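If I go that route, something like this is roughly what I have in mind; a rough sketch assuming ffmpeg built with libx265, with placeholder paths and quality settings I'd still have to tune:

```python
# Batch re-encode a library to HEVC with ffmpeg. Paths, CRF and preset are
# placeholder assumptions, not recommendations for a particular quality target.
import subprocess
from pathlib import Path

SRC = Path("/media/movies")        # hypothetical library root
DST = Path("/media/movies-hevc")   # re-encoded copies land here

for mkv in SRC.rglob("*.mkv"):
    out = DST / mkv.relative_to(SRC)
    out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(mkv),
         "-c:v", "libx265", "-crf", "20", "-preset", "slow",
         "-c:a", "copy", "-c:s", "copy",   # keep audio and subtitle tracks as-is
         str(out)],
        check=True,
    )
```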

I don’t have any issues with using DrivePool on a workstation (I used it in the past), but I’d be leery about using it, or even a MergerFS/SnapRAID solution, on a NAS.

I'm not quite sure what you mean by disk shelves vs JBOD chassis; they're basically the same thing -- a box with a PSU powering a bunch of drive bays and some way to connect those drives back to an HBA on the main system. Often there will be an expander, either in the backplane or in I/O modules in the back. Connections to the main system are typically SFF-8088/8644, and there's usually one more connector to daisy-chain additional shelves.
Ah, I was under the possibly wrong impression that there was a difference between a disk shelf and a DIY JBOD chassis, though the end result is the same.
 

Lennong

Active Member
Jun 8, 2017
124
29
28
48
Hah, I have subs to Netflix, Disney+, HBO Max, Paramount+, and the SO always complains there’s “nothing to watch,” ofc right before discovering a series and binging it. The Plex is mainly for movies from my physical library.

I might have to look into re-encoding or re-ripping stuff into HEVC, as until recently I didn’t have client devices capable of direct-playing high-bitrate x265. I can possibly save 10-15% per movie.

I don’t have any issues with using DrivePool on a workstation (I used it in the past), but I’d be leery about using it, or even a MergerFS/SnapRAID solution, on a NAS.



Ah, I was under the possibly wrong impression that there was a difference between a disk shelf and a DIY JBOD chassis, though the end result is the same.
May I ask what reservations you have regarding SnapRAID when storing 'big' streaming media, like video files?

The only negative I can see compared to ZFS and RAID-Z is bit rot, but I have difficulty seeing the impact of a single bit flipping in a 20+ GB video stream.
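To put a rough number on it, here's a back-of-envelope using an assumed datasheet-style unrecoverable-read-error rate (silent bit flips are rarer still):

```python
# Expected number of bad/flipped bits when reading a 20 GB file once, using an
# assumed error rate of 1 per 1e14 bits read (a typical datasheet spec figure).
file_bits = 20e9 * 8       # 20 GB file
ure_rate = 1e-14           # assumed error rate per bit read

expected_bad_bits = file_bits * ure_rate
print(f"Expected bad bits per full read: {expected_bad_bits:.4f}")   # ~0.0016
# That is roughly one bad bit per ~600 complete reads of the file, and a single
# flipped bit in a compressed video stream usually just glitches one frame or
# GOP rather than making the file unplayable.
```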
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
May I ask what reservations you have regarding SnapRAID when storing 'big' streaming media, like video files?

The only negative I can see compared to ZFS and RAID-Z is bit rot, but I have difficulty seeing the impact of a single bit flipping in a 20+ GB video stream.
Maybe there’s an odd disconnect on my end, because on one hand I stated that at this point I don’t mind re-ripping (thus not caring about bulletproof data integrity atm), while also wanting to design a more data-secure system. Let me clarify what I’m thinking. SnapRAID is a great solution, but IMHO I’d rather keep longer-term data on a RAID/RAID-like storage system. I do use SnapRAID for temporary/transient data before moving it to longer-term storage.

That being said, plenty of people use SnapRAID as their primary data storage, mostly as a self-rolled NAS or on Unraid, and I’d say to each their own, I suppose.
 

Lennong

Active Member
Jun 8, 2017
124
29
28
48
Maybe there’s an odd disconnect on my end, because on one hand I stated that at this point I don’t mind re-ripping (thus not caring about bulletproof data integrity atm), while also wanting to design a more data-secure system. Let me clarify what I’m thinking. SnapRAID is a great solution, but IMHO I’d rather keep longer-term data on a RAID/RAID-like storage system. I do use SnapRAID for temporary/transient data before moving it to longer-term storage.

That being said, plenty of people use SnapRAID as their primary data storage, mostly as a self-rolled NAS or on Unraid, and I’d say to each their own, I suppose.
I understand the 'disconnect' perfectly, as I am myself going back and forth on whether I should migrate my data to RAID-Z. SnapRAID has served me very well, but it gnaws at me knowing there is a bit here and there that might eventually flip...
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
I understand the 'disconnect' perfectly, as I am myself going back and forth on whether I should migrate my data to RAID-Z. SnapRAID has served me very well, but it gnaws at me knowing there is a bit here and there that might eventually flip...
There are a lot of people, sufficiently invested in some distro’s fan base on Reddit and various forums, who claim bitrot doesn’t exist. It absolutely does, though TBF in the last 30 years I’ve encountered exactly *4* instances of minor bitrot. I think it all comes down to how much that particular data is valued. Personal files such as family pictures/video go on ZFS, hands down; there, even minor bitrot can be devastating. For less valued stuff that’s replaceable, I’m perfectly fine with regular non-ZFS soft RAID or SnapRAID.
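For that replaceable tier I’d at least want to know when something rots. A minimal sketch of the idea, hash everything once and re-check later, with placeholder paths (just the shape of it, not a vetted tool):

```python
# Poor man's bitrot check: record a SHA-256 for every file, then re-run later
# and flag anything whose hash changed. Paths are placeholder assumptions.
import hashlib
import json
from pathlib import Path

LIBRARY = Path("/media/movies")       # hypothetical library root
MANIFEST = Path("checksums.json")     # hashes kept between runs

def hash_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

current = {str(p): hash_file(p) for p in LIBRARY.rglob("*") if p.is_file()}

if MANIFEST.exists():
    previous = json.loads(MANIFEST.read_text())
    for path, digest in previous.items():
        if path in current and current[path] != digest:
            print(f"Checksum changed (possible corruption): {path}")

MANIFEST.write_text(json.dumps(current, indent=2))
```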

Then there’s how much money can be invested financially, willingly, or both. I recognize that this hobby requires a lot of money, which I’m fortunate to have a bit more of than most people, but my budget isn’t endless. Personally I figured I’d maintain a small ZFS data store (on FreeNAS) for valued stuff, while leaving room for storing less valuable things elsewhere. Of course, the dream is to have a whole rack filled from bottom to top with a massive ZFS array, like I’ve designed for some SMB clients, but looking at the potential hit to the bank account brings me abruptly back down to earth.