Storage Strategy for Large Plex Libraries


Lennong

Active Member
Jun 8, 2017
124
29
28
48
There are a lot of people, sufficiently invested in some distro's fan base on Reddit and various forums, who claim bitrot doesn't exist. It absolutely does, though tbf in the last 30 years I've encountered exactly *4* instances of minor bitrot. I think it all comes down to how much that particular data is valued. Personal files such as family pictures/video I put on ZFS, hands down. There, even minor bitrot may be devastating. For less valued stuff that's replaceable I'm perfectly fine with regular non-ZFS soft RAID or SnapRAID.

Then there's the question of how much money you can, or are willing to, invest. I recognize that this hobby requires a lot of money, which I'm fortunate to have a bit more of than most people, but my budget isn't endless. Personally I figured I'd maintain a small ZFS data store (on FreeNAS) for valued stuff, while leaving room for storing less valuable things elsewhere. Of course, the dream is a whole rack filled bottom to top with a massive ZFS array, like I've designed for some SMB clients, but looking at the potential hit to the bank account brings me abruptly back down to earth.
I hear you. It sounds reasonable to split up the data like that. I personally don't have any photos (nor family); it's all video for me, but my collection is getting so crazy big that the bitrot issue bugs me a bit. As of today I have 26 data and 4 parity drives, all 14TB disks, so it's now or never if I want to migrate to RAID-Z; the initial vdev to copy everything onto will be an investment by itself.
 

Lennong

Active Member
Jun 8, 2017
124
29
28
48
ZFS is not the only fs that can handle bitrot. SnapRAID already has parity scrub, and it can use btrfs as a backing store with btrfs snapshots, for checksumming
You mean 'scrub'? Well, it's not really as sophisticated as ZFS in terms of bitrot / real-time verification. Also, with as much data as I have, a scrub takes ages, much longer than the sync itself.
 

Stephan

Well-Known Member
Apr 21, 2017
1,032
799
113
Germany
Bitrot. Underappreciated. When was the last time you looked into the readability of your CDs? When I transferred my CD collection to ZFS, I had already encountered maybe a dozen barely readable CDs, after barely 10 years since being pressed in the factory. Another 10 years and the data may have been lost. Sure, a couple of bad frames in my favorite Boston Legal episode would not be the end of the world. And I could always buy another box set on eBay, I guess. But since the technology to prevent bitrot is there, improving steadily for 20 years, open-source and free even, why not use it?
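For what it's worth, catching this on ZFS is just a periodic scrub; a minimal sketch, assuming a pool named `tank` (substitute your own pool name):

```shell
# Kick off a scrub: ZFS re-reads every block, verifies its checksum,
# and repairs from redundancy (mirror/raidz) where possible.
zpool scrub tank

# Check progress and list any files with unrecoverable errors.
zpool status -v tank

# Many people run this monthly from cron, e.g.:
# 0 2 1 * * /sbin/zpool scrub tank
```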
 
  • Like
Reactions: gb00s

Lennong

Active Member
Jun 8, 2017
124
29
28
48
Bitrot. Underappreciated. When was the last time you looked into the readability of your CDs? When I transferred my CD collection to ZFS, I had already encountered maybe a dozen barely readable CDs, after barely 10 years since being pressed in the factory. Another 10 years and the data may have been lost. Sure, a couple of bad frames in my favorite Boston Legal episode would not be the end of the world. And I could always buy another box set on eBay, I guess. But since the technology to prevent bitrot is there, improving steadily for 20 years, open-source and free even, why not use it?
Yes, I myself have quite a few nice collections of old collectibles that are already out of Blu-ray print. I don't trust optical discs, and I really can't keep collecting them either. Hmm, it's a decision that has to be made. I have a dedicated server room with dedicated cooling and such, and a modified Big Foot chassis that holds 48 3.5" and 8 2.5" drives. I could do it, but it has to be done now.
 

Stephan

Well-Known Member
Apr 21, 2017
1,032
799
113
Germany
ZFS is not the only fs that can handle bitrot. SnapRAID already has parity scrub, and it can use btrfs as a backing store with btrfs snapshots, for checksumming
Wouldn't that make error detection an offline-only feature? That is, bitrot is only detected when SnapRAID is scrubbing, not online, right when you read a file?

I suspect people pair btrfs with SnapRAID because btrfs lacks stable non-mirror redundancy, i.e. something RAID6-ish.
 

oneplane

Well-Known Member
Jul 23, 2021
874
532
93
At some point the features and known working configurations outweigh the benefits of hacking together an alternative just for the sake of not using ZFS. That doesn't mean ZFS is the best or a universal winner, but it gets increasingly hacky every time someone tries to build an 'also ZFS'. This mostly seems to happen with miscalculated-MTBF setups and with "Windows at all costs" setups where ZFS obviously isn't an option.
 
  • Like
Reactions: T_Minus and itronin

Sean Ho

seanho.com
Nov 19, 2019
823
385
63
Vancouver, BC
seanho.com
zfs is great, no doubt about it. But it's not the only option. Parity scrub and data/metadata checksumming are two different things, and though snapraid itself only has the former, btrfs has the latter, and snapraid can use btrfs as the underlying fs for each device. Multi-device btrfs (e.g., raid6) is a different solution (one that would obviate the need for snapraid); the much-touted write-hole issue is easily mitigated with a UPS and NUT for graceful shutdown. Both snapraid and btrfs raid6 are very flexible on adding/removing devices; zfs vdev expansion is finally being worked on, but not quite there yet.
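If anyone wants to see what that looks like in practice, here's a minimal sketch of the snapraid side; every path and disk name below is a placeholder, not a recommendation:

```shell
# /etc/snapraid.conf (illustrative fragment; paths are placeholders)
#   parity  /mnt/parity1/snapraid.parity
#   content /var/snapraid/snapraid.content
#   data d1 /mnt/disk1/
#   data d2 /mnt/disk2/

snapraid sync        # compute parity + block hashes for new/changed files
snapraid scrub -p 5  # verify a percentage of the array per run (oldest first)
snapraid status      # report scrub coverage and any silent errors found
```

The scrub percentage lets you spread verification of a large array across many short runs instead of one multi-day pass.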

On serverbuilds.net most of our home Plex users are using Unraid for its ease of use and flexibility with drives; very similar to snapraid. No bitrot protection using default xfs backing fs; parity scrub recommended every couple of months or so. It works just fine for downloadable media. For irreplaceable baby photos, then sure, run a separate zfs pool for backups, as well as off-site copies.
 
  • Like
Reactions: ecosse and Lennong

oneplane

Well-Known Member
Jul 23, 2021
874
532
93
Classical RAID is dead, especially with large disk sizes. As far as I know, SnapRAID is essentially only relevant if you want to replicate ZFS's `copies` feature on Windows. On Linux, there is no real benefit to using SnapRAID.

The same goes for UnRAID etc. It's mostly just a hack to get a storage pool with less protection, and mostly 'suggested' by people who either don't know about data reliability, availability, and durability, or who think "it uses less of my disk space so that must be better", which is mostly the market segment that would be better served by Synology.

If you set up a disk array knowing that your data is not actually protected to the extent you think it is, sure, use anything. Otherwise, it's not really an option.
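For reference, the `copies` feature is just a per-dataset ZFS property; a quick sketch, with placeholder pool/dataset names:

```shell
# Store two copies of every block in this dataset, on top of any
# vdev-level redundancy. Protects against localized bad sectors,
# not whole-disk failure, at the cost of double the space.
zfs set copies=2 tank/photos
zfs get copies tank/photos
```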
 
  • Like
Reactions: T_Minus

UhClem

just another Bozo on the bus
Jun 26, 2012
469
279
63
NH, USA
Unfortunately, there is a lot of ignorance/misinformation regarding SnapRAID ...
... As far as I know, SnapRAID is essentially only relevant if you want to replicate ZFS's `copies` feature on Windows. On Linux, there is no real benefit to using SnapRAID.
Not very far :rolleyes: .
... Parity scrub and data/metadata checksumming are two different things, and though snapraid itself only has the former, btrfs has the latter, and snapraid can use btrfs as the underlying fs for each device. ...
Not true. All SnapRAID array data has checksums (a 128-bit checksum for each file data block [typ. 256KB]). A parity block will not be generated (via sync), nor validated (via scrub/check), unless all (contributing) data blocks' checksums (aka hashes) are correct. Note, though, that since SR's objective/mission is to "protect" files, not filesystems, it has no explicit mechanisms for FS metadata (inodes, blk-ptrs, etc.) -- that's the responsibility of the specific FS. But the user has a wide choice of FS's -- ext[34], xfs, ntfs, btrfs ...
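To make the block-hash idea concrete, here's a rough shell sketch of hashing a file in 256KB blocks with a 128-bit digest. Illustrative only -- SnapRAID's actual hash functions and on-disk format differ; this just shows the per-block principle:

```shell
# Create a 1 MiB demo file, split it into 256 KiB blocks,
# and compute a 128-bit BLAKE2 digest for each block.
dd if=/dev/urandom of=demo.bin bs=256K count=4 2>/dev/null
split -b 262144 demo.bin blk_          # yields blk_aa .. blk_ad
for b in blk_*; do
  b2sum -l 128 "$b"                    # one 128-bit hash per 256 KiB block
done
```

If any single block later fails its hash check, only that block's parity/restore path is involved, not the whole file.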
 
Last edited:
  • Like
Reactions: ecosse and itronin

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Yes, I myself have quite a few nice collections of old collectibles that are already out of Blu-ray print. I don't trust optical discs, and I really can't keep collecting them either. Hmm, it's a decision that has to be made. I have a dedicated server room with dedicated cooling and such, and a modified Big Foot chassis that holds 48 3.5" and 8 2.5" drives. I could do it, but it has to be done now.
Wow, I'm a bit jealous! There just isn't enough space at my house to do something like that.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,770
2,146
113
Bitrot. Underappreciated. When was the last time you looked into the readability of your CDs? When I transferred my CD collection to ZFS, I had already encountered maybe a dozen barely readable CDs, after barely 10 years since being pressed in the factory. Another 10 years and the data may have been lost. Sure, a couple of bad frames in my favorite Boston Legal episode would not be the end of the world. And I could always buy another box set on eBay, I guess. But since the technology to prevent bitrot is there, improving steadily for 20 years, open-source and free even, why not use it?
YES! -- My old CDs I burned in the late 90s have random files that no longer work :(
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,770
2,146
113
Yes, I myself have quite a few nice collections of old collectibles that are already out of Blu-ray print. I don't trust optical discs, and I really can't keep collecting them either. Hmm, it's a decision that has to be made. I have a dedicated server room with dedicated cooling and such, and a modified Big Foot chassis that holds 48 3.5" and 8 2.5" drives. I could do it, but it has to be done now.
I've yet to have a commercial DVD or Blu-ray stop working... well, due to age... and why can't you collect them? I don't like to say I "collect" movies, but I prefer to own the disc rather than a digital copy online. I use disc wallets to organize them; I have maybe 1,000 or so across a half dozen+ wallets, and they take up very minimal space with zero operating cost, unlike a server to house them all :D :D :D I actually built a ripping server with 3x BR drives to rip them all, but ended up skipping it. Maybe something in the future, but for now, no issues from me with the physical discs.
 
  • Like
Reactions: ecosse

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
I've yet to have a commercial DVD or Blu-ray stop working... well, due to age... and why can't you collect them? I don't like to say I "collect" movies, but I prefer to own the disc rather than a digital copy online. I use disc wallets to organize them; I have maybe 1,000 or so across a half dozen+ wallets, and they take up very minimal space with zero operating cost, unlike a server to house them all :D :D :D I actually built a ripping server with 3x BR drives to rip them all, but ended up skipping it. Maybe something in the future, but for now, no issues from me with the physical discs.
IIRC, commercial manufactured optical discs are “stamped” from a master negative before being finished with the polycarbonate coating. Writable discs use an organic/inorganic dye layer based on disc type, which degrades over time (“rot”). Re-writable discs use a dielectric layer that can change its electrical charge. I would expect commercially pressed discs to last much longer than writable discs that have dyes that may degrade.
 
  • Like
Reactions: Lennong

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,770
2,146
113
IIRC, commercial manufactured optical discs are “stamped” from a master negative before being finished with the polycarbonate coating. Writable discs use an organic/inorganic dye layer based on disc type, which degrades over time (“rot”). Re-writable discs use a dielectric layer that can change its electrical charge. I would expect commercially pressed discs to last much longer than writable discs that have dyes that may degrade.
Good to know :D
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
If I expand in vdevs of 12 wide, would it be better to build a beefy server that can connect to multiple DAS shelves later, eventually reaching 90-100 disks, or would it make sense to build multiple lighter-weight servers that each independently house 24-36 disks? A big sticking point is admin effort/time, so I'll probably stay with TrueNAS.

“The dream” would be clustering the storage while being able to utilize the leftover hardware resources for light VMs/services.
 

Sean Ho

seanho.com
Nov 19, 2019
823
385
63
Vancouver, BC
seanho.com
Hyperconverged is one way to go, though often storage nodes and compute nodes have rather different hardware needs, so even if they share a uniform management plane (e.g., k8s) you'll still end up with differentiated roles.

With sufficient RAM, TrueNAS should handle 90-100 spinners no problem. But if node failure is a concern (or PSU, RAM, NIC, etc. -- anything there's only one of per node), then you'd be in the territory of clustered storage (Gluster, Ceph, Longhorn, etc.), and that's a whole other level of complexity and cost.
 
  • Like
Reactions: Brian Puccio