Home ZFS setup: Ideas on RAIDZ configuration


weust

Active Member
Aug 15, 2014
353
44
28
44
I'm looking into running a ZFS NAS and because it's the first time for me, I'm reading up a lot on how everything works.

Initially I thought RAIDZ1/2/3 would do something magical but still let me use all the capacity of the drives.
I already wondered how the hell that would work, and indeed it does not.
Last night I created a RAIDZ3 from five 1TB drives, and I was left with 1.74TB usable space.

So that got me thinking. For a simple home NAS where I store my Blu-ray remuxes, AIFF music, and some data, I don't need high speeds. 1Gbit/s is enough; more is nice for when I upgrade my desktop to 10Gbit/s.
Safety-wise it's obviously a personal preference. Do I allow one disk to fail, replace it and wait for the rebuild, or do I want to be extra safe and use two or even three disks for parity?

I'm not even talking about ZIL and L2ARC yet. Not even sure yet if I will. If I have an SSD lying around I might as well.
My server has a modified H310 for the storage, and a H710 for the RAID1 SSD boot disk, and I can add extra SSDs for other stuff like ZIL/L2ARC.

I would add at least six disks of 8TB, of which one will be for parity. I could possibly add two more disks for that. My Dell T320 has eight slots for disks.

For now I'm just wondering what other people with home setups have done or are doing.
What risks are you willing to take?
Everything is welcome.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Most people would consider RAIDZ2/RAID6 the "minimum" for a decent level of safety on an array with more than four or five HDDs; RAIDZ1 has a worryingly high chance of encountering an error if you end up rebuilding the array. For a 6-8 slot chassis, RAIDZ2 is probably the right fit: six 8TB drives in a RAIDZ2 should net you about 29TB of space. Bear in mind, though, I still don't think ZFS allows you to expand a RAIDZ vdev with extra discs at a later date; you can't add a disc and turn your RAIDZ into a RAIDZ2.

RAIDZ3 essentially uses three discs' worth of capacity for parity, so in your experiment you were left with only two 1TB discs' worth of usable space.
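The arithmetic is quick to sketch. A rough estimate of usable space is simply (disks − parity) × disk size; this ignores ZFS metadata/padding overhead and the TB-vs-TiB difference, which is why the real numbers come in a bit lower:

```shell
# Rough RAIDZ usable-capacity estimate: (disks - parity) * size.
# Ignores ZFS metadata/padding overhead and TB-vs-TiB conversion,
# which is why 5x1TB RAIDZ3 showed up as ~1.74TiB rather than a flat 2TB.
raidz_usable_tb() {
    local disks=$1 parity=$2 size_tb=$3
    echo $(( (disks - parity) * size_tb ))
}

raidz_usable_tb 5 3 1   # the RAIDZ3 experiment above: 2 (TB raw usable)
raidz_usable_tb 6 2 8   # six 8TB drives in RAIDZ2: 32 (TB, about 29TiB)
```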
 

weust

29TB is an amount that will probably last me a couple of years, unless I buy a 4K TV/monitor soon and remux those discs too.
Extra disks won't be an issue. If I need more, I will replace them one by one so I can expand the vdev (or pool, whatever).
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
With 8TB drives, I would want RAIDZ2 or better. The reason is that when you rebuild, you need to read all the data and rebuild the missing drive. That takes a fair bit of time, and could cause another disk to fault. For me, it's less about number of drives than total amount of data involved.
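The rebuild-time concern is easy to put a floor under: the whole replacement disk has to be written, so at best it takes size ÷ throughput. The 150MB/s figure below is an assumption for a typical 8TB spinner; real resilver speed varies with pool fullness and fragmentation:

```shell
# Back-of-the-envelope resilver floor for one 8TB drive.
# 150MB/s sequential rate is an assumed, optimistic figure.
size_bytes=$(( 8 * 1000 * 1000 * 1000 * 1000 ))
rate_bytes=$(( 150 * 1000 * 1000 ))
hours=$(( size_bytes / rate_bytes / 3600 ))
echo "$hours hours"   # well over half a day of flat-out reads on every other disk
```

That whole-day window of heavy sustained reads across the surviving disks is exactly when a second fault hurts, which is the argument for RAIDZ2 with drives this size.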

It sounds like you are planning to store raw Blu-ray rips. If you have multiple clients, you might run into performance problems; the only way to know is to test with your setup. With lower-bitrate files I ran into problems with two 6-disk arrays in RAIDZ2. There are various options for handling that, just something to be aware of, particularly if you start putting a 10Gb connection into the mix. All that seeking just adds up.

I had recently bought a server chassis with 24 drive slots. So I decided to use many smaller drives in a striped mirror configuration (raid10). The reasons were that I maintained redundancy, increased performance, and sped up recovery for failed drives. It also means I can replace 2 drives to increase storage space. It has downsides like increased power use and more cabling. There is also the chance that the wrong 2 drives fail and eat the pool. And, of course, more "wasted" storage as everything is in 2 copies rather than using parity.
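In zpool terms, the striped-mirror layout described above looks like this; pool and device names are made up for illustration:

```shell
# A "RAID10" pool: each vdev is a 2-disk mirror, and ZFS stripes
# writes across all the mirror vdevs. Device names are illustrative.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf

# Growing capacity later: replace both disks of ONE mirror with
# larger ones (with autoexpand on), or just add another mirror pair.
zpool set autoexpand=on tank
zpool add tank mirror /dev/sdg /dev/sdh
```

With this layout any single disk per mirror can fail, but losing both disks of the same mirror loses the pool, which is the "wrong 2 drives" risk mentioned above.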

It's your data, so you decide. There aren't really any wrong answers, it's all different tradeoffs. Some people like raid0, some like raidz3. It just depends on how you prioritize things.
 

weust

Remuxing is basically ripping the individual parts (video/audio/subtitles/chapters) and combining them into an MKV.
No re-encoding done. So, yeah. Raw.
I only have one client, my LibreELEC on a RPi3B+.

The 10GbE is just the uplink to the switch, and later a connection to the desktop where I rip my Blu-rays.
I generally don't search through a movie, just play them front to end.

RAIDZ2 sounds like a way I will be going.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@weust

First and foremost: RAIDZ, mirrors, etc. are not a replacement for having a backup. Let me repeat that: redundancy is NOT a backup plan.
ZFS redundancy was designed so that a pool could fault a drive and stay up, i.e. keep doing its job without having to come offline to fix it. But enterprise still backs up its data, usually more than once.


So if you are going to have a backup of your data, then the discussion of your pool layout becomes more one of cost, ease of expansion, and performance.

At the lowest cost to start, easiest and cheapest to expand, but with no redundancy, comes the basic stripe: minimum one disk. I add two at a time so that read/write is roughly double what a single disk is capable of, give or take; you get the picture. Here you have zero wasted space and zero cost overhead, and you still have data integrity since ZFS will still checksum everything, but it can't repair a file; you would have to go to your backup.

Mirrors are WAY overkill in my opinion for a file server: very high performance, very fast rebuild, but 50% overhead cost. No thanks, not that rich.


In MY setup (your situation and choices may differ) I run my online pool for data storage as a basic stripe, as discussed, backing up to a backup array that consists of striped RAIDZ1 vdevs of 5 disks each, currently 3 RAIDZ1 vdevs (15 disks). This is a very acceptable risk level at only 20% cost overhead, but in reality the disks are free; read on.

What I do is add drives to my basic stripe in pairs. When I hit six and run out of space, I buy 2 new, larger (since it's been a couple of years) drives that will hold the data of the previous 6, send/receive the data to the new pool, then take the 6 old disks and add another 5-disk RAIDZ1 to my backup. Use whatever multiples make sense for you, your hardware, etc.
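The migration step above is roughly a recursive snapshot plus zfs send/receive. Pool and disk names here are made up for illustration, and -R/-F assume you want the whole dataset tree replicated onto a freshly created pool:

```shell
# Migrate everything from the old striped pool to the new one.
# "oldpool"/"newpool" and the disk names are illustrative.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool

# Once the copy is verified, retire the old pool and recycle its
# disks as another raidz1 vdev in the backup pool:
zpool destroy oldpool
zpool add backup raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```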

Benefits:

I spin the least number of drives required to hold my data 24/7, cutting down on bills.
I don't buy disks up front for space and redundancy I don't need; the price of drives goes down every month, so when I do add drives they are cheaper than if I had to buy them all now to fill a RAIDZ.
It is the fastest read/write pool structure in ZFS.
It's the cheapest to expand, and my backup pool is essentially free, built from my older drives.

I have over 20TB of media data alone that I have had online for over 8 years, 24/7, and I've never lost a single file.

Again, this works for me; it might not be what you want.

ZFS is very flexible and very powerful, but you have to engineer it. You have to look at what you're doing and why you're doing it.

I still run a backup server with 8 1.5TB Seagates that have over 55,000 hours on them. It's my second backup of the data and my backup media server; it's still rolling and has never lost data either. 8GB RAM, E8400 proc. It doesn't have to be expensive or complicated.
 

weust

I know about RAID and it not being a backup. I've been working in IT long enough, and longer as a hobbyist, to know that ;-)

Currently, with my Synology, I back up a portion of it.
My DVD and Blu-ray collection is my backup of everything remuxed.
 

dragonme

@weust

I thought of it that way for a couple of months, that my Blu-ray and DVD discs are a backup. But when I look at the hundreds of hours of work invested, I just decided hard drives are cheaper than my time, and I run multiple backups of everything. I even keep one off-site.

Remember: RAIDZ is not RAID, in subtle but meaningful ways. Oversimplified, OK, call it RAID, but it's not. It's far better.

With ZFS, don't get caught up in the hardware race; you really can run ZFS successfully on 10-year-old hardware. It just depends on your use case. Start off basic, see if you are resource constrained, and address it. If you decide to keep your online pool a basic stripe, I can almost assure you that for a basic filer you won't be disk constrained. I would not mess with dedupe. LZ4 compression is essentially free, so I highly recommend it, even on incompressible data, since it will still compress the metadata, and it has a very fast early-rejection algorithm so it won't slow anything down.
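Enabling LZ4 is a one-liner and only applies to data written after the property is set; the pool name here is illustrative:

```shell
# Turn on LZ4 for the whole pool; child datasets inherit it.
# Only data written from this point on gets compressed.
zfs set compression=lz4 tank

# Check the property took effect and see the achieved ratio:
zfs get compression,compressratio tank
```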

Good luck, have fun, and I would read the Sun/Solaris ZFS admin docs.
 

weust

I agree it's a lot of work having to do it all over again, but I don't have another location to store that amount of data.
At least, not at the moment. I might be able to place the DS415play I have now, with 6TB drives, at my parents' place.
But even then it would not be the same amount of space, unless I make it one big RAID0, meaning I'd have to upload everything again when one drive fails...
And I currently have about 9.5TB worth of remuxes alone, plus several GBytes' worth of music and other data.

I know RAIDZ isn't RAID, but comparing them is easier when talking about disks and usable space without going into too much detail.

Hardware-wise I bought a second-hand Dell T320. The reasoning is that it can hold a maximum of 192GB RAM (I will put in 96GB) and mine has a 6-core CPU.
That means I can have plenty of RAM for the VM running whatever ZFS/sharing distro I choose, and more than enough RAM for the other VMs.
It also has the double 4-bay 3.5" drive solution, and I bought a 5.25" 6-bay 2.5" cage for boot disks, etc.
The main reason for all this is that I don't have to build it (too much) and it's very quiet.
Although because of the flashed H310, which the server doesn't really detect, it ramps up the fan RPM quite a lot :-(

Not going to touch deduplication. Already decided on that. LZ4 compression I need to read up on; I haven't seen that mentioned anywhere so far...
 

dragonme

If you are serious about running VMs and ZFS at the same time, you probably don't want to virtualize in FreeNAS with bhyve yet. I am reading that it has its issues, and for just Plex or something it's probably not a big deal. But if you are thinking ESXi with ZFS backing for storage, you should take a look at the Solaris/OmniOS thread.

An ESXi ZFS all-in-one is pretty easy to set up, and running OmniOS and napp-it for ZFS has less overhead and fewer bugs than FreeNAS. FreeNAS can be done, but you are running way more overhead than necessary, since you only need FreeNAS for ZFS and not for jails/bhyve. OmniOS is more performant than FreeNAS, you can do simultaneous SMB/NFS from the same dataset, and OmniOS generally gets ZFS improvements before Linux and FreeNAS on BSD.
Again, all just opinions sprinkled with facts, but OmniOS can do things that FreeNAS can't, and FreeNAS can do some things better than OmniOS. So you have to weigh the features you need against the resources you are OK with spending on running them.


It's the journey, not the destination... hehe
 

weust

A journey it sure is :)

I will be running a hypervisor on hardware and the ZFS setup in a VM, with a passthrough HBA or even just the disks (Hyper-V).
All things I want to try out and get a feel for before the ultimate decision.

So far I have been toying with NAS4Free because of the GUI. It's fairly easy to understand, whereas I found napp-it more difficult.
But I haven't used napp-it enough, so I really need to dive deeper into it.

I have even, briefly, considered FreeBSD and setting up ZFS myself. But then comes the hell called Samba, and I just want to click in a web GUI. Plain and simple; after I set things up I don't want to touch it too much.
CLI-only gives me less of an overview.

The journey will be long. At least several months more. I don't have much free time on mid-week evenings, and I don't want to spend every weekend on it either. There are also games to be played :)