TrueNAS home NAS setup zpool recommendation


Techie241

New Member
Nov 25, 2021
Hello! I am trying to figure out whether I understand ZFS correctly and how I should go about setting up TrueNAS.

My current hardware is a Dell PowerEdge R720XD with 2 SATA boot SSDs in one mirror on the rear backplane, plus 24 drive bays in the front that I intend to populate with 1TB SAS drives. I am going to start with 16 drives and add more in groups of 8 as needed, first using the remaining 8 bays and then MD1220s on an H810 flashed to IT mode. Because of this, I was planning to set up RAIDZ2 vdevs of 8 drives each, starting with 2 vdevs and adding another vdev to the pool with each batch of 8 drives.

My understanding is that this gives me a fault tolerance of 2 drives per vdev, and that I would only lose 2 drives' worth of space to parity per vdev. So my initial 16TB of raw storage would tolerate 4 drive failures (assuming the failures land in different vdevs) and give a total of 12TB (minus overhead), and each subsequent addition would extend my fault tolerance by 2 drives and increase the total size by 6TB.

I'm given to understand that making the vdevs 8 wide will reduce IOPS compared to, say, 4x 4-drive RAIDZ1 vdevs. But since this will be used mostly for storage and occasional video editing, which I normally do off a single hard drive anyway, RAIDZ1 didn't seem worth the hit to fault tolerance: I would lose the same amount of space to parity, but each vdev could only survive 1 failure. Do I understand this okay? Is this a good way to set it up? Should I go with 4x4 RAIDZ1 instead? Is the increased IOPS even worth considering on a Gigabit connection anyway?

The server itself will be connected at 10Gig so my wife and I can both hit it at the same time, but both our computers are on 1Gig links, so the most we can hit it with is 2Gig combined, and even that is rare. To be honest, the only reason it's going on a 10Gig link is that both the R720 and my switch have an SFP+ port... so basically "because it's cool". We have absolutely no need for it, and I don't think it would be physically possible for us to saturate that link, let alone for any form of RAID to read or write fast enough to saturate it, without me shelling out for SSDs, and that would definitely defeat the purpose of buying second-hand server hardware.
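For concreteness, here is roughly what that layout looks like at the command line. This is just a sketch: the pool name "tank" and the daN device names are placeholders, and on TrueNAS you would build this through the web UI (which also references disks by gptid rather than daN names):

```
# Initial pool: two 8-wide RAIDZ2 vdevs (16 drives total).
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Each later batch of 8 drives becomes one more identical vdev,
# adding roughly 6TB usable and 2 more drives of fault tolerance.
zpool add tank raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```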
 

Rand__

Well-Known Member
Mar 6, 2014
So this wall of text is hard to read ... and I'm not sure what the actual questions hidden in it are.
Maybe you could reformat it a bit? And ideally ask all the questions after presenting the facts, not in between? :)
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
8 disks per vdev is the way to go if you don't care about random write IOPS.
As for ZFS capacity calculations, this is the same resource I used when planning the TrueNAS Enterprise system my company bought, and the numbers there matched the calculations the folks at iXsystems did: ZFS Capacity Calculator - WintelGuy.com
So 16x 1TB drives in 2 vdevs of 8 drives each in RAIDZ2 should get you just under 10TiB of usable space.
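The back-of-envelope behind that, for anyone following along (drives are sold in decimal TB, while usable space is reported in binary TiB):

```
# usable_raw = vdevs * (width - parity) * drive_size
#            = 2 * (8 - 2) * 1TB = 12TB raw usable
# Converting decimal TB to binary TiB:
echo "scale=2; 2 * (8 - 2) * 1 * 10^12 / 2^40" | bc
# -> 10.91 TiB before ZFS overhead; raidz allocation padding,
# metadata, and slop space shave that down further, which is why
# the calculator lands below this figure.
```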

For ZFS performance, use this link:
 

Techie241

New Member
Nov 25, 2021
Rand__ said:
So this wall of text is hard to read ... and I'm not sure what the actual questions hidden in it are.
Maybe you could reformat it a bit? And ideally ask all the questions after presenting the facts, not in between? :)
So basically, I am putting 16x 1TB drives into a Dell R720 running TrueNAS. I care more about redundancy and capacity than IOPS, since the most intense thing I will be doing is a single editor making videos. I do plan on extending the pool later, which I understand is best done with identical vdevs.

My question is: would it be better to have 4 vdevs of 4 drives each running RAIDZ1, or 2 vdevs of 8 drives each running RAIDZ2? Right now I am leaning towards the RAIDZ2 setup, because I don't want to get screwed if two drives fail in the same vdev, but I don't know how likely that is.
 

Rand__

Well-Known Member
Mar 6, 2014
I'd go with Z2 too; you know Murphy is a tricky bastard...

It sure calms the mind if you can send off a drive for RMA and still have one spare before all things go to hell.
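To put a rough number on it (a deliberately toy model, assuming one drive has already died and a second fails at random among the 15 survivors before the resilver finishes):

```
# 4x 4-wide RAIDZ1: the pool is lost if the second failure hits one
# of the 3 remaining drives in the already-degraded vdev.
echo "scale=2; 3 / 15" | bc    # -> .20, i.e. a 20% chance of pool loss

# 2x 8-wide RAIDZ2: a second failure anywhere is still absorbed;
# it takes a third failure in the same vdev to lose data.
```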
 

Techie241

New Member
Nov 25, 2021
Rand__ said:
I'd go with Z2 too; you know Murphy is a tricky bastard...

It sure calms the mind if you can send off a drive for RMA and still have one spare before all things go to hell.
I am probably going to keep a hot spare and a cold spare on hand, because I REALLY don't want to lose data lol. RAIDZ2 array it is. Thank you, guys!
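For anyone finding this thread later, attaching the hot spare is a one-liner (again, the pool name "tank" and the device names are placeholders); TrueNAS activates spares automatically when a drive faults:

```
zpool add tank spare da24

# Once the faulted disk has been physically swapped out, resilver
# onto the replacement; the spare then drops back to standby.
zpool replace tank da3 da25
```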
 

cap

New Member
Sep 20, 2021
I have a 10+ year-old server running a RAIDZ3 array with 11 active drives (i.e., 8 data + 3 parity). I run it in a dusty home garage (maybe relevant). At one point I lost 3 drives before I could get replacements installed and resilvered. It really scared me. Now I run with 3 hot spares on that zpool. I have the slots, so why not?

The speculative lesson is that some causes of drive failure may affect all the drives at once, pushing the chances of simultaneous failures well above random.

In my new (to me) server I'm going with mirrored vdevs, eventually triple mirrors as finances allow me to move in that direction.
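One nice property of that plan is that a 2-way mirror can be widened in place later; no rebuild needed. A minimal sketch, assuming a pool named "tank" and placeholder device names:

```
# Start with striped 2-way mirrors...
zpool create tank mirror da0 da1 mirror da2 da3

# ...then attach a third disk to each mirror vdev as budget allows.
zpool attach tank da0 da4   # mirror-0 becomes a 3-way mirror
zpool attach tank da2 da5   # mirror-1 likewise
```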
 

Techie241

New Member
Nov 25, 2021
cap said:
I have a 10+ year-old server running a RAIDZ3 array with 11 active drives (i.e., 8 data + 3 parity). I run it in a dusty home garage (maybe relevant). At one point I lost 3 drives before I could get replacements installed and resilvered. It really scared me. Now I run with 3 hot spares on that zpool. I have the slots, so why not?

The speculative lesson is that some causes of drive failure may affect all the drives at once, pushing the chances of simultaneous failures well above random.

In my new (to me) server I'm going with mirrored vdevs, eventually triple mirrors as finances allow me to move in that direction.
Honestly, if I end up having three drives fail in the same vdev all at once... that's what Backblaze is for lol.