ZFS Best Practices?


Wronglebowski

New Member
Jun 18, 2018
Hey all, just built out my second TrueNAS pool and I'm looking for some tips. The pool is just general bulk storage for Plex, documents, photos, and a few game servers.

Pool - Main

mirror-1: 6TB + 6TB
mirror-2: 6TB + 6TB
mirror-3: 8TB + 8TB
mirror-4: 8TB + 8TB
mirror-5: 8TB + 8TB
mirror-6: 12TB + 12TB
mirror-7: 12TB + 12TB

A few things I'm curious about:

  1. Is this the right way to do mirrors? I want to be able to swap out the 6TBs for larger drives down the road. If I put all 4 of them in one mirror it seems it would be more difficult to remove them. Is this accurate?
  2. Even if it's not necessary for my use case, I do have 4 SATA ports free for 2.5 inch drives. I wanted to set up mirrors for Log and Cache as well. Any recommendations on the best way to do this?
  3. Dedupe, when is this ever useful? I'm not sure how it would actually function.
  4. Any recommendations on protection from controller failure? My last Pool was wrecked by one controller suddenly throwing I/O errors in a mirror I had. The drives were fine but it added severe metadata corruption to the pool and it was lost. Would snapshots protect against this?
  5. If snapshots do help here, are they full snapshots and will take up the entirety of the space used? Or is it just a diff?
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
Wronglebowski said:
"Hey all, just built out my second TrueNAS Pool and I'm looking for some tips. [...]"
1. It's a valid way, but "suboptimal":
- Mirrors are fastest regarding iops, but you probably want more usable capacity than 50%.
- Your layout is massively unbalanced, which means that after some time the small disks fill up and new writes land only on the larger disks (reduced performance). Once the smaller disks are nearly full, performance drops further. Maybe two pools (small, large) would be a better idea, e.g. one built from mirrors for VMs and one Z2 for capacity.
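On the original question of swapping the 6TB disks later: ZFS can either replace disks inside a mirror in place, or evacuate and remove a whole top-level mirror vdev (device removal works on mirror-only pools, OpenZFS 0.8+). A sketch, with placeholder device names:

```
# Replace each 6TB disk in mirror-1 with a larger one, one at a time;
# the vdev grows once both members are replaced and autoexpand is on.
zpool set autoexpand=on Main
zpool replace Main /dev/ada0 /dev/ada8   # wait for the resilver to finish
zpool replace Main /dev/ada1 /dev/ada9

# Or evacuate and remove an entire mirror vdev:
zpool remove Main mirror-1
zpool status Main                        # shows the evacuation progress
```

So no, keeping the 6TB disks as two 2-way mirrors rather than one 4-way mirror does not make removal harder; either vdev shape can be removed, but 2-way mirrors give you more usable capacity.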

2. As long as you have enough RAM, an L2ARC is fairly useless for a media server, as it does not cache large sequential files, only small I/O. For a media server you do not want sync writes, so you do not need an SLOG either (unless you also store VMs on the pool). An L2ARC may help a little with metadata when you enable read-ahead. The newest OS releases (FreeBSD, Linux, or OmniOS 151038 LTS) include persistent L2ARC, which may improve performance after a reboot.
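If you do add them anyway: an SLOG should be mirrored (losing it during a crash can lose sync writes), while cache devices are simply striped and a cache failure is harmless, so mirroring the L2ARC buys you nothing. A sketch with placeholder device names:

```
# Mirrored SLOG (only matters for sync writes, e.g. NFS or VM storage)
zpool add Main log mirror /dev/ada10 /dev/ada11

# L2ARC cache devices cannot be mirrored; ZFS stripes across them
zpool add Main cache /dev/ada12 /dev/ada13
```

Note that `zpool add` is permanent intent: log and cache devices can be removed again with `zpool remove`, unlike adding a data vdev by mistake.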

3. Dedup makes sense if the dedup ratio is, say, >10, as it comes at the price of a huge RAM need: count on at least 1 GB of RAM per TB of dedup data, in the worst case up to 5 GB per TB. If you really want realtime dedup, think about a special vdev mirror for the dedup table; that should then be a very good NVMe or 12G SAS disk. As the simpler option, just enable LZ4 compression.
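You can estimate the achievable ratio on existing data before committing, since dedup only applies to blocks written after it is enabled. A sketch (`Main/vms` is a hypothetical dataset):

```
# Simulate dedup on the pool's existing data and print a block histogram
# plus the estimated dedup ratio (slow, and itself RAM-hungry):
zdb -S Main

# If the ratio is poor, prefer compression (per-dataset, new writes only):
zfs set compression=lz4 Main

# Dedup, if the simulation justifies it, is also set per dataset:
zfs set dedup=on Main/vms
```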

4. Probably you were working without ZFS before. ZFS writes metadata twice and detects problems very early via checksums, usually before bad hardware turns into a disaster. Snapshots do not help against this; backups do.

5. ZFS snaps freeze the state of the data at snapshot time via Copy on Write. A snap's size is always just the amount of ZFS datablocks changed since it was taken (size = 0 without changes), so it is a diff, not a full copy.
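The diff behaviour is visible directly in `zfs list`; a sketch against a hypothetical dataset `Main/docs`:

```
zfs snapshot Main/docs@before
zfs list -t snapshot -o name,used,refer Main/docs
# USED is ~0 right after creation; it grows only as blocks in Main/docs
# are overwritten or deleted, because the snapshot pins the old copies.

# Browse the snapshot read-only, or roll the dataset back to it:
ls /mnt/Main/docs/.zfs/snapshot/before/
zfs rollback Main/docs@before
```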
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
Gea will surely disagree, but I think you may want to consider a hybrid approach. Keep your 6TB drives in ZFS as two mirror vdevs of two disks each (i.e. RAID 10 across the 4 drives); that would provide reliable and fairly fast storage for documents, photos, and VMs.
The remaining 8TB and 12TB drives (i.e. the Plex stuff) may be better suited to a combination of MergerFS and SnapRAID. That would maximize your storage utilization while still protecting your data (NON-REALTIME) against bit rot. Data on the disks is not striped, so there's no worry about losing everything to one disk or controller; each single disk's data stays fully accessible. You can also keep adding disks ONE at a time and mix different disk sizes.
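A minimal SnapRAID layout for that tier might look like this (all paths and disk labels below are assumptions, not from the thread):

```
# /etc/snapraid.conf -- one parity disk plus mixed-size data disks.
# The parity disk must be at least as large as the biggest data disk.
parity /mnt/parity1/snapraid.parity    # on a 12TB drive
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1                     # 8TB
data d2 /mnt/disk2                     # 8TB
data d3 /mnt/disk3                     # 12TB
```

Parity is then computed on a schedule with `snapraid sync`, and `snapraid scrub` does the periodic bit-rot checks; that is the non-realtime part.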


gea

Well-Known Member
Gea will surely disagree,..
Why should I?
This is just another two-pool approach. SnapRAID/Unraid is fine for uncritical media data where you do not need the realtime data protection of ZFS, the crash resistance on writes with snaps, or the increased performance of realtime RAID, but mainly want power saving and the full capacity of all disks (realtime RAID reduces capacity to the smallest disk in the raid).
 