ZFS storage space maximization in large zpools


sfbayzfs

Active Member
May 6, 2015
SF Bay area
Greetings,

Although I am starting to run FreeNAS as a secondary NAS, my main home NAS for years has been ZoL on CentOS 6. There seems to be a lot more ZFS discussion here than in the Linux section, though, and this is a platform-independent ZFS question to which I have not found any conclusive answer in my searching...

I have filled a large single zpool, which I just use as a big network share for data archival and retrieval, to within 18GB of being "full" according to df -h, yet zpool list shows there is over 1.1TB of unallocated space. (It is 20 × 4TB drives in a single RAID-Z3 vdev, if you care.)
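
For reference, this is how I have been comparing the two views; the pool name "tank" below is just a stand-in for mine:

# dataset-level view of usable space, roughly what df -h reports
zfs list tank
df -h /tank
# pool-level view, which is where the extra unallocated space shows up
zpool list tank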

Is there an equivalent in ZFS to the ext* tuning option set with tune2fs -m? (That option lets you change [usually reduce] the "reserved" block percentage from the now-ridiculous default of 5% to another integer percentage of total disk space, so non-root users can create more files; on ext* filesystems, and probably others, non-root users hit a disk-full error before root does.) I have read a lot of ZFS docs and looked at reservations and refreservations, but those seem to be about preallocating parts of the unallocated pool space to particular filesystems and/or snapshots in the same pool, rather than changing the ratio of reserved blocks or making that hidden >1TB usable.
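
To make the comparison concrete, here is the ext* knob I mean next to the closest ZFS properties I have found so far (the device and dataset names are just examples):

# ext*: drop the root-reserved blocks from 5% to 1% of the filesystem
tune2fs -m 1 /dev/sdb1
# ZFS: these carve guaranteed space out of the pool *for* a dataset
# (including its snapshots, in the case of reservation), rather than
# shrinking any pool-wide reserve
zfs set reservation=500G tank/archive
zfs set refreservation=500G tank/archive
zfs get reservation,refreservation tank/archive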

I have not tried to overfill the ZFS filesystem, since I am used to bad things happening when disks actually fill up, but would ZFS let me keep adding files until that reserved >1TB is closer to gone? I got as low as 18GB free and more space did not magically appear, and if I try to copy a file larger than the reported 18GB free over the network, I get some variant of a "not enough space on device" error, the exact wording depending on which tool I am copying with.

Likewise, is there a way to increase the record (block) size beyond 128K? Most of the files on this pool are very large, so larger records with less metadata overhead would be fine if that is tunable, although from what I have read it looks like it is not.

Thanks in advance!
 

gea

Well-Known Member
Dec 31, 2010
DE
With ZFS I would expect the following:
- if the pool is nearly full, write performance goes down dramatically
- if the pool is full, you get a write error
- if the ZFS system pool (Solaris) is full, you may get boot problems

Newer releases of OmniOS (a free Solaris fork) have improved handling of nearly-full situations.
I cannot say what happens on BSD when a ZFS system pool is full.

ZFS reserves a small percentage of the pool capacity to handle full-pool situations.
You cannot reduce it, as this reserve is essential for stability.

If you create a filesystem, you can increase the recordsize (up to 1M, depending on the OS).
This helps performance with large files, but can reduce usable capacity with many small files.
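
A minimal example, with placeholder pool/filesystem names (recordsize only affects newly written files; values above 128K need the large_blocks pool feature on newer Open-ZFS releases):

# set a 1M recordsize on a filesystem that holds mostly large files
zfs set recordsize=1M tank/media
zfs get recordsize tank/media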

Best practice:
- Use LZ4 compression, as it compresses data with almost no performance penalty (example commands below)
- Add disks/vdevs when the fill rate goes above 60%
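
Example commands (pool/filesystem names are placeholders):

# enable LZ4 on a filesystem and check how well the data compresses
zfs set compression=lz4 tank/archive
zfs get compression,compressratio tank/archive
# watch the CAP column and add vdevs before it goes far beyond 60%
zpool list tank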