Greetings,
Although I am starting to run FreeNAS as a secondary NAS, my main home NAS for years has been ZFS on Linux (ZoL) on CentOS 6. There seems to be a lot more ZFS discussion here than in the Linux section, though, and this is a platform-independent ZFS question to which I have not found any conclusive answers in my searching...
I have a large single zpool that I use as a network share for data archival and retrieval, and I have filled it to within 18 GB of "full" according to df -h, yet zpool list shows over 1.1 TB of unallocated space. (It is 20 4 TB drives in a raidz3 pool, if that matters.)
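For reference, these are the views I am comparing (the pool name "tank" is a placeholder; substitute your own pool/dataset):

```shell
# Filesystem-level view: what df reports for the mounted dataset.
df -h /tank

# Pool-level view: SIZE/ALLOC/FREE here count raw space across all
# drives, including raidz3 parity, so FREE can look much larger than
# the usable space df shows.
zpool list tank

# Dataset-level view: USED/AVAIL here are net of parity and are
# usually much closer to what df reports.
zfs list tank
```

If those are not the right commands to be comparing, corrections welcome.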
Is there an equivalent in ZFS to the ext* (and I'm sure others') tuning option set with tune2fs -m? That option lets you change (usually reduce) the reserved block count from the now-ridiculous default of 5% to another integer percentage of total filesystem space, freeing more space for non-root users to create files; non-root users get a disk-full error before root does on ext* filesystems, and probably others. I have read a lot of ZFS documentation and looked at the reservation and refreservation properties, but those seem to be about preallocating parts of unallocated pool space to particular datasets and/or snapshots in the same pool, not about changing a reserved-block ratio or making that hidden >1 TB usable.
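To be concrete about the ext* knob I mean, versus the closest ZFS properties I have found so far (the device and dataset names below are illustrative, not mine):

```shell
# ext* reserved blocks: drop the root-reserved percentage from the
# default 5% to 1% (device name is illustrative).
tune2fs -m 1 /dev/sda1

# The nearest ZFS properties I have found. These appear to carve pool
# space out FOR a dataset rather than hold back a reserved slice from
# ordinary users (dataset name is illustrative):
zfs get reservation,refreservation tank/archive
# e.g. zfs set reservation=100G tank/archive
```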
I have not tried to overfill the ZFS filesystem, since I am used to bad things happening when disks actually fill up, but would ZFS let me keep adding files until that reserved >1 TB is closer to gone? I got as low as 18 GB free and no extra space magically appeared, and if I try to copy a file larger than the reported 18 GB free to it over the network, I get some variant of a "not enough space on device" error, the specifics of which depend on what tool I am copying with.
Likewise, is there a way to increase the block size (recordsize) beyond 128K? Most of the files on this pool are very large, so larger blocks with less metadata overhead would be fine if that is tunable, although from what I have read it looks like it is not.
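For what it's worth, this is how I have been checking it (dataset name illustrative). I gather that newer OpenZFS releases with the large_blocks pool feature enabled allow a recordsize up to 1M, but I have not confirmed whether that applies to the versions I am running:

```shell
# Check the current maximum record size for the dataset
# (dataset name "tank/archive" is illustrative).
zfs get recordsize tank/archive

# On ZFS versions with the large_blocks pool feature enabled, this
# reportedly allows 1M records; note it only affects newly written
# files, not data already on disk.
zfs set recordsize=1M tank/archive
```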
Thanks in advance!