The case against ZFS


Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
That post is insightfully devoid.

What is handling RAID 5 then? Where are you getting snapshots? Storage tiering? How are you managing backups?

One of the nice bits about ZFS is that you get storage tiering, snapshots, RAID, and more in a single solution. Data integrity is a good feature as well, but there is a lot more going on.

RAID rebuilds on even a low-use SMB/home server are going to be high-stress operations, so you have a higher chance of failure. The point about other system components failing actually becomes more relevant as you move to triple parity. With RAID-Z3, for example, you are more likely to see the rest of the system implode than to see four drives in a 28-drive array fail at once.

I do wish ZFS had OCE. I also wish it had a different license. With those two changes we would not even be having a ZFS-versus-anything-else conversation. OCE is particularly important in SMB/home scenarios since mixed drive types are very common. ZFS is also complex to tune, so a lot of folks get terrible performance and assume it is a ZFS problem.
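To illustrate the tuning point, here is a minimal sketch of the knobs that most often bite people; the pool and dataset names are made up:

Code:
# ashift must match the drive's physical sector size at pool creation
# (4K sectors -> ashift=12); it cannot be changed afterwards
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf

# recordsize should match the workload, e.g. small records for databases
zfs set recordsize=16K tank/db

# turning off atime avoids issuing a write for every read
zfs set atime=off tank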

The advice to back up is sound. Backup with snapshots is even better!
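For anyone new to this, a minimal sketch of the snapshot-plus-replication idea; the hostname and dataset names are hypothetical:

Code:
# take a point-in-time snapshot
zfs snapshot tank/data@monday

# replicate it to another machine for a real backup
zfs send tank/data@monday | ssh backuphost zfs receive backup/data

# the next night, send only the delta between two snapshots
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data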
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,093
652
113
Stavanger, Norway
olavgg.com
OCE? You mean Block pointer rewrite?

I think the article is just a rant. ZFS is great, and a major improvement. It protects your data better than most other filesystems, and not only because of checksums; you also have transactions and a lot more.
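To make the checksum point concrete, this is how you actually exercise it (pool name made up):

Code:
# read every block in the pool and verify it against its checksum
zpool scrub tank

# report any checksum errors, including which files were affected
zpool status -v tank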
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Well, let's be honest, I have only ever seen flipped bits when I caused them deliberately :)
I run md5deep over my files, which is also useful for detecting any unexpected activity.
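For anyone curious, a minimal sketch of that md5deep routine; the paths are made up, and it is worth double-checking the matching flags on your version:

Code:
# build a baseline of hashes for everything under /data
md5deep -r /data > baseline.md5

# later: list any file whose hash is not in the baseline (changed or new)
md5deep -r -x baseline.md5 /data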

Backups with a remote copy and a form of snapshots or media rotation are the best policy for home use.

ZFS is nice, as @Patrick mentions, for all the other features. I actually don't use it except for Solaris file systems.
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,093
652
113
Stavanger, Norway
olavgg.com
For me the biggest use case is compression. No other filesystem that I know of can transparently compress with LZ4. You get better performance and a lot more storage. It is not uncommon for a single SATA SSD to push an effective 2000 MB/s on compressible data.
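A minimal sketch of turning that on and checking what it buys you (dataset name made up):

Code:
# enable transparent LZ4 compression for new writes on a dataset
zfs set compression=lz4 tank/data

# see how much space compression is actually saving
zfs get compressratio tank/data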
 
  • Like
Reactions: Evan

niekbergboer

Active Member
Jun 21, 2016
155
61
28
46
Switzerland
"Hard drives already do this."

Under normal operating conditions, yes. When the drive and/or its controller starts croaking, not so much. I've personally seen that happen on something as small as a 4-drive single-vdev pool.

Of course you need a backup, but even for local reliability, I say: belt and suspenders, especially if they come for free.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
YES, but when will we FINALLY get zfs send/recv suspend/resume support??? :-D Heard this was 'in the works'... Can't wait!
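For reference, the resumable send/receive that OpenZFS eventually shipped works roughly like this sketch; dataset names and the token value are placeholders:

Code:
# receive with -s so an interrupted stream leaves a resume token behind
zfs send tank/data@snap | ssh backuphost zfs receive -s backup/data

# after an interruption, read the token on the receiving side
zfs get -H -o value receive_resume_token backup/data

# resume the send from where it stopped using that token
zfs send -t <token> | ssh backuphost zfs receive -s backup/data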
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Also, with ZFS one can do a sort of over-provisioning, much easier than with LVM (see the sketch below).
On bitrot and cost the author might be right, but he is really not taking ease of management and portability into account (or he didn't get that part of ZFS).
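A minimal sketch of the over-provisioning point, with made-up names; a sparse zvol only takes pool space as blocks are actually written:

Code:
# create a sparse (thinly provisioned) 1T volume
zfs create -s -V 1T tank/vm-disk0

# cap a dataset without pre-allocating anything
zfs set quota=500G tank/projects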
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
RAID rebuilds on even a low-use SMB/home server are going to be high-stress operations, so you have a higher chance of failure.
This was already addressed by T10 (SAS) and SATA-IO (SATA) with "Rebuild Assist" in 2014. Sadly, no big HBA/RAID controller vendor supports that feature (they would rather sell their proprietary RoC and related IP). From the HDD side, only the WD Red 8 TB (256 MB cache version) and 10 TB drives support it.
 

Monoman

Active Member
Oct 16, 2013
410
160
43
This was already addressed by T10 (SAS) and SATA-IO (SATA) with "Rebuild Assist" in 2014. Sadly, no big HBA/RAID controller vendor supports that feature (they would rather sell their proprietary RoC and related IP). From the HDD side, only the WD Red 8 TB (256 MB cache version) and 10 TB drives support it.
Would you happen to have more information on this, and on which drives/controllers support it?
 

JDM

Member
Jun 25, 2016
44
22
8
33
It may seem really trivial, but the majority of the JBOD management enhancements should have been there years ago. Who really wants to label 96 sleds with each drive's serial number, then grab a magnifying glass and play match-the-serial-number by hand when a drive tanks?
While I definitely agree this is a little late and a nice feature to have, it's been possible to DIY for a long time. Many JBODs are actually very helpful in reporting slots in /sys/class/enclosure, especially the Supermicro and EchoStreams boxes in my experience. We've had scripts that build appropriate vdev_id.conf files on startup to give pretty names in the zpool status output, as well as turn on the fault lights for failures; a sketch of the idea is below.
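To make that concrete, a rough sketch of the DIY approach; the sysfs slot layout and naming vary by JBOD, so treat the paths as hypothetical, and a real script would map the kernel name back to a stable /dev/disk/by-id/ link:

Code:
#!/bin/bash
# walk the SES enclosure slots and emit vdev_id.conf alias lines
for slot in /sys/class/enclosure/*/*/; do
    [ -d "${slot}device/block" ] || continue   # skip empty bays
    dev=$(ls "${slot}device/block")            # kernel name, e.g. sdq
    bay=$(basename "$slot" | tr -d ' ')        # slot label, e.g. Slot07
    echo "alias bay-${bay} /dev/${dev}"
done > /etc/zfs/vdev_id.conf

With that in place, zpool status shows the bay aliases instead of raw device names for pools created or imported using them.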

This is more or less the same way ZFS builds these now itself. The funny thing was that when RHEL/CentOS 7.3 dropped, there was a kernel bug that didn't properly populate /sys/class/enclosure; it was patched up fairly quickly. Hopefully that doesn't happen again, as I believe ZFS now needs it too.