Unless you're homelabbing with it. That being said, once you run out of Oracle Support, or you have a somewhat unsupported system ... good luck.
Again, key phrase... "block device". ZFS is a file system and if it cannot... There's no secret sauce with Oracle.

Well, I personally ran into a bug where ZFS on top of LUKS would start to throw a barrage of zio pool errors and keep trying again and again and again and again to write the same damn 2 faulty blocks/sectors, without ever timing out or stopping:

CKSUM and WRITE errors when receiving snapshots or scrubbing (2.2.4, LUKS) · Issue #15646 · openzfs/zfs (github.com/openzfs/zfs/issues/15646)

Understood. Like I said earlier, bugs can happen with anything, and in your case the specific combination was a LUKS-encrypted device and certain edge cases in ZFS. What I'm saying is that the block device, even in this case, is not the issue; it's the layers above it.
The workaround for a while has been to set up a loop device and feed that to ZFS instead, for which I had to write the following, since I have it on my root pool:
GitHub - luckylinux/workaround-zfs-on-luks-bug (github.com/luckylinux/workaround-zfs-on-luks-bug)
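The gist of the workaround, as a minimal sketch (device names, mapping name, and pool name here are just examples, not the actual script from the repo, which additionally has to do all of this from the initramfs because it's a root pool):

```
# Open the LUKS container as usual (device and mapping name are examples)
cryptsetup open /dev/sda2 luks-pool

# Attach a loop device on top of the LUKS mapping and print its name
losetup --find --show /dev/mapper/luks-pool
# -> prints e.g. /dev/loop0

# Import the pool through the loop device instead of the dm-crypt
# mapping, so ZFS never submits I/O to the LUKS device directly
zpool import -d /dev/loop0 mypool
```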
In theory at least, the problem has since been solved, and ZFS 2.3.x switched to the new disk layout by default, hopefully avoiding this.
That's just to show that "a block device is a block device" is not always true.
Sure, ZFS bugs happen. LUKS bugs happen. Hardware RAID bugs happen. And sure, you can argue that in the grand scheme of things I had a "small" issue. Those were a couple of hellish weeks across a few systems for me, though.
Which is relevant to the whole "don't bother with special vdev, get more RAM or rebuild your pool correctly" argument in what way?

The Oracle ZFS code base diverged a long time ago, and they run their own hardware stack (from the JBOD, to the JBOD controller, to the HBA/RAID card; it's all running Oracle firmware).
They themselves know exactly what they put where, because they control the whole ecosystem and can code the controllers to not **** it up.
Just shows how much diversity and/or confusion there is in the ZFS world. Nothing simple generates so much debate about how someone is using ZFS wrong and/or doesn't understand it. But simple solutions don't work for complex problems, and that's where the layers of complexity come from.
Awww c'mon, that ST-4144R is still whining and clunking away! ;-)

Some of us even older folks recall the ST-277R in the Amstrad PC computers in the late 1980s. There was a big legal battle over it. Amstrad ended up exiting the PC business, in large part due to reputational damage from the failed Seagate drives.

Haha, I had two in a row of the ST-277R drives fail. Twenty years elapsed before I purchased another Seagate product.

Wow, that takes me back. I used to RLL the Miniscribe 6053s, as they were more reliable than the Seagate RLL drives.

Seagate bashing seems to be a pastime in the storage and homelabbing communities, but how easily everyone forgets how many Seagate drives are supplied by Dell and HP with their brand-new servers. Seagate's top-end enterprise drives have always been at the top with all the other brands, and that's also been my experience, even buying Seagate's 2nd-generation Cheetah drives and SCSI drives back before there were any true enterprise drives with consumer interfaces like there are with SATA.