26TB Seagate HDD for 250 USD


Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,778
1,862
113
50
HSV and SFO
That being said, once you run out of Oracle Support or you have a somewhat unsupported system ... good luck o_O.
Unless you're homelabbing with it. :p Because last time I checked, most of our stuff has no support, if it isn't outright being used in ways it was never designed for, like the $20 Dolby device. :)
 

luckylinux

Well-Known Member
Mar 18, 2012
1,430
436
83
Again, key phrase..."block device". ZFS is a file system and if it cannot... :) There's no secret sauce with Oracle.
Well, I personally ran into a bug where ZFS on top of LUKS would start to throw a barrage of zio pool errors and keep retrying the same damn 2 faulty blocks/sectors over and over, without ever timing out or stopping.


The workaround for a while has been to set up a loop device and feed that to ZFS instead, for which I had to write the following, since I have it on my root pool:
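(The actual helper isn't reproduced here; a minimal sketch of the same idea in Python, with purely hypothetical device and pool names, might look something like this:)

```python
#!/usr/bin/env python3
# Minimal sketch of the workaround described above (NOT the original script):
# put a loop device on top of the opened LUKS mapping and import the pool
# from it, so ZFS issues its I/O against the loop device instead of the
# dm-crypt device directly. Device and pool names are hypothetical.
import subprocess

LUKS_DEV = "/dev/mapper/luks-rpool"  # hypothetical opened LUKS mapping
POOL = "rpool"                       # hypothetical pool name


def run(*args: str) -> str:
    """Run a command, fail loudly on error, return its trimmed stdout."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout.strip()


# Attach a loop device backed by the LUKS mapping; losetup prints its name, e.g. /dev/loop0.
loop_dev = run("losetup", "--find", "--show", LUKS_DEV)

# Import the pool, scanning only the loop device so that becomes the vdev path ZFS uses.
run("zpool", "import", "-d", loop_dev, POOL)
print(f"{POOL} imported via {loop_dev}")
```

The point is simply that ZFS then sees the loop device as its vdev rather than the dm-crypt mapping directly.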

In theory at least, the problem has been solved, and by default ZFS 2.3.x switched to that new disk layout to hopefully avoid this.

That's just to show that "a block device is a block device" is not always true.

Sure, ZFS bugs happen. LUKS bugs happen. Hardware RAID bugs happen. And sure, you can argue that in the grand scheme of things I had a "small" issue. Those were a couple of hellish weeks across a few systems for me though o_O.
 
  • Like
Reactions: Samir

kapone

Well-Known Member
May 23, 2015
1,751
1,134
113
Well, I personally ran into a bug where ZFS on top of LUKS would start to throw a barrage of zio pool errors and keep retrying the same damn 2 faulty blocks/sectors over and over, without ever timing out or stopping.


The workaround for a while has been to set up a loop device and feed that to ZFS instead, for which I had to write the following, since I have it on my root pool:

In theory at least, the problem has been solved, and by default ZFS 2.3.x switched to that new disk layout to hopefully avoid this.

That's just to show that "a block device is a block device" is not always true.

Sure, ZFS bugs happen. LUKS bugs happen. Hardware RAID bugs happen. And sure, you can argue that in the grand scheme of things I had a "small" issue. Those were a couple of hellish weeks across a few systems for me though o_O.
Understood. Like I said earlier, bugs can happen with anything, and in your case the specific combination was a LUKS-encrypted device and certain edge cases in ZFS. What I'm saying is that even in this case the block device itself is not the issue; it's the layers above it.
 

alaricljs

Active Member
Jun 16, 2023
268
118
43
The Oracle ZFS code base diverged a long time ago, and they have their own controllers (from the JBOD, to the JBOD controller, to the HBA/RAID card; it's all running Oracle firmware).
They themselves know exactly what they put where, because they control the whole ecosystem and can code the controllers to not **** it up.
Which is relevant to the whole "don't bother with special vdev, get more RAM or rebuild your pool correctly" argument in what way?
 
  • Like
Reactions: Samir

pimposh

hardware pimp
Nov 19, 2022
432
263
63
Ahem... it is not that difficult at all.

Simultaneous accesses to the same files benefit from a large ARC (as long as the individual files fit into the ARC, and the quantity of transactions does not make it evict too quickly).

Multiple users accessing a pool backed by a special vdev benefit from much lower latency and reduced iowait.
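For what it's worth, one rough way to see which side of that split you are on is to watch the ARC hit/miss counters; a minimal sketch, assuming OpenZFS on Linux where /proc/spl/kstat/zfs/arcstats is exposed:

```python
#!/usr/bin/env python3
# Rough sketch: print the ARC size and hit ratio from the OpenZFS-on-Linux
# kstat file, as one data point in the "more ARC vs. special vdev" question.
# Assumes /proc/spl/kstat/zfs/arcstats exists (ZFS kernel module loaded).

def arcstats(path: str = "/proc/spl/kstat/zfs/arcstats") -> dict:
    """Parse the kstat file into {counter_name: value}."""
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:   # the first two lines are kstat headers
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats


s = arcstats()
hits, misses = s["hits"], s["misses"]
total = (hits + misses) or 1             # avoid division by zero right after boot
print(f"ARC size:      {s['size'] / 2**30:.1f} GiB")
print(f"ARC hit ratio: {100 * hits / total:.1f}% over {total} lookups")
```

A consistently high hit ratio points at the repeated-read workload where more RAM pays off; lots of misses on metadata-heavy, many-user access is where the special vdev argument comes in.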

Why are these two confused with each other? An F-150 ain't a Corvette. Yes, they are both used to move your ass from point A to point B.

Can you have both in the garage? Yes. Does it make sense? Depending on the use case it might or might not.
With a budget beyond your needs it's always fun to have both, even if one is kept unused for most of the year…

CoW or not CoW, running ZFS on top of a HW-RAID adapter rather than a pure HBA raises the question of what to do when one needs to move a set of drives among systems… the practical side of doubling the quantity of HW-RAID cards is questionable. Do all HW-RAID cards build Linux mdadm arrays or anything else that can easily be reconstructed? Certainly not.
But… if the HBA is that superior to HW-RAID, why do the latter still exist? :cool:

And then all the F-150 fanboys want to prove it can be as fast as a Corvette with tuning… Seems to me this is an endless discussion, leaving the true purpose of each filesystem, and common sense, behind.
 
  • Like
Reactions: Samir

madbrain

Active Member
Jan 5, 2019
213
45
28
Awww c'mon, that ST-4144R is still whining and clunking away! ;-)
Some of us even older folks recall the ST277R in the Amstrad PC computers in the late 1980s. There was a big legal battle over it.
Amstrad ended up exiting the PC business, in large part due to reputational damage from the failed Seagate drives.
 
  • Like
Reactions: Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,778
1,862
113
50
HSV and SFO
Some of us even older folks recall the ST277R in the Amstrad PC computers in the late 1980s. There was a big legal battle over it.
Amstrad ended up exiting the PC business, in large part due to reputational damage from the failed Seagate drives.
Haha, I had two ST277R drives in a row fail. Twenty years elapsed before I purchased another Seagate product.
Wow, that takes me back. I used to RLL the Miniscribe 6053s, as they were more reliable than the Seagate RLL drives.
Seagate bashing seems to be a pastime in the storage and homelabbing communities, but how easily everyone forgets how many Seagate drives are supplied by Dell and HP with their brand-new servers. Seagate's top-end enterprise drives have always been right up there with all the other brands' -- and that's also been my experience, even buying Seagate's 2nd-generation Cheetah drives and SCSI drives before there were any true enterprise drives with consumer interfaces like there are with SATA.
 
  • Like
Reactions: nexox