Based on your advice I've been accumulating 10 TB SAS drives (data center pulls) and you were right: these things are far superior to any damn SATA drive. I have 5x 8 TB and 12x 10 TB SAS drives now. I wish I had known about these a few years back, before accumulating so many SATAs. I think many of the SATAs will become my cold backups. I suppose they are still fine for Plex media too; after one long slow write they can just wait around to be asked for a read and do fine.

Of course I am running a mix of drive types now, and the EMC shelf doesn't seem to care. Running the mix did break my LSIUtil settings for negotiated link speeds, though. So I'm back to pulling/replacing a few drives, one by one, at every reboot to get the link speeds back. Interestingly the shelves rarely if ever downgrade link speeds on the SAS drives, only on a few of the SATAs, which had been the usual case before I found LSIUtil and its settings. Having a mix of link speeds (even just one slow one) seems to make them all run slow.

One sure way to make it work at reboot time: unclip and partially pull all the drives, spin up the shelves, then boot the server, then insert the drives one by one with a ~30 second pause between them. Might be easier on the power supplies too, not trying to cold-spin 15 drives at once. I noticed the shelf does not seem to sequence spin-up, which makes sense given that these are built to never be turned off. 15 drives going from zero to 5400/7200 RPM all at once is by far the biggest power load the shelf will ever experience, and it would get that on a normal startup too.
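For what it's worth, once a box is on Linux you can read the negotiated link rate per PHY straight from sysfs instead of going through LSIUtil menus. A minimal sketch, assuming an LSI HBA whose driver exposes the standard SAS transport class under /sys/class/sas_phy (mpt2sas/mpt3sas do; the exact phy numbering will vary per controller):

```shell
# Print the negotiated SAS/SATA link rate for every PHY the HBA exposes.
# Assumes a Linux host with a driver using the SAS transport class;
# on a machine without such an HBA the loop simply prints nothing.
for phy in /sys/class/sas_phy/phy-*; do
    [ -e "$phy" ] || continue   # unmatched glob: no PHYs, skip
    printf '%s: %s\n' "${phy##*/}" "$(cat "$phy/negotiated_linkrate")"
done
```

Handy for spotting the one drive that negotiated 1.5 Gbit and is dragging the rest down, without a reboot-and-reinsert cycle.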
Right now I am down to two running shelves (both full) and I have a third that I had been keeping mostly for cold backups. Now I want to take five or six 10 TB SAS drives, rig up a Ryzen board with a SAS3 LSI card, and take that deep dive into ZFS. It is daunting. I am not very experienced with Linux so I'm not sure exactly where to begin. Read a lot of ZFS horror stories out there, too. But a few posts back you gave someone advice and I will likely start there.

Ultimately I would like to migrate all my storage to ZFS, and as I (barely) understand things, I'll need to begin with a minimum of 5-6 drives that are all the same size. Then I can grow the pool in a similar fashion (adding 5 or 6 drives at a time). If all that works out, my hope is to migrate everything away from my current setup in 50 TB blocks, but I'm still pretty leery of borking some cryptic ZFS setting and losing my shit. Been running Stablebit DrivePool on Win 10 boxes for years now, without issue. But the performance of these SAS drives (even as currently somewhat hobbled... with queue depth set low for SATA, and no multipath link because Microsoft sucks) combined with the promise of zero corruption from ZFS is causing me to venture into new territory.
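If it helps demystify it, the create-then-grow path you're describing maps onto just two zpool commands. A sketch only — the pool name "tank" and the sdX device names are placeholders (on real hardware you'd use the stable /dev/disk/by-id/ names, since sdX can shuffle between boots):

```shell
# Create a 6-drive raidz2 pool: any two drives in the vdev can fail.
# Device names are placeholders; use /dev/disk/by-id/... in practice.
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Later, grow the pool by adding a second 6-drive raidz2 vdev.
# ZFS stripes new writes across both vdevs.
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

# Sanity-check layout and capacity before trusting it with data:
zpool status tank
zfs list
```

One thing to double-check before hitting enter: `zpool add` is effectively permanent — as far as I know you cannot remove a raidz vdev from a pool afterwards — so this is exactly the "cryptic setting" class of mistake to verify with `zpool status` first.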
Is there some benefit to running them on 230 volts? I kept to 115 mainly because that's what I have handy for UPSes. The server nerds are awfully proud of their 230 V UPS units.