What Are the Best Used Datacenter SAS/SATA SSDs to Buy for a New Server Build?


uberguru

Active Member
Jun 7, 2013
For my use, 1 per datacenter in my case (6 other DCs).

I also have other ZFS boxes for different functions, but they aren't performance oriented - more geared toward capacity:
log archive, AI training data storage, Kubernetes big/slow storage (they typically go up to 400-600TB total capacity).

On my Cassandra nodes I use ext4 - hardware RAID10 + SED (with the SAS SSDs listed earlier).



// Beyond that, I use ZFS at home for my home lab and media/AI server needs - a total of some 400TB of storage, plus a couple TB of faster SAS3 SSD storage.
(Slowly running out of space for training data and upscaling projects.)
Awesome - I thought as much that you guys aren't using ZFS on those Cassandra nodes.
Damn, you must be working for some huge company with that much storage server capacity.
Seems you guys are pushing huge amounts of data and have lots of servers spread globally.

Thanks for sharing the details, it's helpful to see how others have things set up.
 

uberguru

Active Member
Jun 7, 2013
By "data corruption" I mean silent data corruption caused by a buggy driver for the ICH10 (where my 6 SATA ports were connected). The worst part was that all the VM guests without a checksumming filesystem - all the Windows and Linux guests back then - kept working fine, no BSOD, nothing. Until you tried to copy and open zip/rar archive files larger than 1GB. Basically, I would copy a known-good zip/rar file larger than 1GB and try to extract it: it gave an error and failed to extract the contents. Identifying the culprit was really hard - I ran memtest multiple times and checked the HDDs in other computers. Nothing. After I found the issue was caused by the driver, I disabled the ICH10 SATA and used a PCI-based SATA HBA (some cheap one I found in my area).
I see, but that was an issue with the driver, and that was in 2009. We are now 15 years later, so I am not sure you need to worry much.
I would rather set up a script to perform my own checks than have the filesystem do it for me.
Like I said, I want a vanilla filesystem so I can focus on other things.
Just search online for "zfs failed" or "zfs died" for horror stories about the beloved data-protection filesystem.

For me, the way I would have dealt with that issue in 2009 would be to write a script to do the checksumming myself, and once I saw the driver was the issue, move to different hardware or update the driver rather than focus on changing filesystems.
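Roughly the kind of check I mean - just a minimal sketch with made-up paths, assuming a SHA-256 manifest of known-good files is enough to catch that copy-then-extract style corruption:

```python
# Sketch of a manual integrity check: hash known-good files once,
# then re-hash the copies later and compare. Paths are placeholders.
import hashlib
import sys
from pathlib import Path

def sha256sum(path, chunk_size=1024 * 1024):
    """Stream a file through SHA-256 so large archives don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(src_dir, manifest_path):
    """Record a hash for every file under src_dir (the known-good copies)."""
    with open(manifest_path, "w") as out:
        for p in sorted(Path(src_dir).rglob("*")):
            if p.is_file():
                out.write(f"{sha256sum(p)}  {p.relative_to(src_dir)}\n")

def verify_manifest(dst_dir, manifest_path):
    """Re-hash the copies under dst_dir and report any mismatch."""
    bad = 0
    with open(manifest_path) as mf:
        for line in mf:
            digest, rel = line.rstrip("\n").split("  ", 1)
            target = Path(dst_dir) / rel
            if not target.is_file() or sha256sum(target) != digest:
                print(f"CORRUPT or missing: {target}")
                bad += 1
    return bad

if __name__ == "__main__":
    # usage: python check.py build  /known/good   manifest.txt
    #        python check.py verify /copied/data  manifest.txt
    mode, directory, manifest = sys.argv[1:4]
    if mode == "build":
        build_manifest(directory, manifest)
    else:
        sys.exit(1 if verify_manifest(directory, manifest) else 0)
```

Run it once against the source to build the manifest, then against any copy you want to verify - if the hashes drift, you know the path between the two is the problem, not the filesystem.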

I can bet anyone that the ext filesystems are the most popular filesystems running in any environment - production, dev or whatever.
They are battle tested and don't add much on top beyond storing your files for reads and writes, leaving you to do everything else yourself.
 

TRACKER

Active Member
Jan 14, 2019
We use ext4 and XFS for our production systems (I work at a big IT company with thousands of customers).
With ext4 I have seen issues with inodes being exhausted. On the XFS side, just yesterday I got "no space left on device" on one of the XFS filesystems on a non-prod system, so yeah, I had to run xfs_fsr (defrag, basically).
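For what it's worth, we now watch for that with a small check along these lines - just a sketch, the mount points and the 90% threshold are placeholders, and it only reads what statvfs reports (on XFS the "no space" can also be fragmentation, which is where xfs_fsr comes in):

```python
# Quick check for "out of inodes" / "out of space" on mounted filesystems.
# Mount points and thresholds below are placeholders, not our real config.
import os

MOUNTS = ["/", "/var", "/data"]   # whatever you care about
WARN_PCT = 90                     # warn when usage crosses this

def usage(mount):
    st = os.statvfs(mount)
    # Block usage (what plain df shows).
    total_b = st.f_blocks * st.f_frsize
    free_b = st.f_bavail * st.f_frsize
    pct_blocks = 100 * (1 - free_b / total_b) if total_b else 0.0
    # Inode usage - ext4 can run out of inodes long before it runs out of blocks.
    pct_inodes = 100 * (1 - st.f_ffree / st.f_files) if st.f_files else 0.0
    return pct_blocks, pct_inodes

for m in MOUNTS:
    pct_blocks, pct_inodes = usage(m)
    flag = "WARN" if max(pct_blocks, pct_inodes) >= WARN_PCT else "ok"
    print(f"{m}: {pct_blocks:.0f}% blocks, {pct_inodes:.0f}% inodes [{flag}]")
```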
About ZFS - it really depends which implementation/branch those people used. The Solaris/Oracle branch used to be the most stable, but that was until a few years back.
 

TRACKER

Active Member
Jan 14, 2019
I am not saying "use ZFS", of course - you should use whatever filesystem you want :)
Maybe ext4 is suitable for your use case, no problem. As you said, it is not 2009 anymore, but filesystems like ext4 and XFS don't have the same resiliency/protection against data corruption as ZFS; they count on the underlying hardware to do it for them. The thing is, "sometimes" the hardware may cause silent corruption too. I am not mentioning BTRFS here, as I have always considered it buggy :)
At the end of the day it is your / your customers' data.
 

CyklonDX

Well-Known Member
Nov 8, 2022
You can run out of space on ZFS too, and the worst part is when you are running compression. I think it's much worse when running large pools...
You don't expect to run out of space when it still shows you 20TB of free space. But with 500-800TB pools, 20TB of free space can already mean out of space, and writes hang - a person without that experience would not expect that to be the case. Especially with the thinking of "oh, I can still safely copy this VM image over as a backup, it's only 1TB - it shows I have 20TB free." And when the writes hang, they go "oh, something must be off with the box, I'll just reboot it" ;-0 and then you end up waiting while it reads through the whole pool (500-800TB) of data and commits whatever was still in the pending/commit log before the storage can be mounted again.

I once spent around a whole week waiting for a ZFS pool to come back.
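These days I also keep a small nag script watching pool usage so it never gets near that point - a rough sketch, assuming the usual `zpool list -H -o name,capacity` output; the 80% cut-off is just my own rule of thumb, not an official limit:

```python
# Nag before a ZFS pool gets close to full. COW pools degrade badly when
# nearly full, so treat ~80% used as "start freeing space now".
import subprocess
import sys

THRESHOLD_PCT = 80  # personal rule of thumb, tune per pool

def pool_usage():
    # -H: scripted mode (tab-separated, no headers); capacity is "% used".
    out = subprocess.run(
        ["zpool", "list", "-H", "-o", "name,capacity"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        name, cap = line.split("\t")
        yield name, int(cap.rstrip("%"))

def main():
    rc = 0
    for name, pct in pool_usage():
        if pct >= THRESHOLD_PCT:
            print(f"WARNING: pool {name} is {pct}% full")
            rc = 1
        else:
            print(f"pool {name}: {pct}% full")
    return rc

if __name__ == "__main__":
    sys.exit(main())
```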
 

TRACKER

Active Member
Jan 14, 2019
You can run out of space on ZFS too, and the worst part is when you are running compression. I think it's much worse when running large pools...
You don't expect to run out of space when it still shows you 20TB of free space. But with 500-600TB pools, 20TB could already mean out of space, and writes hang.
With XFS the issue was not actually out of space - there were still a couple of gigabytes free.
With ZFS it is highly recommended to keep at least 20% of the pool free because it is a COW filesystem, that's true.
But that's by design :)
 
  • Like
Reactions: CyklonDX

jei

Active Member
Aug 8, 2021
Finland
You can run out of space on ZFS too, and the worst part is when you are running compression. I think it's much worse when running large pools...
You don't expect to run out of space when it still shows you 20TB of free space. But with 500-800TB pools, 20TB of free space can already mean out of space, and writes hang - a person without that experience would not expect that to be the case.
Sure, but a person who doesn't understand even the basics of best practices maybe shouldn't be managing 800TB to begin with.
 
  • Like
Reactions: T_Minus and nexox
