QLC SATA SSDs for media storage/backup


Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Hi,

I am considering replacing my spinner mirror (2x12TB WD NAS disks) with a SATA SSD mirror: 2x8TB SATA or 4x4TB SATA, whichever is cheapest.

I use it exclusively for storing multimedia files and backups.

Meaning it's mostly reads, with writes from time to time, but no massive throughput and not many simultaneous write threads.

So I wonder what the consensus is about using e.g. a QLC SSD for this kind of purpose.

I am on the hunt for power savings and am willing to spend money to get them.

I know some people would never use QLC and some probably are already using them.

Do you guys think it's a bad idea, or okay for my purpose?
 

llowrey

Active Member
Feb 26, 2018
I have found that data retention, even with TLC and keeping the power on, isn't perfect. The periodic background scans that are supposed to happen haven't been good enough for me and I started finding occasional bad blocks during btrfs scrubs. I added a monthly scrub and have not had a bad block since. So, for data that is read infrequently, a periodic read would be a good idea.
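
For data sitting on a filesystem without its own scrub, a dumb read pass over everything gets a similar effect of touching the data periodically, just without the checksum verification a btrfs/ZFS scrub gives you. A rough Python sketch (the mount point is only a placeholder):

```python
#!/usr/bin/env python3
# Read every file under a directory tree once, so rarely-touched data gets
# re-read periodically and hard read errors surface. Unlike a btrfs/ZFS scrub
# this does NOT verify checksums; it only forces the drive to read the blocks.
import os
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "/mnt/media"  # placeholder path
errors = 0
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "rb") as f:
                while f.read(8 * 1024 * 1024):  # 8 MiB chunks, data discarded
                    pass
        except OSError as e:
            errors += 1
            print(f"read error: {path}: {e}", file=sys.stderr)

print(f"done, {errors} unreadable file(s)")
```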

The only other concern would be the total volume of writes given the much reduced number of erase cycles. I would assume that your overall write volume is probably low and should not be a concern.

I believe QLC is a reasonable solution for your use case and is likely a choice that I will make soon-ish. My reasons have more to do with reliability, performance (for maintenance, e.g. scrubs), and complexity (a PCIe card with 4 U.2 drives bolted on vs. a chassis with a ton of drive bays).
 

TRACKER

Active Member
Jan 14, 2019
I have 4x1TB Samsung 860 QVO and have been using them in a ZFS pool for 3 years now.
They are ok-ish for reads (I usually get 370-380 MB/s per disk) but writes are... awful.
Average speed is around 40-50 MB/s, and it even drops to around 20-30 MB/s per drive during heavy writing.
That's after the SLC part of the drive is exhausted.
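
If you want to see that cliff on your own drives, a rough sketch like this works: write incompressible data well past the expected SLC cache size and watch the per-GiB throughput. The path and size below are placeholders, and it will happily fill the pool if you let it:

```python
#!/usr/bin/env python3
# Sequential-write sketch: writes incompressible data in 1 MiB chunks and
# prints throughput once per GiB, so the drop after the SLC cache fills up
# becomes visible. TARGET and TOTAL_GIB are placeholders; adjust before running.
import os
import time

TARGET = "/mnt/pool/slc_test.bin"  # placeholder file on the SSD under test
CHUNK = 1024 * 1024                # 1 MiB per write
TOTAL_GIB = 64                     # enough to blow through a typical SLC cache

buf = os.urandom(CHUNK)            # incompressible, so compression can't cheat
written = 0
t_last = time.monotonic()
with open(TARGET, "wb", buffering=0) as f:
    while written < TOTAL_GIB * 1024**3:
        for _ in range(1024):      # 1 GiB between reports
            f.write(buf)
            written += CHUNK
        os.fsync(f.fileno())       # push data to the device, not just page cache
        now = time.monotonic()
        print(f"{written // 1024**3:3d} GiB  {1024 / (now - t_last):7.1f} MiB/s")
        t_last = now

os.remove(TARGET)
```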
 

thulle

Member
Apr 11, 2019
This is probably somewhat of an extreme case since the server can drop to a few °C above freezing during winter, affecting retention, but among my 3-way special-vdev drives the QLC 870 EVO is clearly worst:

[attached image: ssd.png]

The 850 did 40k hours or so in my workstation before ending up here, so total hours are 73k, 29k, and 15k, and data written is 175 TiB, 53 TiB, and 42 TiB.
 

Chriggel

Member
Mar 30, 2024
If you're aware of the weaknesses you'll be fine. If you're not aware of them you're probably not affected by them anyway and you'll be fine as well. Sure, there's always that one person who falls through the net and gets a nasty surprise. But overall, I wouldn't be worried about QLC SSDs.

Mostly reads sounds good. We'll see more and more QLC SSDs exactly for this use case, where performance and endurance aren't as much of a concern as density. And with drives getting larger, even if you start writing to them, a fairly low DWPD rating of 0.8 or even 0.5 still means lots of writes per day, because it's all relative to the drive's capacity.
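
To put rough numbers on that (the capacities and the 5-year period below are illustrative assumptions, not any particular drive's spec):

```python
# Back-of-the-envelope DWPD maths: daily write allowance and total TBW over an
# assumed warranty period. Purely illustrative numbers, not real drive specs.
def write_budget(capacity_tb: float, dwpd: float, warranty_years: float = 5.0):
    per_day_tb = capacity_tb * dwpd                   # DWPD is relative to capacity
    lifetime_tbw = per_day_tb * 365 * warranty_years
    return per_day_tb, lifetime_tbw

for capacity in (4, 8):
    for dwpd in (0.5, 0.8):
        per_day, tbw = write_budget(capacity, dwpd)
        print(f"{capacity} TB @ {dwpd} DWPD -> {per_day:.1f} TB/day, ~{tbw:,.0f} TBW over 5 years")
```

An 8 TB drive at 0.5 DWPD already allows roughly 4 TB of writes per day, far more than a media/backup pool will ever see.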

For general home storage of media and backups and such, which is almost archival in nature, I'd certainly use them.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
I use them for one huge "depot" VM which stores archive files, so yeah, something like backup :)
If I can get a consistent 100 MB/s write speed across a pool of two or four drives I will be happy. I hope that is possible with drives bigger than yours.
 
Reactions: T_Minus

Stephan

Well-Known Member
Apr 21, 2017
Germany
QLC if cheap, maybe. ZFS mandatory, with daily or weekly scrubs. I feel monthly is too long. Verify meticulously that zed mail notifications are working. Redundancy not just a mirror but N+2, from different manufacturers, and with available firmware updates if possible, to smear out different failure modes over discontinuous time periods. Triple-verify ashift, possibly even benchmark different ashifts to be sure, to limit write amplification.
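
A minimal sketch for checking two of those points on a Linux box with OpenZFS; the pool name and the zed.rc path are assumptions, adjust for your setup:

```python
#!/usr/bin/env python3
# Sanity checks: report the pool's ashift and whether zed.rc defines a mail
# address for notifications. Pool name and zed.rc path are assumptions.
import pathlib
import subprocess

POOL = "tank"                                   # placeholder pool name
ZED_RC = pathlib.Path("/etc/zfs/zed.d/zed.rc")  # default location on most distros

ashift = subprocess.run(
    ["zpool", "get", "-H", "-o", "value", "ashift", POOL],
    capture_output=True, text=True, check=True,
).stdout.strip()
# ashift=0 means auto-detected; for 4K-sector SSDs you normally want 12.
print(f"{POOL}: ashift={ashift}")

if ZED_RC.exists() and "ZED_EMAIL_ADDR=" in ZED_RC.read_text():
    print("zed.rc sets ZED_EMAIL_ADDR (make sure the line is not commented out)")
else:
    print("WARNING: no ZED_EMAIL_ADDR in zed.rc; error/scrub mails will not go out")
```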
 
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Seems I'm not the only one concerned about these, lol. I'm concerned for the 2x 2TB 870 EVOs mirrored in my desktop, LOL
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Seems I'm not the only one concerned about these, lol. I'm concerned for the 2x 2TB 870 EVOs mirrored in my desktop, LOL
I am also concerned - and if I could afford it I would buy used enterprise drives instead, but used SATA SSDs are not cheap these days :)
 
Reactions: T_Minus

twin_savage

Member
Jan 26, 2018
Be super careful which SSDs you pick; almost all consumer drives have no protection against NAND cell charge decay. Most enterprise SSDs do have NAND cell refresh as part of their GC routine, but they cost more.
If I recall correctly, the Crucial MX500 was one of the consumer drives that actually implemented charge refresh.

The NAND cell charge decay doesn't immediately result in data loss, it would likely take several years for that to happen on a moderately worn SSD, but read speeds significantly decay over time because of the charge decay. It's not unreasonable to expect 1-2 orders of magnitude read speed decrease on data written 2 years ago on a drive with no charge refresh.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
The NAND cell charge decay doesn't immediately result in data loss, it would likely take several years for that to happen on a moderately worn SSD, but read speeds significantly decay over time because of the charge decay. It's not unreasonable to expect 1-2 orders of magnitude read speed decrease on data written 2 years ago on a drive with no charge refresh.
Surely scrubs should fix that, right? Since the data is being read?
 

pimposh

hardware pimp
Nov 19, 2022
To date, QLC (used for more than a day, filled more than 60%) has sucked, no matter what the various reviews said. Consumer QLC drives were, and mostly still are, POS.

But as technology progresses, some of the latest QLC drives are getting reasonable enough (not sure about the SATA market as I opted out) to be considered for bulk storage/archive/media purposes. By reasonable I mean that write speeds don't drop to floppy-drive range, and GC/internal TRIM is good enough that you can treat their capacity as advertised. At least this is getting real in enterprise-class gear.

If you're thinking of using cheap SATA QLC drives, e.g. the QVO - maybe consider a different filesystem than ZFS as a compromise? Once filled up they're slower than rust. The only benefit here is the huge power consumption difference.
 
Reactions: T_Minus

twin_savage

Member
Jan 26, 2018
Surely scrubs should fix that, right? Since the data is being read?
Unfortunately not, at least not for most consumer SSDs.
Even TRIM doesn't effectively help (it only does some very minimal refreshing on cells that happen to be marked free and then get rewritten). If you want to read the painfully long thread on the issue, it's here:

 

thulle

Member
Apr 11, 2019
The NAND cell charge decay doesn't immediately result in data loss, it would likely take several years for that to happen on a moderately worn SSD
It was about 6 months from new or so for me, in the post above, until I started getting read errors that resulted in block relocations.
 

twin_savage

Member
Jan 26, 2018
It was about 6 months from new or so for me, in the post above, until I started getting read errors that resulted in block relocations.
That is awfully unlucky; it's possible you got some defects in your actual NAND. I'd only expect data to be lost that fast if the operating temperature was really high or the NAND was very worn.
 
Reactions: nexox

thulle

Member
Apr 11, 2019
@twin_savage I read up on it back then, and as I said I suspect it's more due to low temperature at the point of writing. Can't find the paper right now, but there's this slide in a(nother) JEDEC presentation:


[attached slide: retention.jpeg, from a JEDEC presentation on retention vs. temperature]

This is very old data, but I suspect the general trend still holds: lower active temperature during writes (and then higher temperature during later storage) results in worse retention, even when the drives are never powered off.
 
Reactions: T_Minus