SSD - value over lifetime 'total writes'? (ie scratch drive/extending lifespan of consumer ssd?)

I know a lot of people with consumer SSDs are unlikely to write their SSDs to death - the drives will be obsolete in a few years, long before they need replacing. I know enterprise drives are much more optimized for lifespan under heavy use; I'm not sure by how much (10x? 50x?), but something must justify their roughly 10x higher cost (at least the last time I looked), even though I still don't understand why the difference is so large.

I can understand more overprovisioning, but I doubt it's 10x the size. I have to wonder if there's some kind of software hack - if a 500GB drive turns into a 300GB drive that lasts 3x as long, that sounds like a fair trade to me. I've heard that enterprise SSDs write and read more slowly, which reduces wear - is there any way to 'force' this on a consumer drive to extend its lifespan, even if it's just by bottlenecking the interface down a step? (As long as it's way faster than spinning rust, that's all that's needed; and/or RAID gets performance back up, so who cares if we cut drive speed in half and stripe it back to original speed.)

If my SOLE PURPOSE is "total lifetime write volume" without needing maximum performance, I'm curious what the best 'value segments' for this usage would be. The assumed use is things like swapfile and scratch-drive duty, and we're talking well into the petabytes of writes - say, video processing when there's not enough RAM to hold it all. The plan IS to burn the drives to the ground within 1-3 years, so what's the sweet spot to look for with a usage like this?
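One hedged way to frame "value" for pure write endurance is dollars per petabyte of rated endurance (the TBW figure on the datasheet). A minimal sketch - all prices and TBW numbers below are illustrative placeholders, not real listings:

```python
# Rough $/PBW ("dollars per petabyte written") value comparison.
# All prices and TBW ratings below are illustrative placeholders.
drives = {
    "consumer 500GB TLC":    {"price": 60,  "tbw": 300},
    "consumer 1TB TLC":      {"price": 100, "tbw": 600},
    "used enterprise 400GB": {"price": 80,  "tbw": 7300},
}

def dollars_per_pbw(price, tbw):
    """Cost per petabyte of rated write endurance."""
    return price / (tbw / 1000.0)

# Cheapest endurance first
for name, d in sorted(drives.items(), key=lambda kv: dollars_per_pbw(**kv[1])):
    print(f"{name}: ${dollars_per_pbw(**d):.2f} per PB written")
```

With these made-up numbers the used enterprise drive comes out roughly 18x cheaper per petabyte written, which is why "price per TBW" rather than "price per GB" is the metric to shop on for this use case.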
 

i386

Well-Known Member
Mar 18, 2016
Germany
Cost != price != value

Some points why enterprise SSDs have a higher price:
  • Features in firmware (better/more algorithms for wear leveling, error checking, etc.)
  • Better components (the best flash memory goes into enterprise SSDs; controllers with more RAM and registers to optimize the IO queue)
  • Overprovisioning
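Those points can be put into a rough back-of-envelope model: rated endurance is roughly raw flash capacity times P/E cycles, divided by write amplification - and better firmware plus more spare area mainly buys a lower write-amplification factor. A sketch with illustrative numbers only:

```python
def rated_tbw(raw_gb, pe_cycles, waf):
    """Back-of-envelope endurance in TB written: raw NAND capacity
    times program/erase cycles, divided by write amplification."""
    return raw_gb * pe_cycles / waf / 1000.0

# Same 512GB of raw flash at ~3000 P/E cycles; the enterprise
# firmware + extra spare area mainly buys a lower write
# amplification factor under sustained random writes.
print(rated_tbw(512, 3000, waf=4.0))   # consumer-ish: 384.0 TBW
print(rated_tbw(512, 3000, waf=1.5))   # enterprise-ish: 1024.0 TBW
```

This is why the same flash can carry very different endurance ratings - and why user-added overprovisioning on a consumer drive recovers part (but not all) of the gap.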
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I know a lot of people with consumer SSDs are unlikely to write their SSDs to death - the drives will be obsolete in a few years, long before they need replacing. I know enterprise drives are much more optimized for lifespan under heavy use; I'm not sure by how much (10x? 50x?), but something must justify their roughly 10x higher cost (at least the last time I looked), even though I still don't understand why the difference is so large.

I can understand more overprovisioning, but I doubt it's 10x the size. I have to wonder if there's some kind of software hack - if a 500GB drive turns into a 300GB drive that lasts 3x as long, that sounds like a fair trade to me. I've heard that enterprise SSDs write and read more slowly, which reduces wear - is there any way to 'force' this on a consumer drive to extend its lifespan, even if it's just by bottlenecking the interface down a step? (As long as it's way faster than spinning rust, that's all that's needed; and/or RAID gets performance back up, so who cares if we cut drive speed in half and stripe it back to original speed.)

If my SOLE PURPOSE is "total lifetime write volume" without needing maximum performance, I'm curious what the best 'value segments' for this usage would be. The assumed use is things like swapfile and scratch-drive duty, and we're talking well into the petabytes of writes - say, video processing when there's not enough RAM to hold it all. The plan IS to burn the drives to the ground within 1-3 years, so what's the sweet spot to look for with a usage like this?
Stop overthinking it and buy used enterprise SSDs if you need 1-3 years of 'writing them to death'.

If you can't do that, then buy a quality consumer SSD and overprovision it.

There are dozens of forum topics and review sites that show what overprovisioning an SSD does for write performance. It still may not reach the same steady-state as an enterprise drive, but it does sustain write performance longer when you overprovision.
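For the manual-overprovisioning route, the usual trick is to secure-erase the drive and then leave part of it unpartitioned, or clip the visible capacity with a Host Protected Area via `hdparm -N` (check that your particular drive supports it before relying on this). A small helper to compute the visible sector count for a target amount of extra OP - the LBA count below is an illustrative 500GB-class figure:

```python
def hpa_sector_count(total_sectors, extra_op_fraction):
    """Visible LBA count that leaves `extra_op_fraction` of the
    drive as additional overprovisioning (applied with something
    like `hdparm -N p<count> /dev/sdX`, or approximated by simply
    leaving that space unpartitioned after a secure erase)."""
    return int(total_sectors * (1.0 - extra_op_fraction))

# Illustrative 500GB-class drive: 976773168 LBAs, reserve 20% extra
print(hpa_sector_count(976773168, 0.20))
```

Leaving space unpartitioned only helps if those LBAs have never been written (or have been TRIMmed), which is why the secure erase comes first.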

Also, there are varying grades of consumer SSD, just like enterprise. For instance, some low-end consumer SSDs have no cache, generic controllers and firmware, and very bad performance.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
For swap / scratch drive usage I'd go NVMe rather than any SATA SSD.
Not sure what the queue depth of swap is, but if it's low this could be a perfect case for Optane, too.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
For swap / scratch drive usage I'd go NVMe rather than any SATA SSD.
Not sure what the queue depth of swap is, but if it's low this could be a perfect case for Optane, too.
I agree, there are lots of better choices... what I got from his post was that he wanted to go the absolutely cheapest route possible to get an SSD that will last.

I don't think he's going NVMe or Optane on the 'cheap' ;)
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
Maybe cheap, fast, flash-based swap is not a good fit at all ;)
But maybe heavily overprovisioned consumer NVMe can deliver a better overall solution than enterprise SSDs. I guess the cheap 32GB Optane is not an option - it could work in terms of capacity, but with 182 TB of rated endurance it won't last long as heavily used swap.
BTW, has anyone yet tried to see when they really die? :D
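That 182 TB endurance rating turns into a lifetime estimate once you assume a daily swap write rate; a quick sketch (the write rates are made-up examples, not measurements):

```python
def lifetime_days(endurance_tb, writes_gb_per_day):
    """Days until the rated endurance is consumed at a steady rate."""
    return endurance_tb * 1000.0 / writes_gb_per_day

# 182 TBW rating vs. some made-up swap write rates
for gb_day in (100, 500, 2000):
    print(f"{gb_day} GB/day -> {lifetime_days(182, gb_day):.0f} days")
```

So at 2TB/day of swap traffic the 32GB Optane would, on paper, be used up in about three months - which supports the "won't last long as heavily used swap" point.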
 
I agree, there are lots of better choices... what I got from his post was that he wanted to go the absolutely cheapest route possible to get an SSD that will last.

I don't think he's going NVMe or Optane on the 'cheap' ;)
Actually not at all - I just don't understand the differences between all the options at this point. Much like reading the Backblaze studies made me a total cynic about "enterprise" hard drives, I'm not that knowledgeable about SSDs... I didn't know whether this is one of those consumer products sort of designed to die early, whether there might be some clever workaround, or whether for a use case like this (Adobe CC scratch drive for large projects and constant use) the enterprise drives are fully worth the money.

Optane and 3D XPoint and everything else just add yet more variables. :)



Stop overthinking it and buy used enterprise SSDs if you need 1-3 years of 'writing them to death'.
Umm, okay..? I'm a newbie to SSDs and don't currently own one. What I wasn't sure about is HOW "used" a used drive is (is there easily accessed tracking of 'remaining lifetime writes', for instance?) versus new, versus consumer overprovisioned, versus Optane... I'm wondering if there's some sweet spot or hot ticket or whatever, and whether that's a stable strategy or something constantly changing.
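On the "how used is a used drive" question: most SATA SSDs expose a SMART attribute along the lines of Total_LBAs_Written (and NVMe drives report Data Units Written and a Percentage Used wear field), readable with `smartctl -a`. A small conversion sketch - note some vendors report the attribute in other units, so verify against the specific model before trusting the number:

```python
def lbas_to_tb(total_lbas_written, sector_bytes=512):
    """Convert a SMART Total_LBAs_Written raw value to terabytes
    written. Caveat: some vendors report this attribute in other
    units (e.g. 32MiB chunks), so check the drive's datasheet."""
    return total_lbas_written * sector_bytes / 1e12

# Illustrative raw value as read from `smartctl -a /dev/sda`
print(f"{lbas_to_tb(488281250000):.1f} TB written so far")
```

Comparing that figure against the drive's rated TBW gives the 'remaining lifetime writes' estimate the question asks about.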

"Cheapest" is not my goal, value is. Buying something ten times is not cheaper than buying once for 6x the cost. I'm just starting to eye something for messing with lots of 4k and higher video footage.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
Ok, that clears it up. Thanks.

"were talking about well into the petabytes of usage of say video processing"

Have you thought about how many SSDs are required to achieve petabyte storage capacity?

Do you know how much hardware you need to support write-heavy, SSD-backed petabyte storage?

Do you have a network in place that can handle SSD/NVMe/Optane-backed storage?

SSD choice is probably the quickest/easiest part to read about and understand... there aren't a lot of SSD options when going for a system of that performance and capacity and doing it right. What I mean is, you're not going to buy 1000s of 400GB S3700s; you're going to go SAS, and you need good write endurance, etc... You're not going to want to replace drives after 1 year instead of 4 because of the man-hours involved... there's a ton more to it than just picking an SSD.
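One piece of that sizing exercise can be sketched numerically: how many drive lifetimes a given total write volume burns through. The endurance figures below are illustrative (the ~7300 TBW value is in the ballpark of a 10 DWPD, 5-year, 400GB-class drive, but check the actual datasheet):

```python
import math

def drives_consumed(total_pb_written, per_drive_tbw):
    """Drive lifetimes a given total write volume burns through."""
    return math.ceil(total_pb_written * 1000.0 / per_drive_tbw)

# Illustrative: 5 PB of scratch writes over the life of the project
print(drives_consumed(5, 300))    # 300 TBW consumer class -> 17 drives
print(drives_consumed(5, 7300))   # ~10 DWPD enterprise class -> 1 drive
```

The replacement count (and the man-hours it implies) is often what tips the value calculation toward higher-endurance drives, not the sticker price.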

I would start reading more SSD reviews and basic information, and comparing specifications, etc...
There are many benchmark sites that compare basic overprovisioning and different OP levels, as well as different workloads, steady-state writes, and so on. You need to know your exact workload when designing a system this size.


Most people who do systems that size don't DIY :D
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
I think the OP was about petabytes written, not capacity.

To find the right drive/SSD, it would be good to know what type of IO a CC scratch drive is exposed to: queue depth, block size, sync/async, random/sequential.
Most of this could be determined with blktrace, but I guess that's not an option on Windows (not sure about OSX), and it requires quite a bit of understanding to draw useful conclusions.

I'd suggest asking the vendor (Adobe) or the community there what the current recommendations and/or IO characteristics are.

For CC scratch I'd assume IO could be sequential with a larger (fixed) block size at low queue depth. As there's no real requirement for power-loss protection (if power is gone, an intact scratch file or swap won't be of any use after reboot), a RAID0 of several cheaper drives shouldn't be a problem either.

For sure, stuffing in as much RAM as the budget/platform can hold would be best.
After that, I'd look at Optane, NVMe, and (multiple) high-PBW/DWPD-rated SAS3/2/SATA SSDs (in RAID0) - in that order.
Sizing totally depends on the desired size of the scratch volume; 400GB that should be quite fast can be had cheap with e.g. 4x 100GB HGST SAS2 drives at USD 50 each in RAID0.
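As a rough aggregate for the RAID0 suggestion: striping spreads writes across all members, so total rated endurance and total price both scale with drive count. The per-drive endurance value here is an illustrative guess, not a datasheet number:

```python
def raid0_aggregate(n_drives, per_drive_tbw, per_drive_price):
    """RAID0 stripes writes across all members, so rated endurance
    and price both scale roughly linearly with drive count."""
    return n_drives * per_drive_tbw, n_drives * per_drive_price

# 4x 100GB SAS2 drives at $50 each; per-drive TBW is a guess
total_tbw, total_cost = raid0_aggregate(4, 1800, 50)
print(total_tbw, total_cost)
```

The flip side of RAID0 is that one worn-out or failed member takes the whole volume with it - acceptable for scratch/swap, as noted above, since the data is disposable anyway.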
 

wildpig1234

Well-Known Member
Aug 22, 2016
If you're using a system with DDR3 RAM, also consider a RAM drive, given how relatively cheap DDR3 is if you can get it at a good price - assuming your purpose is just a small cache/scratch space.

No need to worry about write wear, and it's also an order of magnitude faster than even the fastest SSD or NVMe.