What are we using for NVMe cache devices these days?


mattlach

Active Member
Aug 1, 2014
323
89
28
Hey all,

I've been thinking about what kinds of SSDs to use for cache devices and other high-write environments these days.

WAY back, I would have had to pay big bucks for a tiny SLC drive for something like this. You can't find those anymore.

More recently Samsung's Pro drives were MLC and had some pretty serious write endurance.

The latest-gen Pro drives (980 Pro) now appear to be TLC, and that has me a little concerned.

That said, these things are so cheap now that maybe I'll just get a couple of 500GB or 1TB Inland Premium drives for $59.99 or $119.99 respectively, beat the crap out of them until they are worn out, and replace them. They have decent write speeds and DRAM caches, so they are surprisingly good little devices for the price, as reviewed by ServeTheHome here.

(Just keep in mind that Inland Premium is different from Inland Professional and Inland Platinum. The Premium is what you want, at least for this application.)

What do you people think?

I am a little torn.

The truth is that with each generation, the controllers, the NAND quality, and whatever magic makes the controllers work (write amplification mitigation, wear leveling, DRAM caches, etc.) all improve. There is a reason you essentially can't buy an SLC drive anymore. They just aren't necessary. MLC got better to the point where it could fill that role.

The question is, have we gotten to the point where TLC is really ready to supplant MLC in high write applications?

My old 512GB Samsung 850 Pro SATA drives, which I have been using as write cache for years, are MLC and rated at 150TBW each. At 69,000 power-on hours and 317,110,382,829 LBAs (~147.6 TB) written, both are listed at a wear leveling count of 30%, so they are starting to get close. (Granted, if 70% of the wear came in 69,000 hours, that means I have roughly 30,000 hours, or about 3.5 years, left, but I don't want to push it TOO far.)
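As a sanity check, the SMART figures above work out as follows. (A rough sketch, assuming 512-byte logical sectors and reading Samsung's normalized wear leveling count of 30 as "70% of rated wear consumed"; check your drive's actual attribute semantics before trusting this.)

```python
# Sanity check on the 850 Pro SMART figures above.
# Assumes 512-byte logical sectors; the normalized wear leveling
# count of 30 is read as "70% of rated wear consumed".
LBAS_WRITTEN = 317_110_382_829
SECTOR_BYTES = 512
POWER_ON_HOURS = 69_000
WEAR_REMAINING_PCT = 30

tib_written = LBAS_WRITTEN * SECTOR_BYTES / 2**40
remaining_hours = POWER_ON_HOURS * WEAR_REMAINING_PCT / (100 - WEAR_REMAINING_PCT)

print(f"Written so far: {tib_written:.1f} TiB")
print(f"Projected remaining: ~{remaining_hours:,.0f} hours "
      f"(~{remaining_hours / 8766:.1f} years)")
```

That comes out to roughly 147.7 TiB written and a bit under 30,000 hours of projected wear left, in line with the figures above.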

The aforementioned Inland Premium drives are Phison E12 TLC drives. The 512GB model (to keep the comparison as close to apples-to-apples as possible) is rated at over 5x the write endurance: 780TBW.

If these numbers are accurate, and measured the same way Samsung measured them on my old 850 Pros, maybe MLC really is no longer needed. Those old MLC 850 Pros are on track for a projected total lifespan of 11.25 years in my high-write environment. If the Inland Premiums truly last 5.2x longer, that should give me 58.5 years. I don't know if I'll be around in 2080 (probably not, unless we see some amazing medical progress!), but I feel like I know for sure that at least my current server build will be long obsolete by then...
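The endurance comparison above amounts to the following arithmetic. (A sketch that takes both vendors' TBW ratings at face value; ratings are not necessarily measured the same way across vendors.)

```python
# Projected lifespan comparison from the quoted TBW ratings,
# taking both vendors' figures at face value.
TBW_850_PRO = 150                # 512GB Samsung 850 Pro rating
TBW_INLAND = 780                 # 512GB Inland Premium rating
PROJECTED_850_PRO_YEARS = 11.25  # from the SMART data above

ratio = TBW_INLAND / TBW_850_PRO
projected_years = PROJECTED_850_PRO_YEARS * ratio
print(f"{ratio:.1f}x the endurance -> ~{projected_years:.1f} years")
```

That is where the 5.2x and 58.5-year figures come from.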

Any thoughts? What are you using that is on this side of affordable? (I mean, if budgets were unlimited, I'd just put Optanes in everything.)
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,059
1,478
113
You're averaging under 20TB a year, which is nothing. Even QLC can take that. I wouldn't worry about endurance with your usage.
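For reference, that figure follows from the SMART numbers quoted earlier in the thread (a rough sketch; ~147.6 TB written over 69,000 power-on hours, with 8,766 hours per year assumed):

```python
# Rough check of the "under 20TB a year" figure, using the 850 Pro
# numbers quoted earlier in the thread.
TB_WRITTEN = 147.6
POWER_ON_HOURS = 69_000
HOURS_PER_YEAR = 8_766  # 365.25 days

years = POWER_ON_HOURS / HOURS_PER_YEAR
tb_per_year = TB_WRITTEN / years
print(f"~{tb_per_year:.1f} TB/year over {years:.1f} years")
```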
 
  • Like
Reactions: T_Minus

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
There is a reason you essentially can't buy an SLC drive anymore. They just aren't necessary. MLC got better to the point where it could fill that role.
I disagree with this: I think there are enough alternatives (NVRAM-based NVMe SSDs, NVDIMMs like Optane, or DRAM + flash) that are nowadays affordable for "normal" enterprises, and those are what made SLC-based SSDs obsolete.

I'm somewhat concerned that you list a bunch of consumer SSDs and not enterprise SSDs...
 

mattlach

Active Member
Aug 1, 2014
323
89
28
I disagree with this: I think there are enough alternatives (NVRAM-based NVMe SSDs, NVDIMMs like Optane, or DRAM + flash) that are nowadays affordable for "normal" enterprises, and those are what made SLC-based SSDs obsolete.

I'm somewhat concerned that you list a bunch of consumer SSDs and not enterprise SSDs...
I am a consumer. I am not an enterprise.

I use enterprise Optanes where it counts (the SLOG drive, in my case). For everything else I'll save a penny. If I lose a drive, I have redundancy. If shit really hits the fan, I have backups.

For the types of applications I have in mind, the ultra-low latencies of Optane drives are unnecessary, and honestly, quality consumer drives are just as reliable as enterprise ones, so I see no point in throwing money at them.
 
Last edited:
  • Like
Reactions: TrumanHW and Marjan

RTM

Well-Known Member
Jan 26, 2014
956
359
63
I guess someone will have to write the following, might as well be me :)

Using enterprise drives is about more than whether or not the drive is likely to die.
Enterprise drives are generally optimized differently than consumer drives. Consumer drives are often optimized for a single user, where it is preferable that the drive complete its current task as quickly as possible, even if that might be a slower strategy over time. Enterprise drives are generally optimized for consistent steady-state performance, which should ensure better performance in a multi-user environment.

To add to this, enterprise drives often have more overprovisioning in place, which allows them to do better wear levelling and have better steady state performance.

Another important feature commonly (but not always) found on enterprise drives is power loss protection (PLP) circuitry. There is a slight chance of data loss on sudden power loss with non-PLP disks. If I remember correctly, PLP is a requirement for some solutions (or at least provides additional performance with them).

EDIT: I forgot to mention that enterprise drives often have greater write endurance.

That all said, I am sure some applications are served quite decently with consumer disks :)
 
  • Like
Reactions: T_Minus and i386

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
FWIW - at $115 for the 1TB Inland Premium, that price point would make me buy used enterprise. Until Chia, you could get (good and even great) used enterprise NVMe all the time for $80-100/TB. Prices are starting to come back down to the $100/TB mark now too; give it 2-3 months if you can and you'll find higher-performing and longer-lasting drives for the same price or cheaper.

If you need new (with warranty) or need it now, then that Inland Premium doesn't look too bad, but it would be nice to see more enterprise workload tests (70/30 mixed workload, database-specific, etc.) if that's your intended use, to help decide whether it's a fit or not.
 

mattlach

Active Member
Aug 1, 2014
323
89
28
FWIW - at $115 for the 1TB Inland Premium, that price point would make me buy used enterprise. Until Chia, you could get (good and even great) used enterprise NVMe all the time for $80-100/TB. Prices are starting to come back down to the $100/TB mark now too; give it 2-3 months if you can and you'll find higher-performing and longer-lasting drives for the same price or cheaper.

If you need new (with warranty) or need it now, then that Inland Premium doesn't look too bad, but it would be nice to see more enterprise workload tests (70/30 mixed workload, database-specific, etc.) if that's your intended use, to help decide whether it's a fit or not.
I'm curious, since I am less informed about these. Which models would you be looking at?
 

mattlach

Active Member
Aug 1, 2014
323
89
28
I guess someone will have to write the following, might as well be me :)

Using enterprise drives is about more than whether or not the drive is likely to die.
Enterprise drives are generally optimized differently than consumer drives. Consumer drives are often optimized for a single user, where it is preferable that the drive complete its current task as quickly as possible, even if that might be a slower strategy over time. Enterprise drives are generally optimized for consistent steady-state performance, which should ensure better performance in a multi-user environment.

To add to this, enterprise drives often have more overprovisioning in place, which allows them to do better wear levelling and have better steady state performance.

Another important feature commonly (but not always) found on enterprise drives is power loss protection (PLP) circuitry. There is a slight chance of data loss on sudden power loss with non-PLP disks. If I remember correctly, PLP is a requirement for some solutions (or at least provides additional performance with them).

EDIT: I forgot to mention that enterprise drives often have greater write endurance.

That all said, I am sure some applications are served quite decently with consumer disks :)
PLP is certainly a good point. It is less important if you are running on a UPS with redundant power supplies, but still very valid.

As far as the optimizations for different workloads go, this is certainly true, but in every real-world test I have seen, it doesn't make a noticeable difference. Even consumer NVMe SSDs perform their best at rather high queue depths, which is what you'd want from a server drive.

In my particular case, I am replacing old SATA SSDs with NVMe drives. Pretty much whatever I choose will be a huge improvement.