Finding an M.2 NVME SSD that fits the home/SMB NAS budget, performance and QA requirements


nickwalt

Member
Oct 4, 2023
Brisbane
I've been reading across the TrueNAS, ServeTheHome and Level1Techs forums, as well as Reddit and YouTube, looking for conversations that move the cheap NVME discussion forward for those who want either a pure NVME solution (for reasons that may cover a mix of power use, server design, environmental constraints and performance targets) or a mix that includes large, slow SATA HDDs.

Unfortunately the picture remains murky. For the most part people stay in their chosen lanes, whether through bias from limited experience with a brand, model or technology, or a little resistance towards options that might expand their understanding.

I find that the underlying factors in all of these discussions are threefold:
- defining requirement and framing the use case
- budget
- available technology and product

In my case I have decided to build a virtualisation platform that incorporates a virtualised NAS with full access to, and control of, dedicated storage via PCIE and SATA passthrough. It is clear to me that my requirements will benefit from using three different kinds of storage:
- very fast, high quality consumer NVME M.2 SSDs for the virtualisation host VM store
- cheap and slow but reliable, good quality NVME M.2 SSDs for the NAS
- cheap and even slower but reliable, good quality SATA HDDs for NAS backup

The virtualisation platform will likely be based on VMware ESXi because I'll have access to vSphere and vCenter through a VMUG Advantage subscription. On this platform TrueNAS operates purely as a general purpose NAS unrelated to the host hypervisor (no returning an ARC-accelerated ZFS volume back to host VMs). The direct-attached drives need to work with ESXi, which limits my selection to Samsung, Intel, Western Digital, Kingston and a few others that are detected reliably (even if not validated by VMware).

The server is an AMD Epyc 7452 with 32 cores and 128GB of DDR4-3200 on a motherboard with 5 x 16x PCIE 4.0 slots and 2 x 8x PCIE 4.0 slots. That means I can use cheap PCIE cards that bifurcate a 16x slot into 4 x 4x to feed four M.2 SSDs directly attached to the card, and install five cards to pass 20 separate NVME drives to TrueNAS. However, in this initial setup I will be using only two 2TB SSDs mounted this way and a single 4TB HDD for backup. From this HDD a backup of critical data will go to Backblaze, and some data may also go to Proton Drive if that proves possible.
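As a sanity check on the lane budget, here is a minimal back-of-the-envelope sketch (the slot counts are from my board; the per-lane throughput figures are nominal PCIE numbers, ignoring protocol overhead):

```python
# Rough PCIE lane budget for bifurcated M.2 carriers on Epyc (128 CPU lanes).
SLOTS_X16 = 5        # x16 slots, each bifurcated 4x4x4x4
DRIVES_PER_CARD = 4  # one M.2 drive per x4 link
LANES_PER_DRIVE = 4

drives = SLOTS_X16 * DRIVES_PER_CARD   # 20 drives
lanes_used = drives * LANES_PER_DRIVE  # 80 of 128 CPU lanes

# Nominal per-direction bandwidth per lane in GB/s (Gen3: 8 GT/s with
# 128b/130b encoding; Gen4 doubles it).
GEN3_PER_LANE, GEN4_PER_LANE = 0.985, 1.969

print(f"{drives} drives over {lanes_used} of 128 lanes")
print(f"per-drive ceiling: Gen3 x4 ~ {4 * GEN3_PER_LANE:.1f} GB/s, "
      f"Gen4 x4 ~ {4 * GEN4_PER_LANE:.1f} GB/s")
```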

The selection of the fast SSDs is relatively easy. Given that this is not an enterprise use case, where corporations typically demand every possible ROI and smash their hardware, high quality consumer grade SSDs will be an excellent choice - as long as the quality is there to hold the products to their advertised specifications. This last point is where the problems arise with consumer grade hardware.

I have been looking at the cheaper PCIE 3.0 drives and some of the cheaper 4.0 drives - Team Group, Silicon Power, and budget models from Samsung, Intel, Western Digital, Kingston and others - and many of the latest products have adequate performance for a NAS serving over 1Gbit ethernet or WiFi, or even 10Gbit ethernet when used with ARC and maybe a used enterprise drive providing L2ARC.

However, after reading about the failure rates of Team Group and Silicon Power SSDs on some forums, it is clear that while the specifications for performance and durability are fine for the NAS drives, the QA has failed. There is a lot of discussion on this forum about how these cheap NVME drives don't have the durability for NAS, but it looks more like a QA issue. Which raises the question: which brands, series and models have the QA to assure that their products are up to spec?

In all the articles and discussions about SSDs across various forums and review sites, not much is said about this. Most discussions about the cheaper drives focus on low endurance or terrible performance once the cache is exhausted under sustained throughput. However, for my use case, certainly, and probably for most homelabs that use a NAS for file serving and streaming, the endurance levels (if up to spec) and performance are fine. More than adequate for 1GbE or WiFi.
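To put rough numbers on "more than adequate", a minimal sketch comparing network line rate with budget-QLC throughput (the drive figures are assumptions for illustration, not measurements of any particular model):

```python
# Network payload rate vs. assumed QLC drive throughput, in MB/s.
def line_rate_mb_s(gbit: float, efficiency: float = 0.94) -> float:
    """Approximate TCP payload rate for a given link speed."""
    return gbit * 1000 / 8 * efficiency

# Assumed sustained figures for a budget QLC NVME drive (illustrative only):
qlc_read, qlc_write_post_cache = 2000, 150

for name, gbit in {"1GbE": 1, "10GbE": 10}.items():
    net = line_rate_mb_s(gbit)
    reads = "saturates" if qlc_read >= net else "limits"
    writes = "keep up with" if qlc_write_post_cache >= net else "fall behind"
    print(f"{name}: ~{net:.0f} MB/s payload; reads {reads} the link, "
          f"post-cache writes {writes} the link")
```

At 1GbE even the post-cache write floor keeps up; at 10GbE it does not, which is where the counter-arguments below come in.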

After looking for a good price-to-performance and quality/endurance ratio in a cheap drive I found the Intel 670p 2TB NVME M.2 SSD. This review paints an interesting picture of the drive:
Intel SSD 670p Review (QLC)
Intel expands its QLC SSD line again with the Intel SSD 670p. This M.2 SSD leverages 144-Layer QLC NAND and an improved dynamic cache
www.storagereview.com

It is only QLC and has a rated endurance of 740TBW, but that doesn't concern me because the specification puts it far in excess of the demand that will be placed on it in this NAS use case. What interests me more is the likelihood that Intel puts more into QA for this range of products, which should translate into a true-to-specification drive. If that is the reality then this drive might be an excellent candidate for a modest home NAS based on NVME that serves network clients (the Network part of NAS).
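As a quick endurance sanity check against that 740TBW rating, a minimal sketch (the daily write volume and write amplification factor are assumptions for a light file-serving NAS, not measurements):

```python
# How long does a 740 TBW rating last at home-NAS write rates?
TBW = 740                 # rated terabytes written (Intel 670p 2TB)
daily_writes_gb = 50      # assumed average host writes per day
write_amplification = 3   # assumed ZFS/QLC WA factor, pessimistic

tb_per_year = daily_writes_gb * write_amplification * 365 / 1000
years = TBW / tb_per_year
print(f"~{tb_per_year:.1f} TB of NAND writes/year -> ~{years:.0f} years to rated TBW")
```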

What I find interesting in the StorageReview benchmarking and analysis is that the 670p has a well-defined behaviour, especially when compared with the Corsair and Sabrent drives:

[StorageReview charts: 4K random read, VDI boot, VDI Monday login and VDI initial login results for the Intel 670p 2TB]


These are just synthetic tests, but they highlight qualities in the Intel drive that are desirable for a plodding-along NAS workhorse.

The final essential factor in the selection criteria is the price of the 670p. In Australia right now it is $150 AUD, which makes it less than twice the cost of a Seagate IronWolf 4TB 3.5" heavyweight HDD. It runs cool and is fast enough, especially behind ARC and L2ARC and in front of a single or mirrored IronWolf 4TB.
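For completeness, the price-per-terabyte arithmetic behind that comparison (the IronWolf price below is an assumed local street price, only there to make the ratio concrete):

```python
# AUD price-per-TB comparison; the HDD price is an assumption, not a quote.
drives = {
    "Intel 670p 2TB NVME": (150, 2),  # (price AUD, capacity TB)
    "Seagate IronWolf 4TB": (85, 4),  # assumed street price
}
for name, (price, tb) in drives.items():
    print(f"{name}: ${price} AUD -> ${price / tb:.0f}/TB")
```

The HDD still wins on $/TB by a wide margin; the SSD buys the IOPS, noise and power profile.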

It would be great to hear about other cheap PCIE 3.0 or 4.0 plodders that have the QA to back up their specifications and form the backbone of a solid home or SMB NAS. Cheers.

(originally posted at TrueNAS forums: Finding an M.2 NVME SSD that fits the home/SMB NAS budget, performance and QA requirements)
 

mr44er

Active Member
Feb 22, 2020
nickwalt said: "More than adequate for 1GbE or WiFi."
If you upgrade to 10G or higher in the next two years, you'll be glad you already have more headroom in your server. Or your use case expands... some more VMs needed, or "oh, GPU encoding is cool, but pushing video bits drops IOPS on the disks", etc.

Avoid consumer or 'cheap' SSDs with server stuff. At some point you will regret it.

Use redundancy with similar specs but different brands to push down the risk of total failure, bad quality, unforeseen firmware bugs...
If you boost spinning rust with SSDs (L2ARC, DB/WAL...), you get write amplification, and how much is sometimes unpredictable.

If you want to use the cheap SSDs anyway, at least triple your redundancy.
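A minimal sketch of why stacking redundancy helps, treating drives as independent with an assumed annual failure rate (real failures correlate, e.g. shared firmware bugs, so treat this as optimistic):

```python
# Chance that every copy in an n-way mirror fails within the same year,
# assuming independent failures and ignoring rebuild windows (optimistic).
afr = 0.015  # assumed 1.5% annual failure rate per drive

for n in (1, 2, 3):
    print(f"{n}-way: ~{afr ** n:.6%} chance all copies fail in a year")
```

Mixing brands, as suggested above, is precisely about keeping those failures closer to independent.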
 

pimposh

hardware pimp
Nov 19, 2022
Now fill that 670p up to 70% of its nominal capacity (assuming that, by rule of thumb, you set over-provisioning to 30%) and run I/O benchmarks.

Most likely you will be disappointed, not to say pissed off.
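For reference, the arithmetic behind that rule of thumb, as a minimal sketch (the 30% figure is the rule of thumb above, not a measured optimum):

```python
# Usable capacity after manual over-provisioning (the 30% rule of thumb).
raw_tb = 2.0
op_fraction = 0.30  # space left unpartitioned for the controller
print(f"{raw_tb:.0f} TB drive with {op_fraction:.0%} OP -> "
      f"{raw_tb * (1 - op_fraction):.1f} TB usable")
# The dynamic SLC cache shrinks as the drive fills, so sustained writes at
# 70% full look very different from fresh-out-of-box benchmark numbers.
```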
Buy nice or buy twice.

Just trust the others who already were there.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
That's a marginally acceptable desktop NVME and far from a good drive for virtualization.

It doesn't matter how they test or QA that drive or any other; the manufacturer's testing for consumer drives is not for your use case, the firmware isn't suited to your workload, and it will generally perform poorly and then much, much worse. I made the same mistake when I started out, buying consumer drives that looked great... not so great in actual use. Intel drives too ;)

I understand you're in AUS, but I've seen used enterprise NVME in AUS on eBay for fair prices (vs the USA). Even if you had to pay shipping from the UK or US, it seems you could find better deals on more appropriate hardware?
 

nexox

Well-Known Member
May 3, 2023
Avoid consumer drives; all the QA in the world won't make QLC good for serious usage or eliminate the need for PLP (power-loss protection).

I'd honestly prefer enterprise SATA SSDs over budget consumer NVMe. I don't know what the Australian market is like, but I would be getting quotes on international shipping for a pile of CloudSpeeds if I were you, to replace both your slower NVMe and the spinning drives. I would also mostly ignore M.2; unless you really need to save power or have space constraints, U.2 drives are quite a lot better.
 

nickwalt

Member
Oct 4, 2023
Brisbane
Great points, thank you. I've been loading VMware onto this server and the 1G network makes moving large install files a little painful, of course. I've been thinking about 10G, but really, once this stuff is set up the desire for more bandwidth diminishes. If I did implement 10G I would stay away from RJ45/Cat6 and go SFP+ only, to either copper (DAC) or fibre (probably single-mode, because transceivers and fibre are dirt cheap). SFP+ is lower power, lower heat and just all-round better tech than twisted-pair copper.

I will revisit the used enterprise SSD market again because I agree with you all that this level of drive is the desirable default - but it comes down to cost, and currently even the small enterprise SSDs are priced similarly to the higher-performance consumer SSDs (a few hundred AUD).

This subject of finding consumer class drives to satisfy specific low-performance use cases is still of interest to me - especially as the quality and performance goalposts of QLC keep moving. I think the 670p is an example of that, but of course we need evidence that indicates viability or not.

The key point here is "use case", and as pointed out this can change quite rapidly; people can find themselves needing those enterprise class drives either because they misjudged their use case or because conditions changed.

The SATA SSD market here in Australia is effectively dead.

The true benefit of M.2 NVME is fully realised on an enterprise platform based on either Epyc or Xeon. These motherboards have plenty of PCIE lanes (128 for the Epyc) and provide stable, reliable bifurcation and passthrough. I can buy a cheap case and populate the five or seven x16 PCIE slots with simple, cheap 4x4 bifurcation cards holding PCIE 3.0 drives that run cool. In my system I have five x16 slots and two x8 slots, all able to run at max speed all the time, directly into the CPU (no chipset hub to complicate things). NVME drives mounted this way have no cables, no backplanes, no HBAs, no drivers - just pure NVME all the way through to VMs and containers.
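As a quick way to verify that the passthrough worked from inside the NAS VM, a minimal sketch that lists NVME controllers via sysfs (assumes a Linux guest such as TrueNAS SCALE; the paths are standard Linux sysfs, nothing TrueNAS-specific):

```python
# List NVME controllers visible to a Linux guest, with model and PCIE address.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    model = (ctrl / "model").read_text().strip()
    addr = (ctrl / "address").read_text().strip()  # e.g. 0000:41:00.0
    print(f"{ctrl.name}: {model} @ {addr}")
```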

I believe that ZFS must become truly NVME native and it is only a matter of time.
 

nickwalt

Member
Oct 4, 2023
Brisbane
Talking about the Epyc platform - loads of cores and lanes - their desktop version (Threadripper) just got mighty interesting for those who can afford it.

For homelab though, nothing right now can beat the value proposition of Epyc Rome.
 

mr44er

Active Member
Feb 22, 2020
Every disk dies: some sooner, some later, some lock up from firmware bugs, some have bad-quality components. In the end it doesn't really matter. No brand is the best and no best SSD exists; if it did, it would be sold out regardless of the price :D
What you want is to keep your data healthy and to be prepared for any disk to die on day one. Create enough redundancy, and backup, backup, backup!
 

nickwalt

Member
Oct 4, 2023
Brisbane
That is the reality, but it is good to have some kind of predictability, which adherence to specification - ensured by adequate quality assurance - can provide.

The conditions of our specific use case are the other part of that equation, and a required consideration in determining our desired degree of fault tolerance and risk: designing redundancy, backup and tuning to match our level of risk aversion.

The review of failure types and rates across a sample of 1.4 million enterprise SSDs found that drives smaller than 4TB (if I remember correctly) fail suddenly at a rate an order of magnitude LESS than larger SSDs, and that TBW was not a significant factor to be overly concerned about.