NVMe options for homelab


mstoebich

New Member
Mar 26, 2017
So I recently acquired a new server for my lab, and now I'm debating storage options. I'm looking for around 2TB; nothing mission critical is planned for this machine, so a single SSD would be enough for now. Since the price difference between SATA and NVMe is negligible and SAS (12G) is incredibly expensive, I've decided to look for either M.2 (with an adapter) or U.2 TLC drives - but there's a lot to choose from.

My workload is going to be mostly OpenShift clusters, maybe a little bit of data science on logs, metrics and IoT data, and the usual suspects of VMs (AD/DNS/firewall etc.), all on top of ESXi 7.

Consumer drives like the 970 Evo Plus would be the cheapest at around 100€/TB, prosumer SSDs like the 980 Pro are about 1.5x the price, and enterprise U.2 drives like the PM9A3 are at least 1.75x. But I'm not really sure what to look out for.
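To put those multipliers into absolute numbers, here's a quick back-of-the-envelope calculation for 2TB (using the rough per-TB figures above, so purely illustrative, not actual quotes):

Code:
# rough cost comparison for ~2TB, based on the approximate per-TB prices above
# (illustrative figures only, not current market prices)
capacity_tb = 2
consumer = 100                # €/TB, 970 Evo Plus class
prosumer = consumer * 1.5     # €/TB, 980 Pro class
enterprise = consumer * 1.75  # €/TB, PM9A3-class U.2
for name, per_tb in [("consumer", consumer), ("prosumer", prosumer), ("enterprise U.2", enterprise)]:
    print(f"{name}: ~{per_tb * capacity_tb:.0f}€ for {capacity_tb}TB")

Which comes out to roughly 200€ vs. 300€ vs. 350€ for the whole drive.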

Would I see any benefits from investing more money into better drives?



-- edit on workload --

So as stated above, I'm going to focus mostly on stuff running on top of OpenShift - mainly to improve my skillset as a DevOps engineer/SRE.
First order of business will be building a complete CI/CD platform prototype for work, since I've tasked myself with overhauling the multiple dumpster fires that make up the current setup. Next up, I'm planning to use this as a sandbox for production implementations. I also might invest some time in learning a programming language or two (JS and Go, probably) and try building web applications using more modern approaches like microservices, serverless and so on. But my focus is on the operational parts of running/observing/debugging such services.
 
Last edited:
  • Like
Reactions: Brian Puccio

i386

Well-Known Member
Mar 18, 2016
Germany
Would I see any benefits from investing more money into better drives?
Did you see the recent review of the Micron SSD on the front page?
(The device with PLP outperformed consumer SSDs in sustained workloads...)
 

mstoebich

New Member
Mar 26, 2017
Just read it - very interesting writeup, it clears up a few things.

But not all of them - I'm still unsure if investing twice the money or more is worth it. The 7400 Pro is somewhere in the ballpark of 250€/TB, so about 2.5x the price of a 970 Evo Plus.
I know that PLP is kind of a big deal for making sure data gets written properly. R/W performance outside of the caches also matters a lot - and can be close to spinners on cheaper SSDs (I think this was mentioned in a QLC SSD review somewhere). But I'm not sure if I'm ever going to fill those caches with my daily shenanigans. I'm also not going to be mad if I go from 4 gigabytes per second to 1 or 2 at 80% capacity - on one hand because I should have invested in more storage by that point, and on the other hand because it is still ridiculously fast. And yes, I know sequential R/W is only a partial picture and probably not really representative of real-world usage.
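Just to get a feeling for the scale here, a quick back-of-the-envelope calculation (cache size and speeds below are made-up assumptions for illustration, not specs of any particular drive):

Code:
# back-of-the-envelope: how long until a pseudo-SLC write cache is exhausted,
# and what a large sequential copy looks like afterwards
# (all numbers are assumptions for illustration, not drive specs)
cache_gb = 100          # assumed dynamic SLC cache size
burst_gb_s = 4.0        # assumed write speed while the cache has room
sustained_gb_s = 1.5    # assumed write speed once the cache is full
copy_gb = 500           # hypothetical large copy/restore job

fill_s = cache_gb / burst_gb_s
total_s = fill_s + max(copy_gb - cache_gb, 0) / sustained_gb_s
print(f"cache full after ~{fill_s:.0f}s")
print(f"{copy_gb}GB copy: ~{total_s:.0f}s total, ~{copy_gb / total_s:.2f} GB/s effective")

So even in that made-up worst case the whole copy still averages well above spinner speeds - which is exactly why I'm not sure the premium is justified for my usage.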

I also just researched those Micron drives a little, and it seems there's an older version of that drive (the 7300 Pro) that's just ~10% more expensive than a 980 Pro in M.2 and ~20% more in U.2. Might be worth a consideration.


So I might rephrase my question: am I going to run into issues that could cost me more in the long run (like drives dying or massive performance degradation), or am I just not getting the full performance out of the full capacity of the drive?
TCO and future headaches are major factors in this decision.

I'm also going to add more details about my usage pattern to clarify what I'm aiming for.
 
Last edited:
  • Like
Reactions: Brian Puccio

Parallax

Active Member
Nov 8, 2020
London, UK
Remember that for Micron drives you will likely need the community driver Fling to get them working: Community NVMe Driver for ESXi

Many consumer drives will not work with ESXi 7.0 - something to bear in mind. A quick search through Reddit will turn up a bunch of sad stories from people who bought good consumer NVMe deals but couldn't get them working.
 

mstoebich

New Member
Mar 26, 2017
It's not about the capacity, it's more about the sustained I/O (reading/writing over long periods).
So is this more of an issue with IOPS or an issue with aging drives?
Sorry if all that sounds pretty stupid - I've been trying to wrap my head around this for a while now and it doesn't really seem to click with me. Storage seems to be a lot more complicated than consumer hardware made me believe...

Remember that for Micron drives you will likely need the community driver Fling to get them working: Community NVMe Driver for ESXi

Many consumer drives will not work with ESXi 7.0 - something to bear in mind. A quick search through Reddit will turn up a bunch of sad stories from people who bought good consumer NVMe deals but couldn't get them working.
Oh no... I would never have thought about driver issues with NVMe - I was under the impression it was like SATA or SAS disks. Seems logical when you think about it, since everything is PCIe now.

Might need some more research
 

zer0sum

Well-Known Member
Mar 8, 2013
Remember that for Micron drives you will likely need the community driver Fling to get them working: Community NVMe Driver for ESXi

Many consumer drives will not work with ESXi 7.0 - something to bear in mind. A quick search through Reddit will turn up a bunch of sad stories from people who bought good consumer NVMe deals but couldn't get them working.
It's actually pretty trivial to get them working :)
You just swap the NVMe driver VIB out for an older one.
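Roughly (the exact VIB name and file path depend on your build, so check first): list what's installed with "esxcli software vib list", remove the inbox NVMe driver with "esxcli software vib remove -n <name from that list>", install the older driver with "esxcli software vib install -v /path/to/driver.vib", then reboot the host.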

 
  • Like
Reactions: name stolen

Parallax

Active Member
Nov 8, 2020
London, UK
It's actually pretty trivial to get them working :)
You just swap the NVMe driver VIB out for an older one.

Maybe, but I would still be worried about how long that would keep working, plus it's more faffing around to do each time you patch. I don't know about others, but I really try to keep the admin overhead for my home systems down as much as possible.

If you've assembled something from old parts, or upgraded a previously working setup to 7.0, then I understand you may want to do this sort of thing for a little while. But for the OP, who is buying new, surely s/he is better off getting something supported natively from day one, especially as it needn't cost much more to do so?
 

mstoebich

New Member
Mar 26, 2017
I'm pretty convinced now to go the enterprise route. I'd like to avoid hacky workarounds just to get my storage to work.

Also, it seems my pricing figures from post #1 were a little off: a 1.92TB PM9A3 is about the same price as a 2TB 980 Pro. PSA: don't compare average eBay prices with Amazon prices...

I think the PM9A3 will probably be the best choice, since it has good reviews and it's the most affordable.
 
  • Like
Reactions: JoeKun

gigatexal

I'm here to learn
Nov 25, 2012
Portland, Oregon
alexandarnarayan.com
So I recently acquired a new server for my lab, and now I'm debating storage options. I'm looking for around 2TB; nothing mission critical is planned for this machine, so a single SSD would be enough for now. Since the price difference between SATA and NVMe is negligible and SAS (12G) is incredibly expensive, I've decided to look for either M.2 (with an adapter) or U.2 TLC drives - but there's a lot to choose from.
Whether justified or not, I tend to gravitate towards the more expensive/enterprise-y hardware. It tends to come with better warranties, tends to go through more stringent QA processes (though not always), and leads to fewer headaches down the road.

And if this is for work/study, write it off on your taxes.
 

mstoebich

New Member
Mar 26, 2017
I just found a minor inconvenience in all of this. My server is an HPE CL2200 Gen10, so basically a Gigabyte R281-N40. The spec sheets and the eBay listing both suggested that the 4 orange drive bays in the front are in fact U.2 NVMe - which isn't technically wrong, but sadly not the whole story. The backplane would support NVMe drives, but only if the backside is populated with an extra module (HP P/N P02177-001), which my unit sadly didn't come with. This part seems to be incredibly hard to come by, so I'm stuck with either 12G SAS, which would require a RAID card and more expensive SAS 12G SSDs, or less performant SATA 6G SSDs. So that is a no-go for now.

The PM9A3 is also available in M.2, but that version has reduced performance due to a smaller power envelope. So I think buying something like an Asus Hyper M.2 adapter together with that drive might be the best option for now.


I felt like this needed to be documented somewhere - maybe some future googlers will end up here and find it helpful.
 
  • Like
Reactions: name stolen

edge

Active Member
Apr 22, 2013
I just found a minor inconvenience in all of this. My server is an HPE CL2200 Gen10, so basically a Gigabyte R281-N40. The spec sheets and the eBay listing both suggested that the 4 orange drive bays in the front are in fact U.2 NVMe - which isn't technically wrong, but sadly not the whole story. The backplane would support NVMe drives, but only if the backside is populated with an extra module (HP P/N P02177-001), which my unit sadly didn't come with. This part seems to be incredibly hard to come by, so I'm stuck with either 12G SAS, which would require a RAID card and more expensive SAS 12G SSDs, or less performant SATA 6G SSDs. So that is a no-go for now.

The PM9A3 is also available in M.2, but that version has reduced performance due to a smaller power envelope. So I think buying something like an Asus Hyper M.2 adapter together with that drive might be the best option for now.


I felt like this needed to be documented somewhere - maybe some future googlers will end up here and find it helpful.
Equating an HPE server with an industry-standard server (Gigabyte motherboard) is a sad misunderstanding of HPE. HPE does substantial extra engineering to ensure their systems are incompatible with non-HPE parts.

HPE - experts at putting the Proprietary in ISS.

If you have a couple of open PCIe slots, just buy a cheap PCIe NVMe host card (go for single-NVMe cards; I would hate to have to figure out how HPE made bifurcation recognize only their own cards). Good luck.
 

nk215

Active Member
Oct 6, 2015
You're overthinking this. For an ESXi home lab, a consumer NVMe drive works fine as data storage. I have had all kinds of NVMe drives in my three ESXi hosts for years without any issue.

A Samsung 970 + a cheap PCIe adapter works fine. I also have a cheap Samsung PM963 (an OEM drive with PLP) + PCIe adapter, and it also works fine. I do have my ESXi hosts on a strong UPS (they can run for hours on big marine batteries).

If I were to buy NVMe today for my ESXi hosts, I wouldn't hesitate to use the cheap Samsung 980 (without DRAM cache) for a home lab. My mission-critical ESXi hosts are still on older tech such as Intel P3700 and S3700 drives, where speed is not as important as endurance and stability.