NVMe SSD Shootout - New vs Used - For Ceph & ZFS


NablaSquaredG

Hey folks,

I'm about to buy a couple of NVMe SSDs for my new service servers.

Each server will have 3-4 NVMe SSDs (possibly all identical):
- 1-2 for Ceph DB / WAL / metadata (the rest of the Ceph cluster is spinning rust)
- 2 for local VM storage in a mirrored config

The local VM storage SSDs should be 1.92TB each; I'm not sure yet about the Ceph ones.


Anyway, I need some help deciding between different offers. Of course, we're looking at U.2 enterprise SSDs with power loss protection and all the nice stuff, not consumer drives.

| Name | Capacity (TB) | TBW (PB) | 4K Rand Read IOPS (datasheet) | 4K Rand Write IOPS (datasheet) | Net price (€) | Capacity (GB/€) | TBW (TB/€) | Read IOPS/€ | Write IOPS/€ |
|---|---|---|---|---|---|---|---|---|---|
| Kingston DC1500M (new) | 1.92 | 3.5 | 510,000 | 220,000 | 210.00 | 9.14 | 16.67 | 2,429 | 1,048 |
| PM9A3 1.92TB, U.2 (new) | 1.92 | 2.73 | 740,000 | 130,000 | 194.10 | 9.89 | 14.06 | 3,812 | 670 |
| P4510 4TB (used) | 4.00 | 5.67 | 636,500 | 111,500 | 280.25 | 14.27 | 20.23 | 2,271 | 398 |
| Micron 7300 PRO 3.84TB (used) | 3.84 | 8.82 | 520,000 | 70,000 | 299.25 | 12.83 | 29.47 | 1,738 | 234 |
| SanDisk Skyhawk SDLC2LLR-038T-3NAW 3.84TB (used) | 3.84 | 3.1536 | 250,000 | 47,000 | 171.00 | 22.46 | 18.44 | 1,462 | 275 |
| PM963 3.84TB (used) | 3.84 | 4.9194 | 430,000 | 40,000 | 223.25 | 17.20 | 22.04 | 1,926 | 179 |
| PM963 960GB (used) | 0.96 | 1.366 | 350,000 | 30,000 | 52.25 | 18.37 | 26.14 | 6,699 | 574 |
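
In case anyone wants to reproduce or extend the per-€ columns, this is roughly the arithmetic behind them as a minimal Python sketch (only the first three rows are shown, using the same datasheet figures and net prices as the table above):

```python
# Rough sketch of the per-€ columns above: capacity, rated endurance (TBW)
# and datasheet 4K random IOPS, each divided by the net price.
# Figures are (capacity_tb, tbw_pb, read_iops, write_iops, net_price_eur).
drives = {
    "Kingston DC1500M (new)": (1.92, 3.50, 510_000, 220_000, 210.00),
    "PM9A3 1.92TB U.2 (new)": (1.92, 2.73, 740_000, 130_000, 194.10),
    "P4510 4TB (used)":       (4.00, 5.67, 636_500, 111_500, 280.25),
}

for name, (cap_tb, tbw_pb, r_iops, w_iops, price) in drives.items():
    print(name)
    print(f"  GB per €:         {cap_tb * 1000 / price:8.2f}")
    print(f"  TBW (TB) per €:   {tbw_pb * 1000 / price:8.2f}")
    print(f"  read IOPS per €:  {r_iops / price:8.0f}")
    print(f"  write IOPS per €: {w_iops / price:8.0f}")
```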

So right now, my top picks are the Kingston DC1500M vs the PM9A3... The Kingston has more write endurance and more write IOPS, while the PM9A3 has more read IOPS, PCIe 4.0, and far faster sequential reads (Kingston 3,300 MB/s vs Samsung 6,800 MB/s, which is more than a PCIe 3.0 x4 link could even carry). I honestly don't know which one to pick.


Any ideas, suggestions or alternatives?


I'm considering picking up some of the PM963 960GB as cheap OS drives or whatever, as they offer nice performance and capacity for the money. The SanDisk would probably be good if I were looking for capacity storage and had a flash-only cluster.


EDIT:
I just realised the Kingston is an open-channel SSD... I'm not sure what to think about that.
 

ano

The PM9A3 and the Kioxia CD6 and CM6 are what you want as far as NVMe Gen4 with oomph at a good price. I'm deploying them for misc Ceph tests this month for use as journal devices, but also for an all-NVMe cluster and an all-SAS-SSD cluster.


Oh, and apparently the new 7400 series too, but I have not tested them myself.
 

NablaSquaredG

> Kioxia CD6 and CM6 are what you want as far as NVMe Gen4 with oomph at a good price.
Yeah, sadly even the CD6 is twice as expensive as the PM9A3 :x

There's the 7450 PRO, which has fallen considerably in price in the last couple of days (€250 net to €210 net), so it's just a bit more expensive than the PM9A3.
It comes in at 800k IOPS read / 120k IOPS write, 6,800 MB/s sequential read and 2,700 MB/s sequential write, so it's basically the opposite of the Kingston, sacrificing a bit of write performance for more read performance.

So I guess it's between the PM9A3 and the 7450 PRO now; I don't think I like the concept of an open-channel SSD (the Kingston) for my use case.
 

rgysi

@ano I would be interested to hear how the all-NVMe cluster compares to the all-SAS-SSD cluster.

Ceph currently has some overhead, so you don't reach that many IOPS on SSDs anyway. I saw they are working on a new OSD implementation called Crimson that is more tailored towards SSDs.

My thought was that the 24G SAS SSDs from Kioxia could be a good choice.
 

Sean Ho

rgysi's note about Ceph performance is important. Do you have the cluster size and client count to leverage Ceph's scalability? Single-client benches are really going to disappoint you.

For the mirrored pairs for local VM storage, sure, you'll see a difference in performance between the various NVMe drives (at least in synthetic benches).
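
If you do want numbers that say something about the cluster rather than a single client, one crude option is to run several rados bench instances in parallel (ideally from different hosts) instead of one. A rough sketch, assuming a throwaway pool named "bench" already exists and the parameters are only starting points:

```python
# Rough sketch: approximate multi-client load by running several
# "rados bench" processes at once instead of a single one.
# Assumes a throwaway pool named "bench" already exists; ideally the
# clients would be spread over separate hosts rather than one machine.
import subprocess

POOL, CLIENTS, SECONDS = "bench", 4, 60

procs = [
    subprocess.Popen([
        "rados", "bench", "-p", POOL, str(SECONDS), "write",
        "-b", "4096", "-t", "16",        # 4K objects, 16 in flight per client
        "--no-cleanup", "--run-name", f"client{i}",
    ])
    for i in range(CLIENTS)
]
for p in procs:
    p.wait()
# Clean up afterwards with: rados -p bench cleanup --run-name clientN (per client)
```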
 

NablaSquaredG

> Do you have the cluster size and client count to leverage Ceph's scalability? Single-client benches are really going to disappoint you.
I'm well aware of the drawbacks, but Ceph is simply the least painful solution to self-healing, replicated storage.

> Ceph currently has some overhead, so you don't reach that many IOPS on SSDs anyway. I saw they are working on a new OSD implementation called Crimson that is more tailored towards SSDs.
Yeah, I'm not using SSDs for bulk storage anyway. Storage is spinning rust; the SSDs are just there to make it a bit more bearable by not having the WAL, DB and metadata on the slow HDDs.
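
For anyone following along, that split is set up at OSD-creation time, roughly like this with ceph-volume. A sketch only: the device names are made up, and you would normally carve one DB partition or LV on the NVMe per HDD-backed OSD:

```python
# Sketch of the HDD-data / NVMe-DB split at OSD creation time.
# Device names are made up; adapt (and double-check) before running anything.
hdds     = ["/dev/sda", "/dev/sdb", "/dev/sdc"]                    # data on spinning rust
db_parts = ["/dev/nvme0n1p1", "/dev/nvme0n1p2", "/dev/nvme0n1p3"]  # one DB partition per OSD

for hdd, db in zip(hdds, db_parts):
    # With only --block.db given, BlueStore keeps the WAL on the DB device as well.
    print(f"ceph-volume lvm create --bluestore --data {hdd} --block.db {db}")
```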

> Also, datasheet specs are primarily useful for comparisons within a manufacturer, and sometimes not even then, if they don't provide details on testing methodology.
It's the best we have. It's difficult to find reviews and numbers for enterprise hardware, and I can't just order all of them and test them like the big companies do.
All the IOPS figures are 4K random; that kinda seems to be the established standard now. And assuming that every manufacturer cheats, it's somehow comparable again :D
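
And if samples do land on the bench, the usual sanity check against the datasheet is a 4K random fio run straight against the raw device. A rough sketch only: the device path, queue depth, job count and runtime are illustrative, the write pass is destructive, and a proper steady-state result would also need preconditioning:

```python
# Rough sanity check of datasheet 4K random IOPS with fio (needs root).
# WARNING: the randwrite pass destroys data on DEV. Device path, queue depth,
# job count and runtime are only illustrative starting points.
import json
import subprocess

DEV = "/dev/nvme1n1"   # hypothetical test device

def fio_4k_iops(rw: str) -> float:
    out = subprocess.run(
        ["fio", "--name=check", f"--filename={DEV}", f"--rw={rw}",
         "--bs=4k", "--ioengine=libaio", "--direct=1",
         "--iodepth=32", "--numjobs=4", "--group_reporting",
         "--time_based", "--runtime=120", "--output-format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    job = json.loads(out)["jobs"][0]
    return job["read" if rw == "randread" else "write"]["iops"]

print("4K randread IOPS: ", round(fio_4k_iops("randread")))
print("4K randwrite IOPS:", round(fio_4k_iops("randwrite")))
```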
 

Sean Ho

> I'm well aware of the drawbacks, but Ceph is simply the least painful solution to self-healing, replicated storage.
I hear ya; I'm running a small (5-node) Rook cluster at home for similar reasons. I just don't expect blazing performance out of it.

Note also that RocksDB's use of the NVMe goes in large jumps: with the default max_bytes_for_level_base and _multiplier, it effectively uses either ~4GB, ~30GB, or ~286GB per OSD. So you'll generally have a lot of unused space on the NVMe (which isn't a bad thing for endurance), unless you partition/namespace it, which is a bit of a pain with Ceph.
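
For anyone wondering where those three numbers come from, a back-of-the-envelope sketch: assuming the stock max_bytes_for_level_base of 256 MB, a multiplier of 10, and roughly a GB set aside for the WAL, a level only benefits from the fast device if it fits there entirely, so the usable DB size jumps by the size of the next whole level:

```python
# Back-of-the-envelope for the ~4 / 30 / 286 GB jumps: RocksDB levels grow
# as base * multiplier^(level-1), and a level only helps if it fits on the
# fast device entirely, so usable DB sizes are the cumulative level sizes
# plus a rough ~1 GB allowance for the WAL.
base_gb, multiplier, wal_gb = 0.256, 10, 1.0

used = wal_gb
for level in range(1, 5):
    used += base_gb * multiplier ** (level - 1)
    if level >= 2:
        print(f"DB device holding L1..L{level}: ~{used:.1f} GB actually used")
```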
 

mrpasc

I would avoid the Samsung datacenter SSDs, at least the OEM ones, due to the fact that there is absolutely no support for them. It's a pain to get firmware updates and so on. If you can get branded ones (Dell, HP, Lenovo) it's a little bit easier.
 

i386

Any specific reason why Solidigm (ex-Intel) SSDs were not considered?

Edit: or Western Digital?
 

NablaSquaredG

> Any specific reason why Solidigm (ex-Intel) SSDs were not considered?
>
> Edit: or Western Digital?
Because they're at least 50% more expensive ;)


After all, I don't swim in money - if I did, I wouldn't consider buying used stuff. But if you know of some nice offers, feel free to post them!
 

i386

> After all, I don't swim in money
Okay, after reading one of the first lines I had the impression that this stuff was for some business :D

I asked about Intel because of firmware updates and the trouble that is finding firmware for OEM SSDs.
 

Sean Ho

OP's table included the P4510, which is Intel. Previous-gen drives like the P3600 are also very affordable nowadays. New Intel/Solidigm is certainly more in the "for-business" price bracket.