List of NVMe drives that support multiple namespaces or other ways to divide one up


billc.cn

Member
Oct 6, 2017
I am evaluating all-flash VMware vSAN for a 3-node cluster and the performance is abysmal even with NVMe cache disks. I think I can get better performance with more disk groups. However, I only have one PCIe x8 slot available, and in ESXi each disk can only be claimed by one disk group, so I need a way to split a physical NVMe disk into multiple logical disks.

The built-in way to do this is with namespaces, but I have only managed to find one model that supports more than one namespace (Samsung PM1725a). Does anyone know any other disks that do?

Alternatively, is there any other way to split the drive or fit more than one drive into one PCIe slot?
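(For anyone wanting to check their own drives from Linux: nvme-cli should show how many namespaces a controller supports - /dev/nvme0 below is just a placeholder device.)

Code:
# nn = maximum number of namespaces the controller supports;
# oacs bit 3 = Namespace Management/Attachment commands supported
nvme id-ctrl /dev/nvme0 | grep -E "^nn |^oacs"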
 

whitey

Moderator
Jun 30, 2014
My bad for digging up an ancient thread, but I did want to point out I saw this and got a bit excited. Now I assume I can add one of these per OCuLink connector off of my X11SPH boards and go ham! 3.84TB NVMe Micron 9300 drives are hovering around $800 each on eBay and they support up to 32 namespaces each.

VMware vSAN + NVMe namespace magic: Split 1 SSD into 24 devices for great storage performance

Paging @Rand__

EDIT: Wonder if the HGST SN200 series drives support NVMe namespaces. I've been digging and can't sort that out quite yet, so the hunt continues. Also thinking about P4801X NVMe namespace support. Hmmm, off to research.
 
  • Like
Reactions: arglebargle

vangoose

Active Member
May 21, 2019
Canada
whitey said:
EDIT: Wonder if the HGST SN200 series drives support NVMe namespaces...
I was playing with SN260 namespaces yesterday. I tried the nvme command in Linux but it didn't work.
I only found HGST Device Manager 3.4 for the Windows platform, and yes, it supports 128 namespaces.

I ran some performance tests and no, there is very little difference between 1 namespace and multiple namespaces in RAID 0.
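For anyone who wants to try the same thing, the nvme-cli namespace-management sequence looks roughly like this - the sizes (in 512-byte blocks) are example values and 23 is the cntlid my SN260 reports in id-ctrl, assuming that field is decimal. This is the sort of thing I tried, and the SN260 wouldn't accept it from Linux:

Code:
# WARNING: destroys data - delete the existing full-size namespace first
nvme delete-ns /dev/nvme0 --namespace-id=1
# create two ~3.2TB namespaces and attach both to controller 23
nvme create-ns /dev/nvme0 --nsze=6257901568 --ncap=6257901568 --flbas=0
nvme create-ns /dev/nvme0 --nsze=6257901568 --ncap=6257901568 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=23
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=23
nvme ns-rescan /dev/nvme0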
 

vangoose

Active Member
May 21, 2019
Canada
Here is the output from Optane 900P
Code:
nvme id-ctrl /dev/nvme1n1
NVME Identify Controller:
vid       : 0x8086
ssvid     : 0x8086
sn        : PHMB742400NT280CGN
mn        : INTEL SSDPED1D280GA
fr        : E2010325
rab       : 0
ieee      : 5cd2e4
cmic      : 0
mdts      : 5
cntlid    : 0
ver       : 0
rtd3r     : 0
rtd3e     : 0
oaes      : 0
ctratt    : 0
rrls      : 0
oacs      : 0x7
acl       : 3
aerl      : 3
frmw      : 0x2
lpa       : 0x2
elpe      : 63
npss      : 0
avscc     : 0
apsta     : 0
wctemp    : 0
cctemp    : 0
mtfa      : 0
hmpre     : 0
hmmin     : 0
tnvmcap   : 0
unvmcap   : 0
rpmbs     : 0
edstt     : 0
dsto      : 0
fwug      : 0
kas       : 0
hctma     : 0
mntmt     : 0
mxtmt     : 0
sanicap   : 0
hmminds   : 0
hmmaxd    : 0
nsetidmax : 0
anatt     : 0
anacap    : 0
anagrpmax : 0
nanagrpid : 0
sqes      : 0x66
cqes      : 0x44
maxcmd    : 0
nn        : 1
oncs      : 0x6
fuses     : 0
fna       : 0x4
vwc       : 0
awun      : 0
awupf     : 0
nvscc     : 0
nwpc      : 0
acwu      : 0
sgls      : 0
mnan      : 0
subnqn    :
ioccsz    : 0
iorcsz    : 0
icdoff    : 0
ctrattr   : 0
msdbd     : 0
ps    0 : mp:18.00W operational enlat:0 exlat:0 rrt:0 rrl:0
          rwt:0 rwl:0 idle_power:- active_power:-




This is SN260
Code:
nvme id-ctrl /dev/nvme0n1
NVME Identify Controller:
vid       : 0x1c58
ssvid     : 0x1c58
sn        : SDM000063B53
mn        : HUSMR7664BHP301
fr        : KNGND110
rab       : 7
ieee      : 000cca
cmic      : 0
mdts      : 0
cntlid    : 23
ver       : 10201
rtd3r     : 5b8d80
rtd3e     : 30d400
oaes      : 0x100
ctratt    : 0
rrls      : 0
oacs      : 0xe
acl       : 255
aerl      : 7
frmw      : 0xb
lpa       : 0x3
elpe      : 255
npss      : 11
avscc     : 0x1
apsta     : 0
wctemp    : 357
cctemp    : 360
mtfa      : 0
hmpre     : 0
hmmin     : 0
tnvmcap   : 6408091205632
unvmcap   : 0
rpmbs     : 0
edstt     : 0
dsto      : 0
fwug      : 0
kas       : 0
hctma     : 0
mntmt     : 0
mxtmt     : 0
sanicap   : 0
hmminds   : 0
hmmaxd    : 0
nsetidmax : 0
anatt     : 0
anacap    : 0
anagrpmax : 0
nanagrpid : 0
sqes      : 0x66
cqes      : 0x44
maxcmd    : 0
nn        : 128
oncs      : 0x3f
fuses     : 0
fna       : 0x2
vwc       : 0
awun      : 0
awupf     : 0
nvscc     : 1
nwpc      : 0
acwu      : 0
sgls      : d0001
mnan      : 0
subnqn    : nqn.2017-03.com.wdc:nvme-solid-state-drive. VID:1C58.        MN:HUSMR7664BHP301     .SN:SDM000063B53
ioccsz    : 0
iorcsz    : 0
icdoff    : 0
ctrattr   : 0
msdbd     : 0
ps    0 : mp:25.00W operational enlat:15000 exlat:15000 rrt:0 rrl:0
          rwt:0 rwl:0 idle_power:- active_power:-
ps    1 : mp:24.00W operational enlat:15000 exlat:15000 rrt:1 rrl:1
          rwt:1 rwl:1 idle_power:- active_power:-
ps    2 : mp:23.00W operational enlat:15000 exlat:15000 rrt:2 rrl:2
          rwt:2 rwl:2 idle_power:- active_power:-
ps    3 : mp:22.00W operational enlat:15000 exlat:15000 rrt:3 rrl:3
          rwt:3 rwl:3 idle_power:- active_power:-
ps    4 : mp:21.00W operational enlat:15000 exlat:15000 rrt:4 rrl:4
          rwt:4 rwl:4 idle_power:- active_power:-
ps    5 : mp:20.00W operational enlat:15000 exlat:15000 rrt:5 rrl:5
          rwt:5 rwl:5 idle_power:- active_power:-
ps    6 : mp:19.00W operational enlat:15000 exlat:15000 rrt:6 rrl:6
          rwt:6 rwl:6 idle_power:- active_power:-
ps    7 : mp:18.00W operational enlat:15000 exlat:15000 rrt:7 rrl:7
          rwt:7 rwl:7 idle_power:- active_power:-
ps    8 : mp:17.00W operational enlat:15000 exlat:15000 rrt:8 rrl:8
          rwt:8 rwl:8 idle_power:- active_power:-
ps    9 : mp:16.00W operational enlat:15000 exlat:15000 rrt:9 rrl:9
          rwt:9 rwl:9 idle_power:- active_power:-
ps   10 : mp:15.00W operational enlat:15000 exlat:15000 rrt:10 rrl:10
          rwt:10 rwl:10 idle_power:- active_power:-
ps   11 : mp:14.00W operational enlat:15000 exlat:15000 rrt:11 rrl:11
          rwt:11 rwl:11 idle_power:- active_power:-
See nn field.
 

Rand__

Well-Known Member
Mar 6, 2014
whitey said:
Paging @Rand__ ...
Weird, I didn't get pinged on Sunday - just rechecked it.

On topic, I actually considered starting a thread to gather the drives that do support namespaces as info seems to be scarce - I tried finding out if P4800x supported them but came up empty.

In general (and sorry to repeat myself), namespaces (like additional physical drives) will only provide a speedup if your workload can use them, i.e. multiple concurrent threads.

Edit:
Also, I wonder whether multiple namespaces are handled differently than, say, multiple partitions on a drive.
The latter did not really provide speedups on regular NVMe drives (e.g. a P3700 as ZFS SLOG), while Optane did not mind. Probably something to do with spare controller capacity or something similar...
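As a rough illustration, the kind of workload where extra namespaces (or extra drives) can help is several parallel jobs rather than a single thread - something along these lines with fio, where the device names are just examples and writing to raw devices destroys their contents:

Code:
# options before the first --name are global; one job per namespace, run in parallel
fio --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
    --runtime=60 --time_based --group_reporting \
    --name=ns1 --filename=/dev/nvme0n1 \
    --name=ns2 --filename=/dev/nvme0n2 \
    --name=ns3 --filename=/dev/nvme0n3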
 

vanfawx

Active Member
Jan 4, 2015
Vancouver, Canada
Namespaces are the NVMe equivalent of a LUN. On Linux you'll end up with "nvme0n1", "nvme0n2" and so on for each namespace hosted on the same NVMe controller (nvme0 in this instance). Each namespace shows up as a separate block device.
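For example, to see what a controller exposes (nvme-cli; /dev/nvme0 is a placeholder):

Code:
# NSIDs attached to the controller, and the block devices they appear as
nvme list-ns /dev/nvme0
ls -l /dev/nvme0n*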

My understanding is namespaces get more use in the SAN space (NVMe SANs for example) as NVMe doesn't support LUNs, only namespaces.

Not sure how much detail here will just be white noise :) I can add more information if you want.
 
  • Like
Reactions: itronin

vanfawx

Active Member
Jan 4, 2015
Vancouver, Canada
The only thing I was going to add is that the main reason for the difference is the command set: SAS/SATA use the SCSI command set and NVMe uses the NVMe command set. Because of this, they define block storage differently - LUNs on SCSI and namespaces on NVMe.

I've only run into the namespaces thing when researching my current job's new NVMe-based SAN, and how to do NVMe over Fabrics to bring NVMe to the hosts via FC. We're still doing SCSI emulation, as the new FC HBAs we need will be part of the farm refresh that's hopefully this year.

For a stand-alone NVMe drive though, I've had no issues getting full performance via a single namespace. However, if you run into situations where partitions are not supported and only full block devices are, I can see this being useful to split up a big NVMe drive for things like Ceph.
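For instance, something like one OSD per namespace (hypothetical device names; standard ceph-volume syntax):

Code:
# one Ceph OSD per namespace of the same physical drive
ceph-volume lvm create --data /dev/nvme0n1
ceph-volume lvm create --data /dev/nvme0n2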

If you're interested, IBM and Brocade have released a "for dummies" book that goes over NVMe over Fabrics (FC)

Hope this extra information is helpful!
 
  • Like
Reactions: Mehmet and Rand__

Rand__

Well-Known Member
Mar 6, 2014
Interesting read, thanks - especially their take on NVMe over RDMA ;)

I assume when you say you coaxed max performance out of a single namespace you used higher queue depths/thread counts - then of course it's no issue.

The idea of that Micron article was to fake more drives to make multiple worker VMs perform better. It's like every storage vendor's article - carefully optimized to suit the desired result ;) If they had chosen fewer VMs the speedup would not have been so great; with more VMs they would have saturated the drives earlier.

But it's a good idea that can be used in the right situation (limited hardware) if the software in use does not scale properly, as vSAN does not - the expectation that it uses 100% of the capability of its underlying drives for a single VM is wrong; for whatever reason it [artificially] limits the performance [probably to keep reserves for many more VMs] - but if you create n datastores on a single device you get n times the single limited performance, thus tricking vSAN.
Of course the maximum total performance of the drive is not increased by that, it's just put to better use.
This will not work in all (most) situations, since it depends heavily on the use case and the scalability of the software used.
 
  • Like
Reactions: vanfawx

vangoose

Active Member
May 21, 2019
Canada
Rand__ said:
... the expectation that it uses 100% of the capability of its underlying drives for a single VM is wrong ...
You don't run a single VM per vSAN cluster, you run hundreds. A VM datastore is a very good use case for namespaces as disks get larger. That being said, it is not supported by VMware at the moment, but it's a good solution for a homelab or a small site.

NVMe really shines at high queue depths while latency remains low.
 

Rand__

Well-Known Member
Mar 6, 2014
Well, given the scalability of vSAN per host, I don't think that hundreds of NVMe drives is a typical use case for vSAN unless somebody runs IO-heavy VDI :)
But I agree, a single disk was not a good example, so let's say namespaces offer a cheap way to increase the number of cache devices by utilizing previously unusable disk capacity - whether you benefit or not depends on your use case (as always) :)
 
  • Like
Reactions: zeynel

muhfugen

Active Member
Dec 5, 2016
whitey said:
EDIT: Wonder if the HGST SN200 series drives support NVMe namespaces. I've been digging and can't sort that out quite yet, so the hunt continues.
They do. I have a few 800GB models which are split into 2 namespaces. I didn't bother to check what the maximum supported is.
 

zeynel

Dream Large, Live Larger
Nov 4, 2015
Looks like the Intel P4510 supports namespaces as well.



My Oracle F320 (Samsung PM1725a) does not support them.

The key indicator seems to be this:

Code:
# nvme id-ctrl /dev/nvme1 -H
oncs      : 0x6
  [5:5] : 0    Reservations Not Supported
  [4:4] : 0    Save and Select Not Supported
  [3:3] : 0    Write Zeroes Not Supported
  [2:2] : 0x1  Data Set Management Supported
  [1:1] : 0x1  Write Uncorrectable Supported
  [0:0] : 0    Compare Not Supported


I will use two of those drives and split them into cache-tier devices, to create 4 disk groups in vSAN.
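If I recall the esxcli syntax correctly, each disk group then gets built from one cache namespace plus capacity devices, roughly like this (the device identifiers are placeholders):

Code:
# one vSAN disk group per cache namespace (device IDs are examples)
esxcli vsan storage add -s t10.NVMe____cache_ns1 -d naa.5000000000000001
esxcli vsan storage add -s t10.NVMe____cache_ns2 -d naa.5000000000000002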
 

zrav

New Member
Sep 26, 2019
Adding the info for drives I have access to:
- Micron 9200: no support (nn=1)
- Micron 9300: 32 namespaces
- Samsung PM9A3: 32 namespaces
 
  • Like
Reactions: nasbdh9 and wvaske

ericloewe

Active Member
Apr 24, 2017
vangoose said:
See nn field.
I wanted to highlight this because it's a bit buried - I missed it at first, and the docs don't make it immediately obvious.

For the next person stumbling upon this thread, to get the number of namespaces supported by the disk, use:
Code:
nvme id-ctrl /dev/nvme0 | grep nn
Stuff I have on hand:

Samsung 970 EVO Plus: 1
WD WDS500G2X0C (SN750?): 1
Intel P5500 (Dell firmware): 128
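Or, to check every NVMe controller in the box at once (plain shell, assuming nvme-cli is installed):

Code:
# print the supported namespace count (nn) for each controller,
# skipping the per-namespace block devices
for ctrl in /dev/nvme[0-9]*; do
    case "$ctrl" in *n[0-9]*) continue;; esac
    echo "$ctrl:"
    nvme id-ctrl "$ctrl" | grep "^nn "
done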
 
  • Like
Reactions: nasbdh9

NateS

Active Member
Apr 19, 2021
Sacramento, CA, US
That Intel page claiming only the P4610 and P4810 support multiple namespaces is incorrect, though it was probably true when it was written. I know at least the P5800X supports multiple namespaces, and I think some other drives in the P5xxx series do as well. I'll reach out to get that updated with correct info.

Standard Disclaimer: I work for Intel on Optane drives, but I'm just an engineer, not an official spokesperson or anything. My posts should not be taken as official statements by Intel, and all opinions are my own.