Ethernet SSDs – Hands-on with the Kioxia EM6 NVMeoF SSD


Malvineous · New Member · Joined Sep 28, 2018
Could someone explain what is meant by this section?
The real power of this, and something we did not get to show, is when it comes to namespaces. Each drive can be partitioned into multiple smaller namespaces. In our RAID 0 example, imagine, if instead of using one 2TB drive, or 23x 3.84TB drives, the system instead used 100GB namespaces from 23 drives and then had extra capacity for parity. That minimizes the amount of data on a given drive. While it increases the chance that a device will fail, it decreases the impact of a failure for higher reliability. It also means that there is more performance available to saturate NIC bandwidth because data is being pulled from many drives simultaneously.
A namespace is like a partition at the hardware level, so if you set up a 100GB namespace on each of 23 drives, is it saying you would then have 23x 3.74 TB drives plus 23x 100GB drives, which you could use to store parity data with some kind of RAID that stripes parity across multiple entire disks (so you get the parity only on the 100GB disks)? That seems strange, so I must be misunderstanding the explanation.

Since NVMe drives already support namespaces, I'm also unclear on how this provides such a benefit. I also don't understand how it would help saturate NIC bandwidth, because splitting a disk into two namespaces doesn't double its bandwidth: all namespaces share the original bandwidth available, just like normal disk partitions do.
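One way to read the quoted example is that the 2TB volume is striped across one 100GB namespace from each of the 23 drives, rather than living on a single whole drive. A rough sketch of that capacity math (the 23-drive, 3.84TB, and 100GB figures come from the quoted text; the single-parity layout is my assumption for illustration):

```python
# Sketch of the namespace-striping arithmetic from the quoted article.
# Drive count and sizes are from the text; the one-namespace-of-parity
# layout is an assumption, not something the article specifies.

DRIVES = 23
DRIVE_GB = 3840   # 3.84 TB per drive
NS_GB = 100       # one small namespace carved from each drive

raw_gb = DRIVES * NS_GB            # 2300 GB of raw stripe capacity
usable_gb = (DRIVES - 1) * NS_GB   # 2200 GB usable if one stripe member holds parity

# A failed drive costs this volume only 100 GB of its data, not a whole
# 3.84 TB device, and each drive keeps ~3.74 TB free for other volumes.
per_drive_share = NS_GB / DRIVE_GB
leftover_gb = DRIVE_GB - NS_GB

print(raw_gb, usable_gb, leftover_gb)
```

On this reading, the remaining ~3.74TB per drive isn't a separate parity tier; it's capacity available to carve into further namespaces for other hosts, which is where the "many small volumes, each touching many drives" benefit comes from.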
 

Rand__ · Well-Known Member · Joined Mar 6, 2014
First of all, not all NVMe drives support namespaces - many of the older ones didn't, which is why we have a thread like https://forums.servethehome.com/ind...espaces-or-other-ways-to-divide-one-up.21897/

Second, I think the benefit is that you can slice and dice the namespaces however you need to match the performance and size requirements of a particular use case.
If you need a 2TB drive on a particular box and you could only assign whole drives, you'd have to overprovision with a 3.84 TB drive, and you'd also limit the client to the IOPS of a single device.
If you carve the volume out of multiple drives instead, you can benefit from multiple drives' IOPS.

Of course the total IOPS from a single disk won't change, but a huge advantage NVMe has is its better queue depth handling - max IOPS are usually not reached at QD1 with a single thread, but at much higher queue depths. By not dedicating drives exclusively, you can utilize more of each drive's capabilities.
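The queue-depth point can be shown with a toy model (all the latency and IOPS numbers here are made up for illustration, not measurements of any real drive): a single QD1 client is latency-bound, so many clients sharing one drive through separate namespaces get far closer to its IOPS ceiling.

```python
# Toy model of QD1 vs. aggregate queue depth on a shared NVMe drive.
# All figures are illustrative assumptions, not real drive specs.

drive_max_iops = 700_000   # assumed saturation IOPS of one drive
read_latency_s = 90e-6     # assumed per-I/O latency at low queue depth

# One client with one outstanding I/O (QD1) is bound by latency:
qd1_iops = 1 / read_latency_s   # ~11,111 IOPS

def aggregate_iops(clients: int) -> float:
    """IOPS when `clients` QD1 workloads share the drive via namespaces."""
    return min(clients * qd1_iops, drive_max_iops)

# A single exclusive client leaves most of the drive idle;
# dozens of clients sharing it can hit the saturation ceiling.
assert aggregate_iops(1) < 0.02 * drive_max_iops
assert aggregate_iops(64) == drive_max_iops
```

The same reasoning applies per namespace: each namespace adds independently queued I/O against the shared device, which is what lets an NVMe-oF box keep its NICs and its drives busy at the same time.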

Of course that's not new, but this seems to provide a really convenient way to provide "array'ed" drives to consumers.

Or I might be totally wrong :p

BlueFox · Legendary Member, Spam Hunter Extraordinaire · Joined Oct 26, 2015
Reminds me of the Seagate Kinetic line from 8 years ago. I don't recall those ever seeing widespread adoption, though; it was more object-based storage and a bit of a niche product.
 

PigLover · Moderator · Joined Jan 26, 2011
It's too niche to ever get support and will likely never exist outside of somebody's research lab - but this drive (or a similar one) running Ceph OSD software on board instead of NVMeOF would be pretty cool...