NVMe drives as KVM guest storage


kwhite

New Member
May 9, 2019
Hello all.

I am currently using a stack of Dell servers from 2012. So...slow disk, no NVMe. They were, in fact, all bought with hard drives, although they've all received SSD upgrades in their lifetimes.

Now it's time to buy new boxes. I'm targeting AMD EPYC Rome boxes and was considering something like the Gigabyte R182-Z92, with room for 10 U.2 NVMe drives.

My operational model is to run CentOS as the host OS and use KVM to host various Linux and Windows guests. I would build RAID 1 or RAID 10 arrays from pairs of NVMe drives, then use LVM to carve out LVs and hand them to the guests as block devices. (I do the same thing with my current Dells, except the PERC builds the RAIDs instead.)
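For concreteness, here's a minimal sketch of the software-RAID version of that workflow; the device names, VG/LV names, and guest name are all hypothetical:

```
# Mirror two NVMe drives (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Layer LVM on top of the mirror
pvcreate /dev/md0
vgcreate vg_guests /dev/md0

# Carve out a 100 GB LV for one guest
lvcreate -L 100G -n guest1-root vg_guests

# Hand the LV to the guest as a raw block device
virsh attach-disk guest1 /dev/vg_guests/guest1-root vdb \
    --sourcetype block --cache none --persistent
```

That's essentially my current PERC workflow, with mdadm standing in for the hardware controller.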

However, I'm reading that NVMe drives don't run at their full potential when used as block devices in guests:

https://www.usenix.org/system/files/conference/atc18/atc18-peng.pdf
http://events17.linuxfoundation.org/sites/events/files/slides/Userspace NVMe driver in QEMU - Fam Zheng.pdf
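(Side note: as far as I can tell, the userspace driver from the second deck landed in QEMU 2.12 as the nvme:// block driver. Something like the following should exercise it; the PCI address and namespace are placeholders, and the controller has to be bound to vfio-pci on the host first:)

```
# QEMU drives the controller from userspace, bypassing the
# host block layer entirely. 0000:01:00.0 / namespace 1 are
# hypothetical; the whole controller is dedicated to this VM.
qemu-system-x86_64 \
    -machine q35,accel=kvm -m 8G \
    -drive file=nvme://0000:01:00.0/1,if=none,id=nvme0 \
    -device virtio-blk-pci,drive=nvme0
```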

It appears that it may, in fact, be much better to use VFIO to pass the entire NVMe device through to the guest.
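If I understand it correctly, with libvirt that would look something like this in the guest definition; the PCI address is a placeholder for an actual NVMe controller:

```
<!-- Pass the whole NVMe controller to the guest.
     managed='yes' asks libvirt to rebind it to vfio-pci for us.
     PCI address 0000:41:00.0 is hypothetical. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

The guest would then see a real NVMe controller and could run its own md RAID 1 across two passed-through drives.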

Is this true? What have people experienced here? Is VMware any better than KVM in this respect? Is it better to pass physical NVMe drives through to each guest? I'm thinking that would mean pairs of drives, so that I could run RAID 1 inside the guest.

That seems much more complicated to me: I'd have to have enough physical drives to pass a pair through to each guest, and I couldn't divvy up drive space in units smaller than a full drive.

Is this what people do? Or has the overhead of using NVMe drives as KVM guest storage decreased since 2017?
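For what it's worth, if I stay with LV-backed disks, my understanding is that the usual tuning is cache='none' and io='native' on the virtio disk, something like this (VG/LV names hypothetical):

```
<!-- LV-backed virtio disk with the host page cache bypassed -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg_guests/guest1-root'/>
  <target dev='vda' bus='virtio'/>
</disk>
```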

Thanks,

Kevin