Does Hyper-V pass raw disks better than ESXi?


nickscott18

Member
Mar 15, 2013
I'm wondering about this, especially after reading Patrick's excellent recent article (http://www.servethehome.com/zfs-linux-hyper-v-napp-it-web-management/), where he says "Hyper-V does one thing extremely well: it allows one to pass raw disks directly to virtual machines."
RDM (Raw Device Mapping) in ESXi does not appear to be officially supported for SATA/SAS disks, whereas drive pass-through (the same concept in Hyper-V) appears to be well supported. It seems that the most common option for getting raw disks into an ESXi VM is to pass through an HBA.
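For anyone wondering what the RDM route actually looks like on the ESXi side, here is a rough sketch (untested - the device ID, datastore and VM folder are made-up placeholders) of creating a physical-compatibility RDM pointer. It's driven from Python only to keep the example self-contained; normally you'd just run vmkfstools directly in the ESXi shell:

import subprocess

# placeholders - use the real naa. ID from "ls /vmfs/devices/disks/" and your own datastore path
device = "/vmfs/devices/disks/naa.5000c500aaaaaaaa"
pointer = "/vmfs/volumes/datastore1/storage-vm/disk1-rdm.vmdk"

# -z creates a physical-compatibility RDM pointer file; -r would create a virtual-compatibility one
subprocess.run(["vmkfstools", "-z", device, pointer], check=True)
# the pointer vmdk is then attached to the VM as an existing disk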
 

PigLover

Moderator
Jan 26, 2011
It's not a matter of "better". It's "different".

ESXi does not pass raw disks at all (well, it does using an always awkward and now-unsupported method called RDM). With ESXi (and KVM) you pass PCIe devices to the VM. In other words, you pass an entire disk controller and all of the drives attached to it to the VM.
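To make the controller-level approach concrete, here is a minimal sketch for the KVM side (standard sysfs paths, assuming a Linux host with VT-d/AMD-Vi enabled) that lists the IOMMU groups, so you can see exactly what would travel to the guest along with the HBA:

import glob
import os

# each directory under /sys/kernel/iommu_groups is one group; every PCI device
# in the HBA's group has to go to the guest together (typically by binding
# them to vfio-pci and adding <hostdev> entries to the VM definition)
for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p))):
    devices = sorted(os.listdir(os.path.join(group, "devices")))
    print("IOMMU group", os.path.basename(group) + ":", " ".join(devices))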

With Hyper-V you pass access to the disk, not the controller.
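For what it's worth, a rough sketch of that from the host side (assuming the Hyper-V PowerShell module, driven from Python just to keep the example self-contained - the disk number and VM name are placeholders):

import subprocess

def ps(command):
    # run a single PowerShell command on the Hyper-V host and raise if it fails
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

disk_number = 2          # physical disk number as reported by Get-Disk (placeholder)
vm_name = "storage-vm"   # placeholder VM name

# the disk has to be offline on the host before Hyper-V will pass it through
ps(f"Set-Disk -Number {disk_number} -IsOffline $true")
# attach the raw disk to the VM's SCSI controller by disk number
ps(f"Add-VMHardDiskDrive -VMName '{vm_name}' -ControllerType SCSI -DiskNumber {disk_number}")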

There are positives and problems with both approaches. With ESXi it is difficult, for example, to pass disks to multiple VMs - they have to be on multiple PCIe devices to do so. It is also difficult to pass disks from the same controller that is used for the ESXi host itself (e.g., if you have a motherboard with a single disk controller & 6 drives, it's hard to have one drive hosting ESXi and the other 5 passed through to a VM).

Both of these situations are quite easily handled with Hyper-V. On the downside, Hyper-V hides certain aspects of the disk from the VM - the VM has access to the raw drive, but no advanced features from the controller are available. Most problematic, SMART data is hidden from the VM. You also can't manage Advanced Power Management (APM) features of the drive or controller. If you have an application where SMART or APM is important to you, then Hyper-V might not be well suited.

Hyper-V does not provide any mechanism to pass through a PCIe device. ESXi & KVM do not readily allow passing through a drive from a controller (except, as noted, using RDM, which is difficult and not currently supported).

In both cases - passing through a drive or a controller - it is difficult to make full use of the "hardware abstraction" of virtual systems: migrations are compromised and you've tied the VM to specific hardware characteristics. Passing a PCIe device in ESXi or KVM also disables some advanced memory management methods, because the PCIe device can do direct memory reads/writes and the VM host has to leave the VM a stable memory map or things would go haywire in a flash.

At the end of the day they are just different approaches to solving the same problem. Neither is "better", though one might fit the demands of your application or your personal tastes better than the other.
 

NeverDie

Active Member
Jan 28, 2015
USA
I didn't know that RDM wasn't a supported method. As I mentioned in a different thread, so far FlexRAID is the only solution I've found that is comfortable recommending RDM instead of VT-d for running under ESXi: Storage deployment on VMware ESXi: IOMMU/Vt-d vs Physical RDM vs Virtual RDM vs VMDK - FlexRAID
I had thought that would make FlexRAID perhaps the only solution for virtualizing storage on an Avoton or Rangeley motherboard, since those Intel Atom SoCs don't support VT-d. From the sound of things now, though, Hyper-V would be an alternative to ESXi for that purpose.
 

TuxDude

Well-Known Member
Sep 17, 2011
Since when was ESXi RDM unsupported? I use it in production at work for a few things and I'm reasonably certain it is fully supported.