vRDMA


markpower28

Active Member
Apr 9, 2013
413
104
43
Some notes from VMworld. vRDMA gives a VM the ability to utilize RDMA functionality. Unlike single root I/O virtualization (SR-IOV), which passes the entire card to a single VM, this feature will allow multiple VMs to share the same NIC/HCA.

This is definitely a step forward from VMware on high-performance VMs. It is currently in beta.
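
If it works as described, the guest should just see a normal verbs device - a quick sketch like this (plain C against libibverbs, assuming the library is installed in the guest; device names will vary) would list whatever device the hypervisor exposes:

Code:
/* Sketch: list the RDMA devices the verbs stack can see from inside a VM. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("RDMA devices visible in this VM: %d\n", num_devices);
    for (int i = 0; i < num_devices; i++)
        printf("  %s\n", ibv_get_device_name(devs[i]));

    ibv_free_device_list(devs);
    return 0;
}

Build with gcc -o listdevs listdevs.c -libverbs.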

20150831_084849.jpg - Google Drive
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
From what I've been able to read, it has nothing to do with SR-IOV or MR-IOV - it simply enables RDMA to work in VMs regardless of host hardware. Before vRDMA, if you wanted RDMA from a VM you had to pass through a NIC/HCA with RDMA support to the VM to do it - now with vRDMA all VMs will have RDMA support even if the host doesn't have hardware that can do it.

If two VMs on the same host want to use RDMA with each other, the host will just memcpy between the two VMs - higher performance and lower latency than real RDMA, though "on the same host" is rather restrictive and, except for the virtualization layer, it's really taking the 'R' out of RDMA. When working across hosts, the hypervisor will use hardware for RDMA if it is available (IB or RoCE), or just regular Ethernet if it's not.
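
From the guest side none of that should matter - the same verbs code runs whether the hypervisor services it with a memcpy, real IB/RoCE hardware, or a software fallback. Rough sketch of the guest-side setup (standard libibverbs, buffer size and error handling trimmed down, nothing here is vRDMA-specific):

Code:
/* Sketch: open the first RDMA device and register a buffer for remote access.
 * How the transfer is actually carried out is the hypervisor's problem. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA device visible in this VM\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "ibv_open_device failed\n");
        return 1;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) {
        fprintf(stderr, "ibv_alloc_pd failed\n");
        return 1;
    }

    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }

    /* A peer uses this rkey plus the buffer address to issue RDMA reads/writes. */
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}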


My personal thoughts - yet another feature that will be rarely used, but one that lets VMware turn one more "not feasible to virtualize" category into "can be done with ESXi". Meanwhile I'm still sitting around waiting for the ability to storage-vMotion a linked clone - a feature that should be trivially easy to implement and would be used on a regular basis by every VMware View or vCloud customer currently out there.
 

33_viper_33

Member
Aug 3, 2013
204
3
18
SR-IOV doesn't pass the entire card to the VM, only a virtual function of the PCI card. What you described is VT-d PCI passthrough, which gives the VM sole control of the PCI card. SR-IOV provides a virtual function to the VM, so the VM owns one of many virtual functions of the card. This bypasses the hypervisor's networking stack, so host processor and RAM aren't spent on it, and the VM talks to the hardware on the PCI card directly. This means more resources for VMs since less is required for the network. Note, other devices like HBAs can also use SR-IOV...
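
On a Linux host the carved-out VFs show up as separate PCI devices, which is what actually gets handed to a VM. A rough sketch of listing them (the PF address below is made up - substitute the BDF of your own SR-IOV card):

Code:
/* Sketch: enumerate the virtfn* links under an SR-IOV physical function.
 * Each one resolves to a VF with its own PCI address. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>

int main(void)
{
    const char *pf = "/sys/bus/pci/devices/0000:03:00.0"; /* hypothetical PF */

    for (int i = 0; ; i++) {
        char link[PATH_MAX], target[PATH_MAX];
        snprintf(link, sizeof(link), "%s/virtfn%d", pf, i);
        ssize_t n = readlink(link, target, sizeof(target) - 1);
        if (n < 0)
            break;                      /* no more VFs */
        target[n] = '\0';
        const char *bdf = strrchr(target, '/');
        printf("VF %d -> %s\n", i, bdf ? bdf + 1 : target);
    }
    return 0;
}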
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
SR-IOV doesn't pass the entire card to the VM, only a virtual function of the PCI card. What you described is VT-d PCI passthrough, which gives the VM sole control of the PCI card. SR-IOV provides a virtual function to the VM, so the VM owns one of many virtual functions of the card. This bypasses the hypervisor's networking stack, so host processor and RAM aren't spent on it, and the VM talks to the hardware on the PCI card directly. This means more resources for VMs since less is required for the network. Note, other devices like HBAs can also use SR-IOV...
Actually SR-IOV is only responsible for carving the physical function into multiple virtual functions. It's entirely possible to use SR-IOV on a regular physical server with no hypervisor to split, e.g., a NIC into 8 NICs and let the host OS use all 8 of them. VT-d can optionally be used to pass virtual functions to VMs, just like passing through an entire device that doesn't support SR-IOV.
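
For instance, something along these lines on a bare-metal Linux box carves out the VFs with no hypervisor anywhere in the picture (interface name and VF count are just placeholders; needs root and an SR-IOV-capable device/driver):

Code:
/* Sketch: ask the driver to create virtual functions via sysfs.
 * The resulting VFs are usable by the host OS itself, or can later
 * be passed to VMs with VT-d. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs"; /* hypothetical NIC */

    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "8\n");              /* request 8 VFs from the driver */
    if (fclose(f) != 0) {
        perror("fclose");
        return 1;
    }
    return 0;
}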
 

33_viper_33

Member
Aug 3, 2013
204
3
18
True, hence you need both VT-d and SR-IOV in order to provide virtual functions to VMs. Just wanted to ensure we weren't limiting SR-IOV to just hardware pass-through - there is more to it. Perhaps I didn't say that in the best way. Good clarification!