From what I've been able to read, it has nothing to do with SR-IOV or MR-IOV - it simply enables RDMA to work in VMs regardless of the host hardware. Before vRDMA, if you wanted RDMA from a VM you had to pass through a NIC/HCA with RDMA support to the VM to get it - now with vRDMA every VM has RDMA support, even if the host doesn't have hardware that can do it.
If two VMs on the same host want to use RDMA with each other, the hypervisor will just memcpy between the two VMs - higher performance and lower latency than real RDMA, though "on the same host" is rather restrictive and, except for the virtualization layer, really takes the 'R' out of RDMA. When working across hosts, the hypervisor will use hardware for RDMA if it is available (IB or RoCE), or just regular ethernet if it's not.
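In other words, the guest sees one RDMA device while the hypervisor picks the fastest path behind the scenes. A minimal sketch of that selection logic - purely illustrative, with hypothetical names, not VMware's actual implementation:

```python
from enum import Enum

class Transport(Enum):
    MEMCPY = "memcpy"      # both VMs on the same host: direct copy, no wire
    HW_RDMA = "hw_rdma"    # host has an IB or RoCE HCA: real RDMA
    ETHERNET = "ethernet"  # no RDMA hardware: fall back to plain ethernet

def pick_transport(src_host: str, dst_host: str, host_has_rdma_hw: bool) -> Transport:
    """Choose how the hypervisor services a guest's vRDMA operation."""
    if src_host == dst_host:
        return Transport.MEMCPY          # intra-host: cheapest path wins
    if host_has_rdma_hw:
        return Transport.HW_RDMA         # cross-host with HCA present
    return Transport.ETHERNET            # cross-host, software fallback

# Example: two VMs on host "esx1" get memcpy regardless of hardware.
print(pick_transport("esx1", "esx1", host_has_rdma_hw=False).value)  # memcpy
```

The guest never needs to know which branch was taken - that's the whole point of the paravirtualized device.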
My personal thoughts: yet another feature that will be rarely used, but that lets VMware turn one more "not feasible to virtualize" category into "can be done with ESXi" - while I'm still sitting around waiting for the ability to storage-vMotion a linked clone, a feature that should be trivially easy to implement and would be used regularly by every VMware View or vCloud customer out there.