PVRDMA second network

efschu3

I'm thinking of enabling RDMA for my guests by using PVRDMA. But while reading the manual, the step of setting "Net.PVRDMAVmknic" to a vmk# confuses me, as it only allows a single one to be defined.
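
For reference, this is the setting in question; as far as I can tell it only accepts a single interface (vmk1 below is just an example, adjust to your setup):

    # Point PVRDMA's control traffic at one vmkernel interface (vmk1 is an example)
    esxcli system settings advanced set -o /Net/PVRDMAVmknic -s vmk1

    # Show the current value
    esxcli system settings advanced list -o /Net/PVRDMAVmknic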

But I have two physically separated RDMA networks, which I actually want to connect to two separate distributed switches on each of my hosts.

How to create a second PVRDMA network?
 

DavidWJohnston

I don't believe that configuration is possible. The PVRDMA feature in ESXi is very limited: it does not work with Windows VMs, and it uses a single specified vmkernel interface for ingress/egress.

I have tried to use this feature, but I was never able to get it to work, even for Linux VMs that support the PVRDMA NIC driver.

A far better way is to use SR-IOV; then you can enable RDMA features even in Windows VMs. Of course PCI passthrough, including SR-IOV, has its limitations, but right now it's the best option.
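
If you go that route, enabling the virtual functions is usually just a host module parameter plus a reboot. A rough sketch, assuming a Mellanox ConnectX NIC - the module name (nmlx5_core) and VF count below are examples, check your adapter's driver docs:

    # Expose 8 virtual functions on a Mellanox ConnectX NIC (example values)
    esxcli system module parameters set -m nmlx5_core -p "max_vfs=8"

    # After a reboot, the VFs should show up as passthrough-capable PCI devices
    esxcli hardware pci list | grep -i mellanox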

Maybe in the future the technology will improve, but right now as far as I know, PCI passthrough is the way to go.

I have a physical Win11 Pro for Workstations PC connected over a 100G network to a Windows file server, using single-port passthrough of a dual-port NIC. SMB Direct with RDMA works perfectly.
 

efschu3

Hm, too bad. SR-IOV is nice if you don't need HA or live migration, but I need live migration.

I tried to set up Soft-RoCE (rxe) on my Linux VMs, but without success. Do you know if rxe works in ESXi VMs?
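
For reference, this is roughly what I tried inside a guest (ens192 is just an example interface name; rdma_rxe needs a reasonably recent kernel and the iproute2 rdma tool):

    # Load the Soft-RoCE kernel module
    sudo modprobe rdma_rxe

    # Bind an rxe device to an ordinary ethernet interface (ens192 is an example)
    sudo rdma link add rxe0 type rxe netdev ens192

    # Verify the device is visible
    rdma link show
    ibv_devices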
 

DavidWJohnston

I would imagine it should work, since it's a software implementation, so it should run on any Ethernet adapter. But it's meant for development; it's not "real" RoCE, so I don't think it would offer any performance benefit. Can you tell me what exactly your workload is that requires RDMA?

As I mentioned, the PVRDMA configuration described in the ESXi docs should work.

There is a list of configuration instructions which must be followed exactly as specified (a dedicated distributed switch with only the HCA as a single uplink on each host, the vmkernel advanced config item, a firewall rule, the PVRDMA NIC type in the guest, VM hardware version >= 18, etc.).
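
Some of those steps are one-liners on each host. For example, the firewall rule - if I remember the docs right, the ruleset is literally named pvrdma, and it needs to be enabled on every host in the cluster:

    # Allow PVRDMA's TCP control traffic through the ESXi firewall
    esxcli network firewall ruleset set -e true -r pvrdma

    # Confirm the ruleset is enabled
    esxcli network firewall ruleset list | grep pvrdma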

Try it, but like I said before, I could never get it to work, nor could I find anybody on Reddit who claimed it worked for them either. But it's in the VMware docs, so one would think it's possible to make it work.

As it specifies, NIC teaming is not supported: the RDMA distributed switch must have only one uplink, the RDMA NIC.
 

efschu3

Well, I won't touch PVRDMA, because I can't have the two networks I need to address. And I can't use SR-IOV because it cannot be live-migrated.

The goal is to use SMB Direct, NFS over RDMA, and iSER inside the VMs, because outside of the ESXi cluster the rest of the infrastructure already uses those services over RoCE.
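
For context, on the existing hosts an NFS-over-RDMA mount is nothing special (server:/export is a placeholder; 20049 is the usual NFS-RDMA port):

    # Mount an NFS export over RDMA (server:/export is a placeholder)
    sudo mount -t nfs -o rdma,port=20049 server:/export /mnt/share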

But I'll skip it. Too much effort for no real benefit.