ESXi RDMA enablement


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Dabbling with RDMA ...

My test ESXi boxes (6.7U3) have CX3s (non-Pro, so RoCE v1 only).

I followed https://community.mellanox.com/s/article/howto-run-roce-over-l2-enabled-with-pfc--esxi-x
to enable it on ESXi.

Basically on each I did
esxcli system module parameters set -m nmlx4_en -p "pfctx=0x08 pfcrx=0x08"
esxcli system module parameters set -m nmlx4_rdma -p "pcp_force=3"
reboot
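A quick sanity check after the reboot (a sketch, assuming the standard ESXi 6.7 esxcli syntax) to confirm the module parameters actually took effect:

```shell
# Verify the PFC parameters were persisted for the nmlx4 driver modules.
esxcli system module parameters list -m nmlx4_en | grep -E 'pfctx|pfcrx'
esxcli system module parameters list -m nmlx4_rdma | grep pcp_force
```

If the Value column is empty for these parameters, the set command did not stick.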

On the switch (6012 in ETH) I followed
https://community.mellanox.com/s/article/howto-enable-pfc-on-mellanox-switches--switchx-x


interface ethernet 1/1-1/6 flowcontrol send off force
interface ethernet 1/1-1/6 flowcontrol receive off force
dcb priority-flow-control enable
dcb priority-flow-control priority 3 enable
interface ethernet 1/1-1/6 dcb priority-flow-control mode on force

I didn't modify my existing VLAN settings, as I already have a bunch set up for different ESXi traffic types.

On ESXi there was no mention of mapping a VLAN to a priority class (on Linux they map it per VLAN with vconfig -> https://community.mellanox.com/s/article/howto-run-roce-and-tcp-over-l2-enabled-with-pfc), so I didn't do anything else.
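For reference, the Linux step from the Mellanox guide looks roughly like this (a sketch; `eth1.100` is a hypothetical VLAN interface and the priority values are illustrative). There is no direct esxcli equivalent of this mapping:

```shell
# Map skb priority 0 on the VLAN interface to 802.1p priority 3,
# so RoCE frames carry the PCP value that PFC is enabled for.
vconfig set_egress_map eth1.100 0 3
```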

Networking is still working fine with no errors, but RoCE does not seem to be in use - I can't see a single pause frame...
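A couple of esxcli commands can help check whether any RDMA traffic is flowing at all (a sketch; the device name `vmrdma0` is a placeholder - use whatever the first command reports for your host):

```shell
# List the RDMA devices the host has registered and their uplinks.
esxcli rdma device list

# Dump per-device RDMA counters; all-zero counters suggest nothing
# is actually using the RDMA path.
esxcli rdma device stats get -d vmrdma0
```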

So anyone any idea what I am missing? :)

Edit:
Could it be that regular ESXi traffic will not generate RDMA traffic?
I had hoped from the description
"
RDMA in ESXi

  • RDMA can be useful to accelerate many of the hypervisor services including; SMP-FT, NFS and iSCSI.
"
that it might also be used by native ESXi services once enabled at the interface level.

The only thing I have tested so far is some vMotions...
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Nobody running RDMA in their vSphere environment? Or did it simply work for you?
 

dswartz

Active Member
Jul 14, 2011
610
79
28
I have gotten iSER to work, but not NFS+RDMA. Google seems to indicate it isn't really supported. Not sure, though...
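For anyone trying to reproduce the working iSER setup, this is roughly the enablement step on ESXi 6.7 (a sketch, assuming an RDMA-capable NIC is already bound to a vmkernel port; verify against your host):

```shell
# Create an iSER initiator adapter (appears as a new vmhba).
esxcli rdma iser add

# The new iSER vmhba should show up alongside the software iSCSI adapter.
esxcli iscsi adapter list
```

After that, the iSER vmhba is configured like a normal iSCSI adapter (targets, port binding) in the vSphere client.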
 

dswartz

Active Member
Jul 14, 2011
610
79
28
I don't know. I was not using a switch - two Mellanox 50Gb cards connected back-to-back with DAC cables.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Ah right, I remember :)
Well, I need to see if anyone got it going with a (MLX) switch... otherwise I will try with newer cards and RoCE v2.
 

tsteine

Active Member
May 15, 2019
167
83
28
I have done this with ConnectX-3 cards between an ESXi host and an iSCSI SAN over an Arista switch.

You will not see any pause frames for RDMA traffic if you are not using iSER or PVRDMA.

If you want to utilize RDMA on your ESXi hosts, I would direct you to the documentation from VMware for setting this up:
Remote Direct Memory Access for Virtual Machines
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Thanks, I've looked at that.
At this point I was hoping to speed up ESXi traffic, not only VM traffic (that's for later).

Why do you say I wouldn't see any pause frames unless I use VM traffic?
Is it a misconception that ESXi traffic can use RDMA?
 

tsteine

Active Member
May 15, 2019
167
83
28
Unfortunately, as far as I'm aware, what is currently supported in vSphere is iSER and PVRDMA.

The statement "RDMA can be useful to accelerate many of the hypervisor services including; SMP-FT, NFS and iSCSI." does not adequately reflect what is currently supported in vSphere.
I wonder if this might be a legacy statement from when vSphere supported InfiniBand, rather than its current RoCE counterpart.
 
  • Like
Reactions: Rand__

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Hm, that might explain things, but it is kind of disappointing ;)
That is exactly why I asked for feedback, though :)
Thanks
 
  • Like
Reactions: tsteine