Tuning vSphere 6.5 UDP performance?


New Member
Jun 23, 2016

I am in the process of deploying a small 10Gbps network for my homelab.
When verifying the 10G network with iperf3 I saw great TCP performance (9+ Gbps with a 9k MTU), but UDP performance was all over the place. Further inspection showed TCP retransmits on some runs, although the average speed never dipped below 9 Gbps and the switch didn't report any interface errors.

To further analyze the UDP behavior and rule out any faulty physical networking gear, I decided to start testing directly between VMs. What surprised me was that this also showed weird UDP performance, and on multiple hosts.

So basically I now have the following questions:
- Do I need to worry about incidental TCP retransmits? (I am using DAC cables, so they could be picking up some external noise?)
- Do I need to do tuning or further troubleshooting on the UDP side of things? It worries me that running iperf3 with higher bandwidth targets increases the packet loss into the high single digits (percent)!

Additional info
- Tried with 2 vCPUs per VM
- Tests are running on a host with a Xeon-D CPU and 64GB ECC RAM.
- VMs are running Debian 9 with open-vm-tools and the vmxnet3 NIC.
- UDP inconsistencies are observed on multiple vSphere hosts.
- Tests were done with a new vSwitch which was not connected to the physical network.
- Tried multiple MTU settings, since at first I suspected MTU issues.
- TCP performance is good on all hosts.
- Tuning iperf3's parameters does improve the results, but this also increases packet loss!
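For reference, the VM-to-VM tests above can be reproduced with something like the following (a minimal sketch; 10.0.0.1 stands in for the server VM's IP on the isolated vSwitch and is an assumption, not from the original post):

```shell
# On the "server" VM (connected only to the isolated vSwitch):
iperf3 -s

# On the "client" VM: baseline TCP run, 30 seconds
iperf3 -c 10.0.0.1 -t 30

# UDP run: -u selects UDP, -b sets the target bitrate.
# Unlike TCP, UDP iperf3 only sends at the rate given by -b
# (the default is a mere 1 Mbit/s), so -b must be raised
# explicitly to stress the link.
iperf3 -c 10.0.0.1 -u -b 5G -t 30
```

The server reports per-interval loss and jitter for the UDP run, which is where the inconsistency shows up.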

Running iperf3 with the -V switch seems to indicate that UDP is CPU-bottlenecked (no offload). Playing with parameters like -w, -P and -Z does impact performance somewhat, but whenever UDP tests are run with -b > 3000 the performance is inconsistent.
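A "tuned" UDP run along those lines might look like this (the exact values are illustrative only, and the server IP 10.0.0.1 is a placeholder):

```shell
# -w: larger socket buffer
# -P: parallel streams, to spread the per-packet CPU cost
# -Z: zero-copy sends (mainly helps TCP, but worth comparing)
# -l: larger UDP payload, so fewer packets are needed per second;
#     8900 bytes still fits a 9000-byte MTU after IP/UDP headers
# -V: verbose output, including sender/receiver CPU utilization
iperf3 -c 10.0.0.1 -u -b 3G -w 2M -P 4 -Z -l 8900 -V -t 30
```

Raising -l toward the MTU is usually the single biggest lever for UDP throughput in a CPU-bound setup, since iperf3's default UDP datagram size generates far more packets per second for the same bitrate.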

Hope someone with more experience can shed some light on this situation and possibly verify how iperf3 UDP behaves on vSphere 6.5+.

Attachments: iperf-udp-between-vms.PNG, iperf-udp-between-vms-parameters.PNG, iperf-tcp-between-vms.PNG, iperf-udp-between-vms-9k-mtu-tuned.PNG