Poor VM to VM performance ESXi 5.5u2


dwright1542

Active Member
Dec 26, 2015
DL385G8, 192GB RAM.

ESXi 5.5 U2, X520-DA2 adapters, 8024 switch

MTU 9000 on both the switch and the guest OS, same vSwitch
Ubuntu 14.04 guests, VMXNET3, VMware Tools installed

iperf -c X.X.X.X -P10

VM to VM I'm getting 6-7 Gbit/sec.

Across the physical 10G network I'm getting 3-4 Gbit/sec.
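Full commands for reference (the -w and -t values are just variations I've tried, nothing scientific):

iperf -s                          # on the receiving VM
iperf -c X.X.X.X -P 10 -t 30      # sender, ten parallel streams

# single stream with a larger TCP window, to rule out window limits
iperf -c X.X.X.X -P 1 -w 512k -t 30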

CPU isn't an issue.

HP support (overseas, completely useless) has been working on it for a solid week,
and VMware is not helpful.

Any smoking guns I should be starting with? I'm at my wit's end.
 

whitey

Moderator
Jun 30, 2014
@dwright1542

Have you ensured you are getting jumbo end-to-end? Not saying it will be your be-all/end-all answer, but if you're attempting jumbo it is CRUCIAL to configure it end-to-end, at EVERY hop/interface:

1) physical switch (could be global or per VLAN)
2) virtual switch (standard or vDS)
3) vmkernel interface (if used for storage/vMotion/sVMotion/vSAN/etc.)
4) the guest OS itself

Four places, if memory serves; miss one and it is ALL for nothing.
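Quick way to sanity-check every layer, roughly (vmk0 and the peer IPs below are placeholders, adjust to your setup):

# on the ESXi host: vSwitch and vmkernel MTU
esxcli network vswitch standard list    # look for MTU: 9000
esxcli network ip interface list        # vmk interfaces should show MTU 9000

# jumbo ping with don't-fragment set (8972 = 9000 minus 28 bytes of headers)
vmkping -d -s 8972 <peer-vmk-ip>

# inside the Ubuntu guest
ip link show eth0                       # should show mtu 9000
ping -M do -s 8972 <peer-vm-ip>         # fails if any hop is below 9000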

Take care, whitey
 

dwright1542

Active Member
Dec 26, 2015
It's worse than I thought: even VM to VM on the same vSwitch is an issue. That 6-7 Gbit was short-lived; further testing shows 3-4 Gbit locally. VMware is stumped.
 

tby

Active Member
Aug 22, 2013
Sounds bad. Over in my Dell 1965W thread a while back I posted single-connection iperf runs at over 8 Gbit/sec between two Ubuntu 12.10 nodes on separate 5.5u2 servers, using a single core of an L5520 and zero tuning.
 

whitey

Moderator
Jun 30, 2014
dwright1542 said:
It's worse than I thought: even VM to VM on the same vSwitch is an issue. That 6-7 Gbit was short-lived; further testing shows 3-4 Gbit locally. VMware is stumped.
Yeah, something is jacked up for sure. VM to VM on the same host stays within hypervisor memory; even on a vmxnet3 10Gbps vNIC they will slam 25-30 Gbit/sec.
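Worth watching esxtop while the test runs, too, something like:

# in an ESXi shell during the iperf run
esxtop    # press 'n' for the network view
# watch %DRPTX / %DRPRX on the VMs' ports; nonzero there means the
# vSwitch itself is dropping, which points away from the physical network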
 

ultradense

Member
Feb 2, 2015
Try this:
1) Set the BIOS on the hardware server to max performance mode, and possibly turn off C-states on the processor to lower latency on NIC interrupts.
2) Try turning off offload functions in the NIC driver inside your VMs, one by one; start with LSO (large send offload). The offloading then has to be done by ESXi instead, but I've seen the virtual offloads perform badly numerous times (see the sketch below).
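Rough sketch for the guest side (eth0 is an assumption, check your interface name):

# show current offload state, then disable knobs one at a time
ethtool -k eth0
ethtool -K eth0 tso off    # LSO/TSO first
ethtool -K eth0 lro off
ethtool -K eth0 gso off gro off

# host-side software LRO for vmxnet3 can also be toggled via an ESXi
# advanced setting (try on a test host first):
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0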
 

dwright1542

Active Member
Dec 26, 2015
ultradense said:
Try this:
1) Set the BIOS on the hardware server to max performance mode, and possibly turn off C-states on the processor to lower latency on NIC interrupts.
2) Try turning off offload functions in the NIC driver inside your VMs, one by one; start with LSO (large send offload).
Yeah, the power profile is on High Performance full time; already set that.

We've played with LRO / LSO; in fact, VMware did. They were stumped as well.
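For anyone following along, this is roughly how we've been checking what's actually enabled (eth0 assumed):

# in the guest: current offload state
ethtool -k eth0

# on the host: the vmxnet3 LRO knobs VMware was toggling
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO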