I just upgraded my home server from ESXi 4.1 to 5.0.0.
Prior to the upgrade I had been running mainly a ZFS server based on Solaris 11 Express (SE11), which I recently moved to OpenIndiana 151a. I also have a Debian 6 VM and a few other miscellaneous VMs.
Anyway, when I first migrated, I installed ESXi 5 on a freshly secure-erased 80GB SSD (Intel X25-M Gen1). Other than that, there were no server hardware or network topology changes. I then migrated my OI VM over and updated to the new VMware Tools. This is where the problems started.
After installing the tools and the new VMXNET3 driver, I began getting messages spammed to the console every few seconds that look like this:
Nov 14 17:21:29 (hostname) vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
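In case anyone wants to compare message rates, I've been checking the syslog with something like this (assuming the standard OI log location; adjust the path if yours differs):

# Count how many getcapab messages have been logged
grep -c 'vmxnet3s.*getcapab' /var/adm/messages

# Watch new ones arrive in real time
tail -f /var/adm/messages | grep vmxnet3s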
In addition to that, I noticed a significant drop-off in network bandwidth. I used to be able to copy large files from Win7 clients to the server over GigE at almost full saturation (over 90 MB/s). Now it's closer to 50-60 MB/s. iperf gives similar results.
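For reference, the iperf tests are plain single-stream TCP runs like the following (the hostname is a placeholder, and I'm assuming iperf 2.x syntax):

# On the OI server: listen for incoming TCP tests
iperf -s

# On the Win7 client: 10-second test, report throughput in MBytes/sec
iperf -c oi-server -t 10 -f M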
So, thinking it was a problem with the migrated OI VM, I tried a clean install of OI on the SSD datastore. No change: same performance, same error messages. I then tried the new Solaris 11 release (since ESXi now has an option specifically for Solaris 11 in the new VM wizard). It gives the same error message and about the same results.
Doing iperf tests between the Debian and OI VMs on the virtual switch gives about 670 MB/s from OI->Debian and around 500 MB/s from Debian->OI. While this is faster than my physical gigabit network, notice that the ratio of actual throughput to theoretical maximum is about the same: 50-70% of max. I don't know if that's significant or not.
Another test I tried was iperf between two nearly identical Debian VMs. That was up in the 900 MB/s range.
Finally, using iperf with multiple threads comes close to the old ESXi 4.1 performance, but I need -P 4 or -P 8 to get there.
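Concretely, the difference looks like this (same placeholder hostname and iperf 2.x assumptions as above; the server side is just iperf -s):

# Single TCP stream: lands around 50-70% of the expected rate
iperf -c oi-server -t 10 -f M

# Four parallel streams: aggregate is back near the ESXi 4.1 numbers
iperf -c oi-server -t 10 -f M -P 4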
Has anyone else experienced issues with ESXi 5.0 network performance?