ESXi 5.5, OmniOS Stable and Napp-it - vmxnet3 and jumbo frames

yu130960

Member
Sep 4, 2013
Canada
[copied from post #13]

1. download napp-it_14b_vm_for_ESXi_5.0-5.5.zip and deploy it

2. (optional) Log in with PuTTY and update OmniOS to r151012:
Code:
/usr/bin/pkg unset-publisher omnios
/usr/bin/pkg set-publisher -P -g http://pkg.omniti.com/omnios/r151012/ omnios
/usr/bin/pkg update --be-name=omnios-r151012 entire@11,5.11-0.151012
3. Using the ESXi console, log in and uninstall VMware Tools:

Code:
# vmware-uninstall-tools.pl
4. Fix the VMware Tools driver bug by modifying /vmware-tools-distrib/bin/vmware-config-tools.pl as set out here:

Code:
vi /vmware-tools-distrib/bin/vmware-config-tools.pl
Press the down arrow to start scrolling, then search for the string by typing
Code:
/currentMinor
then find
Code:
if ($minor < $currentMinor) {
    $osDir = $minor;
  } else {
    $osDir = $currentMinor;
  }
and change it to the following by moving the cursor over the < and pressing r followed by >:

Code:
if ($minor > $currentMinor) {
    $osDir = $minor;
  } else {
    $osDir = $currentMinor;
  }
Once the change is made, press Esc, then type :wq to save and quit.

Then reinstall VMware Tools to pick up the v. 11 drivers:

Code:
# cd /vmware-tools-distrib
# ./vmware-install.pl
5. Modify the /kernel/drv/vmxnet3s.conf file as set out below, then REBOOT:
Code:
EnableLSO=0,0,0,0,0,0,0,0,0,0;

MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;
6. [Edit: read posts #19 & #22 for a script] After every reboot (note: the jumbo-frames setting does not survive a reboot), enter the following:

Code:
ndd -set /dev/vmxnet3s0 accept-jumbo 1
7. Raise the MTU from 1500 to 9000 on all the relevant ESXi vSwitches and on any other hardware in the pipeline (physical switches).
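For the non-interactively minded, the step-4 edit can also be done with sed. A minimal sketch, run here against a demo copy of the relevant lines (the demo filename is made up; the real file is /vmware-tools-distrib/bin/vmware-config-tools.pl, and you should back it up first):

```shell
# Recreate the buggy block in a scratch file so the edit can be tried safely.
cfg=./vmware-config-tools.pl.demo
cat > "$cfg" <<'EOF'
if ($minor < $currentMinor) {
    $osDir = $minor;
  } else {
    $osDir = $currentMinor;
  }
EOF

# Flip the '<' comparison to '>' (the same change as the interactive vi edit).
sed 's/if ($minor < $currentMinor)/if ($minor > $currentMinor)/' "$cfg" > "$cfg.patched"

# Show the patched comparison line.
grep 'currentMinor' "$cfg.patched"
```

On the real file you would run sed against a backup copy, write over the original, and then re-run the installer.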
 

yu130960

Member
Sep 4, 2013
Canada
I just tried Gea's ESXi image, and it appears that it is running the Solaris 10 VMware Tools drivers as well. Has anyone other than the above-noted article gotten jumbo frames to work with OmniOS and napp-it?
 

yu130960

Member
Sep 4, 2013
Canada
Update: I was able to install the Solaris 11 VMware Tools drivers by following this article:

The only note: to make the change for the 'bug', the file to edit is:

/vmware-tools-distrib/bin/vmware-config-tools.pl

I got jumbo frames working, but then my NFS datastore became unstable.

Still testing
 

bmacklin

Member
Dec 10, 2013
Update: I was able to install the Solaris 11 VMware Tools drivers by following this article:

The only note: to make the change for the 'bug', the file to edit is:

/vmware-tools-distrib/bin/vmware-config-tools.pl

I got jumbo frames working, but then my NFS datastore became unstable.

Still testing
I'm very concerned. What kind of transfers are you getting between your VMs?

I'm attempting a similar setup once all of the parts arrive, but if OmniOS has bad network performance then I don't see the point...

Please report back on what you find... thank you.
 

yu130960

Member
Sep 4, 2013
Canada
I'm very concerned. What kind of transfers are you getting between your VMs?

I'm attempting a similar setup once all of the parts arrive, but if OmniOS has bad network performance then I don't see the point...

Please report back on what you find... thank you.
The problem I am having is that the VM-to-VM transfers vary a great deal. I am in the process of switching all my VMs to e1000 drivers to see if that makes a difference.

It also doesn't help that my Windows 7 VM seems to be doing wonky things with iperf and reports super-low numbers, when I can actually transfer data between the VMs at a much higher rate.
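When iperf numbers bounce around like that, one low-tech remedy is to run the test several times and average the reported bandwidth. A sketch of the parsing step, using made-up sample output in place of real runs (the iperf invocation in the comment is the classic iperf2 client syntax; the figures are illustrative, not measurements):

```shell
# In practice the input would come from something like:
#   for i in 1 2 3; do iperf -c <server> -t 10; done > runs.txt
# The sample below stands in for that output.
cat > runs.txt <<'EOF'
[  3]  0.0-10.0 sec  1.10 GBytes  0.94 Gbits/sec
[  3]  0.0-10.0 sec  2.30 GBytes  1.98 Gbits/sec
[  3]  0.0-10.0 sec  1.75 GBytes  1.50 Gbits/sec
EOF

# Pull the bandwidth column out of each result line and average it.
awk '/Gbits\/sec/ { sum += $(NF-1); n++ }
     END { printf "average: %.2f Gbits/sec over %d runs\n", sum/n, n }' runs.txt
```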
 

yu130960

Member
Sep 4, 2013
Canada
For now, I give up on jumbo frames. I am not sure that the performance gain, if any, is worth the trouble and instability.

I am back to where I started, with vmxnet3s0 for my OmniOS VM, and for now I am trying e1000g0 for my Windows VM. On big files between Windows 7 and OmniOS I am averaging about 50-60 MB/s.

If the only reason you are going for an all-in-one is killer network throughput between VMs, then I am not seeing it.

I still love it for all the other reasons.
 

33_viper_33

Member
Aug 3, 2013
For now, I give up on jumbo frames. I am not sure that the performance gain, if any, is worth the trouble and instability.

I am back to where I started, with vmxnet3s0 for my OmniOS VM, and for now I am trying e1000g0 for my Windows VM. On big files between Windows 7 and OmniOS I am averaging about 50-60 MB/s.

If the only reason you are going for an all-in-one is killer network throughput between VMs, then I am not seeing it.

I still love it for all the other reasons.
Is there a reason you are so committed to OmniOS? I used VMXNET3 adapters under OpenIndiana and Windows 2012 Server and achieved stable transfers averaging 400 MB/s, which was a controller limit. I have no experience with OmniOS, but from reading it seems that support isn't as good as in some of the other distros. Experts, correct me if I'm wrong.
 

gea

Well-Known Member
Dec 31, 2010
DE
Is there a reason you are so committed to OmniOS? I used VMXNET3 adapters under OpenIndiana and Windows 2012 Server and achieved stable transfers averaging 400 MB/s, which was a controller limit. I have no experience with OmniOS, but from reading it seems that support isn't as good as in some of the other distros. Experts, correct me if I'm wrong.
Basically, OmniOS and an OI server are nearly identical, because both are based on the newest Illumos developments.

They differ in:
- OI is a pure community project. There is no company or commercial interest behind it, but new dev releases appear from time to time.
I would not expect a stable release at any point, but OI gives you a nice playground to test a powerful Solaris server or desktop system.

- Although OmniOS is free, there is a company behind it with a commercial interest. You can buy commercial support; they offer a stable release about every 6 months, biweekly bug fixes, a bloody (development) version, and long-term support releases. This is why it is the preferred option for a storage server and production use.

OI and OmniOS offer different repositories that cannot be used by the other, but you can use the Joyent/SmartOS pkgin repository on both of them, with the same software.


Regarding ESXi and All-In-One:

There are a lot of options and individual problems. What I can say:
- ESXi 5.5 with e1000 is broken and only usable with some special settings
- vmxnet3 is the fastest solution of all, but may give you speed and stability problems; you must evaluate it on your config
- Settings like jumbo frames and link aggregation may give you speed gains and problems; you must evaluate them on your config

- ESXi 5.1 with e1000 is a very stable config
- You may reduce CPU load/problems on external high-speed transfers by using a NIC in pass-through mode for external transfers

Your experience may differ with every update or config.
This is why NetApp, Nexenta and others allow only "tested" parts, at a high price.
 

yu130960

Member
Sep 4, 2013
122
10
18
Canada
I am not about to give up on ESXi 5.5 yet. My box has >32 GB of RAM.

I am in the process of testing a Linux VM to replace my Windows VM, and am noting that I can achieve a fairly consistent 100 MB/s.

I am sticking with vmxnet3 for my napp-it VM, as I had nothing but problems with the e1000, even with the mods suggested by gea.

My hardware is all on the HCL, so I am not sure where I am losing the VM-to-VM throughput.
 

Mike

Member
May 29, 2012
EU
VMware's paravirtualised NICs are pretty good, but suck pretty badly on some Unixes out there, AFAIK.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Keep in mind ESXi is designed for many VMs going fast. It has issues with one VM going fast. That is why passthrough still exists (and SR-IOV).

You have to have 4-8 vCPUs to service (and pin) interrupts to move data in a VM, just as in a physical box.

Run ethtool -S vmnic0, and if you don't see 4 to 8 rings in use, that's bad.
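mrkrad's ring check can be scripted. A sketch against canned sample output, since the stat names differ per NIC driver (the rx_queue_N_packets names below are placeholders for whatever your ethtool -S vmnic0 actually prints):

```shell
# Sample 'ethtool -S' output; real counter names vary by driver.
cat > ethtool-S.sample <<'EOF'
NIC statistics:
     rx_queue_0_packets: 184233
     rx_queue_1_packets: 175882
     rx_queue_2_packets: 0
     rx_queue_3_packets: 0
EOF

# A ring counts as "in use" if its packet counter is nonzero.
awk -F': ' '/rx_queue_[0-9]+_packets/ && $2 > 0 { n++ }
            END { printf "active rx rings: %d\n", n+0 }' ethtool-S.sample
```

Per mrkrad's rule of thumb, you want 4 to 8 rings showing traffic on a busy host.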
 

yu130960

Member
Sep 4, 2013
122
10
18
Canada
EDIT: I think I got it working now, although I would like to know how to make the settings in step #6 survive a reboot.

_____________________

I came back to trying to get jumbo frames to work almost a year later (now using ESXi 5.5 u2) and have had some success.

1. download napp-it_14b_vm_for_ESXi_5.0-5.5.zip and deploy it

2. (optional) Log in with PuTTY and update OmniOS to r151012:
Code:
/usr/bin/pkg unset-publisher omnios
/usr/bin/pkg set-publisher -P -g http://pkg.omniti.com/omnios/r151012/ omnios
/usr/bin/pkg update --be-name=omnios-r151012 entire@11,5.11-0.151012
3. Using the ESXi console, log in and uninstall VMware Tools:

Code:
# vmware-uninstall-tools.pl
4. Fix the VMware Tools driver bug by modifying /vmware-tools-distrib/bin/vmware-config-tools.pl as set out here:

Code:
vi /vmware-tools-distrib/bin/vmware-config-tools.pl
Press the down arrow to start scrolling, then search for the string by typing
Code:
/currentMinor
then find
Code:
if ($minor < $currentMinor) {
    $osDir = $minor;
  } else {
    $osDir = $currentMinor;
  }
and change it to the following by moving the cursor over the < and pressing r followed by >:

Code:
if ($minor > $currentMinor) {
    $osDir = $minor;
  } else {
    $osDir = $currentMinor;
  }
Once the change is made, press Esc, then type :wq to save and quit.

Then reinstall VMware Tools to pick up the v. 11 drivers:

Code:
# cd /vmware-tools-distrib
# ./vmware-install.pl
5. Modify the /kernel/drv/vmxnet3s.conf file as set out below, then REBOOT:
Code:
EnableLSO=0,0,0,0,0,0,0,0,0,0;

MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;
6. [EDIT: read posts #19 & #22] After every reboot (note: the jumbo-frames setting does not survive a reboot), enter the following:

Code:
ndd -set /dev/vmxnet3s0 accept-jumbo 1
7. Raise the MTU from 1500 to 9000 on all the relevant ESXi vSwitches and on any other hardware in the pipeline (physical switches).
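One way to verify jumbo frames end to end is a don't-fragment ping with the largest payload that fits in a 9000-byte MTU. That payload size is just the MTU minus the IPv4 and ICMP headers (the ping flags in the comment are Linux syntax; check your platform's man page for the equivalent):

```shell
# Largest ICMP echo payload that fits an MTU of 9000 without fragmenting:
mtu=9000
ip_hdr=20      # IPv4 header without options
icmp_hdr=8     # ICMP echo header
payload=$((mtu - ip_hdr - icmp_hdr))
echo "max ping payload at MTU $mtu: $payload bytes"

# Then, from a jumbo-enabled client (Linux syntax shown):
#   ping -M do -s 8972 <storage-vm-ip>
# If that fragments or fails, some hop is still at MTU 1500.
```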
 

yu130960

Member
Sep 4, 2013
122
10
18
Canada
I am still testing the OmniOS/napp-it VM, and I just moved a few VMs over to it. I was experiencing some crashes of OmniOS, but I think that had to do with the tunables below (which I have stopped using). If it stays stable, I will implement the changes on my production server and do some speed testing (other than iperf).
Code:
#ipadm set-prop -p max_buf=4194304 tcp
#ipadm set-prop -p recv_buf=1048576 tcp
#ipadm set-prop -p send_buf=1048576 tcp
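For scale, those tunables work out as follows (plain arithmetic, nothing OmniOS-specific; the defaults they replace vary by release):

```shell
# The values from the ipadm commands above, converted from bytes to MiB.
max_buf=4194304    # tcp max_buf
recv_buf=1048576   # tcp recv_buf
send_buf=1048576   # tcp send_buf

mib=1048576        # bytes per MiB
echo "max_buf:  $((max_buf  / mib)) MiB"
echo "recv_buf: $((recv_buf / mib)) MiB"
echo "send_buf: $((send_buf / mib)) MiB"
```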
 

yu130960

Member
Sep 4, 2013
122
10
18
Canada
Just enabled jumbo frames on my main OmniOS napp-it server as set out in post #13, and it appears to be stable; iperf is giving me 10 Gbit/s in and out of the server with VMs on the same box. I have NOT enabled the tunables detailed in post #16, as I am pretty happy with the iperf results.
 

J-san

Member
Nov 27, 2014
Vancouver, BC
Hi yu130960,

I followed most of these steps on a new OmniOS AIO server I'm just setting up and testing now. (omnios-8c08411)

I did an iperf test between the OmniOS server and an Ubuntu client after enabling jumbo frames and a 9000 MTU (virtual switch, VM to VM):
9.63 Gbits/sec

I didn't modify the latest VMware Tools Solaris driver install:
/vmware-tools-distrib/bin/vmware-config-tools.pl
(I tried the Solaris 11 driver, but it seemed unstable; however, that may have been caused by rebooting and not doing the "ndd -set /dev/vmxnet3s0 accept-jumbo 1" after the reboot.)

Nor did I change the EnableLSO=0,0,0,0,0,0,0,0,0,0; (it defaults to 1,1,1,1,1,1,1,1,1,1;)

Once you reboot in step 6, your NFS or iSCSI datastores in ESXi will be very erratic and drop out if, like you said, you do not perform:

Code:
ndd -set /dev/vmxnet3s0 accept-jumbo 1
As a quick fix to get this working automatically after a reboot of the OmniOS VM, I did the following:

Create a new script under /etc/init.d/ called:

VMnetEnableJumbo

Code:
#!/bin/bash

startcmd () {
  ndd -set /dev/vmxnet3s0 accept-jumbo 1
 
  ## Add any extra additional vmxnet3s interfaces here
  #ndd -set /dev/vmxnet3s1 accept-jumbo 1
}

stopcmd () {
  echo "no stopping this train, just starting"
}

case "$1" in
'start')
  startcmd
  ;;
'stop')
  stopcmd
  ;;
'restart')
  stopcmd
  sleep 1
  startcmd
  ;;
*)
  echo "Usage: $0 { start | stop | restart }"
  exit 1
  ;;
esac

exit 0


Note: for the next step, I'm not sure if you can move this up to start earlier, but napp-it has a startup script under the /etc/rc3.d/ directory.

Add a symbolic link to the newly created script so that it runs automatically upon boot under /etc/rc3.d/:

Code:
ln -s /etc/init.d/VMnetEnableJumbo /etc/rc3.d/S98VMnetEnableJumbo
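For anyone unfamiliar with the rc convention used here: scripts in /etc/rc3.d run in lexical order at boot, and an S prefix means they are called with the start argument, so S98 just pushes this one near the end. A throwaway sandbox demo of the same shape (the paths are temporary, not the real rc directories):

```shell
# Demonstrate the init.d/rc3.d link-and-run convention in a scratch dir.
demo=$(mktemp -d)
mkdir -p "$demo/init.d" "$demo/rc3.d"

# A stand-in for /etc/init.d/VMnetEnableJumbo.
printf '#!/bin/sh\necho "jumbo enabled"\n' > "$demo/init.d/VMnetEnableJumbo"
chmod +x "$demo/init.d/VMnetEnableJumbo"

# Same shape as: ln -s /etc/init.d/VMnetEnableJumbo /etc/rc3.d/S98VMnetEnableJumbo
ln -s "$demo/init.d/VMnetEnableJumbo" "$demo/rc3.d/S98VMnetEnableJumbo"

# At boot, rc runs each S* script in lexical order with 'start':
for s in "$demo"/rc3.d/S*; do "$s" start; done
```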

You'll see a number of messages from the vmxnet3s driver on the console, as it has to disable and re-enable the interface a second time after the script enables jumbo frames.

Hope that helps.

(Note: don't forget to enable a 9000 MTU on whatever client you're testing with, as well as on the VM switch, and on the vmkernel port if using it for a VMware NFS datastore.)

Cheers.

EDIT: I think I got it working now, although I would like to know how to make the settings in step #6 survive a reboot.

_________

5. Modify the /kernel/drv/vmxnet3s.conf file as set out below, then REBOOT:
Code:
EnableLSO=0,0,0,0,0,0,0,0,0,0;

MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;
6. After every reboot (note: the jumbo-frames setting does not survive a reboot), enter the following:

Code:
ndd -set /dev/vmxnet3s0 accept-jumbo 1
7. Raise the MTU from 1500 to 9000 on all the relevant ESXi vSwitches and on any other hardware in the pipeline (physical switches).
 