AIO OmniOS: 1500 vs 9000 MTU with open-vm-tools


J-san

Member
Nov 27, 2014
Vancouver, BC
Couldn't figure out how to set the MTU to 9000 in OmniOS r151028 with ESXi 6.5 and open-vm-tools.

Started from an older napp-it-san024 install and upgraded to r151028.

I already had the following driver settings in place:
Code:
TxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
RxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
RxBufPoolLimit=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
EnableLSO=0,0,0,0,0,0,0,0,0,0;
MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;
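(For what it's worth, on OmniOS these per-instance vmxnet3s tunables normally live in the driver config file; I believe it sits at /kernel/drv/vmxnet3s.conf, but verify the path on your install. Each comma-separated value maps to one vmxnet3s instance, and a reboot is needed before changes take effect. A quick way to double-check what is currently set, assuming that path:)
Code:
# Assumed path for the vmxnet3s driver config -- confirm it exists on your system
grep -E 'MTU|EnableLSO|TxRingSize' /kernel/drv/vmxnet3s.conf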
But I couldn't enable jumbo frames the old way (this worked previously with the older VMware Tools):
Code:
root@napp-it-san024:/# ndd -set /dev/vmxnet3s1 allow-jumbo 1
operation failed: Operation not supported
And the possible MTU values only went up to 1500:
Code:
root@napp-it-san024:/etc/vmware-tools# ipadm show-ifprop -p mtu
IFNAME      PROPERTY        PROTO PERM CURRENT    PERSISTENT DEFAULT    POSSIBLE
lo0         mtu             ipv4  rw   8232       --         8232       68-8232
lo0         mtu             ipv6  rw   8252       --         8252       1280-8252
e1000g0     mtu             ipv4  rw   1500       --         1500       68-1500
e1000g0     mtu             ipv6  rw   1500       --         1500       1280-1500
vmxnet3s0   mtu             ipv4  rw   1500       --         1500       68-1500
vmxnet3s0   mtu             ipv6  rw   1500       --         1500       1280-1500
vmxnet3s1   mtu             ipv4  rw   1500       1500       1500       68-1500
vmxnet3s1   mtu             ipv6  rw   1500       --         1500       1280-1500
So I eventually came across this link:
Improve 10Gbps Performance on napp-it (Solaris 11)

So I disabled the vmxnet3s1 interface and was finally able to set the MTU to 9000:

Code:
# ipadm disable-if -t vmxnet3s1
# dladm set-linkprop -p mtu=9000 vmxnet3s1
# ipadm set-ifprop -p mtu=9000 -m ipv4 vmxnet3s1
# ipadm enable-if -t vmxnet3s1
# ifconfig vmxnet3s1 up

# dladm show-link vmxnet3s1
LINK        CLASS     MTU    STATE    BRIDGE     OVER
vmxnet3s1   phys      9000   up       --         --

# ipadm show-ifprop -p mtu vmxnet3s1
IFNAME      PROPERTY        PROTO PERM CURRENT    PERSISTENT DEFAULT    POSSIBLE
vmxnet3s1   mtu             ipv4  rw   9000       9000       9000       68-9000                                                      
vmxnet3s1   mtu             ipv6  rw   9000       --         9000       1280-9000
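
Since dladm set-linkprop and ipadm set-ifprop were run without -t, the 9000 value should survive a reboot (the -t flag is only on the disable-if/enable-if steps). A minimal way to double-check, assuming the same interface name:
Code:
# Link-level MTU, current and persistent values
dladm show-linkprop -p mtu vmxnet3s1

# Interface-level MTU and address after re-enabling
ipadm show-ifprop -p mtu vmxnet3s1
ipadm show-addr | grep vmxnet3s1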
 

J-san

Member
Nov 27, 2014
Vancouver, BC
So I ran a quick benchmark before and after changing the MTU for the NFS-shared ESXi storage:

The config is:
  • All-In-One (storage VM on the same host)
  • ESXi vSwitch set up with 9000 MTU
  • NFS datastore shared to ESXi 6.5 over the software vSwitch above
  • Windows VM with its VMDK hard disk on the NFS datastore

1. With OmniOS vmxnet3s1 (the datastore NFS interface) at MTU 1500:

[Screenshot: omnios-r151028-1500mtu-benchmark-2x6tb-s3700-slog.PNG]

2. With vmxnet3s1 (the datastore NFS interface) at MTU 9000:


[Screenshot: omnios-r151028-9000mtu-benchmark-2x6tb-s3700-slog.PNG]

So it looks like the MTU setting does matter even when sharing over the internal ESXi vSwitch, at least for NFS-shared datastores.
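
If anyone wants to separate raw network throughput from storage effects when comparing 1500 vs 9000, a quick point-to-point test over the same vSwitch is one option. A minimal sketch, assuming iperf3 is installed on both the OmniOS VM and a test guest (the IP is a placeholder for the storage VM's vmxnet3s1 address):
Code:
# On the OmniOS storage VM (server side)
iperf3 -s

# On the test guest (client side), four parallel streams for 30 seconds
iperf3 -c 192.168.10.5 -t 30 -P 4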

Cheers
 

dragonme

Active Member
Apr 12, 2016
What processor are you running, and what resources are you assigning to the napp-it and target VMs?

I would love for you to run some comparisons turning LSO off and on at 9000 and 1500 as well.

I use very low-power (for LGA 1366) L5640s, which are great for threads in a 2x setup (24 logical CPUs), but clocks are low, so single-threaded work suffers. On ESXi 6.0 with these chips and napp-it, interrupts go crazy high during high-IOPS operations.

I am finding slightly better performance for external traffic through the host's Intel NIC with LSO on, since the physical adapter supports LSO. I'm not sure whether there will be a benefit in a purely virtual environment.
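
For reference, if someone wants to run that LSO comparison on the vmxnet3s side, one knob is the EnableLSO line already quoted in the driver config earlier in the thread (assumed to live in /kernel/drv/vmxnet3s.conf); each value maps to one vmxnet3s instance, and a reboot of the storage VM is needed for the change to apply:
Code:
# driver.conf syntax; '#' starts a comment
# LSO disabled on all ten vmxnet3s instances (as in the config quoted above):
EnableLSO=0,0,0,0,0,0,0,0,0,0;
# LSO enabled -- use this line instead, then reboot and re-run the benchmarks:
EnableLSO=1,1,1,1,1,1,1,1,1,1;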
 

J-san

Member
Nov 27, 2014
Vancouver, BC
Hi Dragonme,

The hardware is:
  • Supermicro X10DRi-T
  • 2 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
  • 128 GB RAM
  • 3 x 9211-8i in HBA IT mode with P20.0.0.7 firmware
I've assigned the following to the OmniOS VM:
  • 4 x vCPUs
  • 59392 MB RAM
  • 3 x 9211-8i in passthrough
 

dragonme

Active Member
Apr 12, 2016
@J-san

That's hella beefy resources for a ZFS machine... mine has 4 CPUs worth of L5640 and only 10 GB of RAM!!

As for setting the MTU to 9000, is the easiest way with napp-it just to use the GUI, under the network settings?

There is a bit of confusion on the interwebs as to which Solaris tools are best used to set or modify network settings, especially virtual ones.

Glad you have it figured out.

Report back if you ever get around to testing LSO changes, both through physical adapters and through internal vSwitches.

Also, if you are concerned with latency on the NFS side, I have found that setting the napp-it VM's latency sensitivity to High and reserving CPU/memory (memory has to be reserved anyway since you are passing through the LSI cards) helps greatly, especially under load.

It seems that interrupts and context switching are minimized, so there is less overhead and less %wait.

Further, keep all napp-it resources on the same NUMA node unless spanning is absolutely necessary (i.e. don't give it more than 50% of memory so that it spans NUMA nodes); otherwise the VM becomes a 'wide' VM and resource scheduling again adds significant overhead and resource contention.
 

J-san

Member
Nov 27, 2014
Vancouver, BC
Hi Dragonme,

Yes, it's a work machine :) and is also used for hosting VMs locally.

I haven't played with setting the MTU via the napp-it GUI, although that might be a good spot as well.

I have used napp-it to edit sd.conf, both to force 4K sector support for specific drives and to let OmniOS know which SSDs have power-loss protection, for faster SLOG performance.
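
For anyone who wants to do the same by hand, those overrides go into the sd-config-list in sd.conf. A minimal sketch of the kind of entry involved (the vendor/product string below is only an example and must match your drive's SCSI inquiry string, with the vendor field padded to 8 characters; a reboot, or forcing sd to re-read its config, is needed to apply it):
Code:
# /kernel/drv/sd.conf -- illumos sd-config-list format:
#   "<vendor(8 chars)><product>", "<property:value,...>";
sd-config-list=
    "ATA     INTEL SSDSC2BA20", "physical-block-size:4096,cache-nonvolatile:true";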

The following is a more modern CrystalDiskMark 6 run.

Also interesting is the difference made by which virtual controller you use to attach a disk to your guest VM.
The following is on the same datastore; the only difference is the SCSI controller.

E.g. a test Win2k8r2 VM with the virtual disk controller set to:

  • LSI Logic SAS controller:
[Screenshot: omnios-r151028-9000mtu-CryDskMrk3-2x6tb-s3700-slog-lsi-ctrl.PNG]
  • VMware Paravirtual controller:
[Screenshot: omnios-r151028-9000mtu-CryDskMrk3-2x6tb-s3700-slog-paravirt-ctrl.PNG]
 

dragonme

Active Member
Apr 12, 2016
@J-san

Yeah, the paravirtual controller, when available, is just like the improvements with vmxnet: it cuts the overhead of 'simulating' real hardware for its own sake in favor of streamlining and optimizing just the needed functionality. That means less processor time and, more importantly now that machines are patched for Heartbleed and Meltdown, fewer interrupts and less resource use, since processes have to be kept separated.


napp-it, or really rather OmniOS, did some strange things when I was using drives via RDM passthrough. An existing pool that was built on another machine worked fine, but when I passed a new drive through to OmniOS/napp-it and added it to the pool, it didn't format it correctly: it added it via a partition instead of as a whole disk, with a mismatched sector size/ashift. I didn't see it until after the server was reconfigured so I could pass through the entire SATA controller and do away with RDM. Everything looked peachy until the switch; then I could see what it had done under the hood.

It appeared to have been added as a whole disk while it was an RDM, when in fact it was partitioned. Weird.
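
One way to spot this kind of mismatch after the fact is to check whether the vdevs show up as slices and what the pool labels say. A rough sketch; the pool name is a placeholder:
Code:
# Vdevs listed as slices (e.g. c1t0d0s0) suggest a partition-based add
zpool status tank

# ashift and whole_disk per vdev from the cached pool config
zdb -C tank | egrep 'ashift|whole_disk'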
 

DistantStar

New Member
Dec 21, 2019
J-san said:
So I disabled the vmxnet3s1 interface and was finally able to set the MTU to 9000:
Thanks, this is what finally allowed me to set the MTU after getting 'link busy' errors every other way. However, after setting the MTU to 9000 here, it seems to take a while longer for the network to come up after a reboot. It used to be accessible via SSH or SMB as soon as the console login came up, but now it takes an extra minute or two after that before I can SSH in or browse via SMB. Any ideas? Or maybe it's a coincidence with some other issue...
 

gea

Well-Known Member
Dec 31, 2010
DE
Have you enabled jumbo frames on all network devices (switches, the ESXi vSwitch, clients)? Do you use DHCP? (The DHCP server must support jumbo frames then.) Try a manual IP setting.
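
A quick end-to-end check for jumbo frames is a don't-fragment ping with a payload just under 9000 bytes (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header). For example, with a placeholder address for the storage VM's NFS interface:
Code:
# From the ESXi shell
vmkping -d -s 8972 192.168.10.5

# From a Windows client
ping -f -l 8972 192.168.10.5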