AIO OmniOS 1500 vs 9000mtu open-vm-tools

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by J-san, Mar 15, 2019.

  1. J-san

    J-san Member

    Joined:
    Nov 27, 2014
    Messages:
    66
    Likes Received:
    42
I couldn't figure out how to set the MTU to 9000 in OmniOS r151028 with ESXi 6.5 and open-vm-tools.

    Started from older napp-it-san024 and upgraded to r151028.

    Already had the following set:
    Code:
    TxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
    RxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
    RxBufPoolLimit=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
    EnableLSO=0,0,0,0,0,0,0,0,0,0;
    MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;
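(Side note for anyone following along: those per-vNIC values are vmxnet3s driver tunables. As I understand it they live in the vmxnet3s driver config file; the path below is an assumption for a stock illumos/open-vm-tools install, and a reboot or driver reload is needed before changes apply.)

```shell
# Assumed path of the vmxnet3s driver config (verify on your install):
CONF=/kernel/drv/vmxnet3s.conf
# Show the per-instance tunables discussed above:
grep -E '^(MTU|EnableLSO|TxRingSize|RxRingSize|RxBufPoolLimit)=' "$CONF"
# Changes only take effect after a reboot (or driver unload/reload).
```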
But I couldn't enable MTU 9000 (this worked previously with the older VMware Tools):
    Code:
    root@napp-it-san024:/# ndd -set /dev/vmxnet3s1 allow-jumbo 1
    operation failed: Operation not supported
    
    The possible values only go up to 1500?
    Code:
    root@napp-it-san024:/etc/vmware-tools# ipadm show-ifprop -p mtu
    IFNAME      PROPERTY        PROTO PERM CURRENT    PERSISTENT DEFAULT    POSSIBLE
    lo0         mtu             ipv4  rw   8232       --         8232       68-8232
    lo0         mtu             ipv6  rw   8252       --         8252       1280-8252
    e1000g0     mtu             ipv4  rw   1500       --         1500       68-1500
    e1000g0     mtu             ipv6  rw   1500       --         1500       1280-1500
    vmxnet3s0   mtu             ipv4  rw   1500       --         1500       68-1500
    vmxnet3s0   mtu             ipv6  rw   1500       --         1500       1280-1500
    vmxnet3s1   mtu             ipv4  rw   1500       1500       1500       68-1500
    vmxnet3s1   mtu             ipv6  rw   1500       --         1500       1280-1500
    
    So I eventually came across this link:
    Improve 10Gbps Performance on napp-it (Solaris 11)

So I temporarily disabled the IP interface on vmxnet3s1 and was finally able to set the MTU to 9000:

    Code:
    # ipadm disable-if -t vmxnet3s1
    # dladm set-linkprop -p mtu=9000 vmxnet3s1
    # ipadm set-ifprop -p mtu=9000 -m ipv4 vmxnet3s1
    # ipadm enable-if -t vmxnet3s1
    # ifconfig vmxnet3s1 up
    
    # dladm show-link vmxnet3s1
    LINK        CLASS     MTU    STATE    BRIDGE     OVER
    vmxnet3s1   phys      9000   up       --         --
    
    # ipadm show-ifprop -p mtu vmxnet3s1
    IFNAME      PROPERTY        PROTO PERM CURRENT    PERSISTENT DEFAULT    POSSIBLE
    vmxnet3s1   mtu             ipv4  rw   9000       9000       9000       68-9000                                                      
    vmxnet3s1   mtu             ipv6  rw   9000       --         9000       1280-9000
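One extra check worth doing: confirm jumbo frames actually pass unfragmented end to end, not just that the link reports MTU 9000. The 8972 payload below is MTU 9000 minus the 20-byte IPv4 header and 8-byte ICMP header; the vmkping line runs on the ESXi host (`-d` is don't-fragment, `-s` is payload size; the IP is a placeholder):

```shell
# Largest ICMP payload that fits in a single 9000-byte frame:
MTU=9000
PAYLOAD=$((MTU - 20 - 8))   # 20-byte IPv4 header + 8-byte ICMP header
echo "$PAYLOAD"             # 8972

# From the ESXi shell, ping the storage vNIC with don't-fragment set
# (placeholder IP; run this on the ESXi host, not in OmniOS):
#   vmkping -d -s 8972 192.168.1.10
```

If that ping fails while a plain ping works, some hop (vSwitch, port group, or guest NIC) is still at 1500.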
     
    #1
    Last edited: Mar 15, 2019
  2. J-san

So I did a quick benchmark before and after changing the MTU for the NFS-shared ESXi storage:

The config is:
    • All-In-One
    • ESXi vSwitch set up with 9000 MTU
    • NFS datastore shared to ESXi 6.5 over the software vSwitch above
    • Windows VM with an attached VMDK hard disk on the NFS datastore

1. With OmniOS vmxnet3s1 (datastore NFS interface) at MTU 1500:

    omnios-r151028-1500mtu-benchmark-2x6tb-s3700-slog.PNG

2. With vmxnet3s1 (datastore NFS interface) set to MTU 9000:


    omnios-r151028-9000mtu-benchmark-2x6tb-s3700-slog.PNG

So it looks like it does matter what MTU you set, at least for NFS-shared datastores, even when traffic stays on the internal ESXi vSwitch.

    Cheers
     
    #2
    gea likes this.
  3. dragonme

    dragonme Active Member

    Joined:
    Apr 12, 2016
    Messages:
    282
    Likes Received:
    28
What processor are you running, and what resources are you assigning to both the napp-it and target VMs?

I would love for you to run some comparisons toggling LSO on and off at both 9000 and 1500 MTU as well.

I use very low-power (for socket 1366) L5640s, which are great for threads in a 2x setup (24 logical CPUs), but clocks are low, so single-threaded stuff suffers. On ESXi 6.0 with these chips and napp-it, interrupts go crazy high during high-IOPS operations.

I am finding slightly better performance for external traffic through the host's Intel NIC with LSO on, since the physical adapter supports LSO. Not sure whether there will be a benefit in a purely virtual environment.
     
    #3
  4. J-san

    Hi Dragonme,

    The hardware is:
    • Supermicro X10DRi-T
    • 2 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
    • 128 GB RAM
• 3 x 9211-8i in HBA IT mode with P20.0.0.7 firmware
    I've assigned the following to the OmniOS VM:
• 4 x CPUs
    • 59392 MB RAM
    • 3 x 9211-8i in passthrough
     
    #4
  5. dragonme

    @J-san

That's hella beefy for a ZFS machine... mine has 4 CPUs of an L5640 and only 10 GB of RAM!

As for setting MTU 9000, isn't the easiest way with napp-it just to use the GUI, under the network settings?

There is a bit of confusion on the interwebs as to which Solaris tools are best used to set/modify network settings, especially virtual ones.

Glad you have it figured out.

Report back if you ever get around to testing LSO changes, both through physical adapters and through internal vSwitches.

Also, if you are concerned with latency on the NFS side, I have found that setting the napp-it VM's latency sensitivity to high and reserving CPU/memory (memory has to be reserved anyway since you are passing through the LSI cards) helps greatly, especially under load.

It seems that interrupts and context switching are minimized, so there is less overhead and less %wait.

Further, keep all napp-it resources on the same NUMA node unless absolutely necessary (i.e. don't give it more than 50% of memory, spanning between NUMA nodes); otherwise it becomes a "wide" VM, and resource scheduling again starts to add significant overhead and resource contention.
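As a rough sanity check on the NUMA sizing point (assuming the 2 x E5-2620 v3 / 128 GB box above, with memory split evenly across the two sockets):

```shell
# Each NUMA node's share of RAM on a 2-socket, 128 GB box
# (assumes an even split across sockets):
TOTAL_GB=128
NODES=2
NODE_GB=$((TOTAL_GB / NODES))
echo "$NODE_GB"          # 64

# The 59392 MB assigned to the OmniOS VM, in GB:
VM_GB=$((59392 / 1024))
echo "$VM_GB"            # 58 -> fits within a single NUMA node
```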
     
    #5
  6. J-san

    Hi Dragonme,

Yes, it's a work machine :) and is also used for hosting VMs locally.

    I haven't played with setting the MTU via the napp-it GUI, although that might be a good spot as well.

I have used napp-it for editing sd.conf, both to force 4K-sector support for a list of drives and to let OmniOS know which SSDs have power-loss protection, for faster SLOG performance.

The following is from the more modern CrystalDiskMark 6.

Also interesting is the difference depending on which virtual controller you use to attach a disk to your guest VM.
    The following is the same datastore, just with different SCSI controllers.

E.g. a test Win2k8r2 VM with the virtual disk controller set to:

    • LSI Logic SAS controller:
    omnios-r151028-9000mtu-CryDskMrk3-2x6tb-s3700-slog-lsi-ctrl.PNG
    • VMware paravirtual:
    omnios-r151028-9000mtu-CryDskMrk3-2x6tb-s3700-slog-paravirt-ctrl.PNG
     
    #6
  7. dragonme

    @J-san

Yeah, paravirtual, when available, is just like the improvements with vmxnet: instead of "simulating" real hardware for simulation's sake, it streamlines and optimizes the needed functionality while reducing overhead, i.e. processor time and, more importantly now that machines are patched for Spectre and Meltdown, interrupts/resources, since processes have to be kept separated.


napp-it, or really rather OmniOS, did some strange things when I was using drives via RDM passthrough. An existing pool that was built on another machine worked fine, but when I passed a new drive to OmniOS/napp-it and added it to the pool, it didn't format it correctly: it added it via a partition instead of the whole disk, with a mismatched sector size/ashift. I didn't see it until the server was reconfigured so I could pass through the entire SATA controller and do away with RDM. Everything looked peachy until the switch; then I could see what it had done under the hood.

    As an RDM it appeared to have been added as a whole disk, when in fact it was partitioned. Weird.
     
    #7