ESXi 5.5 vswitch network setup - All-in-one

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by J-san, Dec 2, 2014.

  1. J-san

I'm using this ESXi network setup on my test server, running:
- OmniOS serving the NFS share
- an ESXi local vSwitch to connect the OmniOS NFS share to ESXi as a datastore
- 9000 MTU for jumbo frame support on the storagenet vSwitch and the 2nd OmniOS NIC
- 1500 MTU for regular traffic (SMB shares to outside machines, etc.) on the 1st OmniOS NIC

    Thought I would share it, as I've seen the concept mentioned in a few places but never a step-by-step guide.

    I will assume you already have OmniOS set up with networking, and we will add a second network card (vmxnet3) to the OmniOS VM. I also assume VMware Tools is already installed and that the VMware Tools \bin perl script was not modified.

    Regular data traffic to outside clients/servers will be on:
    virtual machine port group: "VM Network" - with OmniOS ip address set to 192.168.1.15
    ESXi VMKernel Port - Management Network - IP address 192.168.1.220

    Storage traffic will be on a new vSwitch on IP address in range:
    192.168.20.1 - 192.168.20.254
    with virtual machine port group: "storagenet"
    with OmniOS ip address set to 192.168.20.15
    ESXi VMKernel Port - "VMKernel-storagenet" - IP address 192.168.20.220

    Steps:

    Under "Configuration -> Networking" in ESXi 5.5

    Setting up separate vSwitch for storage traffic:
    1. Click Add Networking.
    2. Choose "Virtual Machine" type
    3. Choose "Create a vShpere standard switch" and uncheck any "vmnic1" etc. we want "No physical Adapters".
    4. Choose network label "storagenet".
    5. Choose Finish.
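
    For reference, steps 1-5 can also be done from the ESXi shell with esxcli; the vSwitch name "vSwitch1" below is only an example, use whatever number your host assigns:

    # esxcli network vswitch standard add --vswitch-name=vSwitch1
    # esxcli network vswitch standard portgroup add --portgroup-name=storagenet --vswitch-name=vSwitch1

    Because no uplinks are added to this vSwitch, it stays internal-only, which is exactly what we want for the storage network.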

    Add VMkernel Port to new vSwitch
    (allowing NFS mount via this virtual switch to OmniOS)

    6. Make a note of the vSwitch# for new Virtual Machine Port Group.
    7. Click Add Networking.
    8. Choose "VMKernel"
    9. Select "Use vSwitch#" (that you noted)
    In the preview it should list the new storagenet.
    10. Choose network label "VMKernel-storagenet".
    11. Next.
    12. Use the following IP settings:
    IP: 192.168.20.220
    (this will be the IP you must authorize in NFS share)
    Subnet mask: 255.255.255.0
    13. Next, Finish.
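
    The same VMkernel port can be created from the ESXi shell if you prefer; "vmk1" is an assumed example name here, your host may assign a different vmk number:

    # esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VMKernel-storagenet
    # esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.20.220 --netmask=255.255.255.0 --type=static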

    Edit MTU settings in vSwitch and VMkernel to accommodate Jumbo Frames

    14. Choose "Properties" next to storagenet vswitch#
    15. Click on vSwitch on left view of properties and choose "Edit"
    16. Change MTU to 9000 and click OK.
    17. Click on VMKernel-storagenet on left view and choose "Edit"
    18. Change MTU to 9000 and click OK.
    (now your virtual switches will pass MTU 9000 packets correctly)
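
    Equivalent ESXi shell commands for steps 14-18, using the same example names vSwitch1 and vmk1 as above (adjust to your actual switch and vmk numbers):

    # esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    # esxcli network ip interface set --interface-name=vmk1 --mtu=9000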

    Add new vmxnet3 network card to OmniOS VM.

    19. Edit Settings on OmniOS VM
    20. "Add" -> "Ethernet Adapter"
    21. Choose type: vmxnet3, Network Label: storagenet
    22. Click Next, Finish.


    Setup IP address in OmniOS VM
    (this assumes the new vmxnet3s network adapter is already visible in OmniOS)

    23. Create interface if needed
    # ipadm create-if vmxnet3s1

    24. Setup interface ip address
    # ipadm create-addr -T static -a 192.168.20.15/24 vmxnet3s1/v4

    Show adapters
    # dladm show-link

    LINK        CLASS  MTU   STATE    BRIDGE  OVER
    vmxnet3s0   phys   1500  up       --      --    (management interface, 1 Gb physical)
    vmxnet3s1   phys   1500  unknown  --      --    (storage interface, 10 Gb virtual)

    Check the speed and state

    # dladm show-phys
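
    To double-check that the new address actually came up, something like this should do it:

    # ipadm show-addr vmxnet3s1/v4
    (the address object should be listed as type "static", state "ok", with 192.168.20.15/24)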


    25. Configure Jumbo Frames in OmniOS (refer to other thread)
    See:
    https://forums.servethehome.com/ind...le-and-napp-it-vmxnet3-and-jumbo-frames.2853/

    Note: you can use different MTU settings for file sharing/regular traffic and the virtual storage network.
    If you would like the original adapter vmxnet3s0 to stay at MTU 1500,
    and the new vmxnet3s1 to be MTU 9000, perform the jumbo frame setup in step 25, and afterwards:

    ****************************
    *** Allow one vmxnet3 adapter to stay at MTU 1500;
    *** otherwise it will default to the max of 9000.
    *** This setting persists across reboots.
    ****************************
    Change regular file sharing MTU so packets don't have to be re-fragmented.
    # ipadm set-ifprop -p mtu=1500 -m ipv4 vmxnet3s0
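
    To confirm the mixed MTU setup took effect, and that jumbo frames really pass end-to-end from ESXi, a quick check along these lines should work:

    # dladm show-linkprop -p mtu vmxnet3s0
    # dladm show-linkprop -p mtu vmxnet3s1
    # ipadm show-ifprop -p mtu -m ipv4 vmxnet3s0

    and from the ESXi shell (8972 = 9000 minus 28 bytes of IP/ICMP headers):

    # vmkping -d -s 8972 192.168.20.15

    If the vmkping with the don't-fragment flag fails while a plain vmkping works, something along the path is still at MTU 1500.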


    26. Ensure the ZFS sharenfs security settings allow the new IP address of the storage VMkernel port:

    You can use ':' to separate multiple allowed hosts:
    # zfs set sharenfs=rw=192.168.1.220:192.168.20.220,root=192.168.1.220:192.168.20.220 pool/dstore

    or just one host (allowing only the new ESXi VMkernel storage IP address to connect):
    # zfs set sharenfs=rw=192.168.20.220,root=192.168.20.220 pool/dstore
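
    Either way, you can verify what ended up being set with:

    # zfs get sharenfs pool/dstore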

    ------------------------------------

    Unmount any NFS shares from old IP address in ESXi

    27. Remove from inventory any virtual machines running off old NFS storage
    28. Remove NFS datastores shared off old NFS storage ip address (and switch).

    Add NFS shares to ESXi from OmniOS using new storagenet vSwitch

    29. Click on "Configuration" - "Storage" - "Add Storage"
    30. Choose Network File System, "next"
    31. Enter: Server: 192.168.20.15 (new vmxnet interface we added for OmniOS)
    Folder: /pool/dstore
    Datastore Name: VSAN-pool-dstore

    32. Profit from fast NFS storage!
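
    If you prefer the ESXi shell, the same mount from step 31 can be added and checked with esxcli:

    # esxcli storage nfs add --host=192.168.20.15 --share=/pool/dstore --volume-name=VSAN-pool-dstore
    # esxcli storage nfs list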


    [screenshot: ESXi Configuration > Networking view of the finished setup]
     
    #1
  2. Patrick

    Nice guide.
     
    #2
  3. yu130960

    I am going to read this thread a few times before I attempt an overhaul of my system.

    I guess there is no downside to setting your management network to jumbo frames, provided your physical network switch supports it.

    Awesome write-up, and love the pic. I see that the scroll bar cut off the top; am I missing something from the top half of the window?
     
    #3
  4. J-san

    Not really any downside if your management network supports it; otherwise your network performance might suffer, as packets have to be fragmented or re-sent at a smaller MTU.

    There's nothing really up there apart from the normal stuff in the default virtual machine port group: "VM Network".
    That's just the default port group created by a default ESXi install, which most of your virtual machines connect to.


    One thing, if you just want to test this: you don't technically have to remove your VMs from inventory.

    You could perform most of the steps (add the 2nd network card to the OmniOS VM, set up the 2nd IP address, add the new virtual network switch, etc.) and then re-add your existing datastore via NFS using the new IP address alongside your existing one (just name it differently).

    You could then create a test VM, or remove a test VM from inventory and re-add it through the new NFS datastore, change its network to the new virtual switch, and set its IP to the new range. (I just added a 2nd network card to my Ubuntu VM and set its IP in the 192.168.20.x range to test directly between them.)

    Let me know your results! I would be interested to see if it makes a difference in performance.

    I tried assigning my unused 1 Gb Intel NIC to the storagenet vSwitch when benchmarking, to see if LSO in Solaris would help when actual hardware is present in the vSwitch, but it seemed to make the CPU usage higher without any bandwidth benefit. (It might be different with a 10 Gb NIC that supports LSO.)
     
    #4
  5. Atomicslave

    How big was your performance gain?
     
    #5
  6. J-san

    Here's a comparison of NFS performance using the old vSwitch set to 1500 MTU versus the separate storage vSwitch at 9000 MTU. The overall ESXi CPU usage is to the right of each benchmark.

    [screenshot: NFS benchmark comparison, 1500 MTU vs 9000 MTU, with ESXi CPU usage]
     
    #6
  7. yu130960

    Nice! How do you do the benchmarks? Do you just run Crystal on a VM?
     
    #7
  8. J-san

    Yes. I removed the additional hard disk used for benchmarking, added the original VM Network NFS datastore (which uses 1500 MTU), re-attached the hard disk on the 1500 MTU datastore mount, and ran Crystal in the VM.

    I placed the opened console for the VM next to the CPU usage performance tab of the ESXi host to capture the server's CPU usage (nothing else is really running at this point), then ran the Crystal benchmark. Print screen, then cropped in GIMP.

    Then I removed the hard disk, added it back through the 9000 MTU NFS datastore mount, and re-ran the benchmark.
    Print screen and cropped in GIMP again.

    The CPU usage is pretty consistent, so you do save CPU: you move more data back and forth with 9000 MTU without chopping it up into smaller 1500 MTU packets (more TCP/IP header overhead).
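
    As a rough illustration (assuming ~40 bytes of IP+TCP headers per packet and ignoring TCP options): a 64 KB NFS transfer needs about 45 packets at 1500 MTU (~1460-byte payloads) but only about 8 packets at 9000 MTU (~8960-byte payloads), so roughly 5-6x fewer headers, interrupts and per-packet processing for the same data.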
     
    #8
  9. Hank C

    What storage do you have for the OmniOS? What tuning have you done on it?
     
    #9
  10. J-san

    #10
  11. Hank C

    Would FreeNAS work better for a newbie, or would OmniOS be fine for newbie tweaking?
     
    #11
  12. J-san

    #12
  13. Hank C

    I want to do it bare metal. I've got 10Gb on it as well, so I'd like to know the easier way to tweak it =)
     
    #13
  14. epicurean

    Hi J-San,
    Thank you very much for your guide.

    How do you do no. 23 onwards? Via PuTTY into the OmniOS VM?
     
    #14
  15. gea

    I am currently doing some performance tests under ESXi and AiO.
    I found the following:

    If I install ESXi (5.5U2, 6.00u1) and import the napp-it storage server template
    - no problem with NFS

    If I change the OS, update the OS or modify vmxnet3 settings
    - mostly I needed to boot OmniOS twice to enable new settings in vmxnet3s.conf (it seems they are buffered)
    - sometimes NFS is not mounted or disappears; an ESXi reboot fixes that
    - sometimes it was necessary to reboot ESXi, remove all VMs from inventory, delete/re-add NFS, and re-add the VMs to inventory

    ESXi 5.5
    - I was not able to use MTU 9000 in vmxnet3s.conf, as ESXi hangs during power-on of the Windows VM

    ESXi 6
    - MTU 9000 is OK

    On a test setup with a Windows 10 VM on a local AiO NFS datastore via vSwitch, running a local performance test:
    - nearly no difference between the different network settings (as all traffic is in software)
    - IP tuning is mainly relevant for external transfers or if you pass through a NIC

    Btw, I have added a vmxnet3s tuning section to my tuning panel for napp-it Pro,
    in menu System > Tuning
     
    #15
  16. epicurean

    Hi Gea,
    Would you suggest I upgrade to ESXi 6 from 5.5 in order to get MTU 9000?
     
    #16
  17. gea

    If you use external 10G Ethernet, Jumbo frames are faster -
    tests on the difference between going over ESXi vnics and using a 10G NIC in pass-through mode are needed.
    See https://www.napp-it.org/doc/downloads/performance_smb2.pdf

    For internal transfers between ESXi and the storage VM over vnics in software, Jumbo frames seem not as relevant. Besides that, ESXi 6.00U1 seems quite stable.

    Another aspect may be the new web console and the option to use USB as a datastore
    (for cheaper napp-in-one and home setups, as you can put the storage VM onto it).

    How to use USB as a datastore in ESXi 6:
    USB Devices as VMFS Datastore in vSphere ESXi 6.0
    You can use the new free ESXi web client to create a USB datastore in ESXi (stop the usbarbitrator service per CLI).
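
    For reference, stopping the USB arbitrator from the ESXi shell is typically done along these lines (check the linked article for the exact procedure):

    # /etc/init.d/usbarbitrator stop
    # chkconfig usbarbitrator off
    (the second command keeps it disabled across reboots, so the host itself can keep using the USB device for a VMFS datastore)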
     
    #17
  18. J-san

    Hi Epicurean,

    Yes, anything with a '#' is done from the command-line shell inside the OmniOS VM, so use whichever way you like to get to the shell:

    1. Open the ESXi console for the VM
    (sometimes best if you're modifying network settings, so you won't get disconnected)
    2. SSH in via PuTTY, Cygwin, etc.
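
    For example, from another machine on the regular 192.168.1.x network, something like this works (using the OmniOS management IP from the guide above; substitute whatever admin account you use):

    # ssh root@192.168.1.15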

    Cheers
     
    #18
  19. J-san

    Hi Gea,

    On ESXi 5.5 you mentioned the hang during boot-up of a VM (from the AiO datastore); did you run the following command after boot-up of the OmniOS VM?

    # ndd -set /dev/vmxnet3s0 accept-jumbo 1

    I experienced the All-In-One OmniOS datastores becoming very erratic and dropping out on ESXi 5.5u2 if I didn't run the above-mentioned command on every boot. (I put it into a startup script to ensure it gets called; a minimal sketch is below.)

    See ESXi 5.5, OmniOS Stable and Napp-it - vmxnet3 and jumbo frames
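
    In case it helps, a minimal version of such a startup script could look like this on OmniOS (the path and file name are just an example, not necessarily what I used; a simple legacy rc script works, an SMF service would too):

    # cat /etc/rc3.d/S99accept-jumbo
    #!/bin/sh
    # re-apply accept-jumbo on the vmxnet3 vNIC after every boot,
    # since the ndd setting does not survive a reboot on its own
    /usr/sbin/ndd -set /dev/vmxnet3s0 accept-jumbo 1

    # chmod +x /etc/rc3.d/S99accept-jumbo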


    Also, just curious about the Windows 10 VM: what were you testing?
    A Win10 VM network test to the OmniOS VM, or some file read/write benchmark within the Windows 10 VM against a local drive on the OmniOS datastore?

    Cheers!
     
    #19
  20. gea

    ESXi 5.5
    I only modified vmxnet3s.conf (enable jumbo) and activated it via ipadm, and then skipped Jumbo as it gave problems. My production systems are now all on 6.00u1, so the interest in solving this 5.5 problem was low.

    With Windows 10 I tested local disk performance to C: (as the VM is on NFS) and to virtual disks on other pools, as I was interested in overall disk performance to my ZFS storage over NFS.
     
    #20