
ESXi + OmniOS w/ napp-it - Optimizations/Recommended Settings?

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by ZzBloopzZ, Sep 27, 2013.

  1. ZzBloopzZ

    ZzBloopzZ New Member

    Hello,

    I have a newly built All-In-One. It is currently on ESXi v5.1 Update 1 with just one VM installed currently, OmniOS with napp-it.

    Server Specs:

    Intel E3-1230v2
    32GB 1600 ECC UDIMM
    SuperMicro X9SCM-IIF
    2x IBM M1015 (IT Mode) connected to 10x Toshiba 3TB in RAIDZ2
    1x 16GB SanDisk Cruzer Fit USB Flash Drive (for ESXi)
    1x Crucial M4 256GB SSD (for my VM's)
    Connected via CAT 5e on a Trendnet 16 port Gigabit switch

    Usage:

    I plan to just run 2 VM's on this system. The primary VM will be OmniOS for file server duties using ZFS; I also plan to use it for FTP and SABnzbd. The second VM will be pfSense, which I hear is amazing. I will enable full pass-through of the 2 IBM M1015 to OmniOS.

    Based on my intended usage, what settings do you guys recommend?

    ESXi Related Questions:

    1. How much RAM should I allocate to OmniOS?
    2. How many vCore and vCPU should I set for OmniOS?
    3. For best network performance, should I use the vmxnet3 adapter or the E1000 virtual adapter? One guide I read says to use vmxnet3, while another says E1000.
    4. Since the mobo has two Intel NICs, should I use the other NIC and pass it through to pfSense?
    5. Since my two pass-through IBM M1015s have the v16 BIOS, should I load v16 drivers into ESXi or into the OmniOS VM?
    6. Any other ESXi related settings/tweaks I should configure?


    napp-it Related Questions:

    7. When creating a pool, which ZFS pool version should I select — the standard "default" option, or a particular version? This is the first time I will be using my 10x 3TB Toshibas, so I am not coming from an older pool version.
    8. Any other particular tweaks I should perform in napp-it? I mainly followed the guide on napp-it.org. Looks like there is nothing else to really tweak besides creating the network shares, but figured I would ask.

    Appreciate any feedback!
  2. 33_viper_33

    33_viper_33 Member

    I haven't started working on optimization yet as far as giving VMs only what they need, so I'm a bad source of info there.

    For pfSense, you will want to pass through one of your NICs for the WAN, mainly for security reasons. If your switch supports it, you can use VLANs instead, but I've always shied away due to security concerns. This is something I would like to experiment with as I play with vCloud and allow guests to move from host to host. The other NIC can be a vNIC. The E1000 will be plenty fast, as I've seen it hit upwards of 750Mb/s. I did get vmxnet3 working under pfSense; "VMware Front Experience: How to install (or update) VMware Tools in pfSense" is a good guide for getting this working. Just note, you will need to change the path in some of those commands to match the FreeBSD version your copy of pfSense is based on. Mine seems stable, but I have seen an error on boot twice over the past 3 months. I haven't figured out the cause yet, but am not too motivated to, since it doesn't seem to affect anything. If this is something you want to be stable rather than experimental, then the E1000 is more than adequate for a router.

    I use vmxnet3 for all my guests. The E1000 is more resource intensive than vmxnet3. I think I may be in the minority, as most posts from the real experts here appear to use the E1000. The main advantage of the E1000 is compatibility; vmxnet3 is not supported natively by many OSes.
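    For anyone setting this by hand rather than through the vSphere client: the adapter type is chosen per vNIC in the VM's .vmx file. A minimal fragment, assuming the first adapter and the default port group name:

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"     # or "e1000" for maximum compatibility
ethernet0.networkName = "VM Network" # assumed port group name - adjust to yours
```

    The same choice can be made in the GUI when adding the network adapter; editing the .vmx requires the VM to be powered off.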

    For ZFS, I am again not an expert and have only recently started playing with it. Most guides I have read suggest 1GB of RAM per 1TB of storage, plus an extra 2GB of RAM for the OS. I would defer to dba's or PigLover's experience and advice. I believe one of those two has written something up recently suggesting that 1GB/1TB is overkill.

    For OmniOS, look for andreas' post with a preconfigured .OVF; it may save you some time. I'm using OpenIndiana with success and may be creating a new install of it soon. If you would like, I can create an .OVF file containing OpenIndiana and napp-it.

    For pass-through, the drivers need to be present in the guest OS. You definitely want to pass your storage drives (EDIT: storage adapter) through to the guest.
    Last edited: Sep 28, 2013
  3. gea

    gea Member

    1. All RAM that is not needed elsewhere.
    OmniOS/ZFS needs about 1-2 GB. If you are satisfied with pure disk performance, this is enough.
    But RAM is about 1000x faster than your disks. If you add more RAM, it is used as a read cache.

    With pfSense as the only other VM, I would give OmniOS 24 GB RAM.
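    If you want to see how much of that RAM OmniOS actually uses as read cache, the ARC kernel statistics can be queried. A command sketch, OmniOS/illumos only (values are in bytes):

```
# Current ARC (ZFS read cache) size and its configured ceiling.
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max
```

    napp-it also shows this on its system/statistics pages, so the command line is optional.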

    2.
    With ESXi 4 there was a stability problem with more than one vcore.
    With ESXi 5 you can give it two or four vCPU cores.

    3.
    The E1000 is the most stable option. With internal All-In-One transfers, it allows data rates of up to a few GB/s.
    VMXNET3 is the fastest option. It reduces CPU load and gives you up to twice the performance.
    On some configs it is not as stable as the E1000.

    4. I always use a switch to split physical ports and connect the switch to ESXi or Solaris with a tagged VLAN, then split it there to different vnics. This is the simplest config and allows vMotion and snaps on the ESXi side.

    (If someone wants your data, he will go to the next bar and hire a thief instead of stealing your data in a manner that requires NSA-level engineering skills.)

    5.
    Use the default driver.

    6.
    Beside RAM and vcpu, nothing per default.

    7.
    Standard gives you a v28/5000 pool with the newest feature flags enabled.
    You cannot import such a pool on other/older systems.
    If you need this export option, select v28.
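    napp-it exposes this as a menu choice when creating the pool; on the command line the difference would look roughly like this (hypothetical pool and disk names):

```
# Default: newest pool format with feature flags ("v5000") -
# all features, but not importable on older ZFS implementations.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

# Portable alternative: pin the pool to legacy version 28,
# which older systems can still import.
zpool create -o version=28 tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
```

    Note that a version can only be raised later (zpool upgrade), never lowered, so the choice matters at creation time.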

    8.
    Nothing per default.
    (Defaults on Solaris are mostly ok for a NAS/SAN)
    Last edited: Sep 28, 2013
  4. ZzBloopzZ

    ZzBloopzZ New Member

    Great, thank you for the quick response. I changed around a few things; looks like I can finally use my server soon!
  5. 33_viper_33

    33_viper_33 Member

    Please post your experiences with your setup. Though I'm leaning towards a single processor E5 system for more ram, I'm still considering the E3-1230v3 or 1240v3. I would like to know how it handles your setup.
  6. ZzBloopzZ

    ZzBloopzZ New Member

    Well I built this system around April 2013, but just finally had time to configure/stress test it last week. Everything is working great so far!

    Low power draw, very quiet, and fast enough for my media storage needs. Granted, I will only have 2-3 VM's on it total, so it is good enough for me.

    The only tough part was finding 32GB of 1600 ECC UDIMM, as it is really expensive. I ended up buying it used on a random forum for $175 shipped.
  7. gea

    gea Member

    Problem with ESXi 5.5 and e1000 vnic:

    I have stability problems when using e1000 on the OmniOS VM under ESXi 5.5
    It works when using the VMXnet3 vnic.

    Maybe this problem:
    VMware KB: Possible data corruption after a Windows 2012 virtual machine network transfer


    update:
    ESXi 5.5 and e1000 seems ok after disabling tcp offload
    by adding the following lines to /kernel/drv/e1000g.conf (reboot required)

    #tcp offload disable
    tx_hcksum_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
    lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;


    I have uploaded 13b with this fix included
    (base config, you can add add-ons separately when needed)
    Last edited: Oct 1, 2013
  8. nemeas

    nemeas New Member

    Has anyone managed to get jumbo frames working on the vmxnet3 adapter? Solaris-related forums confuse me on the correct way of configuring this.
  9. gea

    gea Member

    I do not use it myself, but I suppose something like this (for vmxnet3s0):

    ipadm delete-if vmxnet3s0
    ipadm create-if vmxnet3s0 (maybe after setting the MTU)
    dladm set-linkprop -t -p mtu=9000 vmxnet3s0
    ipadm create-addr -T static -a x.x.x.x/24 vmxnet3s01/v4

    Additionally, you may need to edit /kernel/drv/vmxnet3s.conf (followed by a reboot)
    to allow an MTU > 1500 for this driver.
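    That conf file carries one comma-separated value per driver instance, much like the e1000g.conf fix earlier in this thread. A sketch of the relevant line — check the comments in your own copy of the file before editing, as the exact key may differ by driver version:

```
# /kernel/drv/vmxnet3s.conf - raise the MTU cap for instance 0
# (vmxnet3s0) so dladm will accept mtu=9000; reboot afterwards.
MTU=9000,1500,1500,1500,1500,1500,1500,1500,1500,1500;
```

    Remember the ESXi vSwitch and the physical switch must also be set to MTU 9000, or jumbo frames will be dropped along the path.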
  10. ZzBloopzZ

    ZzBloopzZ New Member

    Thanks for the update Gea. I recall you having other performance and NFS stability problems on ESXi v5.5 and OmniOS. Do you have details and remedies for these problems?

    Debating if I should upgrade from ESXi 5.1 Update 1 to 5.5 this weekend.
  11. gea

    gea Member

    If you check the knowledge base at VMware or anywhere else, you must accept that nothing is perfect at all.

    Up to ESXi 5.1 I have had no serious problems, neither with e1000 nor with VMXNET3.
    With 5.5 the e1000 driver will work only with TCP segmentation offload disabled.

    Usually, you must evaluate this with your hardware.
  12. ZzBloopzZ

    ZzBloopzZ New Member

    Thanks for the feedback.

    I will upgrade to ESXi 5.5, since I plan to use VMXnet3 with OmniOS.
