ESXi 6.7u3, Napp-It AIO, & Tuning


pc-tecky · Active Member · May 1, 2013
Hello,

Effectively a repost (if I can remove the other one). I don't want to rework everything, but I also have no qualms about ripping everything out to rebuild and configure everything (ESXi 6.7u3, Napp-It, etc.) with a clean slate.

The all-Intel server (my ESXi box) has: effectively the P4216 chassis; the S2600 motherboard; 2x Xeon E5-2670 v1 CPUs; 128GB DDR3 ECC RAM; a ~500GB SanDisk SSD; a new 1TB Samsung SSD; a new/unused 32GB SanDisk Fit USB flash drive; a 6-port 1GbE card; 2x LSI 2008 8-port IT-mode HBA cards; and 8x 2.5" 4TB Seagate drives (shucked from externals) in an existing ZFS array. I did a clean install of ESXi 6.7u3 to the USB drive, and put logical order to the chaos of the network interfaces by editing the esx.conf file at /etc/vmware/.
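For anyone curious what that reordering involves: ESXi binds each vmnic name to a PCI address in /etc/vmware/esx.conf, and swapping the vmkname values (followed by a reboot) renames the interfaces. A hypothetical sketch, with made-up PCI addresses; the exact entry format can vary between ESXi versions:

```
/device/000:004:00.0/vmkname = "vmnic0"
/device/000:005:00.0/vmkname = "vmnic1"
/device/000:006:00.0/vmkname = "vmnic2"
```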

I now have a Mac Mini (Late 2014 model) in addition to all my other physical systems, all of which sit on a physical 1GbE network. The issue I'm having, surprise surprise: the Mac Mini/macOS 10.15 Catalina, in both physical and VM form, will not transfer files to the SMB-enabled ZFS array. I can upload those same files to ESXi 6.7u3, or even between the physical machine and the VM, just fine (but that is a cumbersome way to transfer files). I also have the macOS VM configured with dual NICs in an aggregated/bonded link, which seems to work just fine.

So the questions I now have:

My last attempt with the Napp-It AIO was mixed. I was trying to create an internal/external network plus a dedicated ESXi-NFS network, but all of that produced a broken, chaotic, tangled virtual network.

How do I update OmniOS/OmniOSce? Per @gea's reply, use "pkg update", akin to the "sudo apt update && sudo apt upgrade" (apt and apt-get being largely interchangeable here) used in many Linux distributions.
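As a sketch, the OmniOS update flow (assuming the default "omnios" publisher already points at the release you want) looks like:

```shell
# refresh package metadata and apply all pending updates;
# pkg automatically clones a new boot environment (BE) when
# system packages change, so the old BE stays bootable
pkg update -v

# confirm the new BE exists and is flagged active on reboot ("R")
beadm list

# reboot into the updated boot environment
init 6
```

If the update misbehaves, the previous boot environment can be re-activated with beadm and booted into again.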

How much RAM do I want to give this Napp-It VM? It seems it was initially 3GB; 8GB is the minimum, and 16GB or even 32GB are options. As it's just me, one person, what would be ideal?

Do I want or need a ZIL/SLOG or L2ARC drive for the Napp-It VM? How do I set that up?

How do I update Napp-It? From within Napp-It's web GUI interface (yeah, I know, that was a bit redundant).

Do I want to aggregate/bond a few e1000g and/or vmxnet3 virtual NICs?
-- dladm create-aggr -L active -l vmxnet3s0 -l vmxnet3s1 -l vmxnet3s## fastaggrlan0
-- ipadm create-if fastaggrlan0
-- ipadm create-addr -T dhcp fastaggrlan0/dhcp (or fastaggrlan0/v4 for IPv4 vs fastaggrlan0/v6 for IPv6)
--- while everything looks configured correctly, I can't get an IP address. Is that because my home/consumer networking gear is inferior?
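One likely culprit, offered as an assumption: "-L active" requests LACP, and a standard ESXi vSwitch does not speak LACP (only a distributed vSwitch with LACP support does), so the aggregation never negotiates and DHCP fails. A sketch of how to inspect and tear down the aggregation on OmniOS, reusing the link/addrobj names from the commands above:

```shell
# show extended aggregation state, including per-port status,
# to see whether the peer is actually negotiating
dladm show-aggr -x

# tear down: address object first, then the interface, then the aggr
ipadm delete-addr fastaggrlan0/dhcp
ipadm delete-if fastaggrlan0
dladm delete-aggr fastaggrlan0
```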

--or--

Do I want to simply set jumbo frames with MTU=9000 option to the virtual NICs and to the vSwitch(es) of ESXi? If so, how?

Eventually, I want to go 10GbE. I'm also considering PCIe-to-multi-NVMe and/or SATA adapter cards.
 

pc-tecky · Active Member · May 1, 2013
Err, never mind. It seems trying to create an aggregated/bonded link with OmniOSce creates some kind of loopback condition on the trunk link(s), preventing the aggregated link from getting an IP address. Unless it's my inferior consumer networking hardware again?

Meanwhile, the vmxnet3 NICs may or may not have a connection and definitely don't have IP addresses. Are the e1000g NICs better than, or the same as, the vmxnet3 NICs in this situation for Napp-It?
 

gea · Well-Known Member · Dec 31, 2010 · DE
A few hints

If you want the newest SMB and ZFS features, update to OmniOS 151032 stable
(or use the OVA template with 151032).

The minimal RAM requirement for 64-bit Solaris/OmniOS is 2 GB. With Intel Optane disks this may even be fast. With slower SSDs or mechanical disks you want a RAM-based read/write cache to improve performance. For a production system or a fast lab system you usually give a ZFS storage server 8-64 GB, in some cases even more. An L2ARC extends the RAM-based read cache: with enough RAM it is worthless; with less RAM it can be helpful.

On a crash, the content of the RAM-based write cache is lost. While this will not corrupt ZFS itself, it may corrupt VMs with foreign filesystems, or for example databases with transactions. To avoid data loss you can enable sync to protect the write RAM cache. With mechanical disks this can mean that write performance per disk goes down from 100 MB/s to 10 MB/s. The SLOG is there to protect the RAM cache without such a performance degradation. (A SLOG must be VERY fast regarding latency and QD1 IOPS, and must have power-loss protection.)
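To make this concrete, a hedged sketch of the ZFS commands involved, using a hypothetical pool name (tank), dataset name, and device IDs:

```shell
# force synchronous write semantics on a dataset (protects the
# RAM write cache at the cost of write speed without a SLOG)
zfs set sync=always tank/vmstore

# add a fast, power-loss-protected device as a dedicated SLOG
zpool add tank log c2t0d0

# add an L2ARC cache device (only helps once RAM is exhausted)
zpool add tank cache c3t0d0

# verify the log and cache vdevs are attached
zpool status tank
```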

Napp-it update: see menu About > Update.

ESXi and Solaris/OmniOS allow you to set up virtual NICs and virtual switches with aggregation and VLAN support. Very many options... Basically I would follow the KISS principle, especially for first steps. This means: use one virtual switch in ESXi and connect the physical ESXi NIC to this switch (no VLAN or aggregation). Connect the ESXi management port to this vSwitch. Then in OmniOS use one vNIC (e1000 or vmxnet3) connected to the ESXi vSwitch. This means that from everywhere you can access the ESXi and OmniOS management GUIs and LAN services like NFS or SMB.

E1000 is a very old NIC and the always-available default in ESXi and VMs. It is slow and detected as a 1G NIC. Vmxnet3 is much newer. It comes with the VMware tools and allows higher performance due to reduced CPU load. It is detected as a 10G NIC. Real performance of both depends on CPU power, as they are both software. Link aggregation only complicates vNIC settings. Use one vmxnet3 NIC and call it a day.
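A quick way to see how OmniOS detects each vNIC (an illustrative check, not from the post):

```shell
# links with their negotiated speed: vmxnet3 typically shows
# as 10000 Mb/s, e1000g as 1000 Mb/s
dladm show-phys

# all datalinks and their state
dladm show-link
```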

Jumbo frames can increase performance on high-speed networks. You must enable jumbo frames in the ESXi network settings and in OmniOS for a NIC. In OmniOS you can set jumbo (MTU) only on deactivated NICs, so you must either set up the NIC manually, or use e1000 for management, disable vmxnet3, set its MTU to 9000 and re-enable it (napp-it menu System > Network Eth).
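The manual route can be sketched as follows; vSwitch0 and vmxnet3s0 are assumed names, and the dataset of commands mirrors the rule that the OmniOS link must be unplumbed before the MTU change:

```shell
# --- ESXi side (SSH to the host) ---
# raise the MTU on the standard vSwitch carrying storage traffic
esxcli network vswitch standard set -v vSwitch0 -m 9000

# --- OmniOS side ---
# unplumb the interface, change the MTU, then plumb it again
ipadm delete-if vmxnet3s0
dladm set-linkprop -p mtu=9000 vmxnet3s0
ipadm create-if vmxnet3s0
ipadm create-addr -T dhcp vmxnet3s0/dhcp

# verify the new MTU took effect
dladm show-linkprop -p mtu vmxnet3s0
```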
 

F1ydave · Member · Mar 9, 2014
pc-tecky said: "The all Intel server (my ESXi box) has: effectively the P4216 chassis; the S2600 motherboard; 2x Xeon E5-2670 v1 CPUs; 128GB DDR3 ECC RAM; ~500GB SanDisk SSD; a new 1TB Samsung SSD; a new/unused 32GB SanDisk Fit USB flash drive; a 6-port 1GbE card; 2x LSI 2008 8-port IT mode HBA cards, and 8x 2.5" 4TB (shucked from external) Seagate drives in an existing ZFS array. I did a clean install of ESXi 6.7u3 to the USB drive, and put logical order to the chaos of the network interfaces by editing the esx.conf file at /etc/vmware/."
You have a very similar build to me. Gea answers a number of your questions that I asked him over the last month or two.

https://forums.servethehome.com/ind...-virtual-machine-memory-usage-critical.26714/

I dedicated 40GB of RAM to Napp-it per Gea's suggestion, and since I have a backup power supply, I didn't go with a SLOG. In the next month or two I will set up ESXi to power down if an outage is detected. I have an OVA for my backup power supply around here somewhere.


I am curious to know more about "put logical order to the chaos of the network interfaces by editing the esx.conf file at /etc/vmware/." What was the purpose of this?