ESXi 5.5, OmniOS Stable and Napp-it - vmxnet3 and jumbo frames


J-san

Member
Nov 27, 2014
Vancouver, BC
It works great for me so far, as I've had to reboot a few times to test BIOS settings.
The only thing to remember of course is to ensure that the permissions on the script in /etc/init.d/ are correct.

My simple script above was originally copied from a napp-it startup/shutdown script, so the permissions (including the executable bit) were already set on the copy.
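For reference, a minimal sketch of what such a startup script might look like (this is not the exact script from earlier in the thread, and the link name vmxnet3s0 is an assumption):

Code:
#!/sbin/sh
# /etc/init.d/VMnetEnableJumbo (sketch)
# Set jumbo MTU on the vmxnet3 storage link at boot.
case "$1" in
start)
        /usr/sbin/dladm set-linkprop -p mtu=9000 vmxnet3s0
        ;;
stop)
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0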

I might have to re-test with the Solaris 11 driver modification for vmware tools that you mentioned, and see if the CPU usage or throughput is better.

Here's my hardware:

Supermicro SC846BA-R920B
Supermicro X10SRL-F (single proc)
32GB RAM DDR4 2133 MHz - 16 GB given to OmniOS VM.
1x Intel Xeon E5-2620v3 - 6 core
100GB - Intel S3700 SSD SLOG
2x 480 GB Intel S3500 SSD
4x WD RE4 SATA 7200rpm
 

yu130960

Member
Sep 4, 2013
Canada
I made the changes, but haven't rebooted yet. To change the permissions and make the script executable:

Code:
# 755 (rwxr-xr-x) already sets the execute bits, so the +x is redundant but harmless
chmod 755 /etc/init.d/VMnetEnableJumbo
chmod +x /etc/init.d/VMnetEnableJumbo
 

J-san

Member
Nov 27, 2014
Vancouver, BC
I did some more testing and posted a benchmark with most of my config and tuning (BIOS settings, network, disk, etc.) here:

[H]ard|Forum - View Single Post - OpenSolaris derived ZFS NAS/ SAN (Nexenta*, OpenIndiana, Solaris Express)

I'll do a final bit of performance testing, but the following seems to work well for my setup.

- Jumbo frames enabled using the Solaris 10 driver
- LSO disabled
- TCP buffers increased

Changes made to:
/kernel/drv/vmxnet3s.conf

TxRingSize=512,512,512,512,512,512,512,512,512,512;
RxRingSize=512,512,512,512,512,512,512,512,512,512;
RxBufPoolLimit=1024,1024,1024,1024,1024,1024,1024,1024,1024,1024;
EnableLSO=0,0,0,0,0,0,0,0,0,0;
MTU=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;
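These values are only read when the vmxnet3s driver loads, so reboot the VM after editing the file. To confirm the new MTU took effect (assuming the link is vmxnet3s0):

Code:
dladm show-linkprop -p mtu vmxnet3s0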

TCP buffer sizes:
### Change tcp buffers: max_buf (4 MB), recv_buf (1 MB), and send_buf (1 MB):

# ipadm set-prop -p max_buf=4194304 tcp
# ipadm set-prop -p recv_buf=1048576 tcp
# ipadm set-prop -p send_buf=1048576 tcp
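The ipadm set-prop changes persist across reboots (add -t for a temporary change). To verify:

# ipadm show-prop -p max_buf,recv_buf,send_buf tcp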


Network benchmark with iperf:

There seems to be a bit of variability; either some throttling is going on
in ESXi, the Ubuntu client, or the OmniOS server.

Ubuntu 14.04.1 LTS VM client (MTU 9000)
OmniOS VM server - 4 vCPUs
(over the "storagenet" vSwitch, no physical NIC attached)

iPerf Short test:
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 27.4 GBytes 23.5 Gbits/sec

Longer test:
[ 5] 0.0-520.0 sec 793 GBytes 13.1 Gbits/sec
20%-44% CPU (avg 31%) on the OmniOS guest (4 vCPUs), measured from ESXi.

These settings seemed to give good performance without eating into the CPU as much.
(when I enabled LSO and/or the Solaris 11 driver the CPU usage was higher)
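(The exact iperf invocation isn't shown above; a typical iperf2 run producing this output format would be something like the following, with a placeholder server address.)

Code:
# on the OmniOS server VM
iperf -s
# on the Ubuntu client VM (10.10.10.1 is a placeholder storage-net address)
iperf -c 10.10.10.1 -t 10
iperf -c 10.10.10.1 -t 520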
 

yu130960

Member
Sep 4, 2013
Canada
Awesome work. We have the same setup. Can you take a screen grab of your ESXi network setup? I still don't have a handle on how people set up the AIO with the SAN on a separate network without a physical card.
 

gea

Well-Known Member
Dec 31, 2010
DE

RyC

Active Member
Oct 17, 2013
Thanks gea. If I want to update my image from the previous one, do you recommend installing this image and importing the existing pool, or just going through the manual upgrade procedure for OmniOS and napp-it?
 

whitey

Moderator
Jun 30, 2014
I run OmniOS all the time on vSphere 6. I last tried gea's napp-it appliance maybe 2 months or so ago and had no issues either; the latest appliance I have not tried.
 

cw823

Active Member
Jan 14, 2014
I cannot get the vmxnet3 adapters to use jumbo frames no matter what I do.
 

whitey

Moderator
Jun 30, 2014
They changed some underlying network plumbing. I WAS using some variation of ndd/ifconfig/dladm commands, such as:

Used to use this, need to check if still needed.

vi /kernel/drv/vmxnet3s.conf

mtu=9000,9000,9000,9000,9000,9000,9000,9000,9000,9000;

Also:
ndd -set /dev/vmxnet3s0 accept-jumbo 1

And finally:
ifconfig vmxnet3s0 mtu 9000

Now it just seems to need:

dladm set-linkprop -p mtu=9000 vmxnet3s0

I think variations of dladm and ipadm will show the links/interfaces using jumbo after this.
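For example (link name assumed to be vmxnet3s0; the MTU column of the first command should read 9000 afterwards):

Code:
dladm show-link
ipadm show-ifprop -p mtu vmxnet3s0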


 

mad1993max

New Member
Jan 27, 2016
I got jumbo frames enabled on Solaris 11.3 with a vmxnet3 adapter, and everything is working except the napp-it webserver: it only loads the initial "Initializing Web-UI" screen and gets stuck there. Does anyone have a solution, or has anyone experienced the same problem?
 

gea

Well-Known Member
Dec 31, 2010
DE
Jumbo frames can give performance gains but also cause problems, as every part of your network must support them. Even if the napp-it start screen is not being served from cache, it can be that small amounts of data work while larger ones do not.

There is nothing special about HTTP/HTTPS transfers, so if you find problems there you will probably find them with other transfers too. I had a case where everything seemed to work, but high-performance replications failed every now and then.

I would use at least a separate management interface with base settings.
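As a concrete illustration (interface and vSwitch names are placeholders): keep the management link at the default MTU and raise only the storage link, remembering that the ESXi vSwitch carrying the storage network must also allow 9000:

Code:
# guest side: raise only the storage link; vmxnet3s0 (management) stays at 1500
dladm set-linkprop -p mtu=9000 vmxnet3s1
dladm show-linkprop -p mtu
# ESXi side: the storage vSwitch must also be set to MTU 9000
esxcli network vswitch standard set -v vSwitch1 -m 9000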
 

crazyj

Member
Nov 19, 2015
Can anyone confirm whether these tweaks still need to be done with the current napp-it / OmniOS 2017.01 or 2016.04? I'm having some network connectivity issues in my AIO and wondering if there are things I need to address in its base setup.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
Solarish defaults are optimized for slower systems with less RAM.
On faster systems with more RAM you can

- use vmxnet3s with increased vmxnet3s buffers
- increase NFS servers and buffers (default on newest OmniOS; see the sketch below)
- increase TCP buffers
- use jumbo frames (relevant for external transfers over a real NIC)
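A hedged example of the NFS server tunable via sharectl (the value is illustrative, not a recommendation):

Code:
# show current NFS server properties
sharectl get nfs
# raise the maximum number of NFS server threads
sharectl set -p servers=1024 nfs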

This is not a must but a tuning option on some systems and use cases.
What is your problem?
 

crazyj

Member
Nov 19, 2015
75
2
8
49
I don't really know. For a week I wasn't able to reconnect my NFS mount between napp-it and a CentOS 7 VM. Now all of a sudden it mounted again. Is it being flaky? I don't know. I thought maybe some of my jumbo frame settings might have been interfering, but that's hard to believe now that it's working.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
What I have seen is that when you modify vmxnet3 settings or some NFS settings, you should (or sometimes must) reboot ESXi.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
What if you only want to apply MTU or LSO settings to one vmxnet3 device and not all of them ... since one is connected to my 'real' network and the other is the all-in-one internal NFS network?

Does any tuning improve the internal (no physical NIC) network?
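For what it's worth, both mechanisms in this thread can target a single interface (a sketch; the instance numbers and link names are assumptions): dladm properties are per-link, and the comma-separated lists in vmxnet3s.conf are positional per driver instance.

Code:
# per-link: touches only the storage interface, not vmxnet3s0
dladm set-linkprop -p mtu=9000 vmxnet3s1

# /kernel/drv/vmxnet3s.conf: values are positional per instance,
# so instance 0 can keep defaults while instance 1 gets jumbo/LSO-off
MTU=1500,9000;
EnableLSO=1,0;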