OmniOS / ESXi / multiple NICs


asche

New Member
Oct 6, 2017
Ladies and gentlemen, I hope you can help me out here, as I have not (yet) found the pertinent information by googling:

I am building a new "all-in-one" home server based on ESXi and a storage VM, but I am having trouble with multiple NICs on OmniOS (and potentially on other VMs).

Intended layout:
  • ESXi as host hypervisor.
  • OmniOS with napp-it as storage VM
    • SMB storage to other physical clients (desktop, laptops) for e.g. backup / file level access
    • SMB storage to "server" VMs so they can e.g. provide streaming services
    • NFS storage to ESXi to store the VMs
  • separate VMs to provide services to my network (Squeezebox, NextCloud, print server, etc.)
Twist: To increase performance, I would like to have a multi-NIC setup:
  • SMB storage goes out via a dedicated (pass-through) NIC to external clients (desktop) (192.168.1.250)
  • SMB storage to be provided via an internal VMware NIC (192.168.250.250) / VM storage network to the other VMs, so they can access the storage fast (greater-than-gigabit speed)
  • NFS storage to be provided via (the same) internal VMware NIC / VM storage network to the ESXi host to store the VMs
  • the other VMs will use the internal physical NIC (via a VMware NIC) for general traffic, i.e. they have two VMware NICs: one for external traffic in 192.168.1.x and one for the storage network in 192.168.250.x.
I have set up the internal VM storage network (192.168.250.x), and (after a bit of trial and error) my Debian VM can ping and access the storage VM over it, while also being able to access the internet etc. through its second NIC.
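(For reference, a minimal esxcli sketch of such an internal-only storage vSwitch; the names vSwitchStorage and "VM Storage" are placeholders, not anything used in this thread, and the same can of course be done in the vSphere client:)

  # standard vSwitch with no physical uplink, so traffic stays inside the host
  esxcli network vswitch standard add --vswitch-name=vSwitchStorage
  # port group the storage VM and the other VMs attach their second vnic to
  esxcli network vswitch standard portgroup add --portgroup-name="VM Storage" --vswitch-name=vSwitchStorage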

Access by the VMkernel to the storage network should work too (not sufficiently tested yet): I have a second VMkernel interface in a separate port group on the same vSwitch as the VM storage network and in the same IP range (192.168.250.1). I haven't gotten round to properly setting up the NFS share though.
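(A hedged sketch of those remaining pieces, i.e. the VMkernel port on the storage vSwitch and the NFS datastore. The pool/dataset name tank/vmstore and the port group names are assumptions, not taken from this thread:)

  # ESXi: VMkernel port in its own port group on the storage vSwitch
  esxcli network vswitch standard portgroup add --portgroup-name="VMkernel Storage" --vswitch-name=vSwitchStorage
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name="VMkernel Storage"
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.250.1 --netmask=255.255.255.0 --type=static

  # OmniOS: share a dataset over NFS, allowing root access from the storage net (ESXi needs it)
  zfs create tank/vmstore
  zfs set sharenfs=rw=@192.168.250.0/24,root=@192.168.250.0/24 tank/vmstore

  # ESXi: mount the share as a datastore over the internal network
  esxcli storage nfs add --host=192.168.250.250 --share=/tank/vmstore --volume-name=vmstore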

Problem:

However, I cannot get OmniOS to work (consistently) with both a dedicated physical NIC (192.168.1.x) and a separate VMware internal NIC (192.168.250.250).

If I have only the physical NIC, it works fine - obtains an IP address via DHCP and is accessible.

If I add the VMware NIC, the physical NIC stops receiving a DHCP address and can no longer reach the 192.168.1.x network. Even if I set up the physical NIC with a static address -- no luck: 192.168.1.250 cannot reach/ping the gateway at 192.168.1.1. Bringing the interface up and down, rebooting, and re-defining the default gateway have not helped at all.

The VMware NIC, however, can ping/reach the VM storage network in 192.168.250.x.

Would you happen to have any pointers? Happy to "RTFM" if you would just tell me the correct chapter ...

Hardware (in case you feel it matters):
Dell PowerEdge T30
32GB ECC RAM
NICs: (1) built-in Intel I219(?) NIC; (2) Intel I350-T4 quad-port NIC
4x3 TB SATA NAS drives
1x Intel S3700 boot drive [may become L2ARC/SLOG later on]
 

gea

Well-Known Member
Dec 31, 2010
This should work if you
- provide manual IP settings for all NICs (do not mix static and DHCP)

For the internal NFS traffic
- create a vswitch and connect a vnic from OmniOS to it
- connect the management network to this vswitch
- all IPs must be in the same network range

To allow additional external SMB
- connect the vswitch to a physical NIC

For a second NIC in pass-through mode (see the sketch below)
- all NIC IPs must be static (vnic and physical NIC)
- set the gateway according to your external net (this affects all NICs)
- set DNS, e.g. to Google's 8.8.8.8
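A minimal sketch of such a static setup on the OmniOS side, assuming the pass-through NIC shows up as igb0 and the vmxnet3 vnic as vmxnet3s0 (the actual link names may differ; dladm show-link lists them):

  # list the links OmniOS sees (pass-through NIC and vmxnet3 vnic)
  dladm show-link

  # external (pass-through) NIC, static
  ipadm create-if igb0
  ipadm create-addr -T static -a 192.168.1.250/24 igb0/v4

  # internal storage vnic, static
  ipadm create-if vmxnet3s0
  ipadm create-addr -T static -a 192.168.250.250/24 vmxnet3s0/v4

  # one default gateway on the external net, plus DNS (e.g. Google)
  route -p add default 192.168.1.1
  echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
  cp /etc/nsswitch.dns /etc/nsswitch.conf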

Another option would be a second ESXi vswitch connected to your second physical NIC.
You can then choose, for every VM, whether to connect a vnic to the second physical NIC (always prefer vmxnet3 for vnics).
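A sketch of that alternative, assuming ESXi sees the second physical NIC as vmnic1 and using placeholder names:

  # second standard vSwitch backed by the second physical NIC
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
  esxcli network vswitch standard portgroup add --portgroup-name="VM External 2" --vswitch-name=vSwitch1
  # then, per VM, attach a vmxnet3 vnic to the "VM External 2" port group in the VM's settings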
 

K D

Well-Known Member
Dec 24, 2016
I am not sure whether the performance of a pass-through NIC vs. letting ESXi handle the networking and using vmxnet3 NICs is going to make a significant difference.

I had documented the steps for a working AIO earlier. Pick out the relevant sections that you need. As @gea mentioned, use static IPs.

https://forums.servethehome.com/index.php?threads/esxi-napp-it-all-in-one-with-usb-datastore.15897/

Resource for command-line network configuration:

https://omnios.omniti.com/wiki.php/GeneralAdministration#SettingupdynamicDHCPnetworking
 

asche

New Member
Oct 6, 2017
Thank you both - I now have a working system!

1. Using only static IPs in OmniOS.
2. Using only vnics in OmniOS: one for the internal storage network, one attached to a (separate) physical NIC.
3. Works.
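(For anyone landing here later, a quick way to verify such a setup from the OmniOS side, using standard illumos tools; the addresses are the ones from this thread:)

  dladm show-link          # both vmxnet3 vnics should show as up
  ipadm show-addr          # both static addresses present
  netstat -rn              # a single default route via 192.168.1.1
  ping 192.168.1.1         # external gateway reachable
  ping 192.168.250.1       # ESXi VMkernel on the storage network reachable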
 