Napp-It AIO on ESXi 6.5 [6.7] and Windows 10 Pro


pc-tecky

Active Member
May 1, 2013
@gea, Can you explain this behavior?

I just finished installing ESXi 6.7 and Napp-It (via OVA, dated May 2018), then stepped down to ESXi 6.5U2 and restored/registered Napp-It with ESXi 6.5. I can copy files from Windows 10 Pro to the file share on Napp-It, but I can't copy any file (little documents and larger ISOs alike) from Napp-It back to Windows 10: it either fails with network error 0x8007003B or simply hangs/times out.

On another note, I think I'm hosed on my older FreeNAS setup. While I still have the data drives mostly untouched, I can't find the OS drive, and therefore can't restore the prior ZFS array.

----

Got it working, 6 - 7 hours later.. I added SMBv1 back to Windows 10, double-checked my workgroup name, and lastly changed the network profile so my private network is marked Private, not Public. I suspect the last change made all the difference. It's buried and not as simple and straightforward as I recall it being in Windows 7 and prior.
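
For anyone else chasing this, the rough PowerShell equivalents would be something like the following (run as admin; "Ethernet" is just a placeholder for the actual adapter alias):

# list profiles and find the interface currently marked Public
Get-NetConnectionProfile
# switch that network to Private ("Ethernet" is a placeholder alias)
Set-NetConnectionProfile -InterfaceAlias "Ethernet" -NetworkCategory Private
# re-add the SMBv1 support that Windows 10 removes by default
Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol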

Next question, though: how do I make a portion of the ZFS data pool available to ESXi to host some VMs?
 

gea

Well-Known Member
Dec 31, 2010
Next question, though: how do I make a portion of the ZFS data pool available to ESXi to host some VMs?
You can either share a ZFS filesystem via NFS (which I prefer) or via iSCSI.
In ESXi you can use either one as a datastore for VMs.
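
On the napp-it/OmniOS side the NFS route boils down to a single ZFS property; a minimal sketch, assuming a filesystem named tank/vmstore:

# share the filesystem over NFS (napp-it's ZFS Filesystems menu sets the same property)
zfs set sharenfs=on tank/vmstore
# verify the share property
zfs get sharenfs tank/vmstore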
 

pc-tecky

Active Member
May 1, 2013
I've been trying to share the ZFS pool via NFS from within Napp-It, but so far I still have only the local SSD as an ESXi VM storage option. What do I need to do? Where do I go? I'm pretty sure I have NFS running within Napp-It; whether it's secured or not is another issue..
 

gea

Well-Known Member
Dec 31, 2010
1. A napp-it network interface must be in the same IP range and on the same vswitch as the ESXi management interface.

2. Enable NFS with an everyone@=modify ACL set.

3. Add the NFS share in ESXi, e.g. 192.168.1.10:/poolname/filesystem (a console sketch follows below).
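
As a console sketch of steps 2 and 3 (poolname/filesystem, the datastore name and 192.168.1.10 are placeholders):

# on OmniOS: grant everyone@ the modify permission set, inherited by new files and dirs
/usr/bin/chmod A=everyone@:modify_set:file_inherit/dir_inherit:allow /poolname/filesystem
# on the ESXi host: register the share as a datastore
esxcli storage nfs add -H 192.168.1.10 -s /poolname/filesystem -v nfs-datastore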

To make it secure, add a firewall rule for NFS or use iSCSI restricted to an IP range. For best security, use a dedicated napp-it for VM storage only (e.g. an SSD pool) and a dedicated vswitch for all management interfaces, with no public access.

For general filer use, add a second napp-it VM with mass storage and a second NIC to the management LAN, e.g. for backups.
 

pc-tecky

Active Member
May 1, 2013
@K D, thanks.. I'm missing something.. a few things actually..

1) How did you go from the default vSwitch0 (etc.) to custom names?

2) Maybe my downfall: I have a single ZFS pool set up for Samba/CIFS initially, then enabled NFS. I'm still not seeing how it's made available to ESXi.

I'm looking at adding some complexities like hosting Untangle NG or pfSense with a virtual ESXi external switch and a virtual ESXi internal switch, and maybe an internal NFS-only switch or VLAN.

Modem WiFi adapter switch <-> ESXi external switch <-> Untangle <-> ESXi internal switch <-> external Cisco switch for other physical boxes, my main PC, etc.
 

pc-tecky

Active Member
May 1, 2013
While the virtual switches are at MTU 9000, the Napp-It virtual NICs remain at MTU 1500. How do I configure them for 9000? And how do I change the IP address? I'm familiar with Windows and Linux; this is a bit different..
 

gea

Well-Known Member
Dec 31, 2010
Basically you must disable the link, change the MTU, and re-enable it. But as internal transfers within ESXi are pure software, MTU does not matter there.

I assume that you manage via the e1000 link and use the second vmxnet3 link for data transfers. In menu System > Network ETH you can then disable the link and set the MTU.

From console commands, see
napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux : OmniOS
Setting Datalink Properties - Oracle Solaris Administration: Network Interfaces and Network Virtualization
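
From the console the whole sequence would look roughly like this (assuming the data link is vmxnet3s0 and a target address of 10.1.100.151/24; adjust both):

ipadm disable-if -t vmxnet3s0                # temporarily unplumb the interface
dladm set-linkprop -p mtu=9000 vmxnet3s0     # raise the link MTU while it is unplumbed
ipadm enable-if -t vmxnet3s0                 # plumb the interface again
ipadm delete-addr vmxnet3s0/v4               # drop the old (DHCP) address object, if any
ipadm create-addr -T static -a 10.1.100.151/24 vmxnet3s0/v4   # assign a static IP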
 

pc-tecky

Active Member
May 1, 2013
Well, that was a bit of a headache, but I got the MTU set after disabling the interface, setting MTU=9000, and then re-enabling it. Then I lost IP access.

ESXi VMKernel NICs
vmk0 --> (10.1.1.4) Management Network
vmk1 --> (10.1.100.251) Internal Storage Kernel
vmk2 --> (10.1.100.105) Storage Network (if used, disassociates Napp-It 3rd NIC from Storage switch)

Virtual Switches
vSwitch0 (can't move, rename, or edit much??, MTU: 9000 ; VLAN ID: 0 )
Storage (the internal network for storage, MTU: 9000 ; VLAN ID: 100)
Internal ( ??, MTU: 9000 ; VLAN ID: 0 ; future use with pfSense or Untangle router OS, 2-ports in, 4-6 ports out)
External ( ??, MTU: 9000 ; VLAN ID: 0 ; future use with pfSense or Untangle router OS, 2-ports in, 4-6 ports out)

Port Groups
VM Network --> vSwitch0 (default)
Management Network --> vSwitch0 (default)
Internal Storage Network --> vSwitch0 (tried following K D's writeup, doesn't click and I suspect some missing steps, common for them)
Storage Network --> Storage

OK, so how does VMK1 with port group 'Internal Storage Network' on vSwitch0 jump over to port group 'Storage Network' on the 'Storage' switch?? I suspect that it doesn't work quite that way.

I'll have to start manually assigning static IPs, despite how much I enjoy the simplicity of DHCP and not having had to worry about assigning IPs in the past.

----
Alright, made some progress. Napp-It started working after I removed and then re-added VMK2 on the 'Storage' switch and removed VMK1; the virtual 'physical' connection needed to be all within the same realm. For whatever reason, the other way around kicked Napp-It off the storage switch. But I have had a few purple screens of death as well.
 

pc-tecky

Active Member
May 1, 2013
@gea , @K D - ok, so trying this instead and hoping maybe it works:

ESXi VMKernel NICs
vmk0 --> (10.1.1.4) Management Network
vmk1 --> (10.1.100.105) Internal Storage Kernel
** Got to watch this: why does the ESXi VMK1 NIC connection disassociate Napp-It's NIC connection to the Storage switch, or stranger yet, make the Storage switch disappear??

Virtual Switches
vSwitch0 (can't move, rename, or edit much??, MTU: 9000 ; VLAN ID: 0 )
Storage (the internal network for storage, MTU: 9000 ; VLAN ID: 100)
Internal ( ??, MTU: 9000 ; VLAN ID: 0 ; future use with pfSense or Untangle router OS, 2-ports in, 4-6 ports out)
External ( ??, MTU: 9000 ; VLAN ID: 0 ; future use with pfSense or Untangle router OS, 2-ports in, 4-6 ports out)

Port Groups
VM Network --> vSwitch0 (default)
Management Network --> vSwitch0 (default)
Kernel Storage --> Storage (ESXi for NFS)
Storage Network --> Storage (Napp-It NFS)

root@napp-it-026:~#
root@napp-it-026:~# ipadm show-addr
ADDROBJ        TYPE    STATE     ADDR
lo0/v4         static  ok        127.0.0.1/8
lo0/v6         static  ok        ::1/128
e1000g1/v4     dhcp    disabled  ?
vmxnet3s0/v4   dhcp    disabled  ?
e1000g0/v4     dhcp    disabled  ?
e1000g2/v4     static  disabled  10.1.100.151/24
root@napp-it-026:~# ipadm show-if
IFNAME   STATE     CURRENT       PERSISTENT
lo0      ok        -m-v------46  ---
e1000g0  down      bm------46    -46
e1000g1  disabled  ----------    -46
e1000g2  disabled  ----------    -46
root@napp-it-026:~# dladm show-link
LINK     CLASS  MTU   STATE    BRIDGE  OVER
e1000g0  phys   9000  up       --      --
e1000g3  phys   1500  unknown  --      --
e1000g4  phys   1500  unknown  --      --
root@napp-it-026:~# dladm show-phys


I don't think I have e1000g4, e1000g3, or vmxnet3s0 any longer, but the system commands show this phantom hardware as still installed.. so why are they showing up? How do I remove them?

ipadm disable-if -t *interface*              # temporarily unplumb the interface
dladm set-linkprop -p mtu=9000 *interface*   # change the link MTU while it is unplumbed
ipadm enable-if -t *interface*               # plumb the interface again
ipadm show-if                                # verify the state
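
If the phantom links really are gone from the VM, dladm keeps their persistent configuration around and can drop it; a sketch, assuming e1000g3 and e1000g4 are the stale ones:

dladm show-phys             # stale devices typically show up with state unknown
dladm delete-phys e1000g3   # remove the persistent config of the absent NIC
dladm delete-phys e1000g4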
 

pc-tecky

Active Member
May 1, 2013
@gea, @K D, please explain these menu options (clear as mud for me atm):
New datastore \\ Provide NFS mount details

Name: ESXi NFS (use whatever name I please that will be meaningful?)
NFS Server: 10.1.100.151 (the assigned static IP using VLAN ID: 100)
NFS Share: (this would be?? a) a local mount point -or- b) a remote mount point)
NFS Version: [X] NFS3 -or- [ ] NFS4
 

K D

Well-Known Member
Dec 24, 2016
New datastore \\ Provide NFS mount details

Name: ESXi NFS (use whatever name I please that will be meaningful?) - Yes, use whatever name you want.
NFS Server: 10.1.100.151 (the assigned static IP using VLAN ID 100) - The IP address of the Napp-IT/OmniOS VM.
NFS Share: (this would be?? a) a local mount point -or- b) a remote mount point) - The ZFS filesystem you have shared in your VM that you want to use.
NFS Version: [X] NFS3 -or- [ ] NFS4 - NFS3.
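
The same mount from the ESXi shell, as a sketch ("ESXi NFS" and /poolname/filesystem are placeholders; note the forward slashes in the share path):

esxcli storage nfs add -H 10.1.100.151 -s /poolname/filesystem -v "ESXi NFS"
esxcli storage nfs list      # confirm the datastore is mounted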
 

pc-tecky

Active Member
May 1, 2013
My first casualty: a drive is dead..

-----

I do all of that and nothing so far..

I go to ZFS Pools and, under the NFS column for the share, click to turn it 'ON'.. then where do I go for permissions, or is that it? And if Napp-It is on 10.1.1.5, how does 10.1.100.151 on Napp-It's second NIC work? Is this a case of a dual-homed server?
 

pc-tecky

Active Member
May 1, 2013
Eureka! Forward slash vs. backslash? I don't know now which it was, but that was the hang-up, and I got it working... err!!!!! The entire mess with VLAN IDs, as far as I could tell, was extra work that didn't work: I couldn't ping between the static IP addresses at all, no connections, nothing really..
 

pc-tecky

Active Member
May 1, 2013
Bringing this back to life... I have an error with Napp-IT stating the ZFS pool can't be found and basically no bootloader.. quick searches point to a shared drive-cache sort of bug with an easy fix from mid-2017, as I recall.. now I just need to restore this if I can..