Unable to add NFS share from napp-it OVA to ESXi


Zack Hehmann

Member
Feb 6, 2016
I am unable to add the NFS share I created on the napp-it OVA. I have read through two of the napp-it PDFs about setting it up, and I have also read these two threads: Napp-it NFS share just won't mount 5.5U2 & ESXi 5.5 vswitch network setup - All-in-one.

I'm still not able to get it to work; any help would be appreciated. I also tried adding the share to a Windows 7 VM and couldn't get it to mount there either. It says the path isn't right, but I know it is. Any ideas?

I'm also posting screenshots of my settings...

Attachments: 1.PNG, 2.PNG, 3.PNG, 4.png, 5.PNG, 6.PNG, 7.PNG, 8.PNG, 9.PNG, 10.PNG, 11.PNG
 

CreoleLakerFan

Active Member
Oct 29, 2013
Have you isolated this to be an NFS issue vs. an IP communication issue? If you have your vswitch/network configured properly, you will be able to ping the napp-it VM from the ESXi console. Your vswitch setup looks nearly identical to mine, except I don't have a VLAN configured for the storage virtual network.
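For example, from the ESXi shell (your napp-it VM's storage IP as a placeholder):

Code:
# from the ESXi console/SSH session, ping the napp-it VM
ping 10.222.31.3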
 

Zack Hehmann

Member
Feb 6, 2016
I can ping from ESXi to the VM and from the VM to the ESXi server. I can also ping from the napp-it VM to the Windows 7 VM I added a NIC to.
 

gea

Well-Known Member
Dec 31, 2010
I cannot spot a wrong setting.

Basically NFS works when you
- set a filesystem in napp-it to everyone@=modify
- set NFS in napp-it to on
- connect NFS in ESXi via IP and /pool/filesystem

What I would try:
- does it work with the first NIC, NFS=on, and MTU=1500 on both sides (vmk0)?
- have you disabled the firewall?
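In shell terms these base settings are roughly the following (a minimal sketch, assuming a filesystem Pool_1/ZFS_FS_1; napp-it does the same via its menus):

Code:
# ACL equivalent of everyone@=modify, inherited to new files/folders
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /Pool_1/ZFS_FS_1
# switch the kernel NFS share on for the filesystem
zfs set sharenfs=on Pool_1/ZFS_FS_1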
 

Zack Hehmann

Member
Feb 6, 2016

I'm going to try with the other NIC. I also removed the NICs from the OVA and ended up changing both to VMXNET3 adapters; I had to go through the CLI to get them working again. I think I set them up right, but I'm not certain.

Here is a screenshot of the server overview on the About page. It looks like the firewall is off...

12.PNG


I also went ahead and ran this command to try and use the first NIC for NFS:

Code:
zfs set sharenfs=rw=10.222.31.1:10.222.31.3:10.222.10.13,root=10.222.31.1:10.222.31.3:10.222.10.13 Pool_1/ZFS_FS_1

10.222.10.13 is my ESXi host.
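To double-check the share from the OmniOS side, I believe you can run something like this (a sketch using the same filesystem name):

Code:
# show the effective sharenfs property
zfs get sharenfs Pool_1/ZFS_FS_1
# list what the NFS server is currently exporting
share
# confirm the NFS server service is online
svcs nfs/server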
 

Zack Hehmann

Member
Feb 6, 2016
So I tried the first NIC and wasn't able to get it to work. I also disabled the firewall on the ESXi host after reading https://kb.vmware.com/selfservice/m...nguage=en_US&cmd=displayKC&externalId=2005284

Here is my output...

Code:
[root@1155-ESXi:~] esxcli network firewall get
   Default Action: DROP
   Enabled: true
   Loaded: true
[root@1155-ESXi:~] esxcli network firewall set --enabled false
[root@1155-ESXi:~] esxcli network firewall get
   Default Action: DROP
   Enabled: false
   Loaded: true

It still didn't work. I also enabled the nfs41Client rule in the ESXi firewall, and noticed that the NFS Client rule was set to only allow connections from the listed networks while the list was blank, so I changed it to allow connections from any IP address. Even after all of that it's still not working. I then turned the firewall on the ESXi host back on.
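For reference, the same rule changes can be made from the ESXi shell (a sketch; nfsClient and nfs41Client are the stock ruleset IDs):

Code:
# list the NFS-related firewall rulesets and their state
esxcli network firewall ruleset list | grep -i nfs
# enable the NFS client rulesets
esxcli network firewall ruleset set --ruleset-id nfsClient --enabled true
esxcli network firewall ruleset set --ruleset-id nfs41Client --enabled true
# allow connections from any IP on the NFS client ruleset
esxcli network firewall ruleset set --ruleset-id nfsClient --allowed-all true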

15.PNG

I figured out why I was getting my original error message.

The operation failed because the specified directory path is incorrect.
To resolve this issue, avoid using /, \, or other special characters in the datastore name (VMware KB 2006035).

After removing the "/" from the datastore name I now get a new error.

16.PNG
 

whitey

Moderator
Jun 30, 2014
I've been watching this and thinking of jumping in for the last day or two, but was hesitant to re-live a 'groundhog day' moment.

Wanna play... OK sure, I'm game/can resolve this for ya no doubt. Just don't go running off on me like the gent who had me ALL wrapped up and close to a resolution, then VANISHED in another thread a month or two ago. :-D

Step 1: If you can vmkping from the vmkernel interface that is dedicated/enabled for IP SAN/NFS traffic to the napp-it interface that is set up for NFS/IP SAN duties, then you should be set. From the filer you should also be able to ping the vmkernel IP interfaces.
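For example (a sketch; vmk1 and the IPs are placeholders for your storage vmkernel interface and filer address):

Code:
# from the ESXi shell: ping the filer through a specific vmkernel interface
vmkping -I vmk1 10.222.31.3
# from the napp-it/OmniOS shell: ping the vmkernel IP back
ping 10.222.31.1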

Step 2: What is your underlying OS beneath the napp-it hood? OmniOS, Solaris, Linux?

Assuming it's Omni/Solaris based, 'try' the following (don't shoot me, Gea lol).

Just get on the console (or SSH in if you have that enabled), create a ZFS dataset, share it with NFS, and blow it WIDE open (for now):

Code:
zfs create poolname/nfsbegoodtome
zfs set sharenfs=on poolname/nfsbegoodtome
chmod -R 777 /poolname/nfsbegoodtome   # YES, the leading / is on purpose here and NOT on the previous two cmds
Step 3: Mount to ESXi/vSphere
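The mount itself can be done from the vSphere client or from the ESXi shell (a sketch; the filer IP and datastore name are placeholders):

Code:
# mount the export as an NFS datastore named nfs01
esxcli storage nfs add --host 10.222.31.3 --share /poolname/nfsbegoodtome --volume-name nfs01
# verify it shows up
esxcli storage nfs list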

Report results, please. I've personally run this exact config for the LONGEST time and it really is drop-dead simple, so if it is not working something is seriously fubared.

EDIT: You shouldn't have to mess with firewalls on the hypervisor or any of that jazz. As long as you have a vmkernel IP interface in the IP SAN subnet, configured properly (the same subnet the napp-it storage server is on), you're good to go. I actually PREFER my IP SAN subnets to be stubbed/non-routed/isolated VLANs for sure.

DOUBLE EDIT: Please DO NOT even mess with jumbo 9000 MTU until you get normal 1500 packets working and delivering solid/stable NFS storage to vSphere; that's just asking for a headache. We haven't talked about physical switches, end-to-end jumbo, and all that fun stuff... let's walk (read: CRAWL) before we run, good sir! :-D

I'll help ya get there, or reproduce it in my lab.
 

gea

Well-Known Member
Dec 31, 2010
ESXi and NFS is usually:

- set up OmniOS with a NIC in the same subnet as ESXi
(e.g. a class C net 192.168.0.0 with netmask 255.255.255.0) with network defaults
- create a ZFS filesystem with everyone@=rw or full
- set NFS to on (start with this base setting)

- set up ESXi 6.0u2 with defaults (one vnic, one vswitch, same subnet as OmniOS)
- add the NFS share as a datastore with its IP and path (/pool/filesystem)

As whitey said, avoid any special settings for the first run. When it's working and stable, you can add security or performance settings. It should simply work then; nothing special to take care of. In your case, as you may have changed a lot of settings, go back to base settings or reinstall.

btw
As any internal transfer in ESXi happens in software, jumbo frames will not improve performance but may introduce problems. They only help when you are using a physical Ethernet connection. More important is using vmxnet3 with some performance settings.
 

dragonme

Active Member
Apr 12, 2016
@gea

For an internal all-in-one setup...

I have tried a separate vswitch with both 1500 and 9000 MTU and a VMXNET3 interface in napp-it, and performance is not great... like 140 MB/s writes to a 2-drive stripe that has over 250 MB/s write speed, with sync disabled on the share.

This is from an OS X client. It seems like 10.10 was faster, but I don't recall the specifics; 10.11.6 and 10.12 are really poor...

I still don't think that OS X is fully compatible with the kernel CIFS server... connections from OS X are showing SMB 2.1 instead of 3.x.

By far my fastest connections are through a VMDK hosted on a datastore backed by an NFS share from napp-it to ESXi... like 5x faster than SMB from the same drive to OS X. I have tried both the e1000e and the new vmxnet3 drivers in OS X, and both are slow...

Also, I have seen conflicting info on whether to use MTU 1500 or 9000 on an internal storage vswitch setup... which is better, and could LSO/TSO be at play here?
 

gea

Well-Known Member
Dec 31, 2010
OmniOS and Oracle Solaris with the Solarish SMB server are at SMB 2.1; only Nexenta is on the way to SMB3 with NexentaStor 5. But I doubt you will see an improvement with current OSX. Since OSX 10.11.5, SMB performance is poor, even with the suggestion to disable client signing -> Google "SMB performance OSX".

While I was able to achieve 600-1000 MB/s over 10G with Windows, and with OSX prior to 10.11.5, the best on OSX is now around 200-300 MB/s.

About ESXi:
best is to use vmxnet3 with increased buffers like

Code:
TxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
RxRingSize=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
RxBufPoolLimit=4096,4096,4096,4096,4096,4096,4096,4096,4096,4096;
EnableLSO=0,0,0,0,0,0,0,0,0,0;
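If I remember right, these are properties of the OmniOS vmxnet3s driver, so applying them looks roughly like this (a sketch; the path assumes the VMware Tools vmxnet3s driver):

Code:
# add the four lines above to the driver config; each property carries
# one value per driver instance (vmxnet3s0 .. vmxnet3s9), which is why
# there are ten entries per line
vi /kernel/drv/vmxnet3s.conf
# driver properties are read when the driver attaches, so reboot the VM
reboot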

e1000 is always slower, with a higher CPU load (unless there are incompatibilities with vmxnet3). MTU should not matter, as internal transfers are software only; IP tunings are relevant when you are on real Ethernet.
 

dragonme

Active Member
Apr 12, 2016
@gea

So is that 200-300 MB/s on internal vswitch traffic, or out through 10G physical?

Dumb noob question, but why do these values have 10 entries each? I know... dumb question.

I am having so many issues with this setup, between permissions and performance, that I might just give up... running the free version doesn't help, as much of the automation and tuning is disabled...

It seems that Solarish is so wrapped around Windows compatibility that it makes a Mac environment painful.

I mean, Solaris is basically Unix, and they go out of their way to make it work with Windows, while working with OSX/Linux and other Unix is a pain in the ass...
 

gea

Well-Known Member
Dec 31, 2010
200-300 MB/s was physical (Promise SANLink2).

About Mac compatibility:
With AFP, Apple modified something in every new version. For a 3rd-party AFP server like netatalk, this introduced trouble, as you were always chasing updates.

With SMB (essentially the Microsoft way of filesharing) you must accept:
- SMB is currently the most powerful sharing option
- SMB works very well on OSX, with the following restrictions:
- performance is currently not on par with Windows or Linux/Unix
- Apple lacks support for modern NTFS-alike ACL permissions, so OSX respects them but cannot modify them
- Apple lacks support for snapshots (Time Machine is not a comparable concept), but you can access ZFS snaps via the folder filesystem/.zfs (see the example below)
- Apple uses the very newest SMB features and partly extends them; some features then require the newest Windows server or essentially a Mac. Apple wants you to buy Apple at this point, or you need to know about workarounds that are not click-and-use.
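As a quick illustration of the snapshot access mentioned above (a sketch; the pool/filesystem names are placeholders):

Code:
# ZFS snapshots are exposed read-only under the hidden .zfs folder
ls /Pool_1/ZFS_FS_1/.zfs/snapshot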

Mostly Apple is the source of the mess, not SMB. So do as Apple wants (buy only Apple), or ignore some of their extensions like Time Machine or Spotlight, or look for workarounds, and you can be happy with a non-Apple server. Setting permissions is possible on the server or from a Windows client.