ESXi / Napp-IT All In One with USB Datastore


K D

1. Introduction
The goal is to set up an All-In-One server with ZFS storage, SMB shares for media and files, and NFS shares for VM storage.

The hardware build is detailed at Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3.

Storage :
SanDisk UltraFit 8GB - ESXi install location
SanDisk UltraFit 128GB - Napp-IT install location
8x 8 TB WD Reds - Bulk storage
2x 400GB DC S3500 - VM storage
1x DC S3700 - SLOG

The motherboard has 1x PCI-E 3.0 x8 (in x16) and 1x PCI-E 2.0 x4 (in x8). I needed a 10GbE connection and added a single-port Mellanox ConnectX-2. With the desired storage configuration (8x 8TB for bulk storage and mirrored 400GB SSDs for VMs), the onboard HBA does not have enough ports to handle everything. I tried adding an additional PCIe HBA, but the U-NAS case thermals are poor and the system temperature remained high.

I will be using a USB thumb drive as a VM datastore to hold the SAN VM, and will pass the onboard SATA ports through to the SAN VM.

The base hypervisor is ESXi 6.5. Napp-IT will provide storage services.

A friend wanted instructions he could follow to implement a similar setup, and I have seen multiple questions about this in the forums. So I decided to document the software installation in detail and publish it as a way of giving back to the community.

This is intended to help a beginner like me set up an All-In-One server. These instructions are what I put together after researching for a long time and after several iterations to make things work properly. Let me know of any errata and I'll correct them.

Without further ado, here you go.


2 ESXi Installation
Download the ESXi installer from VMware and register for a free ESXi license. I used the Supermicro iKVM to mount the installer ISO and set up ESXi on the system.

2.1 ESXi Initial Network Configuration
  1. Log in to the console.
    guide-02.png
  2. Configure Management Network.
    guide-03.png
  3. Select NICs to use for Management Network.
    guide-04.png
  4. Setup IPv4 Configuration (I am using Static IP).
    guide-05.png
  5. Setup IPv6 Configuration (I disable it).
    guide-06.png
  6. Configure hostname and DNS settings (I've already entered a static entry in my Windows DNS server).
    guide-07.png
  7. Apply the changes and reboot the host.
    guide-08.png guide-09.png
  8. After the reboot, log in to the host.
    guide-10.png

2.2 Enable SSH
ESXi disables SSH by default, and if you enable it from the host menu it gets disabled again on every reboot. Since I need SSH access during setup, I enable it via the services screen so it starts with the host.

Method 1 – Enable from the host menu
SSH gets disabled after every reboot in this method.
guide-11.png

Method 2 – Permanently enable SSH

  1. Enable the SSH service and set it to start automatically in the services screen.
    guide-12.png guide-13.png
  2. Disable the SSH warning: ESXi will display a warning when SSH is enabled. To hide the warning, go to Manage -> System -> Advanced Configuration and set the value of UserVars.SuppressShellWarning to 1 (a CLI equivalent follows this list).
    guide-14.png
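If you prefer the shell over the UI for this step, the same advanced option can be set with esxcli. This is just a CLI equivalent of the setting above:
Code:
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1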
2.3 Set up NTP configuration.
I use a Windows AD and have all systems sync time with it. In System -> Time & Date, Edit Settings and set the startup policy for the NTP service as well as the NTP server details.
guide-15.png
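For reference, NTP can also be configured from the ESXi shell. This is only a rough sketch (192.168.1.10 is a placeholder for your AD/DC address; the UI method above is the cleaner way):
Code:
echo "server 192.168.1.10" >> /etc/ntp.conf
/etc/init.d/ntpd restart
chkconfig ntpd on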


2.4 Resolve Driver Issues
The native driver (vmw_ahci) for the Lynx Point controller has serious performance issues. Disable it so the legacy sata-ahci driver is used instead.
  1. Log in to the host using PuTTY.
  2. Run the following command to verify the available drivers:
    Code:
    esxcli software vib list | grep ahci
  3. Disable the driver and reboot
    Code:
    esxcli system module set --enabled=false --module="vmw_ahci"
    
    reboot
    guide-16.png
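After the reboot, you can confirm that the vmw_ahci module is disabled and see which driver now claims the controller:
Code:
esxcli system module list | grep ahci
esxcli storage core adapter list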
2.5 Enable Passthrough of SATA controller
  1. The Lynx Point Controller cannot be passed through to a VM by default.
    guide-17.png
  2. You need to add the following line to /etc/vmware/passthru.map to enable passthrough. Add it to the end of the file. DO NOT EDIT any other line in the file.
    Code:
    # INTEL Lynx Point AHCI
    8086  8c02  d3d0     false
  3. You can use vi from the terminal/PuTTY to make the edit; I used WinSCP to connect to the host and Notepad to make the changes.
  4. Reboot the host.
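If your board uses a different AHCI controller, you can look up the vendor/device IDs to put in passthru.map from the shell. A sketch (adjust the grep pattern to match your controller's device name):
Code:
esxcli hardware pci list | grep -i -A 8 "AHCI"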
2.6 Passthrough SATA controller and HBA
  1. Select the Lynx Point Controller and LSI HBA and Toggle Passthrough.
    guide-18.png
  2. Reboot the host to finalize the changes.
2.7 Setup USB device as a VMFS Datastore
Note: I need an 8-drive storage array plus 1 drive for SLOG and 2 drives for a mirrored VM datastore. That is a total of 11 drives. The X10SL7-F has 8 ports via the onboard HBA. I could have added another PCIe HBA for the remaining 3 drives and used the SATA ports for the Napp-IT VM datastore. In fact, I tried that out, but it did not work for me for two reasons:
1. I needed to add a 10G NIC, and the X10SL7-F has only one PCIe x8 slot. The other is a PCIe 2.0 x4 slot without enough bandwidth.
2. The thermals of the U-NAS case are not great, and I am already stuffing 3 additional SSDs in there.
That's why I decided to use a USB datastore. It is not supported, it is a hacked way of doing things, and it may break in a future ESXi update. That's an acceptable risk for me, as this is a home server and I have another box running as a warm spare. I would just need to shut this down and change a couple of DNS entries to fail over to the other box - probably a few minutes of downtime, which is OK with me.
You can skip this section if you are using the SATA ports for the Napp-IT VM datastore.
  1. Follow the steps listed on the website referenced here to set up the second USB drive as a datastore.
    USB Devices as VMFS Datastore in vSphere ESXi 6.5
    Code:
    [root@esxm1:~] /etc/init.d/usbarbitrator stop
    UsbUtil: Releasing all USB adapters to VMkernel
    watchdog-usbarbitrator: Terminating watchdog process with PID 66683
    usbarbitrator stopped
    [root@esxm1:~] chkconfig usbarbitrator off
    [root@esxm1:~] ls /dev/disks/
    naa.2020030102060804
    [root@esxm1:~]
    [root@esxm1:~] partedUtil mklabel /dev/disks/naa.2020030102060804 gpt
    [root@esxm1:~] partedUtil getptbl /dev/disks/naa.2020030102060804
    gpt
    4164 255 63 66895872
    [root@esxm1:~] eval expr $(partedUtil getptbl /dev/disks/naa.2020030102060804 | tail -1 | awk '{print $1 " \\* " $2 " \\* " $3}') - 1
    66894659
    [root@esxm1:~] partedUtil setptbl /dev/disks/naa.2020030102060804 gpt "1 2048 66894659 AA31E02A400F11DB9590000C2911D1B8 0"
    gpt
    0 0 0 0
    1 2048 66894659 AA31E02A400F11DB9590000C2911D1B8 0
    [root@esxm1:~] vmkfstools -C vmfs6 -S USB-Datastore /dev/disks/naa.2020030102060804:1
    create fs deviceName:'/dev/disks/naa.2020030102060804:1', fsShortName:'vmfs6', fsName:'esxm1-usb-01'
    deviceFullPath:/dev/disks/naa.2020030102060804:1 deviceFile:naa.2020030102060804:1
    ATS on device /dev/disks/naa.2020030102060804:1: not supported
  2. After the above steps are complete, you will see the USB drive as a storage device in ESXi.
    guide-26.png guide-27.png
  3. On this host, this will be the only local datastore, and it will be used only for the Napp-IT VMDK and nothing else.

2.8 Configure the ESXi Networking
The host has 2 onboard NICs:
NIC1 - VM Network and Management Network
NIC2 - Internal host-only network - Napp-IT storage
The VM Network and Management Network are already set up by default; no change was made except to rename the items. Below are the steps to set up the internal host-only network for the Napp-IT SAN.

  1. Add a new Standard Virtual Switch – Don’t add an uplink
    guide-19.png guide-20.png
  2. Add a new VMkernel NIC for the internal storage network. A corresponding port group will also be created. Choose a VLAN ID (I'm using 210). Set up a static IP address. Do not select any of the services.
    guide-21.png guide-22.png
  3. Add a new Port Group for the storage network. Use the same VLAN as above.
    guide-23.png guide-24.png guide-25.png
  4. For the Napp-IT VM, you will add a second NIC connected to the storage network and set up an IP address in the 172.16.210.0/24 network on VLAN 210 (an esxcli equivalent is sketched below).
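For reference, the internal vSwitch, VMkernel NIC, and port group can also be created with esxcli. This is a sketch with assumed names (vSwitch1, vmk1, "Storage Network") and an assumed host IP of 172.16.210.2; adjust to your environment:
Code:
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=Storage-VMkernel --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=Storage-VMkernel --vlan-id=210
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage-VMkernel
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.210.2 --netmask=255.255.255.0 --type=static
esxcli network vswitch standard portgroup add --portgroup-name="Storage Network" --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name="Storage Network" --vlan-id=210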


3 Install and Configure Napp-IT
@gea has provided excellent instructions for the setup at napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux :Downloads .

3.1 Napp-IT VM Setup in ESXi
  1. Download your flavor of Napp-IT from napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux :Downloads
  2. Deploy the Napp-IT OVA to ESXi, placing it on the USB datastore you created. Select VM Network when prompted. Do not select power on after deployment.
  3. Click Finish and wait for the appliance to be deployed. You can watch the progress in the Recent Tasks pane. This will be slow and take some time since you are using a USB datastore; with an SSD datastore it usually takes about 2-3 minutes for me.
  4. Edit the VM Settings
    1. Set the amount of RAM you want to allocate and Reserve that RAM allocation.
    2. Connect the second NIC to the Storage Network Port Group
    3. Add the SATA controller and the HBA as PCI passthrough devices
guide-39.png
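If you would rather deploy from a workstation than through the host client, VMware's ovftool can push the OVA directly to the USB datastore. A sketch only; the OVA filename, VM name, and host address are placeholders:
Code:
ovftool --name=napp-it --datastore=USB-Datastore --network="VM Network" napp-it.ova vi://root@esxi-host/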

3.2 Napp-IT appliance configuration
This section is covered in detail in @gea ’s guide at napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux :Downloads
In addition, set the IP address of the second NIC in Napp-IT to an address in the 172.16.210.0/24 range.
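If you prefer the OmniOS shell over the napp-it GUI for this step, the second (vmxnet3) interface can be configured with ipadm. A sketch assuming the interface shows up as vmxnet3s1 and you want 172.16.210.10:
Code:
ipadm create-if vmxnet3s1
ipadm create-addr -T static -a 172.16.210.10/24 vmxnet3s1/storage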

4 Add NFS Datastore in ESXi

Refer to Post #2 for instructions. (Hit the image upload limit in this post)

You can now use ESXi with the shared NFS datastore.
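For completeness, the NFS export from the napp-it VM can also be mounted from the ESXi shell. A sketch with assumed values for the storage VM IP, export path, and datastore name:
Code:
esxcli storage nfs add --host=172.16.210.10 --share=/tank/nfs_vm --volume-name=napp-it-nfs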
 

dragonme

I had originally set my S5520HC up this way about a year ago.. and it was not very stable. I found that the USB drive slowed the boot of napp-it enough to sometimes time out NFS, and the ESXi VMs stored on it would not start automatically.. there were ways to trick it, but it was more trouble than it was worth.

The worst part of this hack, other than not being supported at all by VMware... is that the usbarbitrator is shut down, so you can't pass through USB devices to VMs..

I went back to booting ESXi on USB, and napp-it on a cheap SSD connected to a SATA port. Since most motherboards only have a handful of SATA ports.. I have found that passing them as RDMs to napp-it doesn't work badly and doesn't take that much time to set up. I use those mostly for data, so while I am sure it drives latency up a bit.. all my fast pools are on SSDs attached to the LSI card I pass to napp-it... VM storage.. etc...
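For anyone who has not created an RDM mapping before, it is roughly a one-liner from the ESXi shell. A sketch with placeholder names (the naa identifier and VM folder are examples; -z creates a physical-mode mapping):
Code:
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/napp-it/rdm_disk1.vmdk

The resulting .vmdk is then attached to the VM as an existing hard disk.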

I have also seen that 6.5 already has issues with USB, and some find they have to disable the 6.5 drivers and use the legacy ones.. this would just add to the issues in my opinion.

Use it if you must, but it may be more trouble than it's worth for SATA controller passthrough vs RDM.
 

K D

Agree with most of your points. I was able to get a stable napp-it only with a USB3 SanDisk 128GB drive. Everything else was too unstable: too long to boot or shut down, etc.

In my case, I needed to pass through the SATA ports as well and did not have the option to add an additional HBA for the extra connections I needed.
There is a specific note about this in section 2.7 where I mention the pitfalls and recommend using SATA for the napp-it datastore.
 

T_Minus

I didn't read the whole thread, just replying to the @K D post about the USB3/SanDisk 128GB working... Sorry if you went over this already.

Did you try a USB->SSD by chance? Obviously not as tiny, but it might be more consistent if USB3 is stable?
 

K D

I did consider a USB SSD, but after the SanDisk drive was stable I left it alone. Also, the box was going to a location where I did not want an external drive hanging off it that could be knocked out.
 

epicurean

[QUOTE="
The worst part of this hack, other than not supported at all by vmware... is that the usbarbator is shut down so you cant pass though usb devices to VMs..

I went back to booting esxi on usb, and napp-it on a cheap ssd connected to a sata port. Since most motherboards only have a handful of sata ports.. i have found that passing them as RDM to napp-it doesnt work badly and doesnt take that much time to setup. I use those mostly for data so I while I am sure it drives latency up a bit.. all my fast pools are on SSD attached to the lsi card I pass to napp-it... vm storage.. etc...

6.5 I have seen also has issues with USB already and some find they have to disable 6.5 drivers and use the legacy ones.. this would just add to the issues in my opinion.
[/QUOTE]

Can PCI devices (even an entire USB PCI card) still be passed through if the USB arbitrator is shut down?

Are there similar driver issues with ESXi 6.0 U3?
 

K D

I am having no issues passing through PCIe devices. I have been able to pass through Intel graphics, the SATA controller, the onboard HBA, and add-on NICs without issues with the USB arbitrator shut down.

I understand that this is a hacked way of doing it, and each person will have to decide whether they want to pass through SATA and use a USB datastore; I have clearly called that out. I am not arguing for or against it. For this particular use case I needed it, and it worked out for me. It has been running stably for about 4 weeks.
 

dragonme

[QUOTE="
The worst part of this hack, other than not supported at all by vmware... is that the usbarbator is shut down so you cant pass though usb devices to VMs..

I went back to booting esxi on usb, and napp-it on a cheap ssd connected to a sata port. Since most motherboards only have a handful of sata ports.. i have found that passing them as RDM to napp-it doesnt work badly and doesnt take that much time to setup. I use those mostly for data so I while I am sure it drives latency up a bit.. all my fast pools are on SSD attached to the lsi card I pass to napp-it... vm storage.. etc...

6.5 I have seen also has issues with USB already and some find they have to disable 6.5 drivers and use the legacy ones.. this would just add to the issues in my opinion.
Can PCI devices (even if they are an entire USB PCI card) still be passthrough, if the usb is shut down?

Are there similar issues about drivers for esxi 6.0 u3?
[/QUOTE]
Exactly right... my experience as well.
And with VMware, since about ESXi 5.x, straying from the approved way of doing things becomes exponentially harder and less stable..

As you noted.. the USB napp-it VMFS hack, while it can work... doesn't most of the time.. about as reliable as a toaster in a car wash.
And with the usbarbitrator down.. exactly right.. any VMs that need USB host passthrough no longer work, so things like a Z-Wave control stick etc. don't work. Yes, you can still pass through OTHER PCI devices... USB is mostly dead.

Further, and you won't see this unless you look hard.. when running things this way under ESXi 6.0 with SATA RDM disks... it passes bogus (read: made-up, virtualized) info about a drive, so when you try to add a dev to a pool it usually does not do it correctly, and even napp-it is unaware.

For example, I was running a 2x 8TB pool that was properly configured as whole disks with ashift=12. I added an RDM that pointed at a fresh, virgin 8TB drive and added it to the pool through the napp-it GUI. All looked fine with no warnings, but things like ZFS write leveling were not working and in fact more data was being written to the devs that were fuller... and IOPS were slower on the new device, while I was expecting ZFS to be putting more, not fewer, writes on that drive.

I converted the setup back to napp-it booting off a daughterboard RAID card in a native VMFS store and then passed the entire SATA controller containing that 3-drive pool back to it.. immediately the napp-it page was complaining that the pool was misconfigured... upon inspection, that 3rd drive had been added as ashift=9, and it had been added as a partition.

When napp-it was using the pool via RDM passthrough, all 3 drives showed up in the disks, pool, SMART, and other menus as one would expect, showing all 3 disks as members and whole devices.. once moved to SATA passthrough, that 3rd dev was no longer part of that pool.. it would show the disk as available, but it would show a partition assigned to the pool.

The only recourse was to offload all 21TB to backup, then destroy and recreate the pool.

Napp-it 19.12 and OmniOS 22 do have a bug (I didn't try this with my new build, which is not online yet, on OmniOS LTS 38). According to gea.. if you have a pool of, say, older drives and forced ashift=12 at build time, then add another dev, it will be added as ashift=12 as well.. no, it does not.. it added it as ashift=9, and there is no way to force ashift on napp-it free 19.12 and the older OmniOS.
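If you want to see what ashift a vdev actually ended up with, regardless of what the GUI reports, it can be checked from the OmniOS shell. A quick sketch assuming the pool is named tank:
Code:
zdb -C tank | grep ashift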

anyway.. point being here...

If you want something stable, with uptimes that get past a week without needing attention, or expect auto shutdown and auto boot to work without tinkering... the further you stray from VMware's way of doing things on VMware-certified equipment, the more pain you will have.

The biggest thing lacking with napp-it these days, besides the GUI that is one day younger than punch cards, compared to TrueNAS, is the lack of VMware Storage APIs for Array Integration (VAAI).. and especially comparing the free versions of each.. hands down, TrueNAS has great analytics and monitoring compared to napp-it's.. well.. none, really..