FreeNAS virtualized under ESXi

Bjorn Smith

Active Member
Sep 3, 2019
283
114
43
Hi,

I am considering downscaling my homelab to just one server.

But that would require me to virtualize FreeNAS under ESXi and then use the FreeNAS pools via iSCSI or NFS.

Every single time I have tried it in the past, server reboots have been a pain in the butt, since ESXi is slow to detect (or fails to detect) the exported pools from FreeNAS, which leaves the datastores in ESXi unavailable.

I know it's possible to do, so I wonder if some of you have good experience with this, and whether there is an ESXi trick I need to make it work properly.

Basically I want ESXi to start by booting the FreeNAS VM; once it's up and running, ESXi should automatically detect the datastores backed by the FreeNAS VM and then proceed to boot the remaining VMs.

So what do I need to do to make ESXi detect/connect to the datastores exported from the FreeNAS VM running on the same machine? No handholding during boots is the end goal.

I really want to get this to work, since it will shrink my lab from 3 servers + multiple switches down to just one server and a switch :)

Thanks
 

zack$

Active Member
Aug 16, 2018
487
186
43
I run the FreeNAS VM off of local ESXi storage, set the FreeNAS VM to autostart, and give the remaining VMs a delayed autostart (so FreeNAS has enough time to boot and present its shares).

I also run FreeNAS with a passed-through 4 x 10G card for load balancing and redundancy.
 

Bjorn Smith

Active Member
Sep 3, 2019
283
114
43
I run the FreeNAS VM off of local ESXi storage, set the FreeNAS VM to autostart, and give the remaining VMs a delayed autostart (so FreeNAS has enough time to boot and present its shares).

I also run FreeNAS with a passed-through 4 x 10G card for load balancing and redundancy.
So are you using the FreeNAS shares via iSCSI or NFS? And it all starts up without any manual intervention?
 

zack$

Active Member
Aug 16, 2018
487
186
43
NFS, and yes. There is an autostart option on the VM and the host in ESXi that you need to toggle.
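For reference, the same autostart settings can be applied from the ESXi shell with vim-cmd. This is a hedged sketch, not zack$'s exact setup: the VM ID `42` and the 120-second delay are placeholders, and the real ID comes from `vim-cmd vmsvc/getallvms`.

```shell
# Run in the ESXi shell. First enable the host's autostart manager:
vim-cmd hostsvc/autostartmanager/enable_autostart true

# Make the storage VM (ID 42 here, a placeholder) start first, with a
# 120 s delay before ESXi starts the next VM in the order. Arguments:
#   vmid startAction startDelay startOrder stopAction stopDelay waitForHeartbeat
vim-cmd hostsvc/autostartmanager/update_autostartentry 42 powerOn 120 1 systemDefault 120 systemDefault
```

These are host-configuration commands, so they only make sense run against a live ESXi host.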

I even run other ESXi hosts off of the FreeNAS NFS shares, which in turn host other VMs. I also run an SMB share to a dedicated Plex box.

No manual intervention.
 

Bjorn Smith

Active Member
Sep 3, 2019
283
114
43
Ok,
I cannot remember whether I was running iSCSI or NFS last time, but my experience was that I had to rescan manually to get ESXi to detect the datastores again, and sometimes I even had to remove/re-add them.

I guess I will have to give it a go again and see if it works better this time.

Thanks
 

neb50

Member
Aug 28, 2018
31
5
8
I used @Spearfoot's scripts that are linked on the FreeNAS forums to handle this. There are a few posts on that forum showing how to use a startup and shutdown script in FreeNAS to get the ESXi host to rescan, and to get the VMs hosted on FreeNAS to start up and shut down correctly.
 

RTM

Active Member
Jan 26, 2014
581
209
43
Another suggestion: why not just use something like Proxmox? It will give you (KVM) virtualization and ZFS in one neat package.
 

Bjorn Smith

Active Member
Sep 3, 2019
283
114
43
I used @Spearfoot's scripts that are linked on the FreeNAS forums to handle this. There are a few posts on that forum showing how to use a startup and shutdown script in FreeNAS to get the ESXi host to rescan, and to get the VMs hosted on FreeNAS to start up and shut down correctly.
Thanks - those scripts, or just modifying ESXi itself:
https://www.reddit.com/r/freenas/comments/d5de1p
Seems to be what I am looking for.

Another suggestion: why not just use something like Proxmox? It will give you (KVM) virtualization and ZFS in one neat package.
I have tried that, but I did not really like the management interface much - and converting all my VMs to KVM format seems like too big a hassle for me right now. But thank you for the suggestion.

Also, it seems like NFS might just work out of the box - I just set up a TrueNAS VM, added an NFS share as a datastore, shut down the VM, and ran

esxcli storage nfs list

And it showed up as inaccessible (as expected, since the VM was shut down).

I started the VM up again

And

esxcli storage nfs list

immediately showed it as accessible.

So it might just be iSCSI that is a pain.

So I think I will probably end up just manually starting my FreeNAS VM in ESXi via /etc/rc.local.d/local.sh, then running esxcli storage nfs list to trigger ESXi to take a look again, and hopefully it will just work with the remaining VMs that are set to autostart.
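The "is the datastore accessible" check boils down to pulling one column out of the `esxcli storage nfs list` output. That parsing can be sanity-checked without a live host by running it against a captured sample line (the volume name, IP, and path below are made up); `awk` splits on runs of whitespace, so no `sed` squeezing is needed:

```shell
#!/bin/sh
# Hypothetical sample line in the format `esxcli storage nfs list` prints:
#   VolumeName  Host  Share  Accessible  Mounted  Read-Only  isPE  Hardware-Acceleration
sample='TrueNAS      192.168.0.44   /mnt/tank/root/esxi        true     true      false  false  Not Supported'

# awk collapses the column padding for us, so field 4 is the Accessible column:
accessible=$(printf '%s\n' "$sample" | awk '{print $4}')
echo "$accessible"   # prints "true"
```

On a real host you would replace the `printf` with the `esxcli` call piped through `grep "$SHARE"`.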
 
Last edited:

Bjorn Smith

Active Member
Sep 3, 2019
283
114
43
This is what I will end up doing - I have tested it, and the script starts the VM and waits until the datastore is available (NFS).


Code:
#!/bin/sh

VMNAME=TrueNAS
SHARE=TrueNAS


MAXLOOPSVM=10
MAXLOOPSSHARE=10

COUNT=1

logger "local.sh: Checking status of $VMNAME"

# Look up the VM's ID from its name
ID=`vim-cmd vmsvc/getallvms | grep "$VMNAME" | cut -d' ' -f 1`

MSG="Starting VM '$VMNAME' with ID $ID"
echo "$MSG"
logger "local.sh: $MSG"
vim-cmd vmsvc/power.on "$ID"

STATUS=`vim-cmd vmsvc/get.guestheartbeatStatus "$ID"`

MSG="Checking status of VM: '$VMNAME' : '$STATUS'"
echo "$MSG"
logger "local.sh: $MSG"


# Poll the guest heartbeat until it turns green (or we give up)
while [ "$STATUS" != "green" ]
do
        sleep 10
        STATUS=`vim-cmd vmsvc/get.guestheartbeatStatus "$ID"`
        COUNT=`expr $COUNT + 1`
        if [ $COUNT -gt $MAXLOOPSVM ]
        then
                echo "Status of VM: $VMNAME - never changed to green - exiting loop"
                logger "local.sh: Status of VM: $VMNAME - never changed to green - exiting loop"
                break
        fi
        MSG="Checking status of VM: '$VMNAME' : '$STATUS'"
        echo "$MSG"
        logger "local.sh: $MSG"

done

COUNT=1


# Get field 4 from a line like the following - whether or not the datastore is accessible:
# TrueNAS      192.168.0.44   /mnt/tank/root/esxi        true     true      false  false  Not Supported
SHARESTATUS=`esxcli storage nfs list | grep "$SHARE" | sed 's/\s\+/ /g' | cut -d' ' -f 4`
MSG="Checking share status of datastore: '$SHARE' accessible: '$SHARESTATUS'"
echo "$MSG"
logger "local.sh: $MSG"

# Poll the NFS datastore until ESXi reports it accessible (or we give up)
while [ "$SHARESTATUS" != "true" ]
do
        sleep 10
        SHARESTATUS=`esxcli storage nfs list | grep "$SHARE" | sed 's/\s\+/ /g' | cut -d' ' -f 4`
        COUNT=`expr $COUNT + 1`
        if [ $COUNT -gt $MAXLOOPSSHARE ]
        then
                logger "local.sh: Status of datastore: '$SHARE' - never changed to accessible - exiting loop"
                break
        fi
        MSG="Checking share status of datastore: '$SHARE' accessible: '$SHARESTATUS'"
        echo "$MSG"
        logger "local.sh: $MSG"

done
The script posts messages to the console and to /var/log/syslog.log, so you can test it just by creating the script and running it.
Just change the VMNAME and SHARE variables to match what you use and you are golden.

The contents of the file need to go into:

/etc/rc.local.d/local.sh

and remember to do:
Code:
# /bin/auto-backup.sh
so the changes are saved properly.
 

Spearfoot

Member
Apr 22, 2015
68
26
18
62
I use FreeNAS-based NFS datastores with ESXi v6.5 & v6.7 for VM storage. Here is the code I use in the FreeNAS startup script to 'wake up' these datastores. The various variables (esxiuser, esxihost, and esxidatastore) are defined in the dotted include file (host.config). I'm pretty sure the basic adapter rescan call will enable any FreeNAS-based iSCSI datastores as well, and you may be able to adapt the code below to query them for availability.
Code:
#!/bin/sh
################################################################################
#
# Forces ESXi host to rescan its datastores
#
################################################################################

. /mnt/tank/systems/scripts/host.config

datastore_waitdelay=60

nfs_datastore_available()
{
  l_return=$(ssh "${esxiuser}"@"${esxihost}" esxcli storage nfs list | grep "$1" | awk '{print $4}')
  if [ -n "$l_return" ] && [ "$l_return" = "true" ]; then
    return 1
  fi
  return 0
}

wait_for_nfs_datastore()
{
  l_try=1
  l_done=0

  nfs_datastore_available "$1"
  l_done=$?

  while [ $l_done -eq 0 ] && [ $l_try -lt $datastore_waitdelay ]; do
    sleep 1
    l_try=$((l_try+1))
    nfs_datastore_available "$1"
    l_done=$?
  done
  
  echo "NFS datastore $1 query result ${l_done} after ${l_try} attempt(s)..."
}

echo "+---------------------------------------------------------------------------------"
echo "+ $(date): Forcing datastore rescan on ESXi host ${esxihost}"
echo "+---------------------------------------------------------------------------------"

ssh "${esxiuser}"@"${esxihost}" esxcli storage core adapter rescan --all

wait_for_nfs_datastore "${esxidatastore}"
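One assumption worth making explicit: the `ssh "${esxiuser}"@"${esxihost}"` calls above only run unattended if FreeNAS can log into the ESXi host with a key. A sketch of the usual setup (`esxihost` is a placeholder for your host's name or IP; note that ESXi keeps root's authorized keys under /etc/ssh/keys-root/, not ~/.ssh/):

```shell
# On FreeNAS: create a key pair with no passphrase (skip if one exists):
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

# Append the public key to the ESXi host's authorized_keys for root
# (prompts for the password once; key login works from then on):
cat ~/.ssh/id_rsa.pub | \
  ssh root@esxihost 'cat >> /etc/ssh/keys-root/authorized_keys'
```

This is host configuration against a live ESXi box rather than a standalone script.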
 

neb50

Member
Aug 28, 2018
31
5
8
The code above is almost exactly what I am using, and it is working great on 3 ESXi hosts, using the config file to determine which one it is running on.
 

Derwood

Member
May 22, 2019
48
0
6
I'd probably not use this approach myself, and maybe I haven't explored it enough, but could you run this backwards, per se: a single hardware server with FreeNAS installed, then use nested virtualisation to install ESXi, vSphere, and the other ESXi domain VMs?

Yeah, I know, it's a bit of a reach, but it seems more plausible than running FreeNAS as a VM itself.

Sorry to butt in.
 

neb50

Member
Aug 28, 2018
31
5
8
VMs under FreeNAS are not quite as mature and stable as VMs under ESXi as the host.

The reason I went with ESXi as the hypervisor, with FreeNAS under it, is that I was having issues with Windows 10 VMs under FreeNAS. If it can't handle Windows 10 VMs, then it probably couldn't handle ESXi as a VM with Windows 10 VMs under it.
 
  • Like
Reactions: name stolen

Spearfoot

Member
Apr 22, 2015
68
26
18
62
I'd probably not use this approach myself, and maybe I haven't explored it enough, but could you run this backwards, per se: a single hardware server with FreeNAS installed, then use nested virtualisation to install ESXi, vSphere, and the other ESXi domain VMs?

Yeah, I know, it's a bit of a reach, but it seems more plausible than running FreeNAS as a VM itself.

Sorry to butt in.
If I understand your question correctly, you're wondering if ESXi can run as a VM under FreeNAS. ESXi is a type 1 hypervisor; it has to run on bare metal. So, no - you can't install it as a VM on FreeNAS.
 
  • Like
Reactions: Derwood

Derwood

Member
May 22, 2019
48
0
6
If I understand your question correctly, you're wondering if ESXi can run as a VM under FreeNAS. ESXi is a type 1 hypervisor; it has to run on bare metal. So, no - you can't install it as a VM on FreeNAS.
Yeah, I'd not install it as a VM normally, but if a situation ever called for it, it is possible - just not viable long term or for production use, I reckon.
 

acquacow

Well-Known Member
Feb 15, 2017
594
311
63
39
If I understand your question correctly, you're wondering if ESXi can run as a VM under FreeNAS. ESXi is a type 1 hypervisor; it has to run on bare metal. So, no - you can't install it as a VM on FreeNAS.
You can definitely install ESXi inside other hypervisors. I used to have ESXi 6.0 installed inside ESXi 5.5 to do vSAN testing. Pretty sure I had it installed in Hyper-V as well once.
 
  • Like
Reactions: Derwood

Evan

Well-Known Member
Jan 6, 2016
3,144
530
113
You can definitely install ESXi inside of other hypervisors. I used to have ESXi 6.0 installed in ESX 5.5 to do vsan testing. Pretty sure I had it installed in hyperV as well once.
Yep, Hyper-V actually works well nested.
 

jcl333

Active Member
May 28, 2011
200
48
28
This is an interesting thread; I like the scripts.
I realize the point here is to run VMs on FreeNAS shares.

But what I did is buy an LSI hardware RAID controller with a bunch of 400 GB eSSDs to host some of the main VMs I use on VMFS6.
These controllers are supported natively in ESXi, and I do RAID 10, so I have redundancy and performance.

I am not using my FreeNAS shares for VMs, but I could see doing this and then having only specific VMs run on the FreeNAS datastores.

I assume you are running ESXi on a USB key or similar (I use a USB3 drive on the motherboard, or a SATA SSD).
Where are you installing the FreeNAS VM?