You can update OmniOS, see
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf
I would export the pool and install the template with OmniOS 151024 (the last release with support for ESXi 5.5). You can then update OmniOS to a newer release.
Maybe I would update the whole server to the current ESXi 6.7U1, and optionally wait a few days for 151028 (the next stable release). You can update 5.5 to 6.7 by booting the ESXi installer (either from ISO or from a USB stick created from the ISO with Rufus). Alternatively, use a new boot disk, install ESXi 6.7U1 on it and import the VMs (this keeps the old system intact).
Unless you upgrade your pool, the data remains accessible from both the older OmniOS and 151028.
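For reference, the move could be sketched like this (a hedged sketch; the pool name storage_z2 is taken from the benchmark output later in this thread, and the exact steps may vary with your setup):

```shell
# On the old OmniOS VM, before shutting it down: cleanly export the data pool.
# Pool name "storage_z2" is an example from this thread; substitute your own.
zpool export storage_z2

# On the new OmniOS VM: list importable pools, then import by name.
zpool import
zpool import storage_z2

# "zpool upgrade" without arguments only SHOWS pools that could be upgraded.
# As long as you do not run "zpool upgrade storage_z2", the pool stays
# importable on the older OmniOS release as well.
zpool upgrade
zpool status storage_z2
```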
Well, I finally did it (not by choice, but due to my own stupidity regarding other things; always learning, I guess!) after many hours rebuilding ESXi from scratch with a new install...
I've moved to ESXi 6.7U1 and all VMs are running fine... I then loaded your latest OVF, set up the networking from scratch, imported the pool (previously exported from the original VM), added the AD servers and bound the users... all the shares came right back up! Using VMXNET3 adapters.
Unfortunately I now have a major issue with speed and I'm at a loss as to why... previously, without any tuning, I would saturate my GigE connection (110-112 MB/s)... now I'm lucky to crack 30 MB/s...
An iperf test from my desktop to the server (napp-it as the iperf server) comes back with 930 Mbit/s, so the network connection seems to be OK...
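For anyone reproducing the test, the run looked roughly like this (assuming the classic iperf2 syntax; 192.168.1.10 is a stand-in for the napp-it VM's address):

```shell
# On the napp-it VM: start the iperf server.
iperf -s

# On the desktop: run a 10-second client test; -P 4 adds parallel
# streams to rule out a single-stream bottleneck.
iperf -c 192.168.1.10 -t 10 -P 4

# Note: 930 Mbit/s on the wire is about 930/8 = 116 MB/s, so the earlier
# 110-112 MB/s SMB transfers were effectively line rate for GigE.
```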
All I did was import the pool, nothing else.
Attached are some screenshots, maybe you have an idea where I could look?
Did a benchmark test from napp-it... not sure if I did it correctly though; I just used the default settings.
Code:
start filebench..
Filebench Version 1.4.9.1
16633: 0.000: Allocated 126MB of shared memory
16633: 0.003: File-server Version 3.0 personality successfully loaded
16633: 0.003: Creating/pre-allocating files and filesets
16633: 0.016: Fileset bigfileset: 10000 files, 0 leafdirs, avg dir width = 20, avg dir depth = 3.1, 1254.784MB
16633: 0.022: Removed any existing fileset bigfileset in 1 seconds
16633: 0.022: making tree for filset /storage_z2/filebench.tst/bigfileset
16633: 0.051: Creating fileset bigfileset...
16633: 2.047: Preallocated 8015 of 10000 of fileset bigfileset in 2 seconds
16633: 2.047: waiting for fileset pre-allocation to finish
16633: 2.048: Starting 1 filereader instances
16671: 2.091: Starting 50 filereaderthread threads
16633: 4.094: Running...
16633: 34.129: Run took 30 seconds...
16633: 34.133: Per-Operation Breakdown
statfile1 48686ops 1621ops/s 0.0mb/s 0.4ms/op 15us/op-cpu [0ms - 2521ms]
deletefile1 48614ops 1619ops/s 0.0mb/s 3.1ms/op 66us/op-cpu [0ms - 2390ms]
closefile3 48694ops 1621ops/s 0.0mb/s 0.1ms/op 6us/op-cpu [0ms - 690ms]
readfile1 48696ops 1621ops/s 214.1mb/s 1.5ms/op 63us/op-cpu [0ms - 2284ms]
openfile2 48699ops 1621ops/s 0.0mb/s 0.9ms/op 20us/op-cpu [0ms - 2105ms]
closefile2 48700ops 1621ops/s 0.0mb/s 0.1ms/op 6us/op-cpu [0ms - 2323ms]
appendfilerand1 48706ops 1622ops/s 12.6mb/s 3.7ms/op 53us/op-cpu [0ms - 1720ms]
openfile1 48712ops 1622ops/s 0.0mb/s 1.0ms/op 21us/op-cpu [0ms - 2483ms]
closefile1 48712ops 1622ops/s 0.0mb/s 0.1ms/op 6us/op-cpu [0ms - 1885ms]
wrtfile1 48718ops 1622ops/s 202.6mb/s 6.1ms/op 84us/op-cpu [0ms - 2450ms]
createfile1 48730ops 1622ops/s 0.0mb/s 4.4ms/op 62us/op-cpu [0ms - 2415ms]
16633: 34.133:
IO Summary:
535667 ops, 17835.374 ops/s, (1621/3244 r/w), 429.4mb/s, 375us cpu/op, 7.1ms latency
16633: 34.133: Shutting down processes
ok.