OmniOS CE fresh install: NFS denied and mc didn't install


vask

New Member
Nov 6, 2013
3
0
1
On one of my napp-it AIO servers the SSD failed and I was unable to image it, so I did a fresh install of everything, including OmniOS 151024 CE. When I got to installing napp-it with wget -O - www.napp-it.org/nappit | perl it did everything but install Midnight Commander, which I really want. I could not find the actual commands anywhere to install it manually the way your script would have done.

The second problem: I had my NFS storage set up per this thread: ESXi 5.5 vswitch network setup - All-in-one. When you get to the part where you run

# zfs set sharenfs=rw=192.168.1.220:192.168.20.220,root=192.168.1.220:192.168.20.220 pool/dstore

it goes through, but OmniOS denies access with the error

"mountd[611]: 192.168.20.220 denied access to pool/dstore".

I don't understand how it denies access to something I just explicitly gave it access to. I run

showmount -e serverip

and it shows

/pool/dstore 192.168.1.200,192.168.20.220 .

So I am a little confused and not sure how to fix it. Any help people could give me would be greatly appreciated!
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
For AiO you can use my ready-to-use ESXi OVA template.

If you install napp-it manually on OmniOS, it currently lacks mc, as mc is offered by the OmniOS repo from the University of Maryland. Their mc is not the most current and gives problems with current OmniOS.

The easiest way is to copy the files from http://napp-it.org/doc/downloads/mc.zip and set /usr/bin/mc executable.
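A minimal sketch of the manual steps, assuming wget and unzip are available and that the zip unpacks relative to / (check the archive contents with unzip -l first):

Code:
# wget -O /tmp/mc.zip http://napp-it.org/doc/downloads/mc.zip
# cd / && unzip /tmp/mc.zip
# chmod +x /usr/bin/mc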

Regarding NFS:
I would start with a simple sharenfs=on and set all file permissions recursively to everyone@=modify.
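A minimal sketch of that, assuming the same pool/dstore filesystem as above and illumos' native /usr/bin/chmod with NFSv4 ACL sets (check man chmod for the exact ACL syntax on your release):

Code:
# zfs set sharenfs=on pool/dstore
# /usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /pool/dstore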
 

J-san

Member
Nov 27, 2014
68
43
18
44
Vancouver, BC
Hi vask,

Instead of:

Code:
# zfs set sharenfs=rw=192.168.1.220:192.168.20.220,root=192.168.1.220:192.168.20.220 pool/dstore
Try the following; it seems the newer OmniOS versions require an @ in front of each IP:

Code:
# zfs set sharenfs=rw=@192.168.1.220:@192.168.20.220,root=@192.168.1.220:@192.168.20.220 pool/dstore
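To check what is actually being shared after setting the property, the standard ZFS/NFS tools are enough (same placeholder names as above):

Code:
# zfs get sharenfs pool/dstore
# showmount -e serverip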

Hope that helps
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@gea

I did an in-place upgrade from OmniOS 022 to 024 CE and napp-it 18.01 free on an ESXi 6 all-in-one.

I updated napp-it to 18.01 first, then upgraded OmniOS per the CE website.

Seems like most everything updated OK. I get some weird garbage during bootup before the BE menu; it goes too fast to read, but it looks like some kind of sector read errors?


some issues

Under basic server info, the ZIL stat just shows the script text when run; it does not return values, just the script itself.

There are some formatting issues in some of the tables where the labels are not in the right place, but this is a carryover I think, not a big deal.

It seems the zpool and ZFS dataset usage numbers don't update; refreshing or force-reloading the page doesn't work. I have to log out and log back in to see current usage, free space, etc. on the live datastores.


Without a license, when I back up the appliance (using napp-it), is that enough data to restore users, network connections, and jobs? Especially jobs, since I have used replication to back up my primary pool manually to an attached disk shelf that I keep powered off when not needed, and since your snapshots are all named with job numbers, that would be a pain to figure out on a reinstall.

Speaking of the snapshot/replication process: once a pool has been replicated by individual dataset (like you recommend), there is no way to do an incremental of the entire pool, right? Since the snapshot naming for the individual datasets each has a unique job number, it would send the entire pool again instead of being incremental, right?

I wish you could make scheduled automatic replication to the same host a free feature, since in a pro setup it really wouldn't be used and the replication would go to a different server, but for home use it's just a local backup.

It looks like open-vm-tools is still installed and loaded, but ESXi says the tools are out of date, while the open-vm-tools website says it's the current version and pkg update says there is nothing to update?
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
hello Dragonme

About ZIL info
- which menu?

About table format
- it may be the case that a CSS from a former napp-it is cached by your browser; try F5/reload, otherwise please add the exact menu and browser

About ZFS or disk values
- napp-it buffers them for a short time to be more responsive.
Especially if you modify them outside napp-it, delete the buffer (ZFS filesystems >> delete ZFS buffer).

About restore
Restore /var/web-gui/_log for napp-it settings (or menu Users > Restore with a current napp-it Pro). This restores all napp-it settings incl. jobs. A napp-it backup job saves them to your datapool.
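If you want to do it by hand, a rough sketch is to copy the folder to the datapool and back after a reinstall (the target path is just an example); the first line is the backup, the second the restore:

Code:
# cp -rp /var/web-gui/_log /pool/napp-it_settings
# cp -rp /pool/napp-it_settings/. /var/web-gui/_log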

Even without a backup you can continue a replication: create a new job with the same source/target and the old job-id, which you can get from a snap listing.
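The job-id is part of the replication snap names, so a snapshot listing on the target filesystem shows it; for example (the filesystem name is only a placeholder):

Code:
# zfs list -t snapshot -o name -s creation -r backuppool/dstore | tail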

If you want to initially replicate a whole pool, use the recursive option. For an incremental continuation this is a bad idea, as it fails completely whenever you create a new filesystem.

If you have filesystem replication and want to replicate the whole pool, you must add replications of the missing filesystems.

About free replication
- there are many free scripts or you can use a napp-it home license that includes more recent updates.

About open-vm-tools
Try an update via "pkg update open-vm-tools"

You can also use my ready-to-use OVA template with current OmniOS and everything preinstalled. You then mainly need to restore napp-it settings and users.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
The ZIL stat is from System --> Basic statistics --> ZIL.

Formats: yeah, I already did force reloads. I will keep track of the tables as I come across them.

I upgraded to 18.01 free from an expired 17.xx Pro trial initial install and it retained many of the Pro charts and menus. Should I downgrade and re-upgrade?

as for vmtools

Code:
root@napp-it-san:/onlinepool/dvd# which open-vm-tools
no open-vm-tools in /usr/ccs/bin /usr/bin /bin /usr/sbin /sbin
root@napp-it-san:/onlinepool/dvd# pkg list -af open-vm-tools
NAME (PUBLISHER)                        VERSION          IFO
system/virtualization/open-vm-tools     10.1.15-0.151024 i--

but ESXi insists it's out of date.

I am using your OVA, installed about 8 months ago with OmniOS 022, and wanted to update to 024 CE.
Speaking of the OVA: can you make the OVA with a smaller thin-provisioned disk instead of the 40 GB thick provision? It would make it easier to put onto the volume being used for boot.

As for replication via napp-it send/receive snapshots:

If I have individual, manual replications for each filesystem, is there a way to do an incremental on the whole pool without running all of them individually and manually again, and without sending the whole pool again?

And you are saying that if I add a filesystem to an existing pool that has been sent/received, it fails? Is that a ZFS problem or a napp-it issue?
That seems like a big flaw, as filesystems are constantly changed.
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
In 2017 I switched from a JavaScript-driven menu to a pure CSS-based one.
After an update you must force a reload of the new CSS (F5/reload).

VM-Tools
10.1.15 on 151024 is current. I have no problems with it under ESXi 6.5 U1.

About pool replication
You must either rerun a whole new pool replication with the -R/recursive option or just add replications for the missing filesystems.

With the recursive option, napp-it initially just calls zfs send with the recursive flag for a whole pool replication. If you then run a further replication, zfs send uses the incremental option. Napp-it does not check whether there are new sub-filesystems for which a complete replication would be required instead of an incremental one. I would expect this to fail even on the newest ZFS variants.
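In plain ZFS terms this corresponds roughly to the following (pool and snapshot names are placeholders, not napp-it's actual naming). The first line is the initial full run, the second a later incremental run, which only succeeds if every filesystem in the stream already exists with the base snap on the target:

Code:
# zfs send -R pool@snap1 | zfs receive -Fu backuppool/pool
# zfs send -R -i pool@snap1 pool@snap2 | zfs receive -Fu backuppool/pool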

Btw, as a backup you can export the VM as an OVA again, which is around 3 GB in size.

About VM bootdisk size
The OVA is thin provisioned with a file size of around 3 GB. The time to deploy is about the same whether it creates a 20 GB or a 40 GB VM disk, so the question is how large it should be.

OmniOS needs around 15 GB to work. Add 5-10 GB for boot environments and the space needed for a major update, so it should be 20-25 GB to be on the safe side.

Then add the space required for swap (1/4 of physical memory) and optionally dump (1/2 of physical memory) and you are at the 40 GB size for a production-ready system with 32 GB RAM or more for the storage VM.
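As a worked example with 32 GB RAM for the storage VM: roughly 20-25 GB for OmniOS plus boot environments, 8 GB swap (1/4 of 32 GB) and optionally 16 GB dump (1/2 of 32 GB), i.e. around 30-50 GB depending on whether you configure a dump device, which is where the 40 GB default comes from.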

see Planning for Swap Space - Oracle Solaris Administration: Devices and File Systems

If you want a smaller size, you can do a regular setup of OmniOS/OI or Solaris and napp-it + vm-tools.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@gea

thanks for getting back

As I have said, I have done a forced reload and I don't think that is the issue. There are a couple of basic tables in the free version that just don't format right; the labels are not over the correct column.

This is probably a language barrier, but I am just not following your answer to my question.

with a pool as such

pool
dataset a
dataset b
dataset c

If I originally backed up the pool by doing individual replications with -R on dataset a, dataset b, dataset c, and since the free version is not schedulable, is there a way I can run a -R send on the whole pool incrementally without sending the whole pool from scratch?


As for the OVA: yeah, obviously that is thin, but the OVA created a thick-provisioned VM disk, which is a waste.

If I have a 60 GB SSD for the napp-it VM to use and the VM creates a 40 GB thick disk, I can't create a new version of napp-it on the same drive to test without having to delete the first, even if the first is only using something like 5 GB, which is typical.
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
About the display problem
Can you try a different browser? Some, like Safari, ignore a change of a CSS on reload for some time.

About the replication
This is possible, but only in theory. You would need to
- create a new job to replicate the pool filesystem without the recursive option
- move the datasets below this (may require another replication)
- rename all dataset snaps with the job-id of the pool replication

A recursive incremental replication will then work, as it finds all required snaps.

about the VM
I would suggest you create a new and smaller VM manually
- create a new VM (Solaris 11 64-bit) and boot the OmniOS installer ISO
with an e1000 and a vmxnet3 vnic; configure the IP of the e1000 during setup
- install VMware tools (pkg install open-vm-tools)
- install napp-it via the wget installer (see the command sketch below), optionally reboot
- set the root pw via "passwd root" to allow SMB access as root
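On the console this boils down to roughly the following once the OmniOS setup has finished (the wget line is the regular napp-it installer from above):

Code:
# pkg install open-vm-tools
# wget -O - www.napp-it.org/nappit | perl
# passwd root
# reboot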

done in a few minutes
The ready-to-use OVA additionally includes TLS email and system tuning.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@gea

The rejiggering of the snaps sounds like a pain, so I will likely just replicate each by hand.

If I start multiple jobs at the same time, will it run them in parallel or sequentially?

This is on a backup pool on an external disk shelf connected via an 8088 cable, and I only power it on once or twice a month to do a backup.
At some point I am thinking of connecting the serial port of the ESXi server to the RJ45 serial management port on the disk shelf to possibly be able to turn the disk shelf on, import the pool, run the replications and power off the shelf. This would obviously need to be scripted. Can the replication jobs be called by cron, since this really can't be done within napp-it, especially the free version? Where are the replication scripts stored?

also, speaking of importing pools

The external shelf exports, power cycles and imports fine.


If I try to export a pool and pull the drive that is connected to napp-it via ESXi RDM, I get a freeze. I don't think it's ESXi freezing, as I can still log into the web interface, but napp-it becomes unresponsive; I usually can't SSH into it, although it appears the other pools are still active.

Is 'hot swapping', i.e. exporting a pool and pulling the RDM drives, a limitation? I am trying to figure out how to fit another HBA card into the system; it has the slots, but the area in the chassis is really 1U and the existing 4i4e card lies across it, so I am using some onboard SATA ports for one of my data pools.

thanks gea
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
I would also suggest adding the missing replication jobs for the other filesystems.
You can run them in parallel.

Solaris supports hot swapping of disks, but only if the driver supports it. With an LSI HBA this is the case by default; with SATA/AHCI you can enable hotswap e.g. in the BIOS and with a setting like "set sata:sata_auto_online=1" in /etc/system.
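The /etc/system entry can be added like this; it takes effect after a reboot:

Code:
# echo "set sata:sata_auto_online=1" >> /etc/system
# reboot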

I doubt that RDM hotswap over the ESXi disk driver works. In such a case the disk subsystem can hang, as your SSH problem shows. You can try a format at the console, which lists all disks (cancel after listing with Ctrl-C). This will probably hang. Napp-it uses this for disk detection, so it will hang as well.

Besides other possible troubles with SATA + expander, the best way to connect an external shelf is SAS to an expander where your disks (SAS or SATA) are connected.
 

dragonme

Active Member
Apr 12, 2016
282
25
28
@gea

thanks again for helping out..

sata:sata_auto_online is enabled, but since these 'SATA' disks are not really attached to the napp-it VM via a virtual SATA bus, I don't think it really matters. Napp-it does fine with the disk shelf power cycling and attaching/disconnecting from the passed-through LSI card via the 8088 cable, so I think it's likely an issue where ESXi does not like to see the disks backing the RDM disconnected.

This might also explain the occasional 'hang' that the system experiences. I average about 10-15 days of uptime and then I get a freeze of sorts where the console page of napp-it complains about PCI and device waiting (I mentioned it here a couple of weeks ago). I can't SSH into napp-it and can't even do anything from the console window. It does seem that the pools may still be going, but generally the only way to clear it is to shut down and reboot the host; power cycling and rebooting just napp-it does not work either.

It could also be an interrupt remapping issue ESXi has with some chipsets, mentioned here:
vHBAs and other PCI devices may stop responding in ESXi 6.0.x, ESXi 5.x and ESXi/ESX 4.1 when using Interrupt Remapping (1030265)


Either way, in order to make this setup more reliable I am going to have to find a way to fit another HBA card in the server so that all the disks controlled by napp-it are LSI-based and passed through at the hardware level.

My current setup has one LSI 92xx-based 4e4i card. The 4i ports are used for a fast SSD-based pool from which napp-it serves the VMs back to ESXi through NFS.

The 4e side is the backup disk shelf; it has a 3 x 5 disk raidz pool (15 disks in 3 striped 5-drive raidz1 vdevs) connected via 8088, and the performance there is great. On this old hardware (S5520HC, 2 x L5640), giving ESXi 4 vCPU and 8 GB of RAM, while running 5 VMs on the SSD pool, the backup pool still scrubs at over 850 MB/s and I can hit almost 1000 MB/s if I set the napp-it VM to high latency sensitivity and reserve all vCPUs (virtual memory is already reserved due to LSI passthrough).

biggest issues continue to be

Stability - likely the pass-through RDM disks for a data pool or the interrupt remapping issue (might try putting a virtual SATA controller in the VM and hanging the RDM disks on that).

Connectivity - network speeds are just not where they need to be. I have disabled LSO on both NICs in the napp-it VM, but traffic inbound to the VM is still slower than traffic outbound. My virtual 10 GbE network between VMs struggles to hit 9 Gb/s. I can get just about line speed out of the ESXi host to physical computers on my network, but inbound speeds, just like VM to VM, are slower by about 40%. I see on average 70 MB/s inbound to the ESXi host / napp-it SMB shares, while I can get line speed of 110 MB/s to my physical server on the same switch.

I am thinking of disabling LSO on everything, including the physical adapter on the host, to see if that helps, but that still won't address the virtual traffic. Something is just not right here. Both adapters on the napp-it VM are vmxnet3 and have your base tuning applied. VM to VM should be faster, and inbound and outbound should be equal.