napp-it for Proxmox?


epicurean

Active Member
Sep 29, 2014
785
80
28
Is napp-it supported as a VM inside Proxmox VE 6.3? I am thinking of migrating from ESXi to Proxmox.
 

gea

Well-Known Member
Dec 31, 2010
3,160
1,195
113
DE
Not sure if it would make sense to run a Unix/Solaris-based ZFS storage VM under Linux, unless you want the Solarish SMB server, Comstar iSCSI functionality, or S3 compatibility as a filesystem feature. What you can do is run napp-it under Linux. Functionality is then reduced to ZFS management, as many advanced napp-it features are based on Solaris: napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Linux
 

epicurean

Active Member
Sep 29, 2014
785
80
28
Not sure if it would make sense to run a Unix/Solaris-based ZFS storage VM under Linux, unless you want the Solarish SMB server, Comstar iSCSI functionality, or S3 compatibility as a filesystem feature. What you can do is run napp-it under Linux. Functionality is then reduced to ZFS management, as many advanced napp-it features are based on Solaris: napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Linux
Thank you gea. Would my exported pools in napp-it (ESXi) be imported again under Proxmox without issues?
 

pinkanese

New Member
Jun 19, 2014
27
10
3
33
I moved from ESXi + Napp-IT to Proxmox last year. I actually forgot to export my pools, but after I installed Proxmox I just attached the drives and the pools were recognized and I could import them. Was super easy. Then just created an LXC container for SMB sharing and mounted the pools to it.
 

epicurean

Active Member
Sep 29, 2014
785
80
28
I moved from ESXi + Napp-IT to Proxmox last year. I actually forgot to export my pools, but after I installed Proxmox I just attached the drives and the pools were recognized and I could import them. Was super easy. Then just created an LXC container for SMB sharing and mounted the pools to it.
Thank you. Can you be more specific, step by step? Assume I am a total idiot; I am installing Proxmox for the first time.
 

pinkanese

New Member
Jun 19, 2014
27
10
3
33
It is really going to depend on your setup. I have only done it once, so I'm by no means an authority here; Google is your friend as always.

Shut down everything using the pools. I doubt napp-it has an export function in the GUI, so you probably need to jump into the command line; then simply "zpool export poolname", then shut down the host.

I moved to a new server, but if you are keeping the old hardware I would suggest disconnecting all the drives except whatever you are using for the Proxmox boot disk.

Get Proxmox installed and running, shut it down, and then reattach the rest of the storage you want to use. Open up the shell and run "zpool import"; it should list the pools you exported before. Then simply "zpool import poolname".

The really tricky part is what comes next. My pool was just used as a Samba share for media and backups. If you are using the pools for VM storage you are going to have to do some conversion to get them to work under Proxmox.
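The steps above can be sketched as shell commands (the pool name "tank" is a placeholder; substitute your own):

```shell
# On the old napp-it/OmniOS host: stop anything using the pool, then export it
zpool export tank

# Shut down, move or reattach the disks, install Proxmox, then on the Proxmox host:
zpool import          # with no arguments, lists importable pools found on the attached disks
zpool import tank     # imports the pool (by default it mounts under /tank)
zpool status tank     # verify all vdevs show ONLINE before putting data traffic on it
```

If the pool was never exported (as in the post above), `zpool import -f tank` may be needed to override the "pool was in use by another system" warning.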
 

gea

Well-Known Member
Dec 31, 2010
3,160
1,195
113
DE
Shut down everything using the pools. I doubt napp-it has an export function in the GUI, so you probably need to jump into the command line; then simply "zpool export poolname", then shut down the host.
??
Pool >> Export is an often-needed menu item.
 
  • Like
Reactions: epicurean

epicurean

Active Member
Sep 29, 2014
785
80
28
It is really going to depend on your setup. I have only done it once, so I'm by no means an authority here; Google is your friend as always.

Shut down everything using the pools. I doubt napp-it has an export function in the GUI, so you probably need to jump into the command line; then simply "zpool export poolname", then shut down the host.

I moved to a new server, but if you are keeping the old hardware I would suggest disconnecting all the drives except whatever you are using for the Proxmox boot disk.

Get Proxmox installed and running, shut it down, and then reattach the rest of the storage you want to use. Open up the shell and run "zpool import"; it should list the pools you exported before. Then simply "zpool import poolname".

The really tricky part is what comes next. My pool was just used as a Samba share for media and backups. If you are using the pools for VM storage you are going to have to do some conversion to get them to work under Proxmox.
Thank you. This ZFS pool is indeed for my Plex movies, and not for VM storage. Is it complicated to enable NFS and Samba shares in Proxmox?
 

pinkanese

New Member
Jun 19, 2014
27
10
3
33
Thank you. This ZFS pool is indeed for my Plex movies, and not for VM storage. Is it complicated to enable NFS and Samba shares in Proxmox?
No harder than everything else up to this point. I imagine there is a way to do it directly on the host, but I decided to move my shares to an LXC container to keep things separated.

Set up a container with your preferred flavor of Linux and install the packages for an NFS server. There is also a template for a Turnkey Fileserver in Proxmox, but I have not tried it.

You will have to go into the shell and add your storage to the container manually. Look into bind mounts, but mostly you are adding a line like "mp0: /mypool/storage,mp=/storage" to the end of the config file for the container. You might need multiple mounts if you created different sub-volumes in the ZFS pool. When you go to set up your shares, the path inside the container will be whatever you set for mp= (here, /storage).
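As a sketch of the bind-mount and share setup described above (the container ID 101, pool path, and share name are placeholders for illustration):

```shell
# On the Proxmox host: add a bind mount to the container.
# This writes the same "mp0: ..." line into /etc/pve/lxc/101.conf for you:
pct set 101 -mp0 /mypool/storage,mp=/storage

# Inside the container (Debian/Ubuntu flavor assumed): install Samba
# and export the bind-mounted path as a share.
apt install samba
cat >> /etc/samba/smb.conf <<'EOF'
[storage]
   path = /storage
   read only = no
   guest ok = no
EOF
systemctl restart smbd
```

For an unprivileged container you may additionally need UID/GID mapping so the container users can write to the pool datasets; a privileged container avoids that complication at some cost in isolation.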
 

epicurean

Active Member
Sep 29, 2014
785
80
28
Sorry for more newbie questions.
Why did you "move your shares to an LXC container"? Does keeping things separate make it more complicated for Plex Server to see the movie files?
 

vjeko

Member
Sep 3, 2015
73
2
8
63
Not sure if it would make sense to run a Unix/Solaris based ZFS storage VM under Linux unless you want the Solarish SMB server or Comstar iSCSI functionality or S3 compatibility as a filesystem feature. What you can do is to run napp-it under Linux. Functionality is reduced to ZFS management then as many advanced napp-it features are based on Solaris, napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Linux
gea, I would appreciate it if you could comment.
With the VMware changes to ESXi, I was planning to port everything to Proxmox, including the AiO where I have OmniOS on a local drive and
the rest of the drives on an HBA. I am far from experienced in the subject; could you indicate what you would suggest
in order to have the complete OmniOS/ZFS functionality, if you don't suggest running OmniOS as a VM in Proxmox?
 

gea

Well-Known Member
Dec 31, 2010
3,160
1,195
113
DE
I have not done tests with Solaris-based VMs under Proxmox, as I am quite busy at the moment with ZFS on Windows.

Under Proxmox you have ZFS that you can use directly. You will lose the comfort of a storage VM and you must use Samba (Solaris SMB is quite unique regarding NTFS-alike ACLs, trouble-free snaps as Windows "previous versions", and local SMB groups. Solaris Comstar is also worth considering, as is Solaris/Illumos ZFS, which I consider superior regarding stability). So you can check how well OmniOS runs under Proxmox, consider a barebone setup, or switch to ZoL.

In my own setup I decided to stay with ESXi 7/8 as long as possible.
 
  • Like
Reactions: vjeko

you

New Member
Mar 29, 2024
2
2
3
I have been running OmniOS CE with napp-it since Proxmox 6, currently on Proxmox 8.1.10 with the latest kernel 6.5.13-3-pve and passthrough of an LSI 3008 with firmware 16.00.12.00. Before that I was using napp-it with ESXi for some years.

I put all my VMs on a separate NVMe belonging to Proxmox. I do not use any snapshot features from Proxmox itself. I run standard Proxmox backups on a nightly basis to my napp-it filer VM and once a week to a cloud storage provider. Recoveries are done with ease, if you document your Proxmox setup properly :)

If you pass through your HBA you might experience some trouble. I, for example, was running Proxmox 8 with an older 5.x kernel from Proxmox 7, which was still available after the upgrade. Up until last week I had not been able to get an OmniOS or OpenIndiana VM installed with the standard Proxmox 8 kernel (6.x), since it immediately crashed if I tried with HBA passthrough.
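Not from the post above, but a general aside: before blaming the guest, it can help to confirm on the Proxmox host that the IOMMU is active and the HBA is handed to vfio-pci (the PCI address 01:00.0 is a placeholder; check yours with lspci):

```shell
# Check that the IOMMU was enabled at boot (Intel: intel_iommu=on, AMD: amd_iommu=on
# on the kernel command line; look for DMAR/IOMMU initialization messages)
dmesg | grep -e DMAR -e IOMMU

# List IOMMU groups; the HBA should sit in its own group (or one you can pass whole)
find /sys/kernel/iommu_groups/ -type l

# Show which kernel driver currently claims the HBA; once the VM with the
# hostpci entry is started, "Kernel driver in use" should read vfio-pci
lspci -nnk -s 01:00.0
```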

I did some testing the other day and the following setup is now working without any workarounds. The key changes I applied relate to machine type q35, cpu -> host, and the HBA hostpci entry with pcie=1. My /etc/pve/qemu-server/100200.conf:

Code:
acpi: 1
agent: 0
balloon: 0
boot: order=ide0
cores: 4
cpu: host
hostpci0: 0000:01:00.0,pcie=1,rombar=0
hotplug: disk,network
ide0: images:vm-100200-disk-1,size=32G,ssd=1
kvm: 1
localtime: 1
machine: q35
memory: 32768
name: vmnas
net0: e1000=DE:2B:F5:8E:00:00,bridge=vmbr0
net1: vmxnet3=52:42:4D:04:00:00,bridge=vmbr0
numa: 0
ostype: solaris
scsihw: virtio-scsi-single
smbios1: uuid=93cd5325-c77e-45e2-a3bf-3875fabf07a5
sockets: 1
startup: order=1,up=180,down=120
tablet: 1
vmgenid: de2f4e02-8ec9-4ba4-8adc-01cb45f01dda
Corresponding hardware and config settings on attached screenshots.
 

  • Like
Reactions: gb00s and gea