Napp-it cs web-gui for (m)any ZFS servers or heterogeneous server farms


gea

Well-Known Member
Dec 31, 2010
3,208
1,214
113
DE
ZFS on Windows 2.2.3 from yesterday had a problem with CPUID detection that could crash Windows during setup.

There is a new version that fixes this.
 

gea

I have uploaded a new nightly of the client/server ZFS GUI napp-it cs (Mar. 20)

- Allows large zfs list or zfs get all output (up to several MB)
- Client/frontend web-gui app (Windows): copy and run
- Server/backend software (BSD, Linux, OSX, Solaris/Illumos, Windows): copy and run from any location like /var/web-gui, the desktop or a ZFS pool
- Jobs like snap or scrub run remotely; replication between any source and any destination member server is next
- GUI performance: very good, I would say, especially as there is no local ZFS database, so CLI modifications remain possible

see napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris , ZFS Server for Windows
 

gea

There are new release candidates of Open-ZFS 2.2.3 for OSX and Windows



By the way: the more people test Open-ZFS 2.2.3 on OSX or Windows and report problems,
the faster remaining installer, driver or integration problems get fixed.

ZFS 2.2.3 on OSX or Windows is nearly identical to upstream Open-ZFS 2.2.3, so data security
should already be on par with ZFS on Linux.

Issues · openzfsonosx/openzfs
Issues · openzfsonwindows/openzfs
 

gea

To add a TrueNAS server to a server group:

- Enable SSH and allow root login (sharing options) or SMB
- Copy napp-it cs_server to a filesystem dataset, e.g. tank/data (/mnt/tank/data)
- Open a root shell and enter:
perl /mnt/tank/data/cs_server/start_server_as_admin.pl
- Add TrueNAS to your server group (ZFS Servergroup -> add)

truenas.png

Anyone with a QNAP ZFS box?
Can you confirm a similar setup works?
 

gea

Next step:

Replication from any server to any server is basically working
on local transfers and between some combinations. I need to check why it does not work on others.

repli.png

Howto:
- Add all servers in menu ZFS Servergroup
- Create replication jobs from any server to any server (remote between hosts does not yet work for every combination)

If you start a job, it runs minimized, so you can open the cmd window to check progress
- Click on "replicate" in the job listing to check last runs
- Click on the date of a replication job in the job listing to see details of the last run
- Enter rl (remotelog) or cl (commandlog) into the cmd field to get remote logs (rld or cld to delete)
- Open menu System > Process List to see running processes on a selected server

Use the menus Pools, Filesystems and Snaps to check remote servers

Setup:
 

gea

napp-it cs beta, current state (Apr. 05)

Server groups with remote web-management (BSD, Illumos, Linux, OSX, Windows): ok
ZFS (pool, filesystem, snap management): ok on all platforms
Jobs (snap, scrub, replication from any source to any destination): ok except with Windows as source or destination
(Windows as source works with nmap/netcat on Windows)
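The nmap/netcat workaround mentioned above follows the classic pattern of piping a zfs send stream through netcat when no ssh transport is available. A generic sketch, where hosts, port and snapshot names are placeholders and not napp-it cs specifics:

```shell
# Generic zfs send/receive over netcat (nc, e.g. from the nmap package
# on Windows). Hosts, port and snapshot names below are placeholders.

# On the destination server: listen on a TCP port and receive the stream
nc -l -p 9000 | zfs receive tank/backup

# On the source server: send the snapshot into the pipe
zfs send tank/data@snap1 | nc desthost 9000
```

Note that this transfers the stream unencrypted, which matches the trusted-LAN scenario the socket backend also assumes.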
 

gea

How much RAM do I need for napp-it cs

RAM for a ZFS filer has no relation to pool or storage size (apart from dedup)!

Calculate 2 GB for a 64-bit OS; add 1-2 GB for a Solaris-based filer and 3-4 GB for a BSD/Linux/OSX/Windows-based filer for minimal read/write caching, or ZFS can be really slow. RAM beyond that depends on the web-gui, the number of users or files, data volatility and the desired storage performance. Add more RAM for disk-based pools than for SSD pools for good performance.

With napp-it cs I suggest 8 GB for the Windows machine where the frontend web-gui is running, 16 GB if you additionally use ZFS on Windows on that machine.

For the ZFS filers that you want to manage with napp-it cs there is no additional RAM requirement for the server app. This means napp-it cs can manage a Solaris/Illumos-based ZFS filer with 2-3 GB RAM and a BSD/Linux/OSX/Windows ZFS filer with 4-6 GB RAM. It may even be possible to manage a small ARM filer board with ZFS, like a Raspberry Pi with 2 GB or more, remotely with a ZFS web-gui (you only need ZFS and Perl on it).

If you use ZFS on such a board, you may try and report.
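As a rough illustration of the baseline arithmetic above (2 GB for the OS plus 1-2 GB or 3-4 GB for minimal caching), here is a small sketch; the function and its name are mine for illustration, not part of napp-it cs:

```python
# Illustrative sketch of the RAM guidance above (values in GB).
# filer_ram_gb() is a hypothetical helper, not a napp-it cs API.
def filer_ram_gb(platform: str) -> tuple[int, int]:
    """Rough min/max RAM for a ZFS filer: 2 GB OS base plus caching."""
    base = 2  # any 64-bit OS
    if platform in ("solaris", "illumos"):
        cache_min, cache_max = 1, 2   # Solaris-based filer
    else:
        cache_min, cache_max = 3, 4   # BSD/Linux/OSX/Windows filer
    return (base + cache_min, base + cache_max)

print(filer_ram_gb("illumos"))  # → (3, 4)
print(filer_ram_gb("linux"))    # → (5, 6)
```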
 

gea

Update:
The current napp-it cs now consists of three components (previously only 1st and 2nd)

1. The web-gui frontend under Windows

This allows you to manage the server or servers via browser and http/https.

2. The server backend cs_server.

These are two Perl scripts that should run on every server where Perl is installed.
These scripts run with admin rights so that they can call zfs and zpool. The backend scripts are addressed by the frontend via a socket connection on port 63000. The connection requires authorization and can be limited to the IP of the frontend computer. Console commands and the corresponding answers are transmitted unencrypted.
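A minimal sketch of how a client could talk to such a backend socket on port 63000. The request framing ("auth string, newline, command") is an assumption for illustration; the real napp-it cs wire format and authorization scheme are not documented here:

```python
# Hypothetical client sketch for a cs_server-style socket backend.
# Port 63000 is from the description; the framing is an assumption.
import socket

def frame_request(auth: str, command: str) -> bytes:
    """Build a request: auth string, then the console command."""
    return (auth + "\n" + command + "\n").encode("utf-8")

def send_cs_command(host: str, auth: str, command: str,
                    port: int = 63000) -> str:
    """Send one console command and return the (unencrypted) answer."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(frame_request(auth, command))
        chunks = []
        while data := s.recv(65536):
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", "replace")
```

Because commands and answers travel in the clear, this path is only meant for a trusted LAN, which is exactly why the https callback mechanism below exists.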

3. Https server to transmit encrypted commands and responses via “callback”.

A command such as zfs list is uploaded encrypted as a file to an https server. This can be the Apache server from napp-it cs, also with a self-signed certificate, or another https server with a valid, secure certificate. Curl is required to upload and download the commands and responses. It is usually included (also in Windows 10/11) or has to be installed, e.g. on FreeBSD 14 with pkg install curl. The module /cgi-bin/cs/cs-connect.pl is required on the https server. It should run on any CGI-capable web server. Under Linux/Unix, the first line of the script must be adjusted (path to Perl, /usr/bin/perl). This quite elegantly gets around the problem that a web-gui on the LAN usually only offers https with a self-signed certificate. The disadvantage of an external https server is a slightly higher latency in the web GUI. I'll try to keep this tolerable with command caching.

Callback is generally used if the response to a command is extensive because a socket connection has problems with it. To activate callback, enter the IP of the server in napp-it cs under About > Setting, e.g. www.university.org . You can also specify an https server in the server script to force only encrypted transmission.
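A hypothetical sketch of the callback round trip with curl. Only the module path /cgi-bin/cs/cs-connect.pl, the use of curl and the example host www.university.org are from the description above; the upload/download parameters are assumptions for illustration:

```shell
# Hypothetical callback flow: encrypted command file up, response down.
# The form field and query parameter names are assumptions, not the
# actual cs-connect.pl interface.

# 1. Upload an encrypted command file to the https server
curl -s -F "file=@command.enc" \
  https://www.university.org/cgi-bin/cs/cs-connect.pl

# 2. Later, download the encrypted response for that command
curl -s -o response.enc \
  "https://www.university.org/cgi-bin/cs/cs-connect.pl?get=response"
```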

Current information: napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Downloads
 

gea

Napp-it cs is not quite finished yet but is now "feature complete" for the time being.
The current beta now has

- Alerts and reports (SSL/TLS)
- User management (create/delete)
- Encrypted file systems

The client/server architecture offers interesting options for encrypted file systems:
- Creation of file systems optionally with a simple password from which a sha256hex hash is generated.
- The password is split into three parts
- The password (complete and parts 1-3) is initially only saved on the computer with the web-gui, not locally on the ZFS servers.
Store/back up these password files (especially .komplettkey) safely and divide parts 1-3 between the ZFS server and the web servers.

There are the following options for opening a file system:
- Provide the key files, either complete or divided into parts 1-3 (locally, on web server w1 and on web server w2).
Usually one part should be on the ZFS server and the other two parts on one or two https web servers.
You then either need both web servers to put all three parts together, or the web servers w1 and w2 both hold parts 2 and 3 of the key.
An unlock request on a ZFS server then makes it try to load all missing parts of the key from the web servers.
This allows decentralized ZFS management. Access to the web servers requires an auth key and can be restricted by IP. You can use the web-gui itself as the web server or public https servers with a valid certificate.
- Or open via the shorter pw (the base of the pw hash) or the whole passphrase
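The core of the concept, a sha256hex hash derived from a simple password and split into three parts, can be sketched in a few lines. The part sizes and file layout here are my assumptions, not the actual napp-it cs key format:

```python
# Minimal sketch of the key-split idea: derive a sha256 hex hash from a
# simple password and split it into three parts that can be stored on
# the ZFS server and on web servers w1/w2. Illustrative only; the
# real napp-it cs key file format is not reproduced here.
import hashlib

def derive_key(password: str) -> str:
    """sha256hex hash of the password (64 hex characters)."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def split_key(key: str) -> list[str]:
    """Split the hash into three roughly equal parts."""
    n = len(key) // 3
    return [key[:n], key[n:2 * n], key[2 * n:]]

key = derive_key("my simple password")
p1, p2, p3 = split_key(key)
assert p1 + p2 + p3 == key  # all three parts reassemble the full key
```

An unlock then only succeeds once all three parts are collected again, which is what lets one part stay on the ZFS server while the others live on the web servers.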

That's the concept. Now it's time to test and fix the last bugs so that everything runs smoothly.
And then documentation.

enc.png

Manual:
www.napp-it.org/doc/downloads/napp-it_cs.pdf
 

gea

Reports and alerts now work in napp-it cs, e.g.

Reports of all servers (daily or weekly)
Pool and connection state

Code:
Statusreport job 1716393663 from my-w11 from 05.06.2024  16:00

################
Storage reports: r01,r02, r03, r04
################


#######
Report: r01#poolstatus#parse_zpool_status#SIL#AS.pl
#######
#######
Member: free_bsd_14~192.168.2.75
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.75 (192.168.2.75) >hi

No connection could be made because the target machine actively refused it.

#######
Member: localhost~127.0.0.1
#######
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  4.50G  1.68M  4.50G        -         -     0%     0%  1.00x  DEGRADED  -

  pool: tank
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sat May 25 13:02:38 2024
config:

    NAME                STATE     READ WRITE CKSUM
    tank                DEGRADED     0     0     0
      mirror-0          DEGRADED     0     0     0
        physicaldrive1  ONLINE       0     0     0
        physicaldrive2  FAULTED      3    76     0  too many errors

errors: No known data errors

#######
Member: omnio46~192.168.2.44
#######
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  39.5G  5.03G  34.5G        -         -     1%    12%  1.00x    ONLINE  -
tank   13.2G  1.91G  11.3G        -         -     0%    14%  1.00x    ONLINE  -

#######
Member: omni~192.168.2.203
#######
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
daten1  7.25T  4.06T  3.19T     390G         -    13%    55%  1.00x    ONLINE  -
nvme     372G   283G  88.6G        -         -    60%    76%  1.00x    ONLINE  -
rpool   34.5G  10.3G  24.2G        -         -    31%    29%  1.00x    ONLINE  -

#######
Member: openindiana~192.168.2.36
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.36 (192.168.2.36) >hi

No connection could be made because the target machine actively refused it.

#######
Member: osx~192.168.2.78
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.78 (192.168.2.78) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: proxmox_1~192.168.2.71
#######
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
z1    5.50G  2.65M  5.50G        -         -     0%     0%  1.00x    ONLINE  -

#######
Member: raspberry4~192.168.2.89
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.89 (192.168.2.89) >hi

No connection could be made because the target machine actively refused it.

#######
Member: smartos~192.168.2.96
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.96 (192.168.2.96) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: solaris~192.168.2.50
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.50 (192.168.2.50) >hi

No connection could be made because the target machine actively refused it.

#######
Member: truenas~192.168.2.72
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.72 (192.168.2.72) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: win2019-ad~192.168.2.124
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.124 (192.168.2.124) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: windows11-1~192.168.2.64
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.64 (192.168.2.64) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.




#######
Report: r02#joberror#parse_job_results#SIL#AS.pl
#######
jobs.log



#######
Report: r03#license#reminder#SIL#AS.pl
#######



#######
Report: r04#smart#temp_and_shortcheck#SI#AS.pl
#######
Or alerts (bad disk, or pool near full > 90%).
Alerts are repeated once per day:
Code:
Job 1716393712
Alertmessage:
Alertdate: 06.05.2024

member: localhost~127.0.0.1
  pool: tank
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sat May 25 13:02:38 2024
config:

    NAME                STATE     READ WRITE CKSUM
    tank                DEGRADED     0     0     0
      mirror-0          DEGRADED     0     0     0
        physicaldrive1  ONLINE       0     0     0
        physicaldrive2  FAULTED      3    76     0  too many errors

errors: No known data errors
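A "pool near full" alert like the one described above can be derived from the CAP column of zpool list output. A minimal sketch; the parsing and the 90% threshold placement are my assumptions, not the napp-it cs report scripts:

```python
# Illustrative sketch: find pools whose CAP percentage in `zpool list`
# output exceeds a threshold (default 90%, as in the alert above).
def pools_near_full(zpool_list: str, threshold: int = 90) -> list[str]:
    """Return pool names whose CAP column exceeds the threshold."""
    alerts = []
    lines = zpool_list.strip().splitlines()
    cap_idx = lines[0].split().index("CAP")  # locate CAP in the header
    for line in lines[1:]:
        fields = line.split()
        cap = int(fields[cap_idx].rstrip("%"))
        if cap > threshold:
            alerts.append(fields[0])  # pool name is the first column
    return alerts

sample = """NAME  SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
nvme  372G   283G  88.6G       -         -   60%  76%  1.00x  ONLINE  -
full  100G    95G     5G       -         -   10%  95%  1.00x  ONLINE  -"""
print(pools_near_full(sample))  # → ['full']
```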
 

gea

I have uploaded napp-it cs rc2

improved performance
improved jobs
suppress restart of jobs
encrypted filesystems with keysplit, sha256 pwhash (from an easy to remember pw) or prompt
some bugfixes

Installation not required, just download to Windows 10/11/Server, with or without ZFS on Windows and start:
- Download https://www.napp-it.de/doc/downloads/xampp.zip and unzip to c:\xampp
- Start web-gui as admin: c:\xampp\web-gui\data\start_zfs-gui_as_admin.bat (mouse right click)

- Upload c:\xampp\web-gui\data\cs_server\ to /var/web-gui on FreeBSD, Illumos, Linux/Proxmox, OSX, Solaris or Windows
or download cs_server online: curl https://www.napp-it.org/nappitcs | perl

- optionally: edit/create /var/web-gui/cfg/server.auth (auth string for access)
- Start cs_server: perl /var/web-gui/cs_server/start_server_as_admin.pl

- Open browser https://localhost (or ip from a remote machine)
- Add ZFS servers in menu ZFS Servergroup (with the same auth as on the server)

Compatibility
You can add a napp-it Illumos/Linux/Solaris server to napp-it cs.
You can continue replications in napp-it cs: recreate the job there with the same source, destination and jobid.