OpenZFS NAS (BSD, Illumos, Linux, OSX, Solaris, Windows + Storage Spaces) with napp-it web-gui


gea

Yesterday's ZFS on Windows 2.2.3 build had a problem with CPUID detection that could crash Windows during setup.

There is a new version that fixes this.
 

gea

I have uploaded a new nightly of the client/server ZFS GUI napp-it cs (Mar. 20)

- Allows large zfs list or get all outputs (up to several MB)
- Client/frontend web-ui app (Windows): copy and run
- Server/backend software (BSD, Linux, OSX, Solaris/Illumos, Windows): copy and run from any location like /var/web-gui, the desktop or a ZFS pool
- Jobs like snap or scrub run remotely; replication between any source and any destination memberserver is next
- GUI performance: very good I would say, especially as there is no local ZFS database, so modifications at the CLI remain possible

see napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris, ZFS Server for Windows
 

gea

There are new release candidates of Open-ZFS 2.2.3 for OSX and Windows.


btw
the more people test Open-ZFS 2.2.3 on OSX or Windows and report problems,
the faster the remaining installer, driver or integration problems get fixed.

ZFS 2.2.3 on OSX or Windows is largely identical to upstream Open-ZFS 2.2.3, so data security
should already be as good or bad as with ZFS on Linux.

Issues · openzfsonosx/openzfs
Issues · openzfsonwindows/openzfs
 

gea

To add a TrueNAS server to a servergroup:

- Enable SSH, allow root (sharing options) or SMB
- Copy napp-it cs_server to a filesystem dataset, e.g. tank/data (/mnt/tank/data); see the example below
- Open a root shell and enter:
perl /mnt/tank/data/cs_server/start_server_as_admin.pl
- Add TrueNAS to your servergroup (ZFS Servergroup > add)
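One possible way to do the copy step over the SSH access from step 1 (a sketch only; the IP and dataset path are examples, adjust to your box):

Code:
# from the machine that holds the cs_server folder
scp -r cs_server root@192.168.2.72:/mnt/tank/data/
# then on TrueNAS, in a root shell:
perl /mnt/tank/data/cs_server/start_server_as_admin.pl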

truenas.png

Anyone with a QNAP ZFS box?
Can you confirm a similar setup?
 

gea

Next step:

Replication from any server to any server is basically working
for local transfers and between some combinations. I need to check why it does not work on others.

repli.png

Howto:
- add all servers to menu ZFS Servergroup
- create replication jobs from any to any server (remote between hosts is not yet working for every combination)

If you start a job, it runs minimized, so you can open the cmd window to check
- click on "replicate" in the joblisting to check last runs
- click on the date of a replication job in the joblisting to see details of the last run
- enter rl (remotelog) or cl (commandlog) into the cmd field to get remote logs (rld or cld to delete)
- open menu System > Process List to see running processes on a selected server

Use the menus Pools, Filesystems and Snaps to check remote servers

Setup:
 

gea

napp-it cs beta, current state (Apr. 05)

Server groups with remote web management (BSD, Illumos, Linux, OSX, Windows): ok
ZFS (pool, filesystem, snap management): ok on all platforms
Jobs (snap, scrub, replication from any source to any destination): ok, except Windows as source or destination
(Windows as source works with nmap/netcat on Windows)
 

gea

How much RAM do I need for napp-it cs

RAM for a ZFS filer has no relation to pool or storage size (apart from dedup)!

Calculate 2 GB for a 64bit OS; add 1-2 GB for a Solaris based filer or 3-4 GB for a BSD/Linux/OSX/Windows based filer for minimal read/write caching, or ZFS can be really slow. RAM beyond that depends on the web-gui, the number of users or files, data volatility and the wanted storage performance. Add more RAM for diskbased pools than for SSD pools for good performance.
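As a quick worked example from these numbers:

Code:
minimal BSD/Linux filer (web-gui runs on another machine)
  64bit OS base               2 GB
  min. read/write cache     3-4 GB
  --------------------------------
  minimum total             5-6 GB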

With napp-it cs I suggest 8 GB for the Windows machine where the frontend web-gui is running, 16 GB if you additionally use ZFS on Windows on that machine.

For the ZFS filers that you want to manage with napp-it cs there is no additional RAM requirement for the server app. This means that napp-it cs can manage a Solaris/Illumos based ZFS filer with 2-3 GB RAM and a BSD/Linux/OSX/Windows ZFS filer with 4-6 GB RAM, which may even allow managing a small ARM filerboard with ZFS, like a Raspberry Pi with 2 GB or more, remotely with a ZFS web-gui (you only need ZFS and Perl on it).

If you use ZFS on such a board, you may try and report.
 

gea

Update:
The current napp-it cs now consists of three components (previously only the 1st and 2nd):

1. The web-gui frontend under Windows

This allows you to manage the server or servers via browser and http/https.

2. The server backend cs_server.

These are two Perl scripts that should run on every server where Perl is installed.
These scripts run with admin rights so that they can call zfs and zpool. The backend scripts are addressed by the frontend via a socket connection on port 63000. The connection requires authorization and can be limited to the IP of the frontend computer. Console commands and the corresponding answers are transmitted unencrypted.

3. Https server to transmit encrypted commands and responses via "callback"

A command such as zfs list is uploaded encrypted as a file to an https server. This can be the Apache server from napp-it cs, also with a self-signed certificate, or another https server with a valid, secure certificate. Curl is required to upload and download the commands and responses; it is usually included (also in Windows 10/11) or has to be installed, e.g. on FreeBSD 14 with pkg install curl. The module /cgi-bin/cs/cs-connect.pl is required on the https server. It should run on any CGI capable web server; under Linux/Unix, the first line of the script must be adjusted (path to Perl, /usr/bin/perl). This quite elegantly gets around the problem that a web-gui on the LAN usually only uses insecure https with a self-signed certificate. The disadvantage of an external https server is a slightly higher latency in the web-gui. I'll try to keep this tolerable with command caching.

Callback is generally used if the response to a command is extensive, because a socket connection has problems with that. To activate callback, enter the IP of the server in napp-it cs under About > Settings, e.g. www.university.org. You can also specify an https server in the server script to force only encrypted transmission.
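Roughly, the callback transport from the server side looks like this (an illustration only; the file names and form fields are hypothetical, only the module path /cgi-bin/cs/cs-connect.pl is from the description above):

Code:
# upload an encrypted command file to the CGI module
curl -k -F "data=@/tmp/cmd.enc" https://www.university.org/cgi-bin/cs/cs-connect.pl
# download the encrypted answer
curl -k -o /tmp/answer.enc "https://www.university.org/cgi-bin/cs/cs-connect.pl?get=answer"

(-k allows a self-signed certificate; with a valid public certificate it can be omitted)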

Current information: napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Downloads
 

gea

Napp-it cs is not quite finished yet but is now "feature complete" for the time being.
The current beta now has

- Alerts and reports (SSL/TLS)
- User management (create/delete)
- Encrypted file systems

The client/server architecture offers interesting options for encrypted file systems:
- Creation of file systems, optionally with a simple password from which a sha256hex hash is generated
- The password is split into three parts
- The password (complete and parts 1-3) is initially only saved on the computer with the web-gui, not locally on the ZFS servers.
Store/back up these password files (especially .komplettkey) safely and distribute parts 1-3 between the ZFS server and the web servers.

There are the following options for opening a file system:
- Providing the key files, either completely, or divided into parts 1-3 across local, w1 and w2.
Usually one part should be on the ZFS server and the other two parts on one or two https web servers.
You then either need both web servers to put all three parts together, or the web servers w1 and w2 both hold parts 2 and 3 of the key.
An unlock request on a ZFS server then makes it try to load all missing parts of the key from the web servers.
This allows decentralized ZFS management. Access to the web servers requires an auth key and can be restricted to IP. You can use the web-gui itself as the web server or public https servers with a valid certificate.
- Or you can open via the shorter pw (the base of the pw hash) or the whole passphrase; see the sketch below.
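To illustrate the key derivation (an illustration only; napp-it's exact scheme may differ in detail):

Code:
# a simple pw becomes a 64-character sha256hex key, e.g. on Linux:
echo -n "my simple pw" | sha256sum | awk '{print $1}'
# the web-gui stores the complete key (.komplettkey) and splits it into the three parts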

That's the concept. Now it's time to test and fix the last bugs so that everything runs smoothly.
And then documentation.

enc.png

Manual:
www.napp-it.org/doc/downloads/napp-it_cs.pdf
 

gea

Reports and alerts now work in napp-it cs, e.g.

Reports of all servers (daily or weekly)
Pool and connection state

Code:
Statusreport job 1716393663 from my-w11 from 05.06.2024  16:00

################
Storage reports: r01,r02, r03, r04
################


#######
Report: r01#poolstatus#parse_zpool_status#SIL#AS.pl
#######
#######
Member: free_bsd_14~192.168.2.75
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.75 (192.168.2.75) >hi

No connection could be made because the target machine actively refused it.

#######
Member: localhost~127.0.0.1
#######
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  4.50G  1.68M  4.50G        -         -     0%     0%  1.00x  DEGRADED  -

  pool: tank
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sat May 25 13:02:38 2024
config:

    NAME                STATE     READ WRITE CKSUM
    tank                DEGRADED     0     0     0
      mirror-0          DEGRADED     0     0     0
        physicaldrive1  ONLINE       0     0     0
        physicaldrive2  FAULTED      3    76     0  too many errors

errors: No known data errors

#######
Member: omnio46~192.168.2.44
#######
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  39.5G  5.03G  34.5G        -         -     1%    12%  1.00x    ONLINE  -
tank   13.2G  1.91G  11.3G        -         -     0%    14%  1.00x    ONLINE  -

#######
Member: omni~192.168.2.203
#######
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
daten1  7.25T  4.06T  3.19T     390G         -    13%    55%  1.00x    ONLINE  -
nvme     372G   283G  88.6G        -         -    60%    76%  1.00x    ONLINE  -
rpool   34.5G  10.3G  24.2G        -         -    31%    29%  1.00x    ONLINE  -

#######
Member: openindiana~192.168.2.36
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.36 (192.168.2.36) >hi

No connection could be made because the target machine actively refused it.

#######
Member: osx~192.168.2.78
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.78 (192.168.2.78) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: proxmox_1~192.168.2.71
#######
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
z1    5.50G  2.65M  5.50G        -         -     0%     0%  1.00x    ONLINE  -

#######
Member: raspberry4~192.168.2.89
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.89 (192.168.2.89) >hi

No connection could be made because the target machine actively refused it.

#######
Member: smartos~192.168.2.96
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.96 (192.168.2.96) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: solaris~192.168.2.50
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.50 (192.168.2.50) >hi

No connection could be made because the target machine actively refused it.

#######
Member: truenas~192.168.2.72
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.72 (192.168.2.72) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: win2019-ad~192.168.2.124
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.124 (192.168.2.124) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

#######
Member: windows11-1~192.168.2.64
#######
4629 Socketerror: sub socket on connect with ip: 192.168.2.64 (192.168.2.64) >hi

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.




#######
Report: r02#joberror#parse_job_results#SIL#AS.pl
#######
jobs.log



#######
Report: r03#license#reminder#SIL#AS.pl
#######



#######
Report: r04#smart#temp_and_shortcheck#SI#AS.pl
#######
or alerts (bad disk, or pool near full > 90%).
Alerts are repeated once per day.
Code:
Job 1716393712
Alertmessage:
Alertdate: 06.05.2024

member: localhost~127.0.0.1
  pool: tank
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sat May 25 13:02:38 2024
config:

    NAME                STATE     READ WRITE CKSUM
    tank                DEGRADED     0     0     0
      mirror-0          DEGRADED     0     0     0
        physicaldrive1  ONLINE       0     0     0
        physicaldrive2  FAULTED      3    76     0  too many errors

errors: No known data errors
 

gea

I have uploaded napp-it cs rc2

- improved performance
- improved jobs
- suppress restart of jobs
- encrypted filesystems with keysplit, sha256 pwhash (from an easy to remember pw) or prompt
- some bugfixes

Installation is not required, just download to Windows 10/11/Server, with or without ZFS on Windows, and start:
- Download https://www.napp-it.de/doc/downloads/xampp.zip and unzip to c:\xampp
- Start the web-gui as admin: c:\xampp\web-gui\data\start_zfs-gui_as_admin.bat (mouse right click)

- Upload c:\xampp\web-gui\data\cs_server\ to /var/web-gui on FreeBSD, Illumos, Linux/Proxmox, OSX, Solaris or Windows
or download cs_server online: curl https://www.napp-it.org/nappitcs | perl

- optionally: edit/create /var/web-gui/cfg/server.auth (auth string for access)
- Start cs_server: perl /var/web-gui/cs_server/start_server_as_admin.pl

- Open a browser at https://localhost (or the ip from a remote machine)
- Add ZFS servers in menu ZFS Servergroup (with the same auth as on the server)

Compatibility
You can add a napp-it Illumos/Linux/Solaris server to napp-it cs.
You can continue replications in napp-it cs; recreate the job there with the same source, destination and jobid.
 

gea

Windows NAS with Hyper-V, SMB direct, ntfs ACL, Storage Spaces, Refs (and ZFS)

In the current ZFS web-gui napp-it cs dev I am working to add full support for Windows Storage Spaces. At first I only wanted to make sure that disks used by a Windows storage pool could not be attached to a ZFS pool and vice versa.

I used to have a quite bad opinion of the Windows Storage Pool and Storage Spaces concept. But at second look I must say that it is very flexible, more so than realtime raid concepts like ZFS. The performance was often called bad, but that seems only true when you format Storage Spaces with the default 4K setting instead of a faster 64-512K setting with a lower space efficiency for smaller files. Thick provisioning instead of thin provisioning is another item where you choose between space efficiency and performance.

So why not add full support for Windows, mainly Windows 10/11 or Windows Server with SMB direct, the fastest SMB option with RDMA. Nearly everyone has or knows Windows, and it offers some premium features like Hyper-V, SMB direct/RDMA, superior ntfs ACL (over Posix ACL), very flexible pool handling with any disks (size and type) and data tiering. ReFS offers performance and the two main ZFS advantages, checksums and copy on write. The upcoming support of ZFS makes it a perfect combination. The reason that not too many think of a Windows NAS or AiO is the really bad usability, the very confusing naming around Storage Spaces and especially that you absolutely need Powershell to handle it. The GUI tools are not too helpful, not well organized and offer only very basic settings. It reminds me of my first steps with ZFS and the zfs and zpool commands at the console, except that instead of two commands you have several dozen. I will try to close the gap to ZFS NAS appliances on Linux/Unix with my web-gui.

So first a few basics of a Windows NAS

1. disks

Quite easy if you think of physical disks. But with Windows you also have virtual disks, see my menu Disks

1719952186067.png

Besides an SSD and a USB stick, you see iSCSI targets from my OmniOS fileserver (Comstar). A very special disk type are those with manufacturer Msft: these are filebased virtual disks intended for Hyper-V. They are fast, can be thin provisioned and can be sized up to over 50000 GB.

see menu Disks > Windows Management > Filebased Virtualdisks, where you can create or connect/disconnect them or check if they are part of a Windows pool.

1719952622816.png

If you always place filebased virtualdisks in a location like c:\vhdx, v:\vhdx or on SMB shares like \\ip\vhdx, you can handle them without additional configuration.
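For reference, creating and attaching such a disk in Powershell looks roughly like this (requires the Hyper-V Powershell module; path and size are examples):

Code:
New-VHD -Path C:\vhdx\disk1.vhdx -SizeBytes 1TB -Dynamic   # filebased, thin provisioned
Mount-VHD -Path C:\vhdx\disk1.vhdx                         # attach, shows up as a Msft disk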

Storage Spaces

This is not a diskbased raid concept, just a bunch of disks of different type or size. If you create Storage Spaces, you can define per Space whether you want redundancy or striping over different disks via data chunks (not disk redundancy); hotspares and data tiering are possible.
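In Powershell terms, redundancy is set when the Space is created, not on the pool (a sketch; pool/Space names and the size are examples):

Code:
New-VirtualDisk -StoragePoolFriendlyName pool1 -FriendlyName space1 `
  -ResiliencySettingName Mirror -Size 500GB -ProvisioningType Thin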

1719952989457.png

ZFS Raid
OpenZFS 2.2.3 on Windows has reached rc6.
Some compatibility problems with Windows remain, but it is more than ready for first tests. You can choose between ZFS filesystems with realtime dedup, encryption, compress, hybrid pools for small io, safe sync write, snapshots and replication with open files (these features are not in ReFS), or ReFS, which is faster than current ZFS.
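A minimal ZFS example on Windows (disk names as zpool status shows them in this thread; adjust to your system):

Code:
zpool create tank mirror physicaldrive1 physicaldrive2
zfs create -o compression=lz4 tank/data
zfs snapshot tank/data@first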

1719953331261.png
 

gea

ZFS Storagecluster

ZFS pools not over disks but over whole server nodes. This allows the complete failure of a node, and performance can scale over nodes.

Setup under Windows is ultra easy

1. Create an SMB share \\ip\vhdx on each node.
The server that manages the raid and shares (master, localhost) can be a node and needs r/w access to the other smb shares

2. Create a filebased virtual disk on each node, e.g. \\ip\vhdx\cs_server.vhdx, manually
or via napp-it cs (min 10GB, max 65000GB, can be thin provisioned); a sketch follows below
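A rough Powershell sketch of these two steps (share name, account, node IP and size are examples; napp-it cs does this via menus):

Code:
# on each node: share a folder for the virtual disks
New-SmbShare -Name vhdx -Path C:\vhdx -FullAccess "mydomain\admin"
# on the master (Hyper-V Powershell module): create and attach one disk per node
New-VHD -Path \\192.168.2.71\vhdx\cs_server.vhdx -SizeBytes 100GB -Dynamic
Mount-VHD -Path \\192.168.2.71\vhdx\cs_server.vhdx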

1720025389676.png

If the virtual disks are connected (in my example three nodes), you can create a ZFS pool over them, e.g. a 3way mirror over 3 network nodes.

1720025499493.png

What I have found:
on a node failure, the pool is degraded.
To reaccess, use Disk > Replace and replace the disk with itself.
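In zpool terms, replacing a disk with itself looks like this (a sketch; pool and disk names are examples):

Code:
zpool replace tank physicaldrive3   # resilver the rejoined node disk in place
zpool status tank                   # watch the pool return to ONLINE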

The nodes can be on any OS (I used OmniOS and Windows).
For SMB direct, the master must be Windows Server (e.g. free 2019).
Clients must support SMB direct, e.g. Windows 11.

You cannot mount a filebased disk simultaneously from two servers,
as the file is locked when connected. This avoids data corruption that can happen with other solutions, e.g. SAS clusters.

Of course you can use the same setup for Storage Spaces with redundancy over nodes.
 

gea

There are several aspects

1. ZFS
Given the security and features, ZFS is fast but not as fast as others like ntfs or ReFS.
Whether it can become faster, we will see; it is improving on Windows. Currently it is at beta release candidate 6.
A ZFS raid should help a little.

2. Network latency and CPU load
I have done a lot of tests in the past with iSCSI. In the end it was a decision of performance vs HA.
I decided to use multipath SAS for my HA config as it is much faster and easier to manage.

3. Usability
Network mirrors over FC or iSCSI are quite complicated.

The main item and first game changer now is usability: a cluster over SMB is nearly zero config when paired with vhdx files from Hyper-V.
The second game changer is SMB direct/RDMA, which can deliver much better network performance with ultra low cpu load.

I currently have no SMB direct capable 40Gb+ nics, but tests from Besterino have shown superior performance:

(use Chrome to translate on the fly)
 

gea

About Windows Storage Spaces

In the process of adding support for Storage Spaces to napp-it cs
I kept stumbling over the incoherent naming of Storage Spaces items.

Windows Storage Spaces can therefore be very confusing, mostly because of the different Windows tools, each of which only offers a limited set of options, the need to learn Powershell for proper settings, and a very inconsistent naming between docs, tools and Powershell commands. I will try to make Storage Spaces more manageable.

I decided to use the following meanings:

1. Physical Disks are HD, SSD, NVMe.

Powershell also lists virtual disks with the get-physicaldisk command

2. Virtual Disks are those based on a file (.vhdx).
In the docs, Storage Spaces are also often named virtual disks (very confusing)

3. Volumes and Partitions
This is what you see in Explorer, e.g. a NTFS, ReFS or ZFS disk, usually with a drive letter

4. Windows Storage Pool
This is a blackbox where you throw in your physical disks.
In the docs, use of this term is often mixed with Storage Space

5. Storage Spaces
This is a virtual device that is treated like a disk, as you can place partitions and volumes on it.
Redundancy is defined here (not on pool or disk level).
In the docs, Storage Spaces are also often named virtual disks (very confusing)

6. SMB Storage Cluster
This is a setup with a whole node treated like a physical disk. Connectivity is over SMB via vhdx files.
This setup works with Windows 10/11 and Server; Windows Server adds superior SMB Direct/RDMA.
This is different from a Microsoft S2D cluster, which is Windows Server only.
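A hedged mapping of these meanings to Powershell queries (real cmdlets; the vhdx path is an example, Get-VHD needs the Hyper-V module):

Code:
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, CanPool   # 1. also lists vhdx-backed disks
Get-VHD -Path C:\vhdx\disk1.vhdx                                    # 2. filebased virtual disks
Get-Volume                                                          # 3. volumes/partitions with drive letters
Get-StoragePool                                                     # 4. Windows storage pools
Get-VirtualDisk                                                     # 5. Storage Spaces ("virtual disks" in Powershell)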
 

gea

I now want to add tiering support to Windows Storage Spaces. As there is hardly another storage technology that is as flexible as Storage Spaces, I am looking for some basic rules of thumb to tame the beast.

First steps are quite easy with a web-gui (otherwise you need Powershell)

1. create a storage pool from disks with a different performance, e.g. hdd, ssd or scm
2. define tiers, e.g. an hdd tier optimized for mirrored spaces, a second hdd tier for parity spaces, and an SSD/SCM tier for mirrored spaces

This is the easiest part, as you only prepare the tiers for different sorts of Storage Spaces (e.g. mirrored or striped ones); a Powershell sketch follows below.
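The sketch (pool and tier names are examples; parameter availability varies per Windows version):

Code:
New-StoragePool -FriendlyName pool1 -StorageSubSystemFriendlyName "Windows Storage*" `
  -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-StorageTier -StoragePoolFriendlyName pool1 -FriendlyName hdd_mirror -MediaType HDD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName pool1 -FriendlyName hdd_parity -MediaType HDD -ResiliencySettingName Parity
New-StorageTier -StoragePoolFriendlyName pool1 -FriendlyName ssd_mirror -MediaType SSD -ResiliencySettingName Mirror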

1721139947495.png

Next step is complicated

You create a Storage Space that should use the tiers. Not too complicated in itself: set the size, the filesystem, e.g. Dev-Drive in Windows 11 (=ReFS), and the sizes of the tiers you want to use, but this is more trial and error; see the sketch below.
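Such a tiered Space could look like this in Powershell (names, sizes and the 64K allocation unit are examples, not a recommendation):

Code:
New-Volume -StoragePoolFriendlyName pool1 -FriendlyName space1 -FileSystem ReFS `
  -StorageTierFriendlyNames ssd_mirror, hdd_mirror -StorageTierSizes 100GB, 900GB `
  -AllocationUnitSize 65536   # 64K instead of the 4K default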

Is it an option to skip tier sizes?
This would mean: use what is there, because I do not want to split tiers over more spaces.

You also must set the Allocation Unit Size (default 4K; ntfs 4K-2048K, ReFS 4K-64K).
This is known to be performance relevant. The default 4K seems bad; an optimized value depends on the selected raid level and the number of disks, see Storage Spaces with parity, very slow writes. Solved!

Is there a value that is quite ok with most setups and use cases, like 64K in general,
or at most 2-3 values with a special setting, e.g. 64K with ReFS, 256K with ntfs and 3-5 disks, and 128K with more disks?


1721141174716.png

Other possible rules of thumb (ok or not ok in most cases)

- A writecache that uses a part of the fastest tier is not worth the effort?
- Use parity raid with Storage Spaces when capacity is more important than performance
- add yours


Pool maintenance

- As there is no diskbased raid, just replace the faulted disk (used automatically)
or insert a new disk and add it to the pool
- Pool optimize is needed when...



Please comment if you are more experienced with Storage Spaces than me.
 

gea

I have added some new ideas to my napp-it cs (for BSD, Linux, Solaris and Windows) and ZFS encryption

Create ZFS filesystems:
- use sha256-hex to create keys (no confusion with oO0iIlL characters in keys)
- can generate a sha256 hash from a short, easy to remember pw as key

https keyserver
- either the included Apache on Windows, or your company/university webserver with public certificates
- passphrase and ip range protected

3way keysplit (each part 20 char)
- distribute keyparts on the local ZFS server and/or webserver w1 or w2
- HA/failover if w1 or w2 is offline
- unlock with distributed keys or directly via key or a simple/short pw
- key overview of files local/w1/w2

https://www.napp-it.de/doc/downloads/napp-it cs encryption.pdf
 

gea

About ZFS Dedup
Realtime dedup is one of the killer features of ZFS, as it can massively reduce the required capacity for redundant datablocks. This can also increase performance, as there is less data that must be written to or read from the pool. While dedup can be enabled per filesystem, it creates dedup tables that work poolwide. Once enabled, it cannot simply be disabled (existing dedup tables remain).

Traditional Dedup
Besides dedup2 in native Oracle Solaris ZFS, dedup was in the past a feature that should be avoided in most cases, as it can eat up all RAM with a catastrophic impact on performance; even with smaller dedup tables it affected performance negatively, without options to limit or clean up the dedup tables. A special dedup vdev can limit the negative impact on RAM usage and performance.

Fast Dedup in Open-ZFS
The new Fast Dedup feature can be a gamechanger, as it may allow enabling dedup in most cases just like compress, with mostly more advantages than disadvantages: RAM usage, DDT sizes, DDT cleanup and overall performance have seen massive improvements.
You can enable Fast Dedup in menu Pools > ZFS > Features (newest Open-ZFS)

Fast dedup pool properties:
dedup_table_size: the total size of all DDTs on the pool
dedup_table_quota: the maximum possible size of all DDTs in the pool

When set, the quota will be enforced by checking when a new entry is about to be created. If the pool is over its dedup quota, the entry won't be created, and the corresponding write will be converted to a regular non-dedup write. Note that existing entries can be updated (i.e. their refcounts changed), as that reuses the space rather than requiring more.

dedup_table_quota can be set to auto, which will set it based on the size of the devices backing the "dedup" allocation class. This makes it possible to limit the DDTs to the size of a dedup vdev only, such that when the device fills, no new blocks are deduplicated.
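Once a pool runs an Open-ZFS with Fast Dedup, the settings map to commands like these (pool/filesystem names and the quota value are examples):

Code:
zfs set dedup=on tank/data                            # enable dedup per filesystem
zpool set dedup_table_quota=10G tank                  # or =auto with a dedup vdev
zpool get dedup_table_size,dedup_table_quota tank     # check DDT size against the quota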

Fast Dedup is currently in the Open-ZFS 2.2.6 master and will be available soon (already in Open-ZFS on Windows 2.2.6 rc for first tests). Without Fast Dedup (Open-ZFS) or Dedup2 (Solaris), dedup should be avoided in most cases.

Fast Dedup settings in the next napp-it cs, with Open-ZFS 2.2.6 rc1 on Windows or any other ZFS server once Fast Dedup becomes available:

1725656873793.png
 