TUTORIAL: Beauty by simplicity, OR one ZFS Snapshot used by 5 Layers of Applications


rootgremlin

Member
Jun 9, 2016
This is about the ZFS filesystem and how there is no need to stack multiple layers of ZFS and snapshots; instead, all the functionality comes from a single set of ZFS snapshots on the Proxmox host.


To achieve this fabulous glory of software engineering, I utilized these projects:
cv4pve-autosnap and
Zamba Fileserver on LXC
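
For anyone following along: installing cv4pve-autosnap basically boils down to dropping its single executable somewhere in the PATH. A minimal sketch, assuming you grab the current Linux build from the project's GitHub releases page (the file name below is only a placeholder for whatever the current version is):

Bash:
# download the current Linux build of cv4pve-autosnap from the GitHub releases page
# (file name is a placeholder -- pick the actual version/architecture there)
wget https://github.com/Corsinvest/cv4pve-autosnap/releases/download/vX.Y.Z/cv4pve-autosnap-linux-x64.zip

# unpack the single executable into a directory that is in the PATH of the cron job below
unzip cv4pve-autosnap-linux-x64.zip -d /usr/local/bin/
chmod +x /usr/local/bin/cv4pve-autosnap

# quick sanity check
cv4pve-autosnap --help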

After installing cv4pve-autosnap I configured cron with the following job file:
root@pve0:~# cat /etc/cron.d/PMSnapshot

Bash:
PATH=/usr/bin:/bin:/usr/local/bin/

SNAP_HOST="127.0.0.1"
SNAP_TOKEN='snapshot@pam!SNAP=xxxxxxxx-YOUR-TOKEN-iD-HERE-xxxxxxxxx'

# "all" for all VMs, exceptions with "-123" possible, or just the following VMs: "123,124"
SNAP_VMID="@all-pve0,-1011,-2022,-2035"

SNAP_KEEP_HOURLY=7
SNAP_KEEP_DAILY=13
SNAP_KEEP_WEEKLY=12
SNAP_KEEP_MONTHLY=3

# minute (0-59) | hour (0-23) | day of month (1-31) | month (1-12) | day of week (0-7, 0 and 7 = Sunday) | user | command


# every 3 hours (midnight is covered by the daily/weekly/monthly jobs below)
0 3,6,9,12,15,18,21    *    *    *    root    cv4pve-autosnap --host="$SNAP_HOST" --api-token="$SNAP_TOKEN" --vmid="$SNAP_VMID" snap --label="_hourly_" --keep="$SNAP_KEEP_HOURLY" > /dev/null

# weekly -> Sunday; daily -> Monday-Saturday
0 0    2-31    *    *    root    [ "$(date +\%u)" = "7" ] && cv4pve-autosnap --host="$SNAP_HOST" --api-token="$SNAP_TOKEN" --vmid="$SNAP_VMID" snap --label="_weekly_" --keep="$SNAP_KEEP_WEEKLY" > /dev/null || cv4pve-autosnap --host="$SNAP_HOST" --api-token="$SNAP_TOKEN" --vmid="$SNAP_VMID" snap --label="_daily_" --keep="$SNAP_KEEP_DAILY" > /dev/null

# monthly
0 0    1    *    *    root    cv4pve-autosnap --host="$SNAP_HOST" --api-token="$SNAP_TOKEN" --vmid="$SNAP_VMID" snap --label="_monthly_" --keep="$SNAP_KEEP_MONTHLY" > /dev/null
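
The SNAP_TOKEN used above is a normal Proxmox API token. If you do not have one yet, something along these lines on the PVE host should do; user name, token id and role are only examples, anything that carries the VM.Audit and VM.Snapshot privileges works:

Bash:
# dedicated user plus API token for snapshotting (names are examples)
pveum user add snapshot@pam --comment "cv4pve-autosnap"
pveum user token add snapshot@pam SNAP --privsep 0

# grant enough rights to list guests and create/remove their snapshots
pveum acl modify / --users snapshot@pam --roles PVEVMAdmin

# the secret printed by "token add" goes into SNAP_TOKEN as snapshot@pam!SNAP=<secret>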
This generates ZFS snapshots on the hypervisor host ...

Bash:
root@pve0:~# zfs list -t snapshot tank/ssd/subvol-2550-disk-1
NAME                                                    USED  AVAIL     REFER  MOUNTPOINT
tank/ssd/subvol-2550-disk-1@auto_monthly_220701000140    180K      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_monthly_220801000118      0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_weekly_220807000142       0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_weekly_220814000122       0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_weekly_220821000117       0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_weekly_220828000142       0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220830000120        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220831000046        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_monthly_220901000136      0B      -     25.6G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220902000103        0B      -     25.6G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220903000120        0B      -     25.6G  -
tank/ssd/subvol-2550-disk-1@auto_weekly_220904000107       0B      -     25.6G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220905000106        0B      -     25.6G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220906000120        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220907000118        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220908000127        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220909000134        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220910000151        0B      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_weekly_220911000110      80K      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220912000152      160K      -     21.0G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220913000114      168K      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220913150119       0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220913180146       0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220913210122       0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_daily_220914000148        0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220914030149       0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220914060139       0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220914090145       0B      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_hourly_220914120120     160K      -     21.1G  -
tank/ssd/subvol-2550-disk-1@auto_manually_220914133115     0B      -     21.1G  -
and integrates them inside the Proxmox GUI.

Proxmox_Snapshots_LXFS.png
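
If you prefer the shell over the GUI, the same snapshots should also be visible through the Proxmox CLI tools; a quick check against the container from the zfs listing above (ID 2550):

Bash:
# snapshots of the LXC container as Proxmox sees them
pct listsnapshot 2550

# the equivalent for a qemu VM would be
qm listsnapshot <vmid>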


With the help of the Zamba toolbox and scripts, I configured an AD-integrated fileserver as an LXC container and set it up to use those same snapshots and present them to the users of the fileserver container.

For that to work, you must use the following "shadow" config parts inside the smb.conf of your LXC fileserver container:

INI:
[global]
    workgroup = XXXXXX
    security = ADS
    realm = XXXXXX.LOCAL
    server string = %h server
    vfs objects = acl_xattr shadow_copy2
    map acl inherit = Yes
    store dos attributes = Yes
    idmap config *:backend = tdb
    idmap config *:range = 3000000-4000000
    idmap config *:schema_mode = rfc2307
    winbind refresh tickets = Yes
    winbind use default domain = Yes
    winbind separator = /
    winbind nested groups = yes
    winbind nss info = rfc2307
    pam password change = Yes
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    passwd program = /usr/bin/passwd %u
    template homedir = /home/%U
    template shell = /bin/bash
    bind interfaces only = Yes
    interfaces = lo ens18
    log level = 1
    log file = /var/log/samba/log.%m
    max log size = 1000
    panic action = /usr/share/samba/panic-action %d
    load printers = No
    printcap name = /dev/null
    printing = bsd
    disable spoolss = Yes
    allow trusted domains = Yes
    dns proxy = No

####### supplies ZFS snapshots as Windows "Previous Versions"
####### snapshot naming is set by the cv4pve-autosnap package

    shadow: snapdir = .zfs/snapshot
##### DO NOT set localtime = yes, there is currently a bug in shadow_copy2 (see note below)
    shadow: localtime = no
    shadow: sort = desc
    shadow: format = ly_%y%m%d%H%M%S
    shadow: snapprefix = ^auto_\(manual\)\{0,1\}\(month\)\{0,1\}\(week\)\{0,1\}\(dai\)\{0,1\}\(hour\)\{0,1\}$
    shadow: delimiter = ly_

[data]
    comment = Main Share
    path = /tank/data
    read only = No
    create mask = 0660
    directory mask = 0770
    inherit acls = Yes
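
Two quick checks from inside the container to see that the pieces fit together, assuming /tank/data is the dataset mountpoint as in the share definition above:

Bash:
# does Samba accept the config, and did the shadow_copy2 parameters make it in?
testparm -s 2>/dev/null | grep shadow

# the snapshots the module exposes are just read-only directories under the share
ls /tank/data/.zfs/snapshot/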

A bug in the Samba module "shadow_copy2" means that only localtime = no works, which in my environment (UTC+1 plus DST, i.e. UTC+2) shifts the timestamps shown in the Windows "Previous Versions" listing by two hours.

But now the users can also access and use the very same ZFS snapshots that were created by cron / cv4pve-autosnap / Proxmox / the admin on the Proxmox host.

Previous_Versions.png
Previous_Versions_ALL.png

So the same ZFS snapshot is utilized and accessed by 5 levels of "user" permissions and applications:
1. The Linux ZFS filesystem on the hypervisor
2. Proxmox inside its web GUI
3. The Linux container natively accessing the ZFS mountpoint
4. The AD-integrated Samba instance running inside the container
5. The user accessing the Windows fileshare and the "Previous Versions" dialog

Is this cool, or what?

The added benefit is that the whole fileserver, with all its data and access permissions, can be fully:
1. Backed up
2. Replicated (see the send/receive sketch below)
3. Restored
while staying extremely light on resources and disk space.
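
For the replication part, plain zfs send/receive of the container's subvol is all that is needed; a rough sketch, where the target host and pool are placeholders and the snapshot names are taken from the autosnap listing above:

Bash:
# initial full copy of the container dataset to a second box (target names are examples)
zfs send tank/ssd/subvol-2550-disk-1@auto_daily_220913000114 \
    | ssh backuphost zfs receive -u backup/subvol-2550-disk-1

# afterwards only the increments between two autosnap snapshots have to travel
zfs send -I @auto_daily_220913000114 tank/ssd/subvol-2550-disk-1@auto_daily_220914000148 \
    | ssh backuphost zfs receive -u backup/subvol-2550-disk-1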
 

oneplane

Well-Known Member
Jul 23, 2021
I imagine this still works with zfs send/receive, considering it's pretty much a 'normal' ZFS version used on the PVE host :cool:
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
The OP has already removed any doubt about why ZFS is the ultimate nerd filesystem of choice.

"Slightly" tangentially related: there is a new video out by tekwendell with Allan, where they chat about new ZFS features in the 2nd half, and in the 1st half about how the shot ZFS installation of Linus of LTT was pretty much rescued, and how zdb was improved by the authors to cope with such a pathological case:

Turns out you can get 90% of your data back, even if the backplane corrupts data when stressed, various drives die, and nobody ever bothers to scrub pools or look at ZED daemon event notifications. For years.
 