ZFS Clone job


ARNiTECT

Hi,
I am looking for some direction on creating a job in napp-it for Cloning file systems.

I would like to set up a VM to download and continually update a folder/drive of Win10 games, and for that folder/drive to be available to a number of other virtual and physical machines, each able to read and write to it without corrupting the others' copies. I expect this will accumulate into quite a large quantity of data and I would prefer not to store multiple copies of it. I should probably avoid deduplication, as it rarely seems to be the answer.

I thought ZFS clones, which take up very little space initially, might be the way to do this, refreshed with a napp-it Jobs > Other > 'Create a job' entry scheduled once a day, early in the morning.

I understand most games are OK being located on mounted SMB network shares; however, some require iSCSI to trick the game into thinking the drive is local. Also, the games could perhaps be split between the NVMe and HDD pools, depending on the performance required for each game. I'm not very familiar with iSCSI, but I tried it recently and was pleased with the speed over 10GbE; however, iSCSI has its drawbacks compared with the flexibility of SMB/NFS, so a mixture might be the answer.

I'm hoping all game saves are stored on the user's local profiles, or cloud saves, and that their games folder/drive can be destroyed and replaced with new data each night. I'll test this when I next get an opportunity.

It could go like this:
- Updater-VM (not used for gaming) writes to an SMB folder or iSCSI target all day, except for a window early in the morning when it is scheduled not to write
- In napp-it: Jobs > Other > 'Create a job'
- Take a snapshot of the original SMB and iSCSI File Systems
- Make Clones from the original snapshot for each machine requiring access, and set to r/w
- Mount the Clones as SMB shares or iSCSI targets
- Each machine's SMB share is updated and they are reconnected to their iSCSI target with full read/write permissions for use throughout the day
- At the next scheduled snapshot of the original file system, the existing clones are destroyed and new clones are created containing the updated data
- The cycle repeats as scheduled, or manually if required
 

ARNiTECT

I've not written a bash script before, but I'm reading up on trying something simple. So far I have this:

SMB
# zfs destroy -R Pool1/Directory1/VM1/SMB1@clonesnap
# zfs snapshot Pool1/Directory1/VM1/SMB1@clonesnap
# zfs clone Pool1/Directory1/VM1/SMB1@clonesnap Pool1/Directory1/VM2/SMB2
# zfs set sharesmb=on Pool1/Directory1/VM2/SMB2

The -R on the first line destroys the daily snapshot and all existing child clones that depend on it.
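
Since each client machine would get its own clone of the same snapshot, a small loop might cover them all. This is only an untested sketch along the same lines; VM3/VM4 and the dataset layout are made-up placeholders:

# snap=Pool1/Directory1/VM1/SMB1@clonesnap
# zfs destroy -R $snap #removes the old snap and every clone depending on it
# zfs snapshot $snap #fresh snap of the parent
# for vm in VM2 VM3 VM4; do zfs clone $snap Pool1/Directory1/$vm/SMB2; zfs set sharesmb=on Pool1/Directory1/$vm/SMB2; done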

iSCSI
# zfs destroy -R Pool1/Directory1/VM1/LUN1@clonesnap
# zfs snapshot Pool1/Directory1/VM1/LUN1@clonesnap
# zfs clone Pool1/Directory1/VM1/LUN1@clonesnap Pool1/Directory1/VM2/LUN2
# zfs set shareiscsi=on Pool1/Directory1/VM2/LUN2

...I need to look into setting up iSCSI by CLI a lot further
 

gea

ARNiTECT said:
I've not written a bash script before, but I'm reading up on trying something simple. So far I have this:

# zfs set shareiscsi=on Pool1/Directory1/VM2/LUN2

...I need to look into setting up iSCSI by CLI a lot further
Shareiscsi is outdated; you need to use Comstar (which is also much faster than the old shareiscsi).

You must create 3 items for Comstar iSCSI (rough CLI sketch below):
- a logical unit (disk image, a ZFS dataset treated like a block device/disk)
- a target (a client connects to a LUN in a target)
- a view from the logical unit to the target (makes the LU visible in the target to a client initiator)
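
In rough CLI terms it could be something like this (just a sketch with example names, not the exact napp-it steps; the napp-it Comstar menus do the equivalent):

# svcadm enable -r svc:/network/iscsi/target:default #iSCSI target service must run
# zfs create -V 100G Pool1/Directory1/VM1/LUN1 #zvol used as the disk image
# stmfadm create-lu /dev/zvol/rdsk/Pool1/Directory1/VM1/LUN1 #logical unit, prints its GUID
# itadm create-target #target the client initiator connects to
# stmfadm add-view <GUID> #view: makes the lu visible to initiators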
 

ARNiTECT

Thanks Gea, I'll have a read through.
I have previously set up iSCSI using your helpful napp-it Comstar menu, so I would like my script to be in line with this.

Do you have any future plans to add a 'Clones' Job menu, for automated create/destroy/share? Perhaps, for example, this might be useful for live VM backup purposes.
 

gea

A ZFS Snap is like a sudden power unplug. Some VMs do not like this.

For consistent VM backups, you need either a backup tool like VEEAM or consistent snaps like I create with Jobs > ESXi hotsnaps, which includes an ESXi hot snap in every ZFS snap. You can then save/restore an (NFS3) filesystem via replication. After a restore of a ZFS snap you can always activate the included ESXi hot memory snap.
 

ARNiTECT

I've just read your ESXi hot-snaps > help ...I should use this.
I never got round to finishing the VEEAM setup (free version). I've been shutting down my VMs before each replication backup, which means backups happen less often than I would like.

Prior to these recent experiments I hadn't needed ZFS clones, and I haven't thought about what other uses there could be for scheduled clone jobs.
 

ARNiTECT

iSCSI has been more difficult to set up than SMB, but the following seems to work:

iSCSI
# LU=$(sbdadm list-lu | grep -w 'LUN2' | cut -f1 -d' ') #GUID of the existing cloned LU
# stmfadm delete-lu $LU #remove the old logical unit
# zfs destroy -R Pool1/Directory1/VM1/LUN1@clonesnap #destroy the old snap and dependent clones
# zfs snapshot Pool1/Directory1/VM1/LUN1@clonesnap
# zfs clone Pool1/Directory1/VM1/LUN1@clonesnap Pool1/Directory1/VM2/LUN2
# sbdadm create-lu /dev/zvol/dsk/Pool1/Directory1/VM2/LUN2 #new LU on the cloned zvol
# GUID=$(sbdadm list-lu | grep -w 'LUN2' | cut -f1 -d' ') #GUID of the new LU
# stmfadm add-view -t tg2 -n 1 $GUID #view in target group tg2, LUN 1

This replaces the current Cloned zvol/lu/view with an updated Cloned zvol/lu/view.

The problem is now with the Windows 10 iSCSI initiator. The new clone appears under 'Disk Management' as 'Offline' and has to be set to Online manually. I am investigating whether there is a way to automatically 'Online' the updated clone, or maybe it is possible in OmniOS to use the same LU name as the previous clone, or to rename it afterwards?
 

gea

You can try to import/recreate the LU with the old GUID:
stmfadm create-lu -p guid=$guid /dev/zvol/rdsk/$zvol

Like I do in napp-it in Comstar > Logical Unit > Import

see my menu script "/var/web-gui/data/napp-it/zfsos/09_Comstar iscsi=-lin/02_Logical Units/05_import LU/action.pl"

In napp-it I use the GUID as part of the LU name to make an import with the old GUID easier, e.g. after a zvol restore via replication.
 

ARNiTECT

Thanks Gea,

I have tried using the GUID from the to-be-destroyed clone for the GUID of the new updated Clone, and now the LU's GUID is the same before and after. The LU of the clone is different from the LU of the original parent.

This time I started by using the iSCSI options in the 'ZFS File Systems' menu, instead of going manually through Comstar. After creating the 1:1 zvol/lu/target/tg/view, I then set up the clone, which seems to work well. Finally, I went through the process of replacing the clone from a more recent snap.

# zvolp="Pool1/Directory1/VM1/iscsi_1644183296" #parent
# cnum="_1" #clone number
# zvolc=$zvolp$cnum #clone name
# zvols=$zvolp"@clonesnap" #snapshot
# lualias=${zvolp%/*}$cnum
# guid=$(sbdadm list-lu | grep /dev/zvol/rdsk/$zvolc | cut -f1 -d' ')
# stmfadm delete-lu $guid
# zfs destroy -R $zvols
# zfs snapshot $zvols
# zfs clone $zvols $zvolc
# stmfadm create-lu -p guid=$guid /dev/zvol/rdsk/$zvolc
# stmfadm modify-lu -p alias=$lualias $guid
# stmfadm modify-lu -p wcd=false $guid
# stmfadm add-view -t $lualias -n 0 $guid

In Windows, the cloned drive appears as if nothing has happened: the content has not been updated, but attempting to open files gives "Cannot find the file". If I Offline and then Online the disk, the updated contents become visible. Maybe this is a caching issue in Windows?
Even this only works as long as I am not connected to both the original target and the clone at the same time. If I am connected to both, the clone won't show the updated content, even after disconnecting/reconnecting the target. This should be OK, as I won't need access to both targets at once; I would just like the update to happen without having to Offline/Online the drive.
 

gea

iSCSI is not a file sharing protocol like NFS or SMB. It is like a locally connected hard disk. From the Windows point of view, you must disconnect and reconnect to use a "different" hard disk.

Maybe similar to Setup iSCSI on Windows Server with Windows PowerShell

If you want something that "hot" shows different content, you may use SMB shares, where you switch off the share of an old clone and recreate the same sharename on another cloned filesystem.
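
A rough sketch of that switch, with made-up dataset names (the sharesmb property on OmniOS also accepts name=... to set the sharename):

# zfs set sharesmb=off Pool1/Directory1/VM2/games_old #drop the old share
# zfs set sharesmb=name=games Pool1/Directory1/VM2/games_new #new clone appears under the same sharename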
 

ARNiTECT

I'll be sharing software primarily with SMB (I might also try symbolic links) and only use iSCSI when required. I was hoping it would be enough to trick Windows into thinking it had the same locally connected hard disk after a brief loss of connection.
Thanks for the link. I started looking into using PowerShell. This command works when the drives are Offline:
Get-Disk | Where-Object IsOffline -Eq $True | Set-Disk -IsOffline $False
Using PowerShell to Offline/Online is something I can schedule, shortly after the napp-it scheduled snapshot & clone.
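
For example, something like this might register such a daily task on the Windows client (untested sketch; the task name and time are placeholders):

$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-Command "Get-Disk | Where-Object IsOffline -Eq $True | Set-Disk -IsOffline $False"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "Online cloned disks" -Action $action -Trigger $trigger -RunLevel Highest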
 

gea

Maybe you can configure Windows remotely or start scripts from OmniOS via SSH or vice versa
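
For example, if the Windows OpenSSH server were enabled, the end of the clone job on OmniOS might call a script placed on the client (hostname and script path are just placeholders):

# ssh gamer@win10-client powershell -File C:/scripts/online-disks.ps1 #runs the Offline/Online PowerShell on the client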
 

ARNiTECT

gea said:
Maybe you can configure Windows remotely or start scripts from OmniOS via SSH or vice versa
Thanks for the link, I haven't used an SSH server on Windows before. This might allow a better solution in which the client VMs request an updated clone of the parent VM as required, rather than it being scheduled daily regardless of need. The snaps of the parent VM would still need to be scheduled for a time when it is not being updated.

PowerShell for Offline/Online of iSCSI disks only:
Get-Disk | Where-Object -FilterScript {$_.BusType -Eq "iSCSI"} | Set-Disk -IsOffline $True; Get-Disk | Where-Object -FilterScript {$_.BusType -Eq "iSCSI"} | Set-Disk -IsOffline $False