Recent content by ARNiTECT

  1. Napp-it Offsite Drives, Keep-Hold, Source-Target

    This works great! I returned from holiday, booted up my server, and the next day noticed 3x HDDs had failed/degraded in an 8x HDD Z2 pool. It was a disaster recovery pool, so I just replaced the dead drives, recreated the pool with the same name and, using the new keyword filter feature, I...
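
    A rough sketch of the rebuild, with placeholder pool and disk names:
    # destroy the degraded pool, then recreate it under the same name
    zpool destroy TankB1
    zpool create TankB1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0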
  2. Troubleshooting GPU passthrough ESXi 6.5

    I initially had issues getting 2x GPUs + iGPU to pass through. You could try an older host BIOS. I am stuck on an old BIOS for my Supermicro X11-SCA-F, as newer ones won't even boot with the 3 GPUs. The order in which the GPUs are set in the BIOS made a difference for me: Load Optimised Defaults...
  3. Napp-it Offsite Drives, Keep-Hold, Source-Target

    Thanks for the update, Gea. We've just gone away on holiday and my servers are off; I'll have to wait a couple of weeks to check this out. In 22.03 I remember filter options for idle_active, idle_manual, etc.; have you added an additional keyword filter? So I could type in TankB1 and it would...
  4. Napp-it Offsite Drives, Keep-Hold, Source-Target

    Looks like that is the safe option. I currently filter removable backup replication jobs by 'idle_manual'; it would be really useful to be able to filter by keyword, similar to the snapshot menu, or perhaps order by replication list headings, such as 'Opt2/ to'. Do you have any plans for such a...
  5. Napp-it Offsite Drives, Keep-Hold, Source-Target

    This is what I thought was happening. I set all jobs with Hold:20s to retain 20 snaps. The snaps appear to be there, but it still fails.
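
    To double-check what the job retained, a quick listing helps (the grep pattern assumes napp-it's repli_zfs snap naming):
    # list replication snapshots, oldest first
    zfs list -t snapshot -o name,creation -s creation | grep repli_zfs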
  6. Napp-it Offsite Drives, Keep-Hold, Source-Target

    This is how I had hoped it would work using pools with the same name in replication jobs:
    First Pool, Initial ZFS replication:
    Source: Tank1/FS1
    Target: TankB1/FS1
    Snaps on success:
    Tank1/FS1@jobnumber_repli_zfs_server_nr_1
    TankB1/FS1@jobnumber_repli_zfs_server_nr_1
    First Pool, Incremental ZFS...
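
    In plain ZFS terms, the incremental step would look something like this (a sketch; the _nr_2 name is assumed for the next snap pair):
    # incremental send from the last common pair to the new snap
    zfs send -i Tank1/FS1@jobnumber_repli_zfs_server_nr_1 \
        Tank1/FS1@jobnumber_repli_zfs_server_nr_2 \
      | zfs receive -F TankB1/FS1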
  7. Napp-it Offsite Drives, Keep-Hold, Source-Target

    Thanks for the suggestion, Gea; I had hoped it would be possible using the same replication jobs for simplicity. I'm sure I had this working for a while before. Unfortunately I have 22 filesystems to back up and I don't want to back up entire pools. If I do have to set this up as 2 separate sets...
  8. Napp-it Offsite Drives, Keep-Hold, Source-Target

    I have 4x 2.5” backup HDDs, which I split into 2 pairs. I rotate a pair from the server with a pair offsite. I would like some help on Keep/Hold settings to ensure there is always a snap pair for replication. I would like to just keep 2 replication snaps on the Target pools (as default, in...
  9. Troubleshooting GPU passthrough ESXi 6.5

    Just popping in to see if there has been any update on the situation recently. I'm still using the solution from post #245 with scripts for shutdown/start-up. I'm using ESXi 7U2 and 3x Win10 VMs with GPUs, primarily for gaming: 1. Nvidia Quadro RTX 4000 with shutdown/startup script, drivers...
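
    The shutdown/start-up side is just a small ESXi shell script along these lines (a sketch; VM id 1 is a placeholder, found via vim-cmd vmsvc/getallvms):
    # request guest shutdown and wait for power-off; power on again later
    vim-cmd vmsvc/power.shutdown 1
    while vim-cmd vmsvc/power.getstate 1 | grep -q "Powered on"; do sleep 5; done
    vim-cmd vmsvc/power.on 1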
  10. ZFS Clone job

    Thanks for the link; I haven't used an SSH server on Windows before. This might allow for a better solution where the client VMs request an updated clone of the parent VM as required, rather than it being unnecessarily scheduled daily. The snaps of the parent VM would still need to be scheduled at a...
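
    The trigger itself could be a one-liner on the client VM (hypothetical host and script names; assumes the OpenSSH client bundled with Windows 10):
    # ask the storage server to refresh the clone on demand
    ssh root@nappit-server "/root/refresh_clone.sh"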
  11. ZFS Clone job

    I'll be sharing software primarily with SMB (might also try symbolic links) and only using iSCSI when required. I was hoping it would be enough to trick Windows into thinking it had the same locally connected hard disk after a brief loss of connection. Thanks for the link. I started looking into using...
  12. ZFS Clone job

    Thanks Gea, I tried using the GUID from the to-be-destroyed clone as the GUID of the new, updated clone, and now the LU's GUID is the same before and after. The LU of the clone is different from the LU of the original parent. This time, I started with using the iSCSI options on the 'ZFS...
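
    For reference, the GUID reuse boils down to something like this (a sketch; the LU alias and zvol path are placeholders):
    # capture the old clone's GUID before deleting its LU
    GUID=$(sbdadm list-lu | grep -w 'LUN2' | cut -f1 -d' ')
    stmfadm delete-lu $GUID
    # ...destroy and recreate the clone zvol, then re-import it under the same GUID
    stmfadm create-lu -p guid=$GUID /dev/zvol/rdsk/Pool1/Directory1/VM1/LUN2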
  13. ZFS Clone job

    iSCSI has been more difficult to set up than SMB, but the following seems to work:
    # LU=$(sbdadm list-lu | grep -w 'LUN2' | cut -f1 -d' ')
    # stmfadm delete-lu $LU
    # zfs destroy -R Pool1/Directory1/VM1/LUN1@clonesnap
    # zfs snapshot Pool1/Directory1/VM1/LUN1@clonesnap
    # zfs clone...
  14. ZFS Clone job

    I've just read your ESXi hot-snaps > help ...I should use this. I never got round to finishing the Veeam setup (free version). I've been shutting down my VMs before replication backups and, consequently, backing up less often than I would like. Prior to these recent experiments, I've not had the need for ZFS...
  15. ZFS Clone job

    Thanks Gea, I'll have a read through. I have previously set up iSCSI using your helpful napp-it Comstar menu, so I would like my script to be in line with this. Do you have any future plans to add a 'Clones' job menu for automated create/destroy/share? Perhaps, for example, this might be useful...
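
    Something like the following is what I have in mind for a clone job (a rough sketch with placeholder names, sharing via SMB rather than iSCSI):
    # refresh a writable clone of the parent filesystem and share it
    zfs destroy -R Pool1/VM1@clonesnap          # also drops the old clone
    zfs snapshot Pool1/VM1@clonesnap
    zfs clone Pool1/VM1@clonesnap Pool1/VM1_clone
    zfs set sharesmb=on Pool1/VM1_clone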