Each VM on its own iSCSI target - issue with backup


katit

Active Member
Mar 18, 2015
I have 5 VMs running on a single server. I used Windows backup and all was well (as far as backup goes).

Now I have built a new FreeNAS server and connected it to the Hyper-V machine with a 40Gb card. Each of my VMs now lives on its own iSCSI target.

Performance is great, I get ZFS snapshots, and everything seems to be working as expected, except for backup.

I am not sure how "dirty" a ZFS snapshot will be when it is taken on a running VM, so just in case I want to keep Windows backup as well. But now those backups fail with "file being locked by another process". The only difference is that the files are now on an iSCSI drive.

Did anybody have this issue? Any suggestions?

Error in backup of C:\HyperV-iSCSI\build-agent-1\Virtual Machines\5FA6A330-6603-4CC1-B97D-5D0C7C825D74.VMRS during enumerate: Error [0x80070020] The process cannot access the file because it is being used by another process.
Error in backup of C:\HyperV-iSCSI\svn\Virtual Machines\6858BF64-26FB-4592-95BC-37E291DA05AF.VMRS during enumerate: Error [0x80070020] The process cannot access the file because it is being used by another process.
Error in backup of C:\HyperV-iSCSI\dev-1\Virtual Machines\766CFF05-7074-42F4-9CD0-B24BEEE852FC.VMRS during enumerate: Error [0x80070020] The process cannot access the file because it is being used by another process.
Error in backup of C:\HyperV-iSCSI\jira\Virtual Hard Disks\JIRA-X_3C936EE4-8609-44B6-8BE3-A5026D8CAC3E.avhdx during enumerate: Error [0x80070020] The process cannot access the file because it is being used by another process.
Application backup
Writer Id: {66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}
Component: 5FA6A330-6603-4CC1-B97D-5D0C7C825D74
Caption : Online\BUILD-AGENT-1
Logical Path:
Error : 8078010D
Error Message : Enumeration of the files failed.

Detailed Error : 80070020
Detailed Error Message : (null)
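
For what it's worth, the Writer Id in that log ({66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}) belongs to the Microsoft Hyper-V VSS Writer, so a first diagnostic step is to check whether that writer is healthy before the backup runs. Below is a minimal sketch only, assuming Python is available on the Hyper-V host and the script is run elevated:

# Diagnostic sketch (assumptions: run elevated on the Hyper-V host, Python installed).
# It shells out to the stock "vssadmin list writers" command and reports the state of
# the Microsoft Hyper-V VSS Writer, which should be "Stable" with no last error for a
# host-level backup of running VMs to work.
import subprocess

def hyperv_writer_status() -> str:
    out = subprocess.run(
        ["vssadmin", "list", "writers"],
        capture_output=True, text=True, check=True,
    ).stdout
    # vssadmin prints one block per writer; keep the State / Last error lines
    # of the Hyper-V block.
    for block in out.split("Writer name:"):
        if "Hyper-V" in block:
            return "\n".join(
                line.strip() for line in block.splitlines()
                if "State:" in line or "Last error:" in line
            )
    return "Hyper-V VSS writer not found"

if __name__ == "__main__":
    print(hyperv_writer_status())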
 

gea

Well-Known Member
Dec 31, 2010
You cannot copy/back up open files, so ZFS snapshots are a workaround.

A ZFS snapshot captures a filesystem state like after a sudden power loss. While the ZFS filesystem itself is always safe due to Copy-on-Write, a guest VM filesystem is not. A corrupt guest filesystem can happen unless you shut down the VM prior to a snap or set the VM to a backup-safe state.
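
For illustration only, a minimal sketch of that advice in script form, assuming a Hyper-V host with the Hyper-V PowerShell module, SSH access to the FreeNAS box, and Python as glue; the VM name, NAS address and dataset below are hypothetical placeholders:

# Sketch: put the VM in a backup-safe (saved) state, snapshot the backing ZFS dataset,
# then resume the VM. Names are placeholders, not taken from the original poster's setup.
import subprocess
from datetime import datetime

VM_NAME = "build-agent-1"             # assumption: example VM name from the error log
NAS = "root@freenas.local"            # assumption: SSH-reachable FreeNAS host
DATASET = "tank/iscsi/build-agent-1"  # assumption: dataset backing the iSCSI target

def run(cmd):
    subprocess.run(cmd, check=True)

snap = f"{DATASET}@backup-{datetime.now():%Y%m%d-%H%M%S}"

run(["powershell", "-Command", f"Save-VM -Name '{VM_NAME}'"])        # save VM state to disk
try:
    run(["ssh", NAS, "zfs", "snapshot", snap])                       # snapshot the backing dataset
finally:
    run(["powershell", "-Command", f"Start-VM -Name '{VM_NAME}'"])   # resume the VM either way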
 

unwind-protect

Active Member
Mar 7, 2016
Yes, to safely back up the raw device under a filesystem, that filesystem has to be unmounted or mounted read-only. Otherwise you run the risk of reversing carefully sequenced disk transactions.
 

alaricljs

Active Member
Jun 16, 2023
I work in a shop with several hundred Windows VMs stored as files on ZFS over NFS (VMware). Same basic issue around filesystem safety and snapshots. We use ZFS snapshots in preference to VMware's, and we haven't had an issue with corruption after a restore or rollback. Not even once.
 

gea

Well-Known Member
Dec 31, 2010
With VMware ESXi you can work around rare but possible VM corruption in ZFS snapshots by creating an ESXi snapshot (hot memory or quiesce) prior to the ZFS snap. This ESXi snap can be destroyed once the ZFS snap is created.

For a safe rollback, first restore the ZFS snap, then the ESXi snap included in it. This can happen under script control or embedded in ZFS autosnap scripts (I have done this in my napp-it web-gui).
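
As a rough sketch of that sequence (not gea's napp-it code), assuming SSH access to both the ESXi host and the ZFS box, with a hypothetical VM id, hostnames and dataset:

# Sketch of the ESXi-snap-before-ZFS-snap pattern. All identifiers are placeholders.
import subprocess
from datetime import datetime

ESXI = "root@esxi.local"       # assumption: ESXi host with SSH enabled
NAS = "root@storage.local"     # assumption: ZFS storage box
VMID = "12"                    # assumption: numeric id from `vim-cmd vmsvc/getallvms`
DATASET = "tank/nfs/vmstore"   # assumption: dataset backing the datastore

def ssh(host, *args):
    subprocess.run(["ssh", host, *args], check=True)

tag = datetime.now().strftime("%Y%m%d-%H%M%S")

# 1. ESXi snapshot with quiesce (needs VMware Tools in the guest):
#    arguments are name, description, includeMemory=0, quiesced=1
ssh(ESXI, "vim-cmd", "vmsvc/snapshot.create", VMID, f"zfs-{tag}", "pre-ZFS-snap", "0", "1")
# 2. ZFS snapshot of the datastore while the consistent ESXi snap exists inside it
ssh(NAS, "zfs", "snapshot", f"{DATASET}@esxi-{tag}")
# 3. Drop the ESXi snapshot again; the copy captured inside the ZFS snap is what matters
ssh(ESXI, "vim-cmd", "vmsvc/snapshot.removeall", VMID)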
 

richardm

Member
Sep 27, 2013
gea said:
With VMware ESXi you can work around rare but possible VM corruption in ZFS snapshots by creating an ESXi snapshot (hot memory or quiesce) prior to the ZFS snap. This ESXi snap can be destroyed once the ZFS snap is created.
I can confirm this is how many tier-1 (high-$$$) storage vendors do it with VMware. They leverage VMware's snapshot mechanism, which itself leverages VMware Tools to force a filesystem sync (dirty cache flush) and storage quiescence within the guest immediately prior to taking the VMware-level snapshot.

Now, with VM snapshots present across all relevant VMs, a storage-layer (datastore-level) snapshot is then taken. The end effect is a crash-consistent restore point for every VM captured within the storage snapshot.

I presume Hyper-V storage technologies handle this in a similar fashion?
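
On the Hyper-V side, the rough equivalent of a quiesced VMware snapshot is a production checkpoint, which uses VSS inside the guest via the integration services. A hedged sketch of the same pattern, with hypothetical VM name, NAS host and dataset, run on the Hyper-V host:

# Sketch: application-consistent Hyper-V checkpoint, then a ZFS snapshot of the backing
# dataset, then the checkpoint is merged away. All names are placeholders.
import subprocess
from datetime import datetime

VM = "jira"                    # assumption: example VM from the thread
NAS = "root@freenas.local"     # assumption: SSH-reachable FreeNAS host
DATASET = "tank/iscsi/jira"    # assumption: dataset backing the iSCSI target

def ps(cmd):
    subprocess.run(["powershell", "-Command", cmd], check=True)

tag = datetime.now().strftime("%Y%m%d-%H%M%S")

ps(f"Set-VM -Name '{VM}' -CheckpointType Production")           # prefer VSS-based checkpoints
ps(f"Checkpoint-VM -Name '{VM}' -SnapshotName 'zfs-{tag}'")     # app-consistent checkpoint
subprocess.run(["ssh", NAS, "zfs", "snapshot", f"{DATASET}@hv-{tag}"], check=True)
ps(f"Remove-VMCheckpoint -VMName '{VM}' -Name 'zfs-{tag}'")     # merge the checkpoint back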
 

ServerFanatic

New Member
Jun 22, 2025
Interesting reading on ZFS and Hyper-V snapshots. My NAS uses ZFS, but that's as far as I go willingly.

On Hyper-V I personally use several iSCSI targets, each with its own LUN, for e.g. backups, snapshots, live VMs, etc., but I use the ReFS file system on all of the iSCSI LUNs. I changed the snapshot path, since Hyper-V lets you specify the snapshot location, and did the same for backups, exporting the VMs and directing Windows backup to their own iSCSI LUN. I have multiple iSCSI targets and am also using direct-attached storage, an 8-drive RAID6, which holds all the live VMs. As snapshots can consume the same amount of HDD space as the VMs, storage fills up very quickly the more VMs you have and/or the larger they are, so keeping separate storage (rather than space on the server) and exporting the VMs also gives me a quick restore option should I encounter issues.

I have had 2 issues when restarting the physical server after updates: an unfortunate dirty shutdown corrupted my iSCSI storage and I lost access on both occasions, and thus lost data, so I'm hesitant to have all live VMs running solely on iSCSI, hence the exported VMs and the DAS. When iSCSI fails it ain't no joke. MS have upped their game on ReFS, so it's a good filesystem and well overdue, as NTFS has its limitations, but I find no issues regarding data resilience. Thanks for the post, learned something new, appreciated.
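
For completeness, a small sketch of that export-based safety net, assuming Export-VM is pointed at a dedicated backup LUN; the export path is hypothetical and the VM names are taken from the error log earlier in the thread:

# Sketch: export each VM to a dedicated backup LUN. Export-VM can run against a live VM
# on current Hyper-V versions; the path and VM list below are assumptions for illustration.
import subprocess

EXPORT_ROOT = r"E:\VM-Exports"                    # assumption: iSCSI LUN mounted for backups
VMS = ["build-agent-1", "svn", "dev-1", "jira"]   # names from the error log above

for vm in VMS:
    subprocess.run(
        ["powershell", "-Command", f"Export-VM -Name '{vm}' -Path '{EXPORT_ROOT}'"],
        check=True,
    )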