Move data from iSCSI LUN to VMDK on NFS volume


Nemesis_001

Member
Dec 24, 2022
36
3
8
Hi,

Per my previous thread, to avoid starting/stopping the mechanical drives so often and to keep power consumption low, I created an NVMe mirror.
The guest OS currently resides on the mechanical drives, on an iSCSI LUN.
Any suggestions on the best way to move it to a VMDK on an NFS share? I used iSCSI before for performance reasons; I suppose that won't be a problem anymore with the fast mirror.

Is there any easier way than imaging the OS drive (Acronis) and dumping the image onto a VMDK?
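If the guest disk already exists as a VMDK on a VMFS-formatted iSCSI datastore, one common alternative to imaging is cloning the disk with vmkfstools on the ESXi host. A minimal sketch, assuming hypothetical datastore names iscsi-ds and nfs-ds and a powered-off VM named guest:

```shell
# Run in the ESXi shell / via SSH; all names below are placeholders.
mkdir -p /vmfs/volumes/nfs-ds/guest
# Clone the virtual disk from the iSCSI (VMFS) datastore to the NFS one:
vmkfstools -i /vmfs/volumes/iscsi-ds/guest/guest.vmdk \
           /vmfs/volumes/nfs-ds/guest/guest.vmdk -d thin
# Afterwards, point the VM's virtual disk at the new path (or copy the
# .vmx over and re-register the VM from the NFS datastore).
```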
 

Nemesis_001
Thanks. When I created this setup many years ago (almost a decade, I think), I got significantly better performance from iSCSI than from NFS, both with sync disabled. NFS was especially bad with ESXi, since, as far as I remember, at the time all writes were sync writes when working with ESXi.
Has anything changed in this regard?
The reason I want to try NFS again is that with iSCSI, if the host needs rebooting (normally due to a power outage), I have to log in manually and re-scan the adapters to get the iSCSI datastore back. It is a tad annoying.
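For what it's worth, the manual re-scan can at least be scripted with stock esxcli; a sketch, where the adapter name is a placeholder:

```shell
# List iSCSI adapters to find the software initiator (e.g. vmhba64):
esxcli iscsi adapter list
# Rescan a specific adapter:
esxcli storage core adapter rescan --adapter=vmhba64
# ...or simply rescan everything:
esxcli storage core adapter rescan --all
```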

I suppose the migrate/move VM method would only move the metadata/VM files and not the block device itself?

Thank you.
 

gea

Well-Known Member
Dec 31, 2010
3,189
1,202
113
DE
iSCSI vs NFS
The default ZFS sync property means that NFS + ESXi does sync writes while iSCSI does not (with the default writeback setting). This can be confusing.

My main reason to prefer NFS over iSCSI is flexibility. On NFS, every VM is a simple folder with all its files inside. You can additionally enable SMB for actions like copy/move/backup, or access a ZFS snapshot via Windows "Previous Versions". NFS also allows a simpler auto-reconnect: earlier ESXi versions do this automatically, newer ones require a dummy VM, see https://forums.servethehome.com/ind...laris-news-tips-and-tricks.38240/#post-356501
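The dual NFS + SMB sharing described above can be sketched with plain ZFS properties (Solarish-style syntax; pool and dataset names are placeholders):

```shell
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore   # ESXi mounts this as an NFS datastore
zfs set sharesmb=on tank/vmstore   # same folder reachable over SMB
# Snapshots back Windows "Previous Versions" on the SMB side:
zfs snapshot tank/vmstore@daily
zfs list -t snapshot -r tank/vmstore
```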

An iSCSI target is based on a logical unit. This can be a raw disk, a ZFS zvol, or a file. The resulting LUN is formatted with a guest filesystem like a local disk. With a file-based logical unit, a move from one ZFS filesystem to another NFS-shared ZFS filesystem would be possible, but this is rather pointless, as it would not improve or change any behavior. Simply move VMs, not logical units.
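For reference, a zvol-backed logical unit on a Solarish system is typically built roughly like this (COMSTAR; the names and the 100G size are placeholders, and stmfadm create-lu prints the LU name needed for the view):

```shell
zfs create -V 100G tank/vm-lun                # zvol as LU backing store
stmfadm create-lu /dev/zvol/rdsk/tank/vm-lun  # prints the new LU name
stmfadm add-view 600144F0...                  # placeholder LU name from above
itadm create-target                           # iSCSI target for ESXi to log into
# ESXi then formats the resulting LUN with VMFS like a local disk.
```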
 

Nemesis_001
Thanks gea.
So I suppose NFS is still sync-writes-only, and the only recourse is to disable sync entirely. With iSCSI and sync set to standard, would normal I/O operations be async, while e.g. database operations in the guest OS request sync writes?

Also, I'm still not sure I follow about moving the VM. The block device is a zvol on a spinning pool. I would like to move the block device (preferably to a VMDK on an NFS share now) to a different pool. Wouldn't a VM move just move all the VM files, and not the block-device storage itself (the zvol)?
 

gea
Thanks gea.
So I suppose NFS is still sync-writes-only, and the only recourse is to disable sync entirely. With iSCSI and sync set to standard, would normal I/O operations be async, while e.g. database operations in the guest OS request sync writes?

Also, I'm still not sure I follow about moving the VM. The block device is a zvol on a spinning pool. I would like to move the block device (preferably to a VMDK on an NFS share now) to a different pool. Wouldn't a VM move just move all the VM files, and not the block-device storage itself (the zvol)?
You can manually disable sync on ESXi + NFS, just as you can on iSCSI, but disabling sync carries a real danger of a corrupt VM guest filesystem after a crash during writes. So always use sync writes for VMs and databases. DO NOT disable sync for such use cases!

An iSCSI LUN is formatted with VMFS by ESXi and treated like any other local ESXi disk; NFS is not (it remains a simple ZFS filesystem shared via the multi-user NFS protocol). You can only move files/VMs between the two. You can copy a zvol, like any other dataset, snapshot, or filesystem, to a new pool with ZFS replication (disable the target, replicate, then re-enable the target and ESXi access to it).
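The zvol replication step might look like this (pool and zvol names are placeholders; quiesce or disable the iSCSI target first):

```shell
zfs snapshot tank/vm-lun@move
zfs send tank/vm-lun@move | zfs recv fastpool/vm-lun
# Optional incremental pass to shorten downtime:
zfs snapshot tank/vm-lun@move2
zfs send -i @move tank/vm-lun@move2 | zfs recv -F fastpool/vm-lun
# Re-point the logical unit at the new zvol, then re-enable ESXi access.
```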
 

Nemesis_001
Got it, thanks.
Would placing the vm files on a dataset with sync enabled, and the vmdk itself on a dataset with sync disabled alleviate some of the risks?
 

gea
The .vmdk is a file that represents the system disk of a VM; it is one of the files of a virtual machine. All files of a virtual machine usually live in the same folder, either on a local ESXi VMFS datastore (local disk, iSCSI target) or on a remote ZFS filesystem offered via NFS.

Sync write is a property of a ZFS filesystem and a method to protect the RAM-based ZFS write cache against data loss on a crash or power outage. As both an iSCSI zvol and the source of an NFS share are ZFS datasets, you can only enable or disable sync, which affects all writes to that dataset.

So no. In particular, placing the vmdk on a filesystem with sync disabled is exactly the bad idea: that is the file that needs protection, as a crash may corrupt the filesystem inside the vmdk (NTFS, FAT32, ext4, etc.). ZFS copy-on-write can only protect the ZFS filesystem itself on a crash, not guest filesystems.
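Checking and setting the per-dataset sync property looks like this (the dataset name is a placeholder):

```shell
zfs get sync tank/vmstore
zfs set sync=always tank/vmstore     # force sync semantics for all writes
zfs set sync=standard tank/vmstore   # default: honor client sync requests
# sync=disabled ignores sync requests entirely -- dangerous for VM disks
# and databases, as discussed above.
```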
 

firstoscar

New Member
Jul 12, 2023
1
0
1
Got it, thanks.
Would placing the vm files on a dataset with sync enabled, and the vmdk itself on a dataset with sync disabled alleviate some of the risks?
Hi! Yes, placing the VM files on a dataset with sync enabled and the VMDK itself on a dataset with sync disabled can alleviate some of the risks associated with using NFS shares for VM storage. Sync ensures that all writes to the dataset are flushed to disk before the write operation is considered complete, which can help prevent data loss in the event of a power outage or other system failure.
 

gea
Hi! Yes, placing the VM files on a dataset with sync enabled and the VMDK itself on a dataset with sync disabled can alleviate some of the risks associated with using NFS shares for VM storage. Sync ensures that all writes to the dataset are flushed to disk before the write operation is considered complete, which can help prevent data loss in the event of a power outage or other system failure.
How?
Sync does not ensure that all writes are flushed to disk; it means that confirmed writes are logged to a ZIL/SLOG. After a crash, the writes that would otherwise be lost from the RAM write cache are completed on the next reboot. Without sync there are incomplete writes, with a chance of a corrupted vmdk and its guest filesystem.
 

Nemesis_001
Eventually I retired the OS that was on the iSCSI LUN entirely and re-created a Linux OS with a containerized approach on an NFS datastore.
So that's that, I suppose.

Another annoyance I've been dealing with is that Solaris isn't sending a hostname to the DHCP server.
Curiously, setting a hostname manually worked only once, then stopped working after a reboot.
ipadm create-addr -T dhcp -h hostname dhcp-addrobj

After a reboot it stopped working entirely.
I tried working through this guide:

It didn't work.
Eventually I just set up a static DNS entry and an alias on the router, but it's weird that it doesn't work.
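For the record, the sequence to recreate the DHCP address object with a requested hostname on Solaris 11.4 should look roughly like this (the interface name net0 and the hostname are placeholders; -h is the requested-hostname option):

```shell
ipadm delete-addr net0/dhcp
ipadm create-addr -T dhcp -h myhost net0/dhcp
# On Solaris 11 the node name itself is kept in the identity service:
svccfg -s svc:/system/identity:node listprop config/nodename
```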
 

gea
Maybe it depends on the DHCP server. Additionally, some network settings on Solaris differ between Solaris 10 / Solaris 11.1 (more or less the origin of the Solaris fork Illumos) and current Solaris 11.4. The howto has no date and is not clear about the Solaris version for which it was written.

see also
 

Nemesis_001
Really hard to say.
Most devices on the network work properly, as they announce their desired hostname to the DHCP server: Macs, Windows, and Androids, as well as the default Rocky Linux setup. The only two devices I care about that don't are the ESXi host and OmniOS.
Either way, it doesn't matter that much; it just annoyed me, and I wasted some time on it before giving up and setting the static mappings and aliases on the edge router.