ESXi VMDK on ZFS backed NFS - thin provisioning


J-san

Member
Nov 27, 2014
Vancouver, BC
Just wondering what people have to say about thin-provisioned VMDK hard disks on NFS datastores. Specifically, is holepunching (or some similar function) worth doing after zeroing out free space in a VM?

eg. after running SDelete -z to zero out the free space
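For reference, the zeroing step inside a Windows guest looks roughly like this (a minimal sketch, assuming the Sysinternals SDelete tool is in the PATH and C: is the volume to clean):
Code:
C:\> sdelete.exe -z C: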

I previously wasn't using compression because the older version of OmniOS / ZFS was slower than I liked with it turned on.

In some benchmarking, it doesn't seem to affect things much on the newer omnios-r151028 with ESXi 6.5:

Compression OFF

[Screenshot: CrystalDiskMark 6, omnios-r151028, 9000 MTU, 4x S4600, recordsize=16k, paravirtual controller]

Compression ON (LZ4)

[Screenshot: CrystalDiskMark 6, omnios-r151028, 9000 MTU, 4x S4600, recordsize=16k, compression=lz4]
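For anyone repeating the comparison, switching compression between runs is just a dataset property change (a minimal sketch; the dataset name vm2/test is a placeholder):
Code:
## enable LZ4 for the "Compression ON" run
# zfs set compression=lz4 vm2/test
## disable it for the "Compression OFF" run
# zfs set compression=off vm2/test
## check the current setting and achieved ratio
# zfs get compression,compressratio vm2/test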


After some searching around I realized that holepunching isn't actually supported on NFS-mounted VMDKs:
Code:
# vmkfstools --punchzero bak01_1.vmdk
Not a supported filesystem type

However, while testing out compression again I noticed that the space was automatically re-claimed on test VMDKs stored on compression=lz4 datasets shared over NFS to ESXi 6.5:

Code:
root@vsan2:/# ls -alh /vm2/test/bak01/
total 2276725
drwxr-xr-x   2 root     root           5 Mar 20 17:22 .
drwxr-xr-x   3 root     root           3 Mar 20 17:22 ..
-rwxrwxr-x   1 root     root          84 Mar 20 17:23 .lck-bf00000000000000
-rw-------   1 root     root         40G Mar 20 17:23 bak01_1-flat.vmdk
-rw-------   1 root     root         499 Mar 20 17:22 bak01_1.vmdk

root@vsan2:/vm2/test/bak01# du -ah                                                                                              
778M    ./bak01_1-flat.vmdk                                                                                                    
4.50K   ./bak01_1.vmdk                                                                                                          
512     ./.lck-2400000000000000                                                                                                    
778M    .                                                
                                                                      
root@vsan2:/vm2/test/bak01# echo SDelete_-z                                                                                    
SDelete_-z                                                                                                                          

root@vsan2:/vm2/test/bak01# du -ah                                                                                              
16.7M   ./bak01_1-flat.vmdk                                                                                                    
4.50K   ./bak01_1.vmdk                                                                                                          
512     ./.lck-2400000000000000                                                                                                    
16.7M   .
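Another way to see the effect is to compare logical vs. physical space on the dataset (a minimal sketch; vm2/test is assumed to be the dataset backing the datastore):
Code:
root@vsan2:/vm2/test/bak01# zfs list -o name,used,logicalused vm2/test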
Does this mean that, after deleting large files within the VM, you don't need to do anything besides zeroing out the free space if you find your thin-provisioned VMDKs growing much larger than the "real" filesystem contents inside the VM?

Thanks!
 

cbutters

New Member
Sep 13, 2016
I noticed this as well. With ZFS compression turned on, zeroing out the free space inside the disks (whether Windows, Linux or other) lets the lz4 compression on the ZFS datastore reclaim the space on the pool; no holepunching required.

Unfortunately, ESXi still presents the VMs as flat files, so even a small 1-2GB VM (according to ZFS usage) will show up as a 250GB flat vmdk over SMB access if the disk is that large inside the VM. Copying then seems to take forever, even though the real data is quite small. Any ideas for a workaround? Yes, I know about Veeam, but I'm wondering if there is also a quick and dirty method for backing up these files a little more efficiently.
 

gea

Well-Known Member
Dec 31, 2010
DE
If the VM is offline, you can just export/import VMs in ESXi as templates (just like I did with my napp-it storage appliance template). The resulting file is very small (less than 2GB for a 40GB VM).
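If you'd rather script it, VMware's ovftool can do that export from the command line; a rough sketch (host, credentials and VM name are placeholders, and the VM should be powered off):
Code:
## export the VM to a compact OVA on local storage
ovftool vi://root@esxi-host/MyNappItVM /backup/MyNappItVM.ova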
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
Why not VDP (vSphere Data Protection)? It uses VMware's Changed Block Tracking and also dedupes the backups.
 

J-san

Member
Nov 27, 2014
Vancouver, BC
I was running some benchmarks and wanted to change the recordsize, but the test VMDK was quite large:

a 2TB thin-provisioned vmdk flat file.

But I found myself waiting a long time trying to do a regular "cp" and "mv" on this 2TB thin-provisioned vmdk (only 15 GB used).


Using OmniOS omnios-r151030 I found you can preserve the "holiness" (sparseness) of the vmdk :)


A regular "cp" was inflating the vmdk: it took a long time even when the ZFS filesystem had compression turned on, and it caused the real file size to balloon when compression was turned off.

I found that the "cpio" tool in OmniOS can do a copy while preserving the sparseness of the file.
While I haven't looked into it extensively, it does work for a copy within the same dataset.

eg.
Code:
## Make a backup dir to hold the new sparse vmdk copy

# mkdir bakdir

## Make any changes here, e.g. change recordsize/compression

# zfs set recordsize=32k poolname/backup

## Copy the vmdk into the backup dir by echoing the file name to cpio in pass mode
## (cpio -p preserves the sparseness of the file)

# echo 'testbak01_1-flat.vmdk' | cpio -p bakdir

## Move the existing vmdk to a backup filename just in case

# mv testbak01_1-flat.vmdk testbak01_1-flat.vmdk.bak

## Move the new sparse vmdk back to the current directory

# mv bakdir/testbak01_1-flat.vmdk .
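To confirm the copy stayed sparse, you can compare the apparent size with the actual on-disk usage (a quick sketch; the file name follows the example above):
Code:
# ls -lh testbak01_1-flat.vmdk    ## apparent (logical) size, e.g. 2T
# du -h testbak01_1-flat.vmdk     ## actual blocks used, much smaller if sparse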
Hopefully this helps someone else managing files locally.

Cheers