ESXi VMDK on ZFS backed NFS - thin provisioning

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by J-san, Mar 25, 2019.

  1. J-san

    J-san Member

    Joined:
    Nov 27, 2014
    Messages:
    67
    Likes Received:
    42
    Just wondering what people have to say about thin-provisioned VMDK hard disks on NFS datastores. Specifically, is hole punching (or some similar function) worth doing after zeroing out free space inside a VM?

    e.g. after running SDelete -z to zero out free space
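
    For reference, the guest-side zeroing itself is just the usual free-space wipe; a rough sketch (drive letter and path are placeholders):

    Code:
    ## Windows guest: zero out free space with SysInternals SDelete
    C:\> sdelete.exe -z C:

    ## Linux guest: fill free space with a file of zeros, then remove it
    # dd if=/dev/zero of=/zerofill bs=1M; sync; rm /zerofill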

    I previously wasn't using compression due to the older version of OmniOS / ZFS being slower than I liked with it turned on.

    In some benchmarking, compression doesn't seem to affect things much on the newer omnios-r151028 with ESXi 6.5.

    Compression OFF

    [CrystalDiskMark 6 screenshot: omnios-r151028, 9000 MTU, 4x S4600, recordsize=16k, paravirtual controller]

    Compression ON (LZ4)

    [CrystalDiskMark 6 screenshot: omnios-r151028, 9000 MTU, 4x S4600, compression=lz4, recordsize=16k]
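
    For reference, toggling compression for these tests is a one-line change on the backing dataset; the dataset name below is just an example:

    Code:
    # zfs set compression=lz4 poolname/vm2
    # zfs set compression=off poolname/vm2
    # zfs get compression poolname/vm2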


    After some searching around I realized that hole punching isn't actually supported on NFS-mounted VMDKs:
    Code:
    # vmkfstools --punchzero bak01_1.vmdk
    Not a supported filesystem type
    

    However, while testing out compression again I noticed that the space was automatically reclaimed on test VMDKs stored on compression=lz4 datasets shared over NFS to ESXi 6.5:

    Code:
    root@vsan2:/# ls -alh /vm2/test/bak01/
    total 2276725
    drwxr-xr-x   2 root     root           5 Mar 20 17:22 .
    drwxr-xr-x   3 root     root           3 Mar 20 17:22 ..
    -rwxrwxr-x   1 root     root          84 Mar 20 17:23 .lck-bf00000000000000
    -rw-------   1 root     root         40G Mar 20 17:23 bak01_1-flat.vmdk
    -rw-------   1 root     root         499 Mar 20 17:22 bak01_1.vmdk
    
    root@vsan2:/vm2/test/bak01# du -ah                                                                                              
    778M    ./bak01_1-flat.vmdk                                                                                                    
    4.50K   ./bak01_1.vmdk                                                                                                          
    512     ./.lck-2400000000000000                                                                                                    
    778M    .                                                
                                                                          
    root@vsan2:/vm2/test/bak01# echo SDelete_-z                                                                                    
    SDelete_-z                                                                                                                          
    
    root@vsan2:/vm2/test/bak01# du -ah                                                                                              
    16.7M   ./bak01_1-flat.vmdk                                                                                                    
    4.50K   ./bak01_1.vmdk                                                                                                          
    512     ./.lck-2400000000000000                                                                                                    
    16.7M   .            
    
    
    Does this mean that nothing else is needed besides zeroing out free space (after deleting large files inside the VM) when a thin-provisioned VMDK has grown much larger than the "real" filesystem contents inside the VM?

    Thanks!
     
    #1
  2. cbutters

    cbutters New Member

    Joined:
    Sep 13, 2016
    Messages:
    5
    Likes Received:
    0
    I noticed this as well. With ZFS compression turned on, zeroing out the free space on the guest disks (whether Windows, Linux or other) let the lz4 compression on the ZFS datastore reclaim the space on the pool; no hole punching required.
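
    To confirm the reclaim from the storage side, the dataset's compression ratio and referenced space can be checked with something like this (the dataset name is just an example):

    Code:
    # zfs get compressratio,used,referenced poolname/vm2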

    Unfortunately, the VMs always show up as flat files, so even a small 1-2GB VM (according to ZFS usage) still appears as a large 250GB flat vmdk over SMB if the virtual disk is that big inside the VM. Copying seems to take forever, even though the true data is quite small. Any ideas for a workaround? Yes, I know about Veeam, but I'm wondering if there is also a quick and dirty method for backing up the files a little more efficiently.
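
    One quick-and-dirty option, assuming rsync is available on both ends, is a sparse-aware copy so the destination gets holes instead of literal zero blocks; a rough sketch with placeholder paths and hostname:

    Code:
    # rsync -av --sparse /vm2/test/bak01/ backuphost:/backups/bak01/

    The full logical size of the flat vmdk still gets read, but the copy on the destination stays small.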
     
    #2
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,333
    Likes Received:
    782
    If the VM is offline, you can just export/import VMs in ESXi as templates (just like I did with my napp-it storage appliance template). The resulting file is very small (less than 2GB for a 40GB VM).
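
    One way to script that sort of export (not necessarily how gea does it) is VMware's ovftool; a rough sketch with placeholder host and VM names:

    Code:
    # ovftool vi://root@esxi-host/testvm /backups/testvm.ova

    The exported disks are stream-optimized and only carry allocated data, which is why the result comes out much smaller than the provisioned disk size.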
     
    #3
  4. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    304
    Likes Received:
    67
    Why not VDP? It uses VMware's Changed Block Tracking and also dedupes the backups.
     
    #4
  5. cbutters

    cbutters New Member

    Joined:
    Sep 13, 2016
    Messages:
    5
    Likes Received:
    0
    Looks like VDP could be EOL? Source: Bye bye VMware VDP - vInfrastructure Blog
    I'm running ESXi 6.7; according to the article, 6.5 was the last version that had it baked in.
     
    #5
  6. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    304
    Likes Received:
    67
    #6
  7. J-san

    J-san Member

    Joined:
    Nov 27, 2014
    Messages:
    67
    Likes Received:
    42
    I was running some benchmarks and wanted to change the recordsize, but the test VMDK was quite large:

    2TB thin-provisioned vmdk flat file.

    But I found myself waiting a long time trying to do a regular "cp" and "mv" on this 2TB thin-provisioned vmdk (15 GB used).


    Using OmniOS omnios-r151030 I found you can preserve the "hole-iness" (sparseness) of the vmdk :)


    A regular "cp" was inflating the vmdk: the copy took a long time even with compression enabled on the ZFS filesystem, and with compression turned off it caused the real on-disk size to balloon.

    I found that the "cpio" tool in OmniOS can do a copy while preserving the sparseness of the file.
    While I haven't looked into it extensively, it does work for a copy within the same dataset.

    eg.
    Code:
    ## Make a backup dir to hold the new sparse vmdk copy

    # mkdir bakdir

    ## Make any changes here, e.g. change recordsize/compression

    # zfs set recordsize=32k poolname/backup

    ## Copy the vmdk into the backup dir by echoing its name to cpio in pass mode

    # echo 'testbak01_1-flat.vmdk' | cpio -p bakdir

    ## Move the existing vmdk to a backup filename, just in case

    # mv testbak01_1-flat.vmdk testbak01_1-flat.vmdk.bak

    ## Move the new sparse vmdk back to the current directory

    # mv bakdir/testbak01_1-flat.vmdk .
    
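    To double-check that a copy kept its holes, compare the apparent (logical) size with the space actually allocated on disk; e.g.:

    Code:
    # ls -lh bakdir/testbak01_1-flat.vmdk
    # du -h bakdir/testbak01_1-flat.vmdk
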
    Hopefully this helps someone else managing files locally.

    Cheers
     
    #7