ESXi VMDK on ZFS backed NFS - thin provisioning

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by J-san, Mar 25, 2019.

  1. J-san

    J-san Member

    Joined:
    Nov 27, 2014
    Messages:
    66
    Likes Received:
    42
    Just wondering what people have to say about thin-provisioned VMDK hard disks on NFS datastores. Specifically, is hole punching (or some similar function) worth running after zeroing out free space inside a VM?

    e.g. after running SDelete -z to zero out the free space.
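
    (For anyone unfamiliar, this is roughly what I mean by zeroing free space inside the guest; the drive letter and file path below are just examples:)
    Code:
    rem Windows guest: zero out the free space on C:
    sdelete.exe -z c:
    
    # Linux guest: fill the free space with a zero-filled file, then delete it
    dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync
    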

    I previously wasn't using compression because the older version of OmniOS / ZFS was slower than I liked with it turned on.

    In some benchmarking, it doesn't seem to affect things much on the newer omnios-r151028 with ESXi 6.5:

    Compression OFF

    [Screenshot: CrystalDiskMark 6 results - OmniOS r151028, 9000 MTU, 4x S4600, recordsize=16K, paravirtual controller]

    Compression ON (LZ4)

    [Screenshot: CrystalDiskMark 6 results - OmniOS r151028, 9000 MTU, 4x S4600, compression=lz4, recordsize=16K]


    After some searching around I realized that hole punching isn't actually supported on NFS-mounted VMDKs:
    Code:
    # vmkfstools --punchzero bak01_1.vmdk
    Not a supported filesystem type
    
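    (To double-check which datastores are NFS vs. VMFS before trying punchzero, the filesystem types can be listed on the ESXi host:)
    Code:
    # list datastores and their filesystem type (NFS vs. VMFS)
    esxcli storage filesystem list
    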

    However, while testing compression again I noticed that the space was automatically reclaimed on test VMDKs stored on compression=lz4 datasets shared over NFS to ESXi 6.5:

    Code:
    root@vsan2:/# ls -alh /vm2/test/bak01/
    total 2276725
    drwxr-xr-x   2 root     root           5 Mar 20 17:22 .
    drwxr-xr-x   3 root     root           3 Mar 20 17:22 ..
    -rwxrwxr-x   1 root     root          84 Mar 20 17:23 .lck-bf00000000000000
    -rw-------   1 root     root         40G Mar 20 17:23 bak01_1-flat.vmdk
    -rw-------   1 root     root         499 Mar 20 17:22 bak01_1.vmdk
    
    root@vsan2:/vm2/test/bak01# du -ah                                                                                              
    778M    ./bak01_1-flat.vmdk                                                                                                    
    4.50K   ./bak01_1.vmdk                                                                                                          
    512     ./.lck-2400000000000000                                                                                                    
    778M    .                                                
                                                                          
    root@vsan2:/vm2/test/bak01# echo SDelete_-z                                                                                    
    SDelete_-z                                                                                                                          
    
    root@vsan2:/vm2/test/bak01# du -ah                                                                                              
    16.7M   ./bak01_1-flat.vmdk                                                                                                    
    4.50K   ./bak01_1.vmdk                                                                                                          
    512     ./.lck-2400000000000000                                                                                                    
    16.7M   .            
    
    
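    (For reference, this is roughly how I'd check or enable the compression setting and see the resulting ratio; the dataset name vm2/test is just my guess based on the paths above:)
    Code:
    # enable LZ4 compression on the dataset backing the NFS share
    zfs set compression=lz4 vm2/test
    # show the compression setting, achieved ratio, and physical vs. logical usage
    zfs get compression,compressratio,used,logicalused vm2/test
    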
    Does this mean that if your thin-provisioned VMDKs are getting much larger than the "real" filesystem contents within the VM, you don't need to do anything besides zeroing out the free space after deleting large files inside the VM?

    Thanks!
     
    #1
  2. cbutters

    cbutters New Member

    Joined:
    Sep 13, 2016
    Messages:
    5
    Likes Received:
    0
    I noticed this as well. With ZFS compression turned on, zeroing out the free space on the disks (Windows, Linux, or otherwise) let the LZ4 compression on the ZFS datastore reclaim the space on the pool; no hole punching required.

    Unfortunately, the VMs always show up as full-size flat files, so even a small VM (1-2 GB according to ZFS usage) will still appear as a large 250 GB flat VMDK over SMB if that's the disk size inside the VM. Copying seems to take forever, even though the true data is quite small. Any ideas for a workaround? Yes, I know about Veeam, but I'm wondering if there is also a quick-and-dirty method for backing up these files a little more efficiently.
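
    (The sort of quick-and-dirty approach I had in mind, sketched with placeholder dataset and path names, would be a snapshot plus zfs send, or a sparse-aware rsync, if that makes sense:)
    Code:
    # snapshot and stream the whole dataset (placeholder names)
    zfs snapshot tank/vm@backup
    zfs send tank/vm@backup | gzip > /backup/vm-backup.zfs.gz
    
    # or a sparse-aware copy of a single VM folder
    rsync -aS /tank/vm/bak01/ /backup/bak01/
    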
     
    #2
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,244
    Likes Received:
    743
    If the VM is offline, you can just export/import VMs in ESXi as templates (just like I did with my napp-it storage appliance template). The resulting file is very small (less than 2GB for a 40GB VM).
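
    (As a rough sketch, assuming ovftool is available and using placeholder host and VM names, the export can also be done from the command line:)
    Code:
    # export a powered-off VM from the ESXi host to a compact OVA
    ovftool vi://root@esxi-host/MyVM /backup/MyVM.ova
    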
     
    #3
  4. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    256
    Likes Received:
    56
    Why not VDP? It uses VMware's Changed Block Tracking and also dedupes the backups.
     
    #4
  5. cbutters

    cbutters New Member

    Joined:
    Sep 13, 2016
    Messages:
    5
    Likes Received:
    0
    Looks like VDP could be EOL? Source: Bye bye VMware VDP - vInfrastructure Blog
    I'm running ESXi 6.7; according to the article, 6.5 was the last version that had it baked in.
     
    #5
  6. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    256
    Likes Received:
    56
    #6