ZFS and VMFS6 Space Reclamation


TechTrend

Member
Apr 16, 2016
VMware introduced a new file system type, VMFS6, with ESXi 6.5 and 6.7. One of its new features is automatic space reclamation.

Space Reclamation Requests from VMFS Datastores

The default setting on new VMFS6 datastores is a reclamation priority of "Low" with a maximum bandwidth of 100 MB/s. Some blogs mention that space reclamation could take up to 12 hours at the "Low" priority. The bandwidth setting is per datastore, so in theory a storage server backing many datastores could spend significant disk i/o bandwidth on space reclamation.
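For reference, the current per-datastore settings can also be checked from the ESXi host command line; testDS2 below is just a placeholder label for one of the VMFS6 datastores:
Code:
# esxcli storage vmfs reclaim config get -l testDS2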

I compared two storage servers running OmniOS r151028 with napp-it, one backing VMFS5 datastores and one backing VMFS6 datastores. Both provision iSCSI file LUNs as datastores to ESXi 6.7u3 hosts. The one with VMFS5 datastores shows very low disk i/o (<1% in iostat %b) when all VMs are off. The one with VMFS6 datastores shows high disk i/o (>90% in iostat %b) when all VMs are off. My guess is that the overhead comes from space reclamation.
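For reference, the %b (percent busy) numbers come from the standard Solaris/OmniOS extended device statistics on each storage server, along the lines of the command below; the 30-second interval is arbitrary, and %b simply shows the percentage of time each disk was busy with transactions in progress:
Code:
# iostat -xn 30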

Have you seen disk i/o degradation using ZFS iSCSI LUNs with VMFS6 datastores due to space reclamation? If so, are there any recommendations to avoid it?

The KB below for Huawei storage servers suggests disabling auto space reclamation to avoid degradation during storage vMotion, and running space reclamation manually during off-hours instead. A storage vMotion takes a relatively short time, so this also implies that "Low" priority reclamation impacts disk transfers well before the 12-hour mark. Possibly the same applies to ZFS storage servers? I don't know if Huawei uses ZFS, but they do implement VAAI (vSphere Storage APIs for Array Integration).

High latency during vMotion process - Huawei
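If anyone wants to try the manual off-hours approach from the KB, ESXi has a manual unmap command. The datastore label and the -n reclaim unit (number of VMFS blocks per pass; 200 should be the default) below are only examples, and on arrays without VAAI unmap support it simply reports that UNMAP is not supported:
Code:
# esxcli storage vmfs unmap -l testDS1 -n 200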
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
I haven't seen I/O performance degradation due to VMFS6 space reclamation. That said, we are using a Pure X20 AFA array :)
On ZFS, my experience is limited to FreeNAS, and they do implement VMware VAAI, but only on iSCSI. The only array I've seen do VAAI on NFS is NetApp, and it requires ESXi host plugins to work.
 

TechTrend

Member
Apr 16, 2016
Contacted VMware Support on this issue. It turns out that the degradation is partly due to using a storage server that does not support VAAI unmap and partly due to a VMware bug.

VMFS space reclamation is implemented using VAAI unmap. It will not work with storage servers that don't implement it, such as OmniOS ZFS iSCSI datastores. You can confirm if a datastore has unmap support via the ESXi host command line:
Code:
# esxcli storage core device vaai status get
...
naa.600144f021617e550000599da1200003
   VAAI Plugin Name:
   ATS Status: unsupported
   Clone Status: unsupported
   Zero Status: supported
   Delete Status: unsupported
On VMFS5 datastores any attempt to do space reclamation manually fails gracefully.
Code:
# esxcli storage vmfs unmap -l testDS1
Devices backing volume 57e57597-905807ec-56fd-90e2ba1a6ce4 do not support UNMAP
On VMFS6 datastores with auto space reclamation enabled, the VMware storage subsystem goes into a loop if the target has no VAAI unmap support, which generates the additional disk i/o. Since it should fail gracefully and not keep retrying on datastores without VAAI unmap support, this is a VMware bug.
Code:
[root@esxif-mia1:~] grep -i unmap /var/log/vmkernel.log
...
2019-11-08T22:59:03.064Z cpu20:2099258 opID=1e4882e5)Unmap6: 7133: [Unmap] 'testDS2':device(0x43066ddfec90)does not support unmap
2019-11-08T23:01:05.750Z cpu16:2100127 opID=66889b17)Unmap6: 7133: [Unmap] 'testDS2':device(0x43066e2495c0)does not support unmap
VMware Support recommends using VMFS6 only with storage servers that support VAAI unmap. For VMFS6 datastores implemented with OmniOS ZFS iSCSI, we'll move VMs back to VMFS5 datastores and delete the VMFS6 datastores.
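As a side note, a quick way to double-check which datastores on a host are VMFS5 versus VMFS6 before moving VMs around is the filesystem listing below; the Type column shows VMFS-5 or VMFS-6:
Code:
# esxcli storage filesystem list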
 

vangoose

Active Member
May 21, 2019
Solaris has had a SCSI unmap issue since 11.1 and Oracle has since disabled it completely. I'm running 11.4 as my storage server, but any other fork based on OpenSolaris will have the same issue until SCSI unmap is fixed.
 

TechTrend

Member
Apr 16, 2016
... any other fork based on OpenSolaris will have the same issue until SCSI unmap is fixed.
VAAI unmap is specific to iSCSI datastores, but other VAAI primitives and hardware acceleration also apply to NFS datastores.

https://www.vmware.com/content/dam/...are-nfs-best-practices-white-paper-en-new.pdf

Hardware Acceleration on NAS Devices

According to the 'esxcli storage core device vaai status get' results in my previous message, OmniOS ZFS appears to support only the VAAI Zero (Write Same) block primitive.

https://www.vmware.com/content/dam/...storage-api-array-integration-white-paper.pdf

VMware Support stated that current NetApp, Pure Storage and EMC storage servers support all VAAI primitives as well as hardware acceleration. I hope those are eventually supported by the OpenSolaris forks; they would help ZFS storage servers achieve higher disk i/o performance with VMware.
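For NFS datastores there is a similar quick check from the host; as far as I can tell, the output of the command below includes a Hardware Acceleration column showing whether a VAAI-NAS plugin is active for each mount:
Code:
# esxcli storage nfs list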
 

TechTrend

Member
Apr 16, 2016
There is an option to disable VMFS6 automatic space reclamation from the vSphere web client, e.g. by selecting the datastore then Configure, General, Space Reclamation. See the attached screenshot. That helps with Solaris ZFS deployments that can't go back to VMFS5.
 

Attachments


crazyj

Member
Nov 19, 2015
I don't suppose there's a way to shut it off using the single host-client, or CLI commands?
 

Gammal Sokk

New Member
Jun 10, 2015
I don't suppose there's a way to shut it off using the single host-client, or CLI commands?
Use the ESXCLI Command to Change Space Reclamation Parameters

esxcli storage vmfs reclaim config set

The command takes these options (a usage example follows the list):

-b|--reclaim-bandwidth Space reclamation fixed bandwidth in MB per second.
-g|--reclaim-granularity Minimum granularity of automatic space reclamation in bytes.
-m|--reclaim-method Method of automatic space reclamation. Supported options:
  • priority
  • fixed
-p|--reclaim-priority Priority of automatic space reclamation. Supported options:
  • none
  • low
  • medium
  • high
-l|--volume-label The label of the target VMFS volume.
-u|--volume-uuid The uuid of the target VMFS volume.
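For example, to turn off automatic space reclamation on a single datastore and then confirm the change (the datastore label is just a placeholder):
Code:
esxcli storage vmfs reclaim config set -l myDatastore -p none
esxcli storage vmfs reclaim config get -l myDatastore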