VMware introduced a new file system type, VMFS6, available with ESXi 6.5 and 6.7. One of its new features is automatic space reclamation (UNMAP).
Space Reclamation Requests from VMFS Datastores
The default setting on new VMFS6 datastores is a reclamation priority of "Low" with a maximum bandwidth of 100 MB/s. Some blogs mention that space reclamation can take up to 12 hours at "Low" priority. The bandwidth setting is per datastore, so in theory a storage server backing many datastores could see significant disk I/O bandwidth consumed by space reclamation.
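The per-datastore setting can be checked from the ESXi host with esxcli. A rough sketch, where DS01 is a placeholder datastore label (on 6.7 the output should also show the reclamation method/bandwidth):

# show the current space-reclamation priority for one VMFS6 datastore
esxcli storage vmfs reclaim config get --volume-label=DS01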
I compared two storage servers running OmniOS r151028 with napp-it, one backing VMFS5 datastores and one backing VMFS6 datastores. Both provision iSCSI file LUNs as datastores to ESXi 6.7u3 hosts. The one with VMFS5 datastores shows very low disk I/O (<1% in iostat %b) when all VMs are off; the one with VMFS6 datastores shows high disk I/O (>90% in iostat %b) when all VMs are off. My guess is that the overhead comes from space reclamation.
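For reference, the %b figures above come from something like the following on the OmniOS side (illumos iostat, extended per-device stats at 5-second intervals; %b is the percent of time the disk was busy):

iostat -xn 5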
Have you seen disk I/O degradation using ZFS iSCSI LUNs with VMFS6 datastores due to space reclamation? If so, are there any recommendations to avoid it?
The KB below for Huawei storage servers suggests disabling automatic space reclamation to avoid degradation during storage vMotion, and instead running space reclamation manually during off-hours. A storage vMotion takes a relatively short time, so this also implies that "Low" priority impacts disk transfers well before the 12-hour mark. Possibly the same applies to ZFS storage servers? I don't know if Huawei uses ZFS, but they do implement VAAI (vSphere Storage APIs for Array Integration).
High latency during vMotion process - Huawei
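If disabling automatic reclamation per datastore turns out to be necessary, a sketch of what that might look like with esxcli (DS01 again a placeholder label; the manual unmap command is the one traditionally used for VMFS5, and I have not verified it against VMFS6):

# disable automatic space reclamation on a single VMFS6 datastore
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-priority=none

# run space reclamation manually during off-hours
# (--reclaim-unit is the number of VMFS blocks reclaimed per iteration, default 200)
esxcli storage vmfs unmap --volume-label=DS01 --reclaim-unit=200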