Hey Randwell, my tests trying to satisfy my (low-QD, high-IOPS) requirements have failed miserably with everything I have thrown at them.
If you have a recommendation for how to satisfy 10G (for starters) at QD1/T1 with 64K blocks (ESXi NFS -> sync writes) while maintaining at least a two-node HA setup, please share.
If you're willing to give up VMware and move over to Proxmox, you can use DRBD in an HA pair, which should deliver the performance you're looking for.
It wouldn't hurt to test.
You can use a single NVMe drive, or layer LVM over multiple NVMe drives to create a larger volume. Since the replication is 1:1 in real time, you get essentially the native performance of local disks, without the latency introduced by Ceph, vSAN, or any other object store that then layers block storage on top.
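For reference, a minimal two-node DRBD resource over an NVMe-backed LVM volume looks roughly like this (hostnames, IPs, volume-group and device names below are placeholders, not from the original post):

```
# /etc/drbd.d/r0.res -- sketch of a two-node synchronous resource
resource r0 {
    protocol C;                       # protocol C = fully synchronous replication
    device    /dev/drbd0;
    disk      /dev/vg_nvme/lv_vmdata; # LV carved from one or more NVMe drives
    meta-disk internal;
    on pve1 {
        address 10.0.0.1:7789;        # replication link, ideally a dedicated NIC
    }
    on pve2 {
        address 10.0.0.2:7789;
    }
}
```

With protocol C a write is only acknowledged once it lands on both nodes, which is what gives you a failover-ready copy at all times.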
Using a raw disk for the VM instead of a qcow2 file will also deliver better performance than a VMDK.
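As an example, this is roughly what a raw-format disk line looks like in a Proxmox VM config (VM ID 100 and the storage names are hypothetical):

```
# /etc/pve/qemu-server/100.conf
# On LVM/DRBD-backed storage the volume is already raw block, no file layer:
scsi0: drbdstorage:vm-100-disk-1,size=32G
# On a directory store you'd pick raw explicitly rather than qcow2:
# scsi0: local:100/vm-100-disk-0.raw,size=32G
```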
There's always a copy of your data on another host ready for failover, and you can have two or more hosts; it's really up to you.
It's a very simple and elegant solution that will deliver maximum return on NVMe performance.
No RAID overhead either, as you can't really RAID NVMe drives natively on enterprise hardware. There is the Intel RAID 1 mirror, but that's limited to a single mirror, and why bother when you have a real-time replica on another host? (If you want that little bit of extra protection, go with the RAID 1 mirror, provided you have the right controller.)
You could also use software RAID to create RAID 0, 1, 5, 10, etc. if you like. I would test with single-drive replication first; you can always extend the RAID or LVM later.
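A rough outline of both options on Linux, using mdadm for the software RAID and LVM for pooling (device names are placeholders, and these need root on the actual host):

```
# Option A: software RAID 10 across four NVMe drives with mdadm
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Option B: plain LVM pooling -- start with one drive, extend later
pvcreate /dev/nvme0n1
vgcreate vg_nvme /dev/nvme0n1
lvcreate -L 500G -n lv_vmdata vg_nvme
# later: pvcreate /dev/nvme1n1 && vgextend vg_nvme /dev/nvme1n1
```

Either the md device or the LV can then serve as DRBD's backing disk.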
Since the block layer isn't sitting behind an object-storage layer first, and doesn't need to be exported as iSCSI or served over the network as NFS or another protocol to reach the other host, you cut out a lot of latency and layers.
DRBD is developed by LINBIT, and their LINSTOR management layer has a plugin for Proxmox. Proxmox is really well designed and based on KVM virtualisation (which I believe has a lot less bloat than VMware); it's free to use, and if you wish to support Proxmox you can subscribe to one of their support tiers.
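Once the linstor-proxmox plugin is installed, hooking the storage into Proxmox is a short entry in storage.cfg, something along these lines (controller IP and resource-group name are placeholders):

```
# /etc/pve/storage.cfg -- LINSTOR-managed DRBD storage for Proxmox
drbd: drbdstorage
    content images, rootdir
    controller 10.0.0.1
    resourcegroup defaultpool
```

After that, DRBD-backed volumes show up as a normal storage target when creating or migrating VMs.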
Putting all of this aside, Proxmox also offers ZFS natively and has done some tuning for NVMe, but it may still need further tuning, so you may run into the same issues you're seeing now with NVMe on FreeNAS at some stage.
Give it a try.
Links:
- Open-source virtualization management platform Proxmox VE
- How to setup LINSTOR on Proxmox VE