The NAS doesn't have ZFS, but it does have the equivalent of the SLOG, in that I have a dedicated pair of Samsung 850 EVO SSDs for write cache (on separate LSI 9211 IT-mode adapters). Reads/writes that are pure cache generally run around 1.0-1.2 GB/s [and since I have 1TB of cache, pretty much everything I do is cached all the time]. Behind that I have 10x Hitachi 4TB NAS drives (64MB cache, 7,200 RPM), which easily sustain about 800 MB/s when destaging the SSD cache.

ESXi forces sync writes on for all NFS writes. You need to get a decent SLOG SSD to fix this.
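On a ZFS-backed NFS datastore, adding one is a one-liner; a minimal sketch, assuming a hypothetical pool named tank and hypothetical NVMe device paths:

  # Attach a dedicated log device to an existing pool (hypothetical names)
  zpool add tank log /dev/nvme0n1
  # ...or mirror two devices, so a dying SLOG can't take in-flight sync writes with it:
  zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1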
Excellent feedback, much appreciated!

Mate, you definitely shouldn't expect any sort of decent performance from something like Storage Spaces as a foundation of an ESXi datastore.
The best alternative to proper enterprise storage solutions like EMC or HP would be iSCSI or NFS over RDMA. Unfortunately, Mellanox has really poor support for ESXi in general, and, of late, they seem to have stopped releasing new drivers that include any sort of RDMA-accelerated transport. Maybe there is something both Mellanox and VMware are not telling us, and the stock VMware software iSCSI initiator and/or NFS combined with the out-of-the-box Mellanox drivers (IB? ETH?) do utilise RDMA, but I highly doubt it. I'm pretty sure they would have been broadcasting it from every possible IT news outlet if that were the case.
Therefore, you're left with the following choices. Both assume an iSCSI datastore; I'm not aware of any way to utilise an NFS-based datastore with RDMA.
- SRP - use the 1.8.2.4 (for ESXi 5.x) or 1.8.2.5 (for ESXi 6.0) drivers with one of the following targets: Solaris 11.x (any variant), or Linux with either inbox/LIO or Mellanox OFED/SCST (you can't use Mellanox OFED with LIO, as they stripped SRP support out of it). In any case, you can use it only over Infiniband (not Ethernet), which means you need either a managed switch or OpenSM running on one of the computers connected to your fabric.
- iSER - use either the 1.8.3 drivers (for ESXi 5.x, or 6.0 with forced installation) over Infiniband, or 1.9.x.x (for 5.x or 6.0) over Ethernet. You will have to use a Linux target; both LIO and SCST should work, with both inbox and Mellanox OFED drivers. The second option requires an Ethernet switch, though (see the target sketch after this list).
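On the Linux side, the LIO/iSER target setup is fairly compact. A minimal sketch using targetcli, assuming a hypothetical backing device /dev/sdb, IQN and portal IP (ACLs and authentication omitted for brevity):

  # Expose a block device as an iSCSI LUN and flip the portal to iSER
  targetcli /backstores/block create name=vmstore dev=/dev/sdb
  targetcli /iscsi create iqn.2003-01.org.linux-iscsi.nas:vmstore
  targetcli /iscsi/iqn.2003-01.org.linux-iscsi.nas:vmstore/tpg1/luns create /backstores/block/vmstore
  # Remove the default catch-all portal, if your targetcli auto-created one
  targetcli /iscsi/iqn.2003-01.org.linux-iscsi.nas:vmstore/tpg1/portals delete 0.0.0.0 3260
  targetcli /iscsi/iqn.2003-01.org.linux-iscsi.nas:vmstore/tpg1/portals create 10.0.0.10 3260
  targetcli /iscsi/iqn.2003-01.org.linux-iscsi.nas:vmstore/tpg1/portals/10.0.0.10:3260 enable_iser true
  targetcli saveconfig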
Also, as you might have noticed, none of the above supports ESXi 6.5, and I can confirm forced installation doesn't help, as all of Mellanox's older ESXi drivers collide with 6.5 components one way or another.
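For reference, a forced installation along these lines (hypothetical bundle path) does complete, but the old modules then clash with the 6.5 components:

  # Force-install an old Mellanox driver bundle on ESXi (hypothetical path)
  esxcli software vib install -d /vmfs/volumes/datastore1/MLNX-OFED-ESX-1.8.2.5.zip --no-sig-check -f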
I don't quite understand why you would even consider running an NFS server on Windows. It's basically forcing a technology onto a platform it hasn't been designed for. You'd be so much better off with any variant of Solaris, FreeNAS or Linux running NFS.

So I'm pretty sure I missed something critical when testing NFS performance between Windows & ESXi (sync writes = on).
My ZFS reading/testing basically spelled it out: for the same reasons ZFS struggles with small random writes when sync is enabled, an ESXi host NFS-mounting a Windows share struggles with that same workload.
i.e. in some cases powering on a VM from an NFS datastore was fast and the overall feel of the VM was swift, but any time I tried to vMotion it, throughput would drop to 3-30 MB/s at most, and the operation would often time out and fail. Same for deploying an OVF to an NFS datastore.
I think it may be as simple as disabling sync writes and running in a non-POSIX-compliant mode. Of course there are risks associated with this, and it's not recommended for critical production workloads. But if you're stuck with Windows for your NFS export, I'd very much look into disabling sync.
It's effectively the same in ZFS: sync is designed to protect you, but if you want, you can disable it and reap the performance benefits in any IO workload.
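On ZFS that toggle is per-dataset and reversible; a minimal sketch, assuming a hypothetical dataset tank/esxi-nfs backing the datastore:

  # Acknowledge sync writes immediately (fast, but in-flight writes are lost on power failure)
  zfs set sync=disabled tank/esxi-nfs
  # Restore the default behaviour once done experimenting
  zfs set sync=standard tank/esxi-nfs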
Well, a good majority of the time I'm hitting the NAS from my primary Windows 10 workstation, which just so happens to have a 40Gb QDR IB HCA as well. The nice part of Windows Server 2012/2016 is SMB3 support with RDMA, and the driver support is a win. It's just that ESXi doesn't use SMB3, of course, so I want to share the 'large pool of disks with SSD cache' with my ESXi lab too. That leaves NFS or iSCSI, but iSCSI requires that I carve out and dedicate space, whereas an NFS share would allow me to share the same volume/file system between CIFS and NFS. In a perfect world, it would work.
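For what it's worth, mounting such an export on the ESXi side is a single command; a minimal sketch with hypothetical addresses and names:

  # Mount the NFS export as a datastore on the ESXi host (hypothetical IP/share/name)
  esxcli storage nfs add -H 10.0.0.10 -s /exports/vmpool -v nas-nfs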
Now, while there is a long-standing dispute between iSCSI and NFS (on top of one of the well-established platforms above) as the foundation of an ESXi datastore, there is one crucial thing that makes a huge difference when your transport is Infiniband: RDMA support. And, in this department, you're left just with iSCSI (either in the form of SRP or iSER). Yes, you can enable NFS support for RDMA when both the client and the server are Linux, but, remember, your client is ESXi, and this limits your choices quite severely. I'd suggest you read my previous post again to understand what options are available at the moment.
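Just to illustrate what the Linux-to-Linux case looks like (which, again, doesn't help with an ESXi client) - a minimal sketch, assuming a hypothetical export /exports/vmpool and the standard NFS/RDMA port 20049:

  # Server: load the RDMA transport and tell nfsd to listen on it
  modprobe svcrdma
  echo "rdma 20049" > /proc/fs/nfsd/portlist
  # Client: load the client-side transport and mount over RDMA
  modprobe xprtrdma
  mount -o rdma,port=20049 10.0.0.10:/exports/vmpool /mnt/vmpool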
I'm still not 100% sure it's worthwhile to have an SLOG on top of an SSD-only array; you might want to experiment with that.
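Experimenting is cheap, since log devices can be added to and removed from a live pool; a minimal sketch with hypothetical names:

  # Add the candidate SLOG, benchmark, then pull it and benchmark again
  zpool add tank log /dev/nvme0n1
  zpool remove tank /dev/nvme0n1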
I think we won't know until actual measurements are performed. The idea of an SLOG is to make sure access to the main drives in the pool isn't broken into small transactions when sync writes are requested by the client. So, potentially, an SLOG can have a positive effect even when it's not faster than the main pool drives. But this is just theory, and we don't really know how it's going to look in the real world...
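On the "actual measurements" front, a sync-heavy fio run is a reasonable probe, since small random writes fsync'd after every write are exactly the pattern an SLOG exists for; a minimal sketch, assuming a hypothetical test file on the pool:

  # 4K random writes with an fsync after each one
  fio --name=synctest --filename=/tank/testfile --size=1G \
      --rw=randwrite --bs=4k --ioengine=sync --fsync=1 \
      --runtime=60 --time_based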
Looks like it depends on the speed of the SSD pool vs. the SLOG. My (slow) two-drive SSD pool profited from the 750 (NVMe) SLOG in all tests but sequential writes.
Will try to test the S3700s tomorrow.