EDITED.
One more test: I temporarily put an Intel 320 SSD on the H200 HBA and used it as a SLOG. I thought I was getting somewhere, but forgot I still had sync disabled. With sync=standard I still only get 6 MB/s write at 13 IOPS. Not sure what to try next.
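For anyone following along, the sync property toggling described above looks roughly like this. This is a hedged sketch: "tank/vmstore" is a placeholder pool/dataset name, not from my actual setup.

```shell
# Placeholder dataset name "tank/vmstore"; substitute your own.
zfs get sync tank/vmstore           # confirm the current setting
zfs set sync=standard tank/vmstore  # honor sync writes (the NFS/ESXi default path)
zfs set sync=disabled tank/vmstore  # testing only: ignores sync, unsafe for VM data
```

Note that sync=disabled is only useful for isolating whether sync writes are the bottleneck; leaving it off risks VM corruption on power loss.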
I can't pass the Fusion IO through directly to FreeNAS because the drivers aren't compiled in (although I read the paid version from iXsystems includes them). Maybe I could try with my new NVMe drive, but that wouldn't really work because I still need something as a local datastore.
So, I'm suspicious...
Ok, so I was pretty close on the numbers I mentioned above. I don't have a Win7 license for a VM to do an apples-to-apples comparison.
With sync=disabled, the 4 x 2 WD RED drives (four mirrored pairs) can do 357 MB/s at 715 IOPS (this is what it reports while running the max throughput test, not...
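A quick sanity check on the figures quoted above: 357 MB/s at 715 IOPS works out to roughly half a MiB per I/O, which is consistent with a large-block sequential write test rather than small random writes.

```shell
# Average I/O size implied by 357 MB/s at 715 IOPS, in KiB.
# (357 MB/s expressed as KiB/s, divided by IOPS.)
echo $(( 357 * 1024 / 715 ))   # prints 511, i.e. ~512 KiB per write
```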
What I'm roughly quoting is supposed to be sequential write. Originally, when I was first setting up a long time ago, I was just doing a dd in a Linux VM, but now I've moved to the I/O Analyzer VM and run its max write throughput test, so I think this should be sequential to maximize the...
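The dd approach mentioned above is easy to reproduce. A rough sketch, with TESTDIR as a placeholder; point it at a mount backed by the NFS datastore to exercise the sync write path (conv=fdatasync makes dd flush before reporting, so the number isn't just cache speed):

```shell
# Rough sequential-write test, similar to running dd inside a Linux VM.
# TESTDIR is a placeholder path, not from the original posts.
TESTDIR=${TESTDIR:-/tmp}
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=256 conv=fdatasync 2>&1 | tail -1
rm -f "$TESTDIR/ddtest"
```

This is single-threaded and only approximates the I/O Analyzer workload, but it's a quick way to compare sync=standard against sync=disabled.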
I see that, but I thought I saw that he couldn't pass it through natively and was using a virtual disk to make it available to the VM. That's where my question is, because I can't seem to get any decent performance that way.
@gea, are you doing anything special to get good performance out of the vmdk-based SLOG? Is the VM using the LSI Logic Parallel, LSI Logic SAS, or VMware Paravirtual controller?
Mine seems to be limited (in ESXi with FreeNAS); see my full post here...
Quick summary of my setup: I have an ESXi (currently 5.5) all-in-one on a Dell R510 with an H200 and 8 x WD REDs, using PCIe passthrough and 32 GB RAM for FreeNAS. I understand VMware forces sync writes over NFS, and that's really slow.
I have a Fusion IO Duo (which is really 2 x 320GB on a single PCIe...