Just some additional testing this morning... I'm likely going to tear everything down and try to optimize/tune a bit more. I also want to test more against the Intel SSD RAID1/0 pool vs. RAIDz1 by itself. For the fun of it, I might re-test everything again with the (4) Intel SSDs as a dedicated ZIL (1.6TB). It'll be the ultimate waste of capacity, but just to see if I can squeeze the IOPS out of 4 SSDs. It will also tell me whether my mis-alignment issue is actually an issue, or if the double-dipping as L2ARC takes away from the ZIL itself.
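For anyone curious, the dedicated-ZIL experiment would look roughly like this. This is just a sketch: the pool name (tank) and device names (da0-da3, gpt/...) are placeholders, assuming the four SSDs get repurposed whole instead of partitioned:

```shell
# Remove the existing partitioned cache/log vdevs first
# (partition labels here are placeholders for the real ones):
zpool remove tank gpt/ssd-log0 gpt/ssd-log1
zpool remove tank gpt/ssd-cache0 gpt/ssd-cache1

# Add all four Intel SSDs as dedicated log vdevs:
# two mirrored pairs, striped, for maximum sync-write IOPS.
zpool add tank log mirror da0 da1 mirror da2 da3

# Confirm the new layout:
zpool status tank
```

Two striped mirrors should give the best sync-write throughput while still surviving a single SSD failure, at the cost of the L2ARC those partitions were providing.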
But here's the Hitachi pool still, with the Intel SSDs still partitioned as cache/logs as above, now with LZ4 enabled as well.
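Enabling LZ4 is a one-liner; a minimal sketch, where the pool/dataset names are placeholders for my actual iSCSI-backing volume:

```shell
# Enable LZ4 on the dataset backing the iSCSI zvol
# (tank/iscsi-vol is a placeholder name)
zfs set compression=lz4 tank/iscsi-vol

# Verify it took effect (only newly written blocks get compressed)
zfs get compression tank/iscsi-vol
```

Note that existing data stays uncompressed; only writes after the property change go through LZ4.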
Hitachi Pool (RAIDz1*2 with SSD) – ZFS Volume presented via iSCSI/iSER with MPIO
LZ4 Compression Disabled
Sync Disabled (QD4)-------------Sync Always (QD4)---------------Sync Always (QD10)
LZ4 Compression Enabled
Sync Disabled (QD4)-------------Sync Always (QD4)---------------Sync Always (QD10)
No noticeable difference with or without LZ4 via ATTO Disk Benchmark.
Most likely the dataset is incompressible.
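One quick way to check that theory (dataset name is again a placeholder):

```shell
# If the benchmark data were compressible, this would report > 1.00x
zfs get compressratio tank/iscsi-vol
```

A ratio right at 1.00x would confirm the ATTO test file is effectively incompressible, which would explain identical numbers with LZ4 on or off.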
Sync Disabled (QD16)-----------Sync Always (QD16)
Playing with CrystalDiskMark on the Windows VM as well as ATTO, the numbers are pretty much in line with the previous ATTO benchmarks, with perhaps some slower results mixed in. These are probably highlighting more of the 'real worst case' scenarios.
It will be interesting, after the pool rebuild and separate retest of everything, to see how the individual tests fare.
Sync Disabled---------------------Sync Always
Playing with AS SSD Benchmark, bandwidth vs. IOPS.
Again, this tool is showing what appear to be more realistic worst-case values.
Last but not least... Playing with Windows Explorer file transfers....
This is where I'm confused a bit....
Knowing the VM's VMDK is hosted on VMFS5, via iSCSI, on ZFS, and seeing the various benchmarks push single- and multi-threaded workloads at various IO sizes, I would expect similar results inside Windows Explorer copying large-block sequential data to the 1MB-block VMFS file system.
When I start to copy a 4GB file, it will go at about 1GB/s for the first 2 seconds, then drop down to 30MB/s for a bit, and then trickle back up quickly to 500-900MB/s, then back down. I'm not sure if I'm 100% optimized on my IPoIB and iSCSI setup, so I'm troubleshooting my tunings a bit more to make sure my tests are valid.
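Part of that troubleshooting is watching the pool live while the copy runs, to see whether the stalls line up with the log devices saturating or with periodic transaction-group flushes to the spinning disks. A simple sketch (pool name is a placeholder):

```shell
# Per-vdev throughput, refreshed every second, while the
# Explorer copy is in flight. Watch whether the SLOG devices
# go flat (saturated) or the HDDs spike (txg flush) when the
# transfer speed craters.
zpool iostat -v tank 1
```

The initial 1GB/s burst then the drop to 30MB/s smells like write buffering filling up and the pool then draining at its sustained rate, which this should make visible.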
I created a 48GB RAM drive on my Windows VM (it benchmarks at 4.9GB/s reads/writes).
Copied the file from RAM Drive to C:\
(Hitachi pool w/ SSDs, Sync=Always)
But just when I thought I had it working how I wanted...
Sure enough, there's a weird, inconsistent performance issue to be solved.