MASSIVE thanks to
mpogr for helping me with the following setup. It wasn't easy, and I was failing for days without him; it probably would have been weeks, or GTFU. But now I have something to work/test/compare with!!!
RedHat 7.3 Server [Linux 3.10.0-514.el7.x86_64]
--- Latest as of today, fully updated/patched.
Mellanox OFED Drivers [MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-x86_64]
--- Drivers custom-built/installed with --add-kernel-support
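For anyone reproducing this, the rebuild step looks roughly like the following (a sketch; the exact bundle filename/paths are from the version above and may differ on your box):

  # Unpack the OFED bundle and rebuild its modules against the running RHEL 7.3 kernel
  tar xzf MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-x86_64.tgz
  cd MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-x86_64
  ./mlnxofedinstall --add-kernel-support
  # Restart the IB stack so the freshly built modules load
  /etc/init.d/openibd restart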
SCST 3.2.x
--- Custom-built/installed leveraging the MLNX_OFED SRP drivers that were integrated into the kernel.
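The SCST build itself is just the usual out-of-tree module dance (sketch from memory; check the SCST README for your exact 3.2.x checkout, as the make targets may vary):

  # Build the release-mode SCST core, the ib_srpt target driver, and scstadmin
  cd scst-3.2.x
  make 2release
  make scst scst_install
  make srpt srpt_install
  make scstadm scstadm_install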
iSCSI w/ SRP to ESX 6.0 u2 [using the v1.8.2.5 OFED drivers for ESX 5.x]
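For the curious, the SRP export to ESX boils down to a few lines of /etc/scst.conf. This is an illustrative sketch, not my exact config: the device/zvol names are made up, and on recent SCST the SRP target name is derived from the HCA port GUID rather than the placeholder shown here:

  # /etc/scst.conf -- minimal SRP target exporting one zvol
  HANDLER vdisk_blockio {
          DEVICE disk01 {
                  filename /dev/zvol/tank/esx-lun0
          }
  }
  TARGET_DRIVER ib_srpt {
          TARGET ib_srpt_target_0 {
                  enabled 1
                  LUN 0 disk01
          }
  }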
ZoL latest
---
via ConnectX-2 40Gb QDR HCAs attached to a Mellanox 4036 switch.
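Easy enough to sanity-check that the HCAs actually came up at QDR before blaming anything else (standard OFED tools):

  # Confirm port state and rate on the ConnectX-2
  ibstat
  # Expect: State: Active, Physical state: LinkUp, Rate: 40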
Initial Test Benchmarks.
Same Hitachi RAIDZ1 (4+1)*2 setup, with Intel S3710s partitioned for both ZIL and L2ARC, but this time with the partitions properly aligned to 4K sectors, ruling out any potential impact there.
I had a 'user issue' trying to partition/align in Solarish (the default GUI used MBR, giving misaligned partitions), but it turns out that either wasn't a factor, or it is a factor and my bottleneck is somewhere else.
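For reference, this is roughly how the Linux-side partitioning looks with proper alignment (sgdisk aligns to 1MiB/2048-sector boundaries by default, which covers 4K; the device name, sizes, and pool name are examples, not my exact layout):

  # Carve an S3710 into SLOG + L2ARC partitions on 1MiB boundaries
  sgdisk -n 1:0:+8G -t 1:bf01 /dev/sdX    # partition 1: ZIL/SLOG
  sgdisk -n 2:0:0   -t 2:bf01 /dev/sdX    # partition 2: rest as L2ARC
  # Attach them to the pool
  zpool add tank log /dev/sdX1
  zpool add tank cache /dev/sdX2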
So far, nearly identical to my OmniOS+NappIt results (using the same disks, in the exact same way ZFS-wise, with the same ZFS settings).
But there's certainly a massive difference in kernel/OS between OmniOS and RedHat Server.
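Easy to verify the pools really are configured identically on both OSes; the property list here is just the usual suspects for this kind of workload:

  # Compare the ZFS tunables that matter
  zfs get recordsize,compression,sync,logbias,primarycache,secondarycache tank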
Definitely noticed the ZFS & iSCSI/SRP traffic being balanced across all 16x CPUs.
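That balancing is easy to watch live while a benchmark runs (mpstat ships in the sysstat package on RHEL):

  # Per-CPU utilization, refreshed every 2 seconds
  mpstat -P ALL 2
  # And how the mlx4 HCA interrupts spread across cores
  grep mlx4 /proc/interrupts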
So my assumption that 1.7GB/s was an upper cap due to only 1/3rd of my memory bandwidth being used under the SunOS 5.x kernel didn't pan out.