OK so I made some changes to the hardware. A new 750 is installed, so the total is now 5; the QLE8152 adapter got swapped out for a dual-port Mellanox ConnectX-2; RAM has been bumped up to 256 GB; and a second E5-2660 v2 is in the box so I can use all 6 PCIe slots.
The network issue has been solved by the adapter swap. I'm now getting a full 9.40 Gbps with iperf, both ways, on both adapters.
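For anyone wanting to reproduce the test, this is roughly how I'd check both directions with iperf3 (hostname is a placeholder, adjust to your setup):

```shell
# On the storage box: start the server
iperf3 -s

# On the client: forward direction (client -> server)
iperf3 -c storagebox

# Reverse direction (server -> client) without swapping roles
iperf3 -c storagebox -R

# Repeat against the second port's IP to confirm both links hit line rate
```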
I have configured the 750 SSD array as a raidz, not the raid0 I originally planned. This might change later, but for now I'm just in a test phase.
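For reference, creating a 4-disk raidz pool looks something like this (pool name and device paths are placeholders, not my actual ones; ashift=12 assumes 4K sectors):

```shell
# Create a raidz (single parity) pool from 4 NVMe SSDs
zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/nvme-SSD1 \
    /dev/disk/by-id/nvme-SSD2 \
    /dev/disk/by-id/nvme-SSD3 \
    /dev/disk/by-id/nvme-SSD4

# Verify layout and health
zpool status tank
```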
I did a single CrystalDiskMark test (will do more later with Anvil and ATTO) and got some "ok" results. I'm testing this in an ESXi 6.5 VM over iSCSI. I changed the path selection policy of the LUN to round-robin to load-balance across both adapters. However, because I'm running the newest patch of ESXi, my VM suffers from horrible disk performance from time to time; see more here if you also experience it:
ESXi 6.5 Slow vms, High "average response ... |VMware Communities
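In case it helps anyone, switching a LUN to round-robin can also be done from the ESXi shell; this is a sketch with a placeholder device ID (naa.xxxx), not my actual LUN:

```shell
# Find the iSCSI device ID and its current path selection policy
esxcli storage nmp device list

# Set the path selection policy to round-robin for that device
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

# Optionally switch paths every I/O instead of every 1000 IOPS,
# which tends to spread load across both adapters more evenly
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxx --type=iops --iops=1
```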
Back on topic, here's the CrystalDiskMark performance...
I saw seq read and write as high as 1.6 GB/s, but it looks like the ESXi bug hits the write performance especially hard... So far so good; quite impressed with the raidz performance.
Currently just 4 of the SSDs are in the pool, but I might redo it to include the 5th.
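Since a raidz vdev can't simply have a disk added to it, "redoing it" would mean destroying and recreating the pool. Roughly (placeholder names again, and this wipes the pool, so only while it's still test data):

```shell
# Destroy the existing 4-disk pool (all data is lost)
zpool destroy tank

# Recreate as a 5-disk raidz with the new SSD included
zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/nvme-SSD1 \
    /dev/disk/by-id/nvme-SSD2 \
    /dev/disk/by-id/nvme-SSD3 \
    /dev/disk/by-id/nvme-SSD4 \
    /dev/disk/by-id/nvme-SSD5
```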