You could try CentOS instead of Ubuntu for ZFS and then retest your iSER target (but I'd recommend SCST over LIO).
Btw, could you share some benchmark results for an iSER ramdisk target?
There are a couple of issues with the benchmarking currently.
I'm focusing for the moment on the CX3Pro.
I've tried a number of block devices, and strangely enough, ZFS is by far the easiest to use. For whatever reason, ESXi will mount it, whereas ESXi has issues with the same drives passed through as a LIO or SCST LUN, or with mdadm raid0 stripes of them as the LUN. ESXi wouldn't mount the LIO or SCST target because it reported the 4K physical block size of the ioDrive 2 cards. There were issues even when I formatted the ioDrive with a 512B block size.
I'd prefer to use zvols eventually regardless, for the sake of resilience, but ZFS is the bottleneck.
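For anyone wanting to try the zvol route, here's a rough sketch of carving one out and handing it to LIO as a block backstore (pool and dataset names are placeholders; tune volblocksize for your workload):

```shell
# Create a sparse 100 GiB zvol on pool "tank" with an 8K volume block size
zfs create -s -V 100G -o volblocksize=8K tank/iser-vol0

# Expose the zvol's block device as an LIO backstore
# (on Linux/ZoL the device appears under /dev/zvol/<pool>/<dataset>)
targetcli /backstores/block create name=zvol0 dev=/dev/zvol/tank/iser-vol0
```

From there you'd map it into a target's LUN list as usual.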
I'd love to switch to FreeBSD, but I'm also using an ioDrive 2 for L2ARC, and I don't think there are FreeBSD drivers for it on a usable release: there look to be ioDrive drivers for FreeBSD 8/9, but iSER support only arrived in FreeBSD 11/12.
Heck, I'd love to use illumos. Last time I tried OmniOS it didn't recognize the Mellanox CX3 card.
Anyway, yeah, a ramdisk target is what's needed for benchmark comparisons and testing. That said, it's better to know early if there are any showstoppers to using ZFS.
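For reference, standing up a ramdisk-backed LIO target for this kind of testing looks roughly like the below (IQN and sizes are placeholders; the iSER toggle needs an RDMA-capable NIC like the ConnectX-3, and I haven't verified the exact enable_iser syntax against every targetcli version):

```shell
# Create a 4 GiB ramdisk backstore
targetcli /backstores/ramdisk create name=ram0 size=4g

# Create an iSCSI target and map the ramdisk as LUN 0
targetcli /iscsi create iqn.2003-01.org.example.target:ram0
targetcli /iscsi/iqn.2003-01.org.example.target:ram0/tpg1/luns \
    create /backstores/ramdisk/ram0

# Add a portal, then flip it from plain iSCSI to iSER
targetcli /iscsi/iqn.2003-01.org.example.target:ram0/tpg1/portals \
    create 0.0.0.0 3260
targetcli /iscsi/iqn.2003-01.org.example.target:ram0/tpg1/portals/0.0.0.0:3260 \
    enable_iser true

targetcli saveconfig
```

That takes ZFS entirely out of the picture, so any remaining bottleneck is the network/target stack itself.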
So, yeah, I'll get some ramdisk benchmarks, and I'd like a few volunteers to help validate.
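So results are comparable across testers, something like this fio run from a Linux initiator against the mounted LUN would do (device path is a placeholder; adjust block size / queue depth to taste):

```shell
# 4K random read, direct I/O, 60 s, QD32 x 4 jobs against the iSER LUN
fio --name=randread --filename=/dev/sdX --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based \
    --ioengine=libaio --group_reporting
```

On the ESXi side the rough equivalent would be an I/O Analyzer or HCIBench run, but the Linux numbers are easier to compare apples-to-apples.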
For those willing to test: if you have ConnectX-3 cards, an ESXi 6.7U1 iSER client, an Ubuntu 18.04 iSER target (or another distro with LIO/SCST), and an Ethernet switch that supports priority flow control, please PM me to discuss helping test! I only need a couple of people to help validate.