OK, to sum it up so far:
CentOS 7 - kernel-ml 4.9.x + MOFED 4.0 + SCST trunk (3.3.x) on top of ZFS 0.7.0-rc3 - functional.
Gentoo (yeah, I got fed up and went back to the roots) - gentoo-sources + SCST trunk + OFED (somehow in-tree) + latest ZFS - functional.
Not Ubuntu per se, but the entire clusterfuck brought on by kmod/DKMS - which is cool when it works, but a pain to work around when it doesn't.
ESXi:
The stable 6.0 - this is actually odd, as 1.8.3.0 seems to start acting up at times.
uname -a :
VMkernel ... 6.0.0 #1 SMP Release build-4600944 Nov 3 2016 22:17:36 x86_64 x86_64 x86_64 ESXi
esxcli software vib list | grep -i mel
nmst 3.8.0.56-1OEM.600.0.0.2295424 MEL PartnerSupported 2017-01-19
mft 3.7.1.3-0 Mellanox PartnerSupported 2017-01-19
net-ib-addr 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-ib-cm 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-ib-core 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-ib-ipoib 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-ib-mad 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-ib-sa 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-ib-umad 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-mlx4-core 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-mlx4-ib 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
net-mst 3.7.1.3-1OEM.550.0.0.1331820 Mellanox PartnerSupported 2017-01-19
net-rdma-cm 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
scsi-ib-iser 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
scsi-ib-srp 1.8.3.0-1OEM.500.0.0.472560 Mellanox PartnerSupported 2017-01-18
esxcfg-scsidevs -a :
vmhba_mlx4_0.1.1mlx4_core link-n/a gsan.8100000000000000xxxxxxxxxxxxx (0000:03:00.0) Mellanox Technologies MT27500 Family [ConnectX-3]
vmhba196608ib_iser online iqn...:xx:xx Mellanox iSCSI over RDMA (iSER) Adapter
esxcfg-nics -l :
vmnic_ib0 0000:03:00.0 ib_ipoib Up 56252Mbps Full xx:.. 4092 Mellanox Technologies MT27500 Family [ConnectX-3]
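For reference, the fields in that `esxcfg-nics -l` line can be pulled apart programmatically to confirm the IPoIB link came up at FDR speed with the expected MTU. A minimal Python sketch - the sample line mirrors the output above, with the MAC replaced by a placeholder:

```python
# Parse one `esxcfg-nics -l` output line into its fields: name, PCI address,
# driver, link state, speed, duplex, MAC, MTU, and free-form description.
line = ("vmnic_ib0 0000:03:00.0 ib_ipoib Up 56252Mbps Full "
        "xx:xx:xx:xx:xx:xx 4092 Mellanox Technologies MT27500 Family [ConnectX-3]")

fields = line.split()
nic = {
    "name":       fields[0],
    "pci":        fields[1],
    "driver":     fields[2],
    "link":       fields[3],
    "speed_mbps": int(fields[4].rstrip("Mbps")),  # strip the unit suffix
    "duplex":     fields[5],
    "mac":        fields[6],
    "mtu":        int(fields[7]),
    "description": " ".join(fields[8:]),
}

print(nic["driver"], nic["speed_mbps"], nic["mtu"])
```

In a real check you'd feed it the live output (e.g. via SSH) instead of a pasted sample; the point is just that 56252 Mbps / MTU 4092 on the `ib_ipoib` driver is what a healthy FDR IPoIB uplink looks like here.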
ESXi 6.5 - mind you, 1.8.2.5 is a newer release (15.03.2016, if I'm not mistaken) with the key element: it's built against the new vmkapi.
uname -a :
VMkernel .... 6.5.0 #1 SMP Release build-4887370 Jan 5 2017 19:17:59 x86_64 x86_64 x86_64 ESXi
esxcli software vib list | grep -i mel
net-ib-addr 1.9.10.6-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-ib-cm 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-ib-core 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-ib-ipoib 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-ib-mad 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-ib-sa 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-ib-umad 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-memtrack 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-mlx-compat 2.4.0.0-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-mlx4-core 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-mlx4-en 1.9.10.6-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-mlx4-ib 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
net-rdma-cm 1.9.10.6-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
scsi-ib-iser 1.9.10.6-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
scsi-ib-srp 1.8.2.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2016-12-04
Looks like I was too lazy to remove some inbox leftovers - those other versions aren't actually used anyway, as we all know.
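A quick way to spot such leftovers is to group the VIB names by their base version - anything not matching the OFED bundle you actually installed sticks out immediately. A minimal Python sketch over a trimmed sample of the list above (in practice you'd feed it the full `esxcli software vib list | grep -i mel` output):

```python
from collections import defaultdict

# Trimmed sample of `esxcli software vib list | grep -i mel` from the 6.5 box:
# columns are name, version, vendor, acceptance level, install date.
vib_list = """\
net-ib-addr  1.9.10.6-1OEM.600.0.0.2494585  MEL  PartnerSupported  2016-12-04
net-ib-cm    1.8.2.5-1OEM.600.0.0.2494585   MEL  PartnerSupported  2016-12-04
net-mlx4-en  1.9.10.6-1OEM.600.0.0.2494585  MEL  PartnerSupported  2016-12-04
scsi-ib-srp  1.8.2.5-1OEM.600.0.0.2494585   MEL  PartnerSupported  2016-12-04
"""

by_version = defaultdict(list)
for line in vib_list.splitlines():
    name, version = line.split()[:2]
    # keep only the driver version, dropping the "-1OEM..." build suffix
    by_version[version.split("-", 1)[0]].append(name)

for version, names in sorted(by_version.items()):
    print(version, "->", ", ".join(names))
```

Here the 1.9.10.6 entries are the leftovers sitting next to the 1.8.2.5 bundle; removal itself would still be done by hand with `esxcli software vib remove -n <name>`.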
esxcfg-scsidevs -a :
...
vmhba33 mlx4_core link-n/a gsan.810000000000000010e0000xxxxxxx (0000:04:00.0) Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]
vmhba34 mlx4_core link-n/a gsan.810000000000000010e0000xxxxxxx (0000:04:00.0) Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]
esxcfg-nics -l :
...
vmnic1000202 0000:04:00.0 ib_ipoib Up 40000Mbps Full xx:.. 4092 Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]
vmnic2 0000:04:00.0 ib_ipoib Up 40000Mbps Full xx:.. 4092 Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]
So far the performance seems to be there - I don't have powerful machines available as targets to test with, as I'm moving data around at the moment.
Besides that, until recently I was running OmniOS on my main storage (its not working with ConnectX-3, or with NVMe above 1.1, made me try Linux there too), so a comparison would be a bit of a stretch.
Let me know what else I could help with.
@mpogr - It's funny, I've just realized you're the one with the thread on the Mellanox community site. I've stumbled upon it many times and, for the record, I fully agree - no point in restating the facts. Sometimes they seem like idiots - you get IPoIB and SRP but no iSER... which, as a company, you say is the future... why?!
The situation described there is exactly what prompted me to look into building an ESXi driver for IB support myself. Let's just say I haven't gotten very far - it's time- (and nerve-) consuming.