ConnectX-2 and ESXi 6.0

Dravor

New Member
Aug 17, 2015
*EDIT*

ESXi 6.0 users, this is what did it:

Code:
esxcli software vib remove -f -n nmlx4-core -n nmlx4-en -n nmlx4-rdma

Missed that in the first page post =)
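For anyone following along, the VIB removal typically only takes effect after a reboot, and it's easy to verify afterwards. A minimal sketch of the full flow, using the commands from this thread:

```shell
# Remove the ESXi 6.0 inbox nmlx4 drivers, as above:
esxcli software vib remove -f -n nmlx4-core -n nmlx4-en -n nmlx4-rdma
# A reboot is typically needed before the old modules are actually unloaded:
reboot
# After the host is back up, the inbox drivers should no longer be listed:
esxcli software vib list | grep nmlx4
```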
 

Dravor

This was the issue!

Remove EVERYTHING I've installed (and the inbox drivers):
Code:
esxcli software vib remove -f -n net-ib-addr -n net-ib-cm -n net-ib-core -n net-ib-mad -n net-ib-sa -n net-ib-umad -n net-mlx4-core -n net-mlx4-en -n net-mlx4-ib -n net-mst -n net-rdma-cm -n scsi-ib-iser -n ib-opensm -n nmlx4-core -n nmlx4-en -n nmlx4-rdma -n scsi-ib-srp -n net-ib-ipoib -n mft -n nmst
Install just the OFED:
Code:
esxcli software vib install -d /var/tmp/MLNX-OFED-ESX-1.8.2.4-10EM-500.0.0.472560.zip
Link lights up. No OpenSM required (since my Windows host is running it). Woohoo! Now to do it on the other couple of hosts and start testing speeds.
Are these what you needed to remove?

Code:
[root@ESXi1:/opt/mellanox/bin] esxcli software vib list | grep nmlx4
nmlx4-core   3.0.0.0-1vmw.600.0.0.2494585   VMware   VMwareCertified   2015-04-18
nmlx4-en     3.0.0.0-1vmw.600.0.0.2494585   VMware   VMwareCertified   2015-04-18
nmlx4-rdma   3.0.0.0-1vmw.600.0.0.2494585   VMware   VMwareCertified   2015-04-18
 

Dravor

So I can see the NICs in the ESXi host.

Windows, for some reason, has 2 IPoIB NICs and 2 IPoIB Virtual NICs.

There is connectivity, since the ESXi host shows one port as connected and one as disconnected, just as they are cabled.

I have 2 instances of OpenSM running on Windows, each tied to the GUID of one of the adapters.

Any insight? I'm not using a switch here, but I have a feeling something on the Windows side is off.

Do I need to set the partition config on the Windows side using PartMan?

Thanks!
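For reference, tying an OpenSM instance to a specific adapter is usually done by passing that port's GUID on the command line. A minimal sketch with placeholder GUIDs (the real ones can be read with ibstat):

```shell
# Two OpenSM instances, one per port GUID (GUIDs below are hypothetical):
opensm --guid 0x0002c90300001111 -B   # -B runs it in the background
opensm --guid 0x0002c90300002222 -B
```

On Windows the WinOF OpenSM usually runs as a service, but the same --guid idea applies when binding each instance to one adapter.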
 

whitey

Moderator
Jun 30, 2014
OK apparently I messed this up. Help a bro out.

I have an ESXi 6.0U2 host w/ a MT26428 (1 port SFP+, 1 port QSFP QDR IB). I ran the following:

Code:
esxcli software vib remove -f -n nmlx4-core -n nmlx4-en -n nmlx4-rdma
then
Code:
esxcli software vib install -d /vmfs/volumes/iso/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

rebooted, and now I see vmnic2 in a down state (hooked to my 10G eth switch, was working previously w/ in-box drivers), as well as two additional storage adapters that were not there before, labeled vmhba33/vmhba34.

My intentions are to use a single port in eth 10G mode connected to my 10G switch and the other QSFP port connected to my IS5022 as 40G IPoIB.

Is this possible with what I have just done or is this only for IB mode iSER/SRP?

I am so lost hah.

EDIT: So I thought vmnic2 was hooked to my 10G switch; apparently that was an interface connected to my IB switch. I just installed the ib-opensm .vib, and now I get a 40G link on vmnic2. I guess ethernet mode/the other port is not happy, as I no longer see those interfaces. Any way to get these both working simultaneously?
 

whitey

I see this in some release notes:

On VMware ESXi Server, IPoIB supports Unreliable Datagram (UD) mode only; note that Reliable Connected (RC) mode is not supported.

WEAK SAUCE!
 

markpower28

Active Member
Apr 9, 2013
415
103
43
Whitey:

In ESXi, because of the driver limitation, it's one or the other. I am not sure you can have both IB and Ethernet at the same time, because once you remove the driver it removes the Ethernet driver, and 1.8.2.5 is for IB/SRP/iSER only. I have not tested the VPI card you have. This may be a question for the Mellanox forum.

Mark
 

whitey

Whitey:

In ESXi, because of the driver limitation, it's one or the other. I am not sure you can have both IB and Ethernet at the same time, because once you remove the driver it removes the Ethernet driver, and 1.8.2.5 is for IB/SRP/iSER only. I have not tested the VPI card you have. This may be a question for the Mellanox forum.

Mark
Thanks bud, yeah the state of MLX IB on VMware looks super sad to me... almost pathetic. I saw a b|tch fest going on in a MLX forum where a member of ours @mpogr was trying to get some response from MLX, but they cowardly bowed out. MLX EN mode seems to work fine, so I should probably just give up on doing anything 'useful' w/ IB. I'm not even sure how I would use SRP/iSER (these vmhba's, I suppose); they show up as SCSI adapters, so do I serve up a LUN across the IB fabric to utilize those? Guess some RTFM time is in order, but I may just punt... gotta pick my battles, and this one may not be a good use of time to fight.

EDIT: I see the 2.4.0.0 DOES look to support Connected Mode IPoIB, so that's something, I guess.
 

whitey

Back to 10G max throughput using iperf between VMs using mlx4_en w/ the 2.4.0.0 OFED vSphere drivers for ESXi 6. SMH

What's interesting to me is that I can clearly see mlx4_ipoib, and in 1.8.2.5 the link came up green/40G, but w/ 2.4.0.0 it just stays orange on the switch.
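For anyone reproducing the VM-to-VM throughput test mentioned above, a typical iperf run looks something like this (the server IP is a placeholder):

```shell
# On the first VM (server side):
iperf -s
# On the second VM (client side): 4 parallel streams for 30 seconds:
iperf -c 192.168.1.10 -P 4 -t 30
```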
 

mpogr

Active Member
Jul 14, 2016
@whitey: is your switch IB or EN? Also, AFAIK, 2.4.x doesn't work with X2, only X3.

Just bringing some order to the table:
* MLNX drivers 1.8.x.x and 2.x.x - IB only (won't even work with EN cards). Both have IPoIB, but only 1.8.x.x have SRP and only 1.8.3 beta has iSER. 2.x.x can be used only for IPoIB networking, which means you can use it for iSCSI, but without RDMA. For IB, you always need an SM. These drivers support only X2 (only 1.8.x.x) and X3/Pro (both 1.8 and 2). Work only on ESXi 5.x and 6.0. No go on 6.5.
* MLNX drivers 1.9.x.x (from their site) - EN only. Will switch your VPI card to EN mode upon boot, hence expect orange lights if your switch is IB. Have iSER support. These drivers support only X3/Pro (not X2!). Work only on ESXi 5.x and 6.0. No go on 6.5.
* MLNX drivers 3.x (from their site or VMware) - EN only. Same as above, except no iSER. There is an inbox driver supporting X3 in EN mode on 6.5, but, again, no iSER.
* MLNX drivers 4.x (from their site or VMware) - same as above, but X4 only.

Important notes:
* No IB support for anything above X3 (meaning no support at all for Connect-IB, because it's IB-only)
* No official support for anything below X3 (although unofficially 1.8.x.x work with X2 on 5.x and 6.0)
* No current SRP support (the last working driver is the leaked 1.8.2.5)
* No current iSER support for anything except X3/Pro in EN mode (although there is 1.8.3 beta which works in IB mode for X2/3/Pro on 5.x and 6.0)
* No SRP or iSER at all on 6.5

Honestly, the situation around Mellanox support for ESXi makes me seriously consider switching to some form of KVM...
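One practical footnote to the matrix above: on cards where mlxconfig applies (ConnectX-3-class; ConnectX-2 generally needs the older port-config tools instead), the configured port type can at least be inspected and forced from the host, assuming the MFT tools are installed under /opt/mellanox/bin as seen earlier in this thread. A sketch, with the device name being an assumption:

```shell
# Query the configured port types (device name below is hypothetical;
# list MST devices first if unsure of the name):
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 query
# Force both ports to Ethernet (LINK_TYPE: 1 = IB, 2 = ETH, 3 = VPI/auto);
# takes effect after a reboot:
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```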
 

mpogr

Whitey:

In ESXi, because of the driver limitation, it's one or the other. I am not sure you can have both IB and Ethernet at the same time, because once you remove the driver it removes the Ethernet driver, and 1.8.2.5 is for IB/SRP/iSER only. I have not tested the VPI card you have. This may be a question for the Mellanox forum.

Mark
That's absolutely correct. One of the serious limitations of any MLNX drivers on ESXi is that you cannot mix IB and EN on one system, let alone mix port modes on the same card.
 

gaiex

New Member
Jan 14, 2017
Can anyone give me some help on getting an ESXi-based VM (Windows Server 2012 R2) connected directly to a physical Win10 PC at 10GbE? I'm only getting 1GbE.

Here is what I have:

- ESXi 6.0 U2 host with MLNX-OFED-ESX-1.9.10.0-10EM-550.0.0.1331820 drivers (removed all pre-installed drivers)
- VM: Windows Server 2012 R2 with a vmxnet3 NIC
- vSwitch dedicated to the Mellanox card and the VM, using the 10GbE connection
- both Mellanox ConnectX-2 cards updated to firmware 2.9.12
- PC with Win 10 Pro with MLNX_VPI_WinOF-4_80_All_win81_x64 drivers
- using SFP+ transceivers and OM3 fiber, crossed for direct connection
- jumbo frames 9000 and other NIC configs as described in some posts

The host, the VM, and the PC all connect, but only show 1GbE links. Any ideas what I'm missing to get the 10GbE link and speed?

Thanks
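A first diagnostic step for a link negotiating at 1GbE instead of 10GbE is to check what the physical uplink actually reports on the ESXi side. A sketch (the vmnic name is an assumption):

```shell
# Show all physical NICs with their negotiated speed and driver:
esxcli network nic list
# More detail (speed, duplex, link state) for one uplink:
esxcli network nic get -n vmnic2
```

On the Windows side, Get-NetAdapter in PowerShell shows the negotiated LinkSpeed for comparison.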