ConnectX-2 (MHQH29C-XTR) and ESXi 6.5


jwc

New Member
Mar 11, 2017
I really need help with this. I'm trying to get a ConnectX-2 (MHQH29C-XTR) working with ESXi 6.5 using the inbox drivers. According to the VMware Compatibility Guide (I/O Device Search), it should work, although I am running firmware version 2.9.1000 since that is all I have and I cannot locate a later version. The two ports show up in VMware, but the LEDs do not light up and the ports report "Link down".

I do have some ESXi 5.1 and Windows hosts on the same switch, and they have been working well.

I'm using an IS5025 unmanaged switch. I have the OFED subnet manager running on one of the Windows boxes.
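For reference, with an unmanaged switch like the IS5025 the ports will never come up unless a subnet manager is answering on the fabric. A quick way to verify that (a sketch, assuming the OFED diagnostic tools are installed on an IB-connected host):

```shell
# Query the fabric for an active subnet manager; this should print
# the SM's LID, GUID, and priority if one is running.
sminfo

# ibstat on the local HCA also reports the "SM lid" and port state,
# which should read "Active" once the SM has configured the port.
ibstat
```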

This is my VMware version:
Code:
VMkernel localhost 6.5.0 #1 SMP Release build-4887370 Jan  5 2017 19:17:59 x86_64 x86_64 x86_64 ESXi
Here are the packages:
Code:
[root@localhost:/opt/mellanox/bin] esxcli software vib list|grep mlx
net-mlx4-core                  1.9.7.0-1vmw.650.0.0.4564106          VMW       VMwareCertified   2017-07-08
net-mlx4-en                    1.9.7.0-1vmw.650.0.0.4564106          VMW       VMwareCertified   2017-07-08
nmlx4-core                     3.16.0.0-1vmw.650.0.0.4564106         VMW       VMwareCertified   2017-07-08
nmlx4-en                       3.16.0.0-1vmw.650.0.0.4564106         VMW       VMwareCertified   2017-07-08
nmlx4-rdma                     3.16.0.0-1vmw.650.0.0.4564106         VMW       VMwareCertified   2017-07-08
nmlx5-core                     4.16.0.0-1vmw.650.0.0.4564106         VMW       VMwareCertified   2017-07-08
Here is my card information:
Code:
[root@localhost:/opt/mellanox/bin] ./flint -d mt26428_pci_cr0 q
Image type:            FS2
FW Version:            2.9.1000
Device ID:             26428
Description:           Node             Port1            Port2            Sys image
GUIDs:                 0002c9030010ea34 0002c9030010ea35 0002c9030010ea36 0002c9030010ea37
MACs:                                       0002c910ea34     0002c910ea35
VSD:
PSID:                  MT_0FC0110009
[root@localhost:/opt/mellanox/bin] ./mdevices_info
PCI devices:
------------
ConnectX2(rev:b0)       mt26428_pciconf0
ConnectX2(rev:b0)       mt26428_pci_cr0
[root@localhost:/opt/mellanox/bin] ./mst status -v
PCI devices:
------------
DEVICE_TYPE             MST                           PCI       RDMA            NET                       NUMA
ConnectX2(rev:b0)       mt26428_pciconf0
ConnectX2(rev:b0)       mt26428_pci_cr0               14:00.0                   net-vmnic4,net-vmnic1000402
The NICs show up in VMware:
Code:
[root@localhost:/opt/mellanox/bin] esxcli network nic list
Name          PCI Device    Driver    Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description
------------  ------------  --------  ------------  -----------  -----  ------  -----------------  ----  ----------------------------------------------------------------------------------
vmnic0        0000:02:00.0  bnx2      Up            Up            1000  Full    00:26:55:2f:d4:54  1500  QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic1        0000:02:00.1  bnx2      Up            Down             0  Half    00:26:55:2f:d4:56  1500  QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic1000402  0000:14:00.0  nmlx4_en  Up            Down             0  Half    00:02:c9:10:ea:35  1500  Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]
vmnic2        0000:03:00.0  bnx2      Up            Down             0  Half    00:26:55:2f:d4:58  1500  QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic3        0000:03:00.1  bnx2      Up            Down             0  Half    00:26:55:2f:d4:5a  1500  QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic4        0000:14:00.0  nmlx4_en  Up            Down             0  Half    00:02:c9:10:ea:34  1500  Mellanox Technologies MT26428 [ConnectX VPI - 10GigE / IB QDR, PCIe 2.0 5GT/s]
Thank you in advance.
 

Rand__

Well-Known Member
Mar 6, 2014
The compatibility search link is not working ;)
I heard before that they should work OOTB, but in my experience they will not work with the default drivers in 6.5 (not even in EN mode). When I tried I did not even see them, though, so maybe things are slightly better now.

If you don't want to troubleshoot, you can use the old drivers; see 10Gb SFP+ single port = cheaper than dirt (if that is not too cumbersome for updates).
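For anyone following along, the driver swap from that thread boils down to removing the inbox Mellanox VIBs and installing the older OFED bundle. A rough sketch on the ESXi 6.5 host (the bundle filename is an assumption; use whatever 1.8.2.x zip the linked thread points to):

```shell
# Remove the inbox native (nmlx4) and legacy (net-mlx4) driver VIBs.
esxcli software vib remove -n nmlx4-en -n nmlx4-rdma -n nmlx4-core
esxcli software vib remove -n net-mlx4-en -n net-mlx4-core

# Install the older Mellanox OFED driver bundle (filename is illustrative).
esxcli software vib install -d /tmp/MLNX-OFED-ESX-1.8.2.4.zip --no-sig-check

# A reboot is required for the driver change to take effect.
reboot
```

Put the host in maintenance mode first if it is running VMs.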
 

jwc

New Member
Mar 11, 2017
> The compatibility search link is not working ;)
> I heard before that they should work OOTB, but in my experience they will not work with the default drivers in 6.5 (not even in EN mode). When I tried I did not even see them, though, so maybe things are slightly better now.
>
> If you don't want to troubleshoot, you can use the old drivers; see 10Gb SFP+ single port = cheaper than dirt (if that is not too cumbersome for updates).
Thank you very much. That did it!!! :D
 

Rand__

Well-Known Member
Mar 6, 2014
Great :)
Just be aware that you'll need to uninstall and reinstall these drivers for an ESXi update.
 

jwc

New Member
Mar 11, 2017
> Great :)
> Just be aware that you'll need to uninstall and reinstall these drivers for an ESXi update.
The link speed is 40000 Mbps, but the supported speed is 10000 Mbps, full duplex. Did I do something wrong, or is this correct?
 

Rand__

Well-Known Member
Mar 6, 2014
You might need to force the speed to 40 Gbit/s using the appropriate commands.

I think from the adapter side it might be something like this:
Code:
ibportstate <yourswitchlid> <port> speed 5
ibportstate <yourswitchlid> <port> espeed 1
ibportstate <yourswitchlid> <port> reset
 

Rand__

Well-Known Member
Mar 6, 2014
Have you checked the Mellanox page?

I have not been using the CX2 in a while, to be honest.