Can't get passthrough to work on ESXi 6.0 with ConnectX-3

Sep 22, 2015
I can't get my Mellanox ConnectX-3 cards to pass through to ESXi VMs on my Intel C2600CP. The card shows up in the list of passthrough devices and I have it assigned to the VM, but it never initializes successfully. In both FreeBSD and Ubuntu Server guests it complains about being unable to assign an IRQ.

My current suspicion is that it's related to the PCI reset. I've updated the NIC to the latest firmware and tried each of the reset methods (flr, d3d0, link, bridge) in /etc/vmware/passthru.map, but still no joy.
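
For reference, the entries I was experimenting with look roughly like this (15b3 is the Mellanox vendor ID; 1003 is the usual ConnectX-3 device ID, but confirm yours with lspci on the host), swapping the reset method between flr, d3d0, link and bridge on each attempt:

# /etc/vmware/passthru.map
# format: vendor-id device-id resetMethod fptShareable
15b3 1003 d3d0 default

As far as I can tell the host needs a reboot after each change before it takes effect.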

I'm about ready to give up, as everything I've found online leads to a dead end. Does anyone have any ideas, or at least an example of this working with similar hardware?
 

CreoleLakerFan

Active Member
Oct 29, 2013
I'm not familiar with your motherboard, but I have had issues in the past with a Supermicro board and passthrough because the PCIe slot I was using was behind a bridge that split sixteen lanes into two x8 slots. Check the board's block diagram for a bridge or switch between the slot you are using and the CPU.
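
One way to sanity-check that from the host itself (assuming the standard ESXi shell tools) is to find the NIC's PCI address and see what it hangs off of:

lspci | grep -i mellanox    # note the address, e.g. 0000:82:00.0 (example only)
esxcli hardware pci list    # per-device details you can trace back through the bus/bridge numbering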
 

whitey

Moderator
Jun 30, 2014
Or try another PCIe slot, 'maybe'... grasping at straws here. Ya got another 10G NIC ya can throw in to test, a cheap Intel X520-DA1 or similar?
 
Sep 22, 2015
Thanks for the suggestions. I tried another PCIe slot and checked the block diagram, but got the same behavior. Sadly, I don't have any other 10Gb NICs I can use for testing.

Perhaps I'll try again later. I have a feeling this is probably a Mellanox problem.
 

groove

Member
Sep 21, 2011
Hi David,

Were you able to resolve this issue? I was able to get the cards to pass through on a Tyan S5070, but performance drops to the point that it's completely unusable.

Even a simple ping has very high latency:
64 bytes from 192.168.100.1: icmp_seq=0 ttl=255 time=763.345 ms
64 bytes from 192.168.100.1: icmp_seq=1 ttl=255 time=750.819 ms
64 bytes from 192.168.100.1: icmp_seq=2 ttl=255 time=737.826 ms
This is between two ConnectX-3 cards connected directly to each other. Both are installed in hosts running ESXi 6.0 U2. On one host the card is passed through to a Solaris 11.3 VM; on the other it is used as a VMkernel NIC. If I configure both cards as VMkernel NICs, the ping latency is back in the sub-millisecond range (~0.500 ms).

I'm going to try passing this card through to a Linux VM and check the performance.
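
For reference, the quick checks inside the Linux guest would be something like this (the PCI address and interface name are just placeholders):

lspci -nn | grep -i mellanox          # is the card visible to the guest at all?
lspci -vv -s 0b:00.0 | grep -i msi    # did MSI/MSI-X actually get enabled?
ethtool -i eth1                       # driver and firmware the guest is using
ping -c 10 192.168.100.1              # repeat the latency test from inside the guest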

Could it be a Mellanox driver issue again?
 

whitey

Moderator
Jun 30, 2014
Do you have the latest firmware applied to the CX3 cards? I found some anomalies when old firmware was used.
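
If you can get the card into a Linux box (or a VM it's passed through to), the open-source mstflint tool is an easy way to check and update it; roughly (the PCI address below is just an example):

mstflint -d 04:00.0 query             # reports the current firmware version and PSID
mstflint -d 04:00.0 -i fw.bin burn    # flash a newer image that matches the card's PSID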
 

groove

Member
Sep 21, 2011
Thanks for the reply whitey - yes, both cards have the latest firmware I could find for them. They are actually two different ConnectX-3 models: one is a Mellanox ConnectX-3 FDR InfiniBand + 40GigE MCX354A, currently on firmware 2.40.5000; that's the one I'm trying to pass through to the Solaris VM. The other is an MCX353A-QCAT on firmware 2.33.5100 (the latest I could find), and that one is being used by the VMkernel on the second ESXi 6.0 node.

I did get a chance to test the MCX354A by passing it through to a Linux VM, and it worked fine there (ping latency was down to ~0.600 ms). So it's only an issue when passing it through to the Solaris VM.

For now I have the MCX354A attached to a non-VMkernel switch, and the Solaris VM is using it through a VMXNET3 adapter. I would like to get it passed through directly to the Solaris VM so that I can test iSER / SRP between the second ESXi host and the Solaris VM.
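
For reference, a rough sketch of the Solaris-side checks once the card is passed through (Solaris 11 syntax; the link name and addresses are only examples):

dladm show-phys                                   # does the ConnectX-3 show up as a physical link?
dladm show-ib                                     # IB-side view, if the port is running in InfiniBand mode
ipadm create-ip net1                              # plumb the link (net1 is a placeholder name)
ipadm create-addr -T static -a 192.168.100.2/24 net1/v4
ping -s 192.168.100.1 56 10                       # Solaris ping: host, payload size, packet count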