Mellanox ConnectX-2 EN in FreeBSD 10.3 via passthrough in VMware ESXi


logan893

Member
Aug 12, 2016
Does anyone have the Mellanox ConnectX-2 EN running in FreeBSD 10.3 in passthrough under VMware ESXi?

I have the single-port 10 Gbps SFP+ version, and the performance I can squeeze out of a point-to-point connection (with 9000 byte jumbo frames) between a physical Windows 7 machine and a Windows 10 VM is around 7-9 Gbps in single-direction tests, and 5-6 Gbps in each direction in bidirectional tests, using iperf3.
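
For reference, the tests are plain iperf3 runs along these lines (the address is just an example):

Code:
# on the receiving side
iperf3 -s
# single-direction test from the sender (30 second run)
iperf3 -c 10.0.0.2 -t 30
# reverse direction (the server end sends); for the "both directions at once"
# numbers I simply run a client in each direction at the same time
iperf3 -c 10.0.0.2 -t 30 -R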

Previous tests were with the ConnectX-2 card driven directly by VMware using the built-in driver version 1.9.7. With 9000 byte jumbo frames I reached only approximately 1.5-2 Gbps between the Windows 7 physical hardware and a FreeBSD 10.3 VM (FreeNAS 9.10) during iperf2 testing. The same performance was mirrored during multiple re-transfers of the same large file from a ZFS-based share over SMB, reaching no more than 150-200 MB/s even though the transfers were essentially from and to RAM disks.

I tried to get the ConnectX-2 card working in passthrough to FreeBSD and built the 2.1.5 driver (the latest to support ConnectX-2, according to its release notes). During boot the driver complains about "No IRQs left", but I doubt that is the real reason.

Code:
mlx4_core0: <mlx4_core> mem 0xfd500000-0xfd5fffff,0xe7000000-0xe77fffff irq 18 at device 0.0 on pci3
mlx4_core: Mellanox ConnectX core driver v2.1 (Oct  3 2016)
mlx4_core: Initializing mlx4_core
mlx4_core0: No IRQs left, device won't be started.
device_attach: mlx4_core0 attach returned 28
I'd like to get this NIC running either in a pfSense or a FreeNAS VM, both of which are based on FreeBSD 10.3.
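
One thing I still want to try, based on what I've read about FreeBSD and MSI-X allocation under ESXi, so take it as an untested idea rather than a confirmed fix:

Code:
# untested idea for the "No IRQs left" / attach returned 28 error in passthrough:
# FreeBSD refuses MSI/MSI-X on some VMware virtual chipsets by default, which can
# leave the mlx4 driver without enough interrupt vectors. In /boot/loader.conf:
hw.pci.honor_msi_blacklist="0"
# plus the usual *_load="YES" lines for whichever mlx4/mlxen modules your
# driver build installed (check the module names under /boot/kernel)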
 

logan893

Member
Aug 12, 2016
I decided to try driving the card directly from VMware again.

I disabled passthrough, and upgraded my ESXi from 6.0 to 6.0 U2. Driver remains at version 1.9.7.

Running iperf3 from my Windows 10 VM with vmxnet3 and 9k jumbo frames, I reach approximately 7 Gbps to and from the Windows 7 physical machine. I reached the same speed from my FreeNAS VM, also with vmxnet3 and jumbo frames.
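
On the FreeBSD/FreeNAS side this is nothing more than the vmxnet3 interface with MTU 9000; interface name and address below are examples, FreeNAS sets the same thing through its GUI, and the vSwitch plus the Windows 7 NIC need MTU 9000 as well:

Code:
# /etc/rc.conf on plain FreeBSD 10.3 (FreeNAS manages this via its network GUI)
# vmx0 = the vmxnet3 interface; confirm the name with ifconfig
ifconfig_vmx0="inet 10.0.0.3 netmask 255.255.255.0 mtu 9000"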

Perhaps I had something not set up properly previously, or a couple of reboots sorted some things out.

With this level of performance, I decided to scrap the passthrough idea.
 

fossxplorer

Active Member
Mar 17, 2016
Oslo, Norway
Hi @logan893, I was about to post a new thread, but fortunately saw this post. I'm new to >1 Gbit TCP networking.
I'm planning to buy "LOT OF 5 671798-001 MNPA19-XTR HP 10GB CONNECTX2 PCI ETHERNET CARD HIGH PROFILE" or http://www.ebay.de/itm/331900223842, for networking purposes only.
After reading a bit, it seems the latter is for both network and storage use cases? And AFAIK, Mellanox provides good support for their products.

My doubts are, can you run 2 MNPA19-XTR cards point to point without any hiccups and just manually configure IPs on both ends?

Also, do these support TCP offloading (TOE), SR-IOV, and/or RDMA?

Thanks.

Update1: just found some of the answers in http://www.mellanox.com/pdf/firmware/ConnectX2-FW-2_9_1200-release_notes.pdf.

Update2: correct link for the 2nd item: Chelsio 110-1159-40 AO Dual Port 10GPBs PCI-e FC Card HBA CC2-S320E-S
 

i386

Well-Known Member
Mar 18, 2016
Germany
This could answer a few more of your questions: http://www.mellanox.com/related-docs/prod_adapter_cards/ConnectX-2_EN_Cards.pdf

Connecting two cards point to point should work.
 

logan893

Member
Aug 12, 2016
@fossxplorer
Yep, connecting two cards point to point works fine. I use a 10 meter fiber with identical SFP+ modules on each end. I also tried successfully with a 5 meter passive DAC; a 10 meter active DAC didn't work, and the firmware release notes state a max of 7 meters for DAC.

If you have more than two computers that you want connected at 10 Gbps, make sure you have enough PCIe slots, or go for the dual-port variants. Set up a unique subnet for each port; see the sketch below.
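
On the FreeBSD side that's just a static address per port, one subnet per link, roughly like this (interface names and addresses are examples; FreeNAS and pfSense do the same thing through their GUIs):

Code:
# /etc/rc.conf on a box with a dual-port card (addresses are examples)
# the ConnectX-2 ports show up as mlxen0/mlxen1 on FreeBSD 10.3; confirm with ifconfig
ifconfig_mlxen0="inet 10.10.10.1 netmask 255.255.255.0 mtu 9000"  # link to PC 1, subnet 10.10.10.0/24
ifconfig_mlxen1="inet 10.10.20.1 netmask 255.255.255.0 mtu 9000"  # link to PC 2, subnet 10.10.20.0/24
# each peer then gets a static address in its own subnet, e.g. 10.10.10.2 and 10.10.20.2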

I have my server configured with the 10 Gbps card in a dedicated vSwitch, where my FreeNAS, Windows and pfSense VMs have vNICs. The pfSense VM can route between the LAN and the 10G vSwitch. So if the PC that connects to the server over 10 Gbps loses its link, it should still be able to reach the FreeNAS and Windows VMs via the same IPs over the LAN. I haven't tested this failover just yet.

RDMA/RoCE is supported from firmware 2.7.700 onward.
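
If you want to check what firmware a given card is running, mstflint can query it by PCI address on a Linux box (the address below is an example; find yours with lspci):

Code:
# query the card's current firmware version (PCI address is an example)
mstflint -d 02:00.0 query
# the "FW Version" line should read 2.7.700 or later for RoCE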

Your second link doesn't work, so no idea what that is referring to.