Proxmox with Mellanox ConnectX-3 + Mikrotik switch (QSFP+)


IamSpartacus

Well-Known Member
Mar 14, 2016
I'm trying to get my Mellanox ConnectX-3 dual 40GbE NIC (already set to Ethernet mode) to work in Proxmox but the system is showing the NICs are not active. Is there a known way to make these cards work in Proxmox?

[Attached screenshot: 1599670131688.png]


EDIT: OK, I got the NICs to show as active by setting them to autostart. However, how does one add additional NICs to an already-created bridge via the GUI? I can't find the syntax for listing multiple NICs.

EDIT: Can anyone speak to how to set up a bond between a Mikrotik QSFP+ port (which shows up on the switch as four ports: qsfpplus1-1, qsfpplus1-2, qsfpplus1-3, qsfpplus1-4) and Proxmox? Do I need to create a bond of all four interfaces on the switch for it to become active, or do I treat them all as separate interfaces that work in parallel?
 

mmo

Well-Known Member
Sep 17, 2016
have you tried to create a new Linux Bridge with the ports?
 

mmo

Anytime I try to create a new bridge it tells me the default gateway already exists on my first bridge even though I'm trying with a completely different network.
For testing purposes, you can leave the CIDR and Gateway blank. Then connect the ports to your switch (assuming you already have VLANs set up on the switch) and reload the network with ifupdown2 to activate it. Assign one of your VMs to the new bridge to confirm it works. After that, you can tune your network using the sample network configurations on the Proxmox wiki.
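To make that concrete, here's a minimal sketch of what a second bridge with no address or gateway might look like in /etc/network/interfaces. The interface name enp130s0 is just an example; substitute whatever your ConnectX-3 ports are called:

```
# /etc/network/interfaces (sketch; interface name is a placeholder)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp130s0
    bridge-stp off
    bridge-fd 0
    # No address/gateway here -- Proxmox allows only one default
    # gateway, which normally lives on vmbr0.
```

With ifupdown2 installed, `ifreload -a` applies the change without a reboot.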

 

Fallen Kell

Member
Mar 10, 2020
If you are attempting to bond the dual ports of the 40GbE card for performance reasons, you will not gain any. The ConnectX-3 is a PCIe Gen3 x8 device, meaning the theoretical maximum bandwidth over that link to/from the card is only about 63Gbps, and that does not account for any of the overhead of data encapsulation. The reason for the dual ports on the card is to connect to multiple switches at the same time, for path failover in case a switch fails.
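The 63Gbps figure follows from the PCIe 3.0 link parameters (8 GT/s per lane, x8 link, 128b/130b line coding); a quick sanity check:

```python
# PCIe 3.0 usable bandwidth for an x8 link
GT_PER_LANE = 8.0        # gigatransfers/s per lane (PCIe 3.0)
LANES = 8                # x8 slot
ENCODING = 128 / 130     # 128b/130b line coding overhead

usable_gbps = GT_PER_LANE * LANES * ENCODING
print(f"{usable_gbps:.1f} Gb/s")  # ~63.0 Gb/s, before protocol overhead
```

So two 40GbE ports (80Gbps aggregate) can never be saturated through a single x8 Gen3 slot.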
 

IamSpartacus

If you are attempting to bond the dual ports of the 40GbE card for performance reasons, you will not gain any. The ConnectX-3 is a PCIe Gen3 x8 device, meaning the theoretical maximum bandwidth over that link to/from the card is only about 63Gbps, and that does not account for any of the overhead of data encapsulation. The reason for the dual ports on the card is to connect to multiple switches at the same time, for path failover in case a switch fails.
I'm not attempting to bond them. Just connect two Linux servers together.
 

Wolvez

New Member
Apr 24, 2020
The GUI takes space-separated lists of ports for Linux bridges and OVS switches. So just put
Code:
enp130s0 enp130s0d1
or whatever the interfaces you want are called into the Bridge ports box.
That may create problems if both ports are attached to the same switch though.
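For reference, what the GUI writes out is an ordinary bridge stanza in /etc/network/interfaces; a two-port version (using the same example interface names as above) would look roughly like:

```
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp130s0 enp130s0d1
    bridge-stp off
    bridge-fd 0
```

As noted, putting both ports on the same switch this way (without a bond) creates a loop, which is where the problems come from.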
 

Wolvez

I think most of the Proxmox GUI takes space-separated lists. If that doesn't work, try commas. If that doesn't work either, you can probably only have one item.
As far as multiple ports to a switch go: I think if you have two links between switches, or between a single host and a switch, a LAG must be set up, otherwise you end up with horrific packet loss. I believe the packets get sent out round-robin but only come back over one link, so half the packets get dropped. I don't have a lot of knowledge in that area; I just know that the times I've accidentally had two links between the same places, it didn't go well. Maybe someone else will chime in with a better explanation.
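If you do want both links active to one switch, an LACP bond is the usual fix for exactly that loop/packet-loss problem. A sketch of the Proxmox side in /etc/network/interfaces, assuming the same placeholder interface names as earlier in the thread (the Mikrotik side needs a matching bonding interface in 802.3ad mode over the corresponding qsfpplus ports):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp130s0 enp130s0d1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

802.3ad (LACP) keeps both links in one logical interface, so the switch never sees a loop and return traffic is hashed across both links.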