Mellanox ConnectX-3 40GbE QSFP breakout cable to 4 SFP ports on 10GbE switch?


tryingtorunservers

New Member
Nov 28, 2020
Hi,

From previous threads (Using a breakout cable with ConnectX-3 MCX354A), it seems like the ConnectX-3 does not support breakout cables, at least in the case where one wants to use the breakout cable solely as a QSFP-to-SFP adapter. What if all the SFP ends are connected to the same switch instead?

Specifically, I'm wondering whether a QSFP breakout cable connected from ConnectX-3 (MCX354A-FCBT to be exact) to 4 10GbE SFP ports on a switch (Dell PowerConnect 6224) would work?

What if the 4 SFP ports are configured into the same link aggregation group on the switch? Will the NIC just treat it as if it's connected to a 40GbE switch?

Thank you,
 

tryingtorunservers

New Member
Nov 28, 2020
Thanks a lot! Will be looking for an adapter then. Just curious, is network equipment picky about adapters? Like, can I use a Cisco QSFP-to-SFP adapter on a Dell switch? Sorry if these questions are basic... I'm new to all this.
 
Jul 19, 2020
Well, more specifically, each port on the NIC is only capable of bringing up one link. So if you break it out and connect it to a 10G switch, only the first link will come up, as a 10G link; the other three will not come up at all. Breakout cables are mainly meant for use on the switch side, as many switches can break a 40G or 100G port into four separate 10G or 25G ports.
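To make that concrete, here is a toy model (not real driver code, just an illustration of the behaviour described above): a 40G port has four 10G lanes but only one MAC/link behind it, so when broken out only the first lane can come up as a standalone 10G link.

```python
# Toy model: a 40G NIC port broken out into four 10G switch ports.
# Only lane 0 has a MAC behind it, so only lane 0 can link up.

def breakout_links(nic_ports, lanes_per_port=4):
    """Return per-lane link states when each NIC port is broken out
    to `lanes_per_port` separate 10G switch ports."""
    links = []
    for port in range(nic_ports):
        for lane in range(lanes_per_port):
            # Only lane 0 can negotiate a standalone 10G link;
            # the remaining lanes never come up at all.
            up = (lane == 0)
            links.append({"port": port, "lane": lane,
                          "up": up, "speed_gbps": 10 if up else 0})
    return links

states = breakout_links(1)
print([s["up"] for s in states])  # → [True, False, False, False]
```

This also answers the LAG question from the original post: since three of the four switch-side ports never come up, there is nothing for the switch to aggregate.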

In my experience, networking hardware is often picky about optical transceivers, as they tend to carry a large markup. The saving grace is that you can connect transceivers from two different manufacturers at opposite ends of a link, so the switch at each end can be kept happy with a transceiver it accepts. DAC cables seem to be less problematic, as at least some manufacturers understand that some level of interoperability is necessary, but that doesn't mean they are problem-free - I have had some Intel 40G NICs complain bitterly when used with certain (100G!) cables. The annoying thing is that almost all of this is completely artificial: the incompatibilities are coded directly into the firmware. The switch/NIC firmware/driver checks the cable identifiers against an internal whitelist and refuses to bring the port up if they don't match.
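The whitelist check amounts to something like the following sketch. The EEPROM offsets follow SFF-8636 (QSFP modules store the vendor name as 16 space-padded ASCII bytes at offsets 148-163); the vendor names in the allow list are made up for illustration.

```python
# Illustrative sketch of a firmware-style transceiver whitelist check:
# read the vendor name out of the module EEPROM and refuse the port
# if it isn't on a hard-coded allow list.

ALLOWED_VENDORS = {"ACME NETWORKS", "EXAMPLECO"}  # hypothetical list

def vendor_name(eeprom: bytes) -> str:
    # SFF-8636: vendor name is 16 ASCII bytes at offsets 148-163,
    # padded with spaces.
    return eeprom[148:164].decode("ascii", errors="replace").strip()

def port_allowed(eeprom: bytes) -> bool:
    return vendor_name(eeprom) in ALLOWED_VENDORS

# Fake 256-byte EEPROM with a third-party vendor string:
eeprom = bytearray(256)
eeprom[148:164] = b"THIRDPARTY INC  "
print(port_allowed(bytes(eeprom)))  # → False
```

Real firmware typically also matches the part number and checksum fields, but the principle - string compare against an internal list - is the same.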
 

i386

Well-Known Member
Mar 18, 2016
Germany
I have had some Intel 40G NICs complain bitterly when used with certain (100G!) cables.
I had a similar experience with ConnectX-3 NICs. I think it's because the NIC firmware doesn't know the "new" specifications for 25/50/100GbE transceivers in DACs/AOCs/optical modules.
 
Jul 19, 2020
The problem is that breaking out one NIC port into four means the change has to propagate all the way up the stack. Usually, the NIC registers one PCIe function per port, each function gets a driver instance attached, and each driver instance registers one network interface with the OS. Now, when you want to split a port into four, what do you do? The driver could perhaps register more interfaces per function, but more likely you would want the NIC to expose four PCIe functions per port and have each one get its own driver instance. The problem is that unless you set up PCIe hot-plugging and reserve address space for the BARs of those extra 3 or 6 functions, you have to reboot the computer: the BAR addresses for all PCIe devices are assigned in one shot, and address space cannot be tacked on after the fact without moving things around, which requires unloading the associated drivers and resetting the hardware. Exposing more functions may also require more hardware resources on the NIC, which may not be implemented. Alternatively, the NIC could always expose eight functions, with six of them disabled when the ports run at 40G or similar.

In short, splitting ports like that on a NIC is not trivial from an architectural standpoint.
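The BAR-assignment point can be shown with a toy allocator (this is a simplified model, not how any real BIOS/OS enumerates PCIe; function names and the base address are made up): addresses are handed out contiguously at enumeration time, so a function that appears later cannot get space between existing BARs without relocating its neighbours.

```python
# Toy model of one-shot BAR address assignment at PCIe enumeration.
# BARs are naturally aligned to their (power-of-two) size and packed
# in order, so there are no gaps to slot a late-arriving function into.

def assign_bars(functions, base=0xE000_0000):
    """functions: list of (name, bar_size) with power-of-two sizes.
    Returns the address layout and the first free address afterwards."""
    addr = base
    layout = {}
    for name, size in functions:
        addr = (addr + size - 1) & ~(size - 1)  # align up to BAR size
        layout[name] = addr
        addr += size
    return layout, addr

funcs = [("nic.0", 0x100000), ("nic.1", 0x100000), ("sata.0", 0x2000)]
layout, end = assign_bars(funcs)
# A hot-added "nic.2" could only be placed at or after `end`; squeezing
# it in between nic.1 and sata.0 would mean moving sata.0's BAR, which
# in turn means unloading its driver and resetting the hardware.
```

This is why pre-exposing (but disabling) all eight functions sidesteps the problem: their BARs get addresses up front, so nothing has to move later.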