ConnectX-3 dual QSFP card - set up as an 8-port SFP+ switch?


chmedly (New Member)
This is a hardware + software question. I'm setting up a file server running Linux, with a dual QSFP Mellanox ConnectX-3 card in it. I'm wondering if I can set it up so that the card shows up as a single interface in Linux, with all 8 SFP+ breakout connections running to different workstations, all on the same subnet. In other words, as if the card were a 9-port network switch with one of those ports arriving in the server.
Furthermore, would it be possible, if workstation #1 has a dual SFP+ NIC in it, to connect two of the SFP+ connections from the server to it and achieve a 20Gb/s link?
This would all be connected with DAC cables in the same room.

Lastly, are there SFP+ "couplers"? I've been looking around and haven't found such a thing. I want to use a QSFP to SFP+ breakout cable, but the branches need to reach two different machines, so I'd like to join a short SFP+ to SFP+ DAC cable onto one branch to extend its reach.
 

chmedly (New Member)
Quote:
CX3 cannot use breakout cables.
I currently have a breakout cable in use connecting to a single SFP+ port in another machine. It is working, although I'm not getting the throughput I expect (300-350 MB/s one way and only 160 MB/s the other).
Perhaps this breakout issue is related to my performance problem?
 

Rand__ (Well-Known Member)
Ok, let me rephrase that - you cannot use a breakout cable on the CX3 for its originally intended use ;)

And it seems a bit of a waste to use one instead of simply running an SFP+ cable plus a QSA adapter, but of course, whatever you've got - if it works...
I'm not sure whether it could be the cause of your performance problems; I've never used it that way (having a couple of Mellanox switches). I'd recommend testing, e.g. replacing the CX3 with an SFP+ card, or getting a QSA adapter plus an SFP+ cable, etc.
 

Sean Ho (seanho.com, Vancouver, BC)
Even if you could, switching multiple 10GbE ports in software is inefficient and would need DPDK or similar.

The ICX6450, Aruba S2500, and CRS305 are very affordable, and if 4x SFP+ ports aren't enough, there's the ICX6610 or 7250.
 

chmedly (New Member)
The Aruba S2500 draws 50 W at idle, the ICX6450-24 might be around 25 W, and the CRS305-1G-4S+IN looks like it might idle at less than 10 W?!? But the little MikroTik doesn't appear to be very 'available'. I prefer to buy this kind of thing used (glutton for troubleshooting nightmares, I guess) but I'm not seeing any.
How well would you expect a bridge to work across just 2 SFP+ ports? If I can network 3 machines together, that's all I need in the near term.
 

msg7086 (Active Member)
What's your actual requirement? If you are happy with the 25 W idle power, just get a 6450 and call it a day. If you have a Linux file server and 2 workstations and want to do a Y-shaped connection, that would also work. A Linux software bridge is fast enough for most use cases.
 

chmedly (New Member)
Yes, 3 machines, each one with a free PCIe slot for a 10G card. I also need the 3 machines to connect to a regular gigabit network with their built-in NICs. I've been experimenting with setting up the bridge on a Solarflare SFN7122F but I'm having trouble making it work correctly.
 

msg7086 (Active Member)
Assuming you've already flashed all the cards to their latest firmware?

Then all you need to do is create a Linux bridge like this:
Code:
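# Debian-style /etc/network/interfaces: both 10G ports become members of bridge br0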
iface enp3s0f0np0 inet manual
iface enp3s0f1np1 inet manual

auto br0
iface br0 inet static
  address 10.0.0.11/16
  gateway 10.0.0.1
  bridge-ports enp3s0f0np0 enp3s0f1np1
  bridge-stp off
  bridge-fd 0
Also make sure they don't overheat.
 

chmedly (New Member)
That's close to what I've got.

I didn't have these two lines:
bridge-stp off
bridge-fd 0

The biggest issue I'm finding is that the gigabit connection (the internal NIC) loses its ability to reach the internet.
 

msg7086 (Active Member)
You need to sort out your routing table. Let's say your LAN is on 10.0.0.0/16; then your "internal" 10G network needs to run outside of that range, for example on 10.1.0.0/16. You also need to make sure the two networks don't connect in a loop, which means you don't want two bridges between the two networks.
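
Something like this, in the same /etc/network/interfaces style as above (just a sketch - the onboard 1G name eno1 and the exact addresses are placeholders). The important bit is that only the 1G interface gets a gateway line, so the default route to the internet stays on it:
Code:
# onboard 1G NIC stays on the LAN and keeps the default route
auto eno1
iface eno1 inet static
  address 10.0.0.11/16
  gateway 10.0.0.1

iface enp3s0f0np0 inet manual
iface enp3s0f1np1 inet manual

# 10G bridge on its own subnet - note: no gateway line here
auto br0
iface br0 inet static
  address 10.1.0.11/16
  bridge-ports enp3s0f0np0 enp3s0f1np1
  bridge-stp off
  bridge-fd 0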

If you can keep the Linux server running 24/7, the best option may be to have a bridge with the 1G and both 10G ports on the Linux server, unplug the 1G on the two workstations, and set a LAN IP on all three computers.

That is:
Server: br0 [1g, 10g1, 10g2] 10.0.0.5/16
WS1: 10g1 10.0.0.6/16
WS2: 10g1 10.0.0.7/16

That's also the disadvantage of a NIC-based network: you have to keep the core server running 24/7.
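
A rough sketch of that layout in /etc/network/interfaces on the server, again with placeholder names (eno1 for the onboard 1G, the same 10G port names as before):
Code:
# server: onboard 1G plus both 10G ports all in one bridge
iface eno1 inet manual
iface enp3s0f0np0 inet manual
iface enp3s0f1np1 inet manual

auto br0
iface br0 inet static
  address 10.0.0.5/16
  gateway 10.0.0.1
  bridge-ports eno1 enp3s0f0np0 enp3s0f1np1
  bridge-stp off
  bridge-fd 0
The workstations then just set a static LAN address (10.0.0.6/16 and 10.0.0.7/16) and the normal LAN gateway on their 10G ports, since the server bridges them onto the LAN at layer 2.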
 

chmedly (New Member)
Yes, the main server will run 24/7.
I do have separate subnets for the two networks (192.168.5.0/24 and 192.168.7.0/24).
The only bridge I intend to create is on the one dual SFP+ NIC. I don't think there should be a chance for a loop anywhere.
But the idea of limiting the workstations to a single network might simplify some things.
 

Sean Ho (seanho.com, Vancouver, BC)
If you only have three links that need the speed, and you are averse to using a switch, you could use 3x SFN7002 or similar cheap dual-SFP+ SolarFlares and give each node a direct 10GbE link to each of the other two nodes. Static IPs all around, with 3 little non-overlapping subnets (could be /30) that are also separate from your LAN subnet. Just don't try to expand this to four nodes, or you'll start reinventing token ring...
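
For illustration, one node of that triangle might look like the snippet below in /etc/network/interfaces, with made-up interface names and made-up /30 subnets (each link is its own tiny point-to-point network, and none of them get a gateway):
Code:
# this node's two direct links: 192.168.10.0/30 to node B, 192.168.10.4/30 to node C
auto enp4s0f0
iface enp4s0f0 inet static
  address 192.168.10.1/30

auto enp4s0f1
iface enp4s0f1 inet static
  address 192.168.10.5/30
Node B would use 192.168.10.2/30 on its link back to this node, node C would use 192.168.10.6/30, and the third link between B and C gets its own /30 (say 192.168.10.8/30). No bridging or routing is needed as long as each machine talks to the others by their per-link addresses.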
 

chmedly (New Member)
Sean Ho said:
If you only have three links that need the speed, and you are averse to using a switch, you could use 3x SFN7002 or similar cheap dual-SFP+ SolarFlares and give each node a direct 10GbE link to each of the other two nodes.
This was one of my thoughts as well. It wouldn't be too hard. And if I went with the ConnectX-3 dual QSFP+ cards (or similar) I think I could do 40Gb/s instead of just 10...
But that would mean replacing a couple of cards and cables that I already have. And I would really like to know more about bridging etc.