Mellanox ConnectX-3 firmware tweaks QDR - QSFP+

Apr 21, 2016
Hi all,
I just want to share my (interesting) results with the CX3 boards.

So, first of all, I've got a pair of CX3 MCX354A-QCBT cards: 40Gb IB and 10Gb Ethernet.

Well, tweaking the INI, I got the cards to work at 56Gbps IB or Ethernet (mind you, 56Gbps is a Mellanox proprietary mode, and it requires cables, HCAs and any switches in the path that support that speed).

Just get the appropriate MCX354A-FCBT INI (the one matching your Ax board revision) and go for it (yes, burn with --allow_psid_change).
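For anyone following along, the reflash with Mellanox's flint tool (from the MFT package) looks roughly like this. The device path and firmware filename below are placeholders for my setup; double-check yours with mst status, and keep a backup, since cross-flashing a different PSID is done at your own risk:

```shell
# Start the Mellanox software tools service so the MST device node exists
mst start

# List devices to find your card's MST device path (used below as a placeholder)
mst status

# Back up the current firmware image first -- this flash is not risk-free
flint -d /dev/mst/mt4099_pci_cr0 ri backup_qcbt.bin

# Burn the FCBT firmware built from the matching Ax INI;
# -allow_psid_change is required because the QCBT board's PSID differs
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-FCBT.bin -allow_psid_change burn
```

After a reboot, flint -d &lt;dev&gt; query should report the FCBT PSID and the 56Gb/s-capable firmware.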

On a side note, the CX3's chip will run at 427MHz as opposed to the 258MHz of the QDR version. So far (2 months of 24/7 use) the chips don't seem to get hotter than the stock version, or hotter than the old CX2s that I have.

On another side note, USE the MLNX_OFED package - the performance difference from the standard OFED stack is huge. You can get the same traffic/latency levels but with much lower CPU usage.
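If you want to A/B the two stacks yourself, the perftest tools bundled with OFED make it quick (the hostname below is a placeholder; watch CPU usage with top/htop while the test runs):

```shell
# Show which OFED flavor/version is currently installed
ofed_info -s

# On the server node, start the bandwidth test listener:
ib_write_bw

# On the client node, point at the server (hostname is a placeholder):
ib_write_bw server-node

# Repeat the same runs on the stock stack and compare bandwidth,
# latency (ib_write_lat) and CPU usage side by side.
```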

With normal cheap Amphenol 40Gbps QDR cables, 40Gb is the maximum speed, either IB or Ethernet.

I've tried (so far without luck) to get a breakout cable working at 4x10Gb on the CX3's QSFP+ port.

The most traffic I was able to push was around 50Gb/s across both ports. I only have one machine with PCIe 3.0 slots to test with; the others are only 2.0 :(
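As a sanity check on that ceiling, here is a back-of-the-envelope calculation (my own numbers, not from the post) of usable per-slot PCIe bandwidth, which shows why the PCIe generation matters for these dual-port cards:

```python
# Rough theoretical usable bandwidth per PCIe lane, in Gb/s:
# PCIe 2.0: 5 GT/s with 8b/10b encoding    -> 5 * 8/10    = 4.0   Gb/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> 8 * 128/130 ~= 7.88 Gb/s per lane
def pcie_gbps(gen, lanes):
    per_lane = {2: 5.0 * 8 / 10, 3: 8.0 * 128 / 130}
    return per_lane[gen] * lanes

# The ConnectX-3 is a PCIe x8 card:
print(round(pcie_gbps(2, 8), 1))  # 32.0 -> a 2.0 slot caps far below 2x 56Gb/s
print(round(pcie_gbps(3, 8), 1))  # 63.0 -> consistent with a ~50Gb/s real-world ceiling
```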

Cheers,
 

Rand__

Well-Known Member
Mar 6, 2014
Ok, so basically you just flash the FCBT fw/INI onto the QCBT card :)
Thought so; I just wondered, because you mentioned some modding in the other thread ;)

Now we only need to find out how to flash transceivers :D - I get the feeling Mellanox is all about firmware on the same physical hardware...
 
I have used the standard Amphenol ones (generic Dell/Intel, whatever) - the cheap ones - and they'll do OK.
Of course the Mellanox ones would give you FDR10 or FDR.
The bottom line is you don't need to do anything about the transceivers; just use cables that work on the FCBT/FCAT revision - basically any QSFP+ cable will do to get 56Gb/s, both IB and Ethernet.
 

Rand__

I was not talking about transceiver compatibility but about configured speed. If one could flash/config transceivers, maybe one could upgrade QDR cables to FDR cables. (Just speculating, of course.)
 
I'm afraid that would not work, as they are built for different standards: the former are QSFP, the latter are QSFP+.
Only the latter officially support 40GbE.
 

iLya

Member
Jun 22, 2016
@Rand__ ,

Since you seem to have some experience with these cards I was hoping you can help answer this question:

If I get the MCX354A card, can I use a QSFP+ breakout cable to connect the card to the Quanta LB6M 10GbE switch and get ~40Gbps throughput on a single port from that card?

I've built a small lab using Dell C6220 nodes: one of the nodes hosts FreeNAS with dual 10GbE NICs for storage traffic, and I am running Hyper-V on Windows Server 2016. However, I recently discovered that with Hyper-V, if you create a vSwitch on top of the 10GbE ports, throughput drops by ~60%, so I am thinking of using the MCX354A cards to create a dedicated storage network for some connections and use the rest for guest traffic.

Any input is greatly appreciated!!
 

Rand__

Which cable do you refer to? The 40GbE QSFP+ to 4x 10GbE SFP+ cables? Those only work the other way round, AFAIK. I.e., on a switch which supports this (e.g. Mellanox) you can connect 4x 10GbE SFP+ cards, or a switch with 4 SFP+ ports (or maybe a mix).
You can't connect these to a card on the QSFP+ side, I believe.
 

iLya

Rand__ said:
> Which cable do you refer to? The 40GB QSFP to 4x10 GBe SFP+ cables? Those only work the other way round afaik. I.e. on a switch which supports this (i.e. Mellanox) you can connect 4x 10 GbE SFP+ Cards or a switch with 4 SFP+ ports (or a mix maybe).
> You can't connect these to a card with the qsfp side I believe.
Yes, I was hoping to use the 40GbE QSFP+ to 4x 10GbE SFP+ cables.
I was reading the User Manual: ConnectX-3 VPI.
The manual states that the "Network Connector Types" are QSFP+.
Also, section "1.4 Connectivity" states the following: "Interoperable with InfiniBand or 10/40 Gb Ethernet switches".

I was thinking of using something like this Cisco QSFP Breakout, but the strange part is that anything longer than 0.5m seems to be a DAC cable, which I wouldn't think would work for this scenario.

Let me know what you think, but based on the documentation it seems like going from the card (QSFP+) -> 10GbE switch (SFP+) is a valid scenario.
 

iLya

Another potential option, until I upgrade to something like an Arista QSFP+ switch, is to use an adapter that does QSFP+ to SFP+, but that brings the connectivity down to 10Gbps on each port of the card.
I already have two other SFP+ ports, so I would end up with a total of 40Gbps on each node, where 2 ports are for storage and 2 are for guest traffic - which I won't complain too much about for now.

Here is a quick video on YouTube about the QSFP+ to SFP+ adapter: LINK