Flashing stock Mellanox firmware to OEM (EMC) ConnectX-3 IB/Ethernet dual-port QSFP adapter


NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
Wonder if the fiber optic cable is not long enough, because that Arista QSFP-LR4-40G is meant for long-range, multi-kilometer distances.
LR4 transceivers usually do not have a minimum length and do not require an attenuator for short distances.
 

Jamy

Member
Mar 29, 2017
I flashed an MCX354A-QCBT to the FCCT and everything went fine; it reports both ports as ETH, but I cannot get link lights on the NIC when connected to my Ubiquiti 10Gb switch. I have verified with another server that the cables are good and the ports on the Ubiquiti are good. Anyone have any ideas what else it could be? The OS of my server (XCP-ng) is able to see the NIC. Below is the mlxconfig output.

Code:
Device #1:
----------

Device type:    ConnectX3
Device:         /dev/mst/mt4099_pci_cr0

Configurations:                                      Next Boot
        SRIOV_EN                                    False(0)
        NUM_OF_VFS                                  8
        LINK_TYPE_P1                                ETH(2)
        LINK_TYPE_P2                                ETH(2)
        LOG_BAR_SIZE                                3
        BOOT_PKEY_P1                                0
        BOOT_PKEY_P2                                0
        BOOT_OPTION_ROM_EN_P1                       False(0)
        BOOT_VLAN_EN_P1                             False(0)
        BOOT_RETRY_CNT_P1                           0
        LEGACY_BOOT_PROTOCOL_P1                     None(0)
        BOOT_VLAN_P1                                1
        BOOT_OPTION_ROM_EN_P2                       False(0)
        BOOT_VLAN_EN_P2                             False(0)
        BOOT_RETRY_CNT_P2                           0
        LEGACY_BOOT_PROTOCOL_P2                     None(0)
        BOOT_VLAN_P2                                1
        IP_VER_P1                                   IPv4(0)
        IP_VER_P2                                   IPv4(0)
        CQ_TIMESTAMP                                True(1)
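
(For anyone landing here from the thread title: the crossflash step itself is normally done with Mellanox's MFT tools. A rough sketch of the usual sequence, with the firmware image name as a placeholder, not a verified recipe for this exact card:)

Code:
mst start                                   # load the MST modules and enumerate devices
mst status                                  # lists e.g. /dev/mst/mt4099_pci_cr0
flint -d /dev/mst/mt4099_pci_cr0 query      # note the current PSID and firmware version first
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-rel.bin -allow_psid_change burn   # image name is a placeholder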
 

i386

Well-Known Member
Mar 18, 2016
Germany
In my experience the Mellanox NICs work with everything you throw at them; usually it's the other side or the cables that are problematic.

How did you test the ports on the Ubiquiti switch? Can you use the other server to test a direct link with the Mellanox card and the cable?
 

tinfoil3d

QSFP28
May 11, 2020
Japan
Try loop linking your ports to see if Ethernet indeed works.
In Linux you can use network namespaces and assign one port to a separate namespace, which lets you have both ports on the same subnet and test with something like iperf.
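
A minimal sketch of that loopback test, assuming the card's two ports show up as enp1s0 and enp1s0d1 (substitute whatever ip link reports on your system), with a cable looped between them:

Code:
ip netns add looptest                           # isolated namespace for the second port
ip link set enp1s0d1 netns looptest
ip addr add 10.99.0.1/24 dev enp1s0
ip link set enp1s0 up
ip netns exec looptest ip addr add 10.99.0.2/24 dev enp1s0d1
ip netns exec looptest ip link set enp1s0d1 up
ip netns exec looptest iperf3 -s &              # iperf3 server behind port 2
iperf3 -c 10.99.0.2                             # client on port 1, traffic crosses the looped cable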
 

Jamy

Member
Mar 29, 2017
In my experience the Mellanox NICs work with everything you throw at them; usually it's the other side or the cables that are problematic.

How did you test the ports on the Ubiquiti switch? Can you use the other server to test a direct link with the Mellanox card and the cable?
I connected my NAS to the same ports, with the same cables.
Try loop linking your ports to see if Ethernet indeed works.
In Linux you can use network namespaces and assign one port to a separate namespace, which lets you have both ports on the same subnet and test with something like iperf.
I was able to get it working: I had to get QSFP-to-SFP+ adapters and use SFP+ modules, per this article: Using a 40GbE (QSFP+) NIC with a 10GbE Switch (SFP+) (servethehome.com). I am not terribly familiar with 40Gb Ethernet, but I was surprised that they don't auto-negotiate down to 10Gb.
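
On the auto-negotiation point, a hedged sketch of how link speed can usually be checked and, if necessary, forced on Linux with the mlx4_en driver (interface name is a placeholder, and whether forcing works depends on driver and firmware):

Code:
ethtool enp1s0                               # shows supported link modes and the current speed
ethtool -s enp1s0 speed 10000 autoneg off    # try pinning 10GbE if negotiation picks nothing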
 

Jamy

Member
Mar 29, 2017
With these cards, if I buy a QSFP+ breakout cable to 4 SFP+'s, how do the connections work? Do I set up a LAG/bond on the switch with the four SFP+ ports? Does the NIC see all four SFP+ breakouts as separate connections? I don't have a cable yet and have never used one, so forgive my ignorance.
 

mach3.2

Active Member
Feb 7, 2022
With these cards, if I buy a QSFP+ breakout cable to 4 SFP+'s, how do the connections work? Do I set up a LAG/bond on the switch with the four SFP+ ports? Does the NIC see all four SFP+ breakouts as separate connections? I don't have a cable yet and have never used one, so forgive my ignorance.
Only the first 10GbE channel will link up; the rest won't work because the CX3 doesn't support breaking out 40GbE into 4 separate 10GbE links.
 

tinfoil3d

QSFP28
May 11, 2020
Japan
Only the first 10GbE channel will link up; the rest won't work because the CX3 doesn't support breaking out 40GbE into 4 separate 10GbE links.
Thanks, I didn't know that. I knew 40GbE consists of 4x10, but I faced the same problem trying to connect various transceivers, MM and SM, in a Ruckus 7150 (10GbE) to an Arista 40GbE-UNIV transceiver in an MCX354. The likely reason is that those optics use fairly unique wavelengths and probably only link up with each other, so even if they had otherwise been perfectly compatible, they might still not have linked because of that. I do have a QSFP+-to-SFP+ adapter now, but I already solved it the other way with a longer loop, still pairing Arista transceivers with Arista transceivers. Thanks for the insight.
 

blunden

Active Member
Nov 29, 2019
With these cards, if I buy a QSFP+ breakout cable to 4 SFP+'s, how do the connections work? Do I set up a LAG/bond on the switch with the four SFP+ ports? Does the NIC see all four SFP+ breakouts as separate connections? I don't have a cable yet and have never used one, so forgive my ignorance.
They can work like that in switches, but not NICs. :)
 

Jamy

Member
Mar 29, 2017
Anyone enabled SR-IOV on these cards? Does it work well? What's needed to configure it?
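
For reference, a rough sketch of what is typically involved on the card side, reusing the same mst device path shown in the mlxconfig output earlier in the thread (not a tested recipe for these exact cards):

Code:
mst start
mlxconfig -d /dev/mst/mt4099_pci_cr0 set SRIOV_EN=1 NUM_OF_VFS=8   # the same options visible in the output above; reboot afterwards
# on Linux the mlx4_core driver then has to be told to create the VFs,
# e.g. via its num_vfs module parameter (exact options vary by driver version)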
 

DaveLTX

Active Member
Dec 5, 2021
At least on Windows, the 649283-B21 cards do NOT like being in auto or IB mode; the driver will crash constantly. They seem to default to IB, which is why it crashes.
I flashed the 764 CX3 Pro FLR cards similarly, and being in IB mode crashes the driver there as well.
Once you keep that in mind, both the 764285-B21 and the 649283-B21 flashed to proper Mellanox firmware work fine on Windows. Yet to test on Linux.

I couldn't even configure the 649283-B21 with the HP firmware, and the moment Windows loads it entirely disappears from my switch and from Windows. Odd!

UPDATE: the 649 has one broken port. (The 764 doesn't have broken ports, but if you attempt to flash it to Mellanox firmware you WILL break the ports; they come back if you reflash the HP firmware.)
:mad:
 

DaveLTX

Active Member
Dec 5, 2021
Those who got the HP 764 like I did, watch out: the configuration on them is completely back to front.
Flashing back the original firmware fixed the ports.


EDIT: so it seems my 649283 is a rev C, and those are already 40GbE capable...
Flashing the A2~A5 firmware, which is publicly available, breaks one port; I'll have to dig into the ini file as well.
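
For digging into the ini, one possible starting point (device path as in the earlier mlxconfig output; treat it as a sketch, not a tested recipe) is flint's configuration dump:

Code:
flint -d /dev/mst/mt4099_pci_cr0 dc > current_fw.ini    # dump the running firmware's configuration (ini) section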
 

P0rt4lN3T

Member
Nov 10, 2023
PCI-E gen 3 is 8 Gb/s per lane, so right there you're down to 32 Gb/s max theoretical. And even then it won't be that high due to a little bit of coding overhead.

I don't recommend changing the MTU unless it's a direct computer to computer link.

I also recommend being honest with what your NAS can do read/write wise. Unless it can really do more than 16 Gb/s (2 GB/s) then don't worry about your link performance - it's good enough. ;-)

Hi guys, I just found this forum topic today while googling for information about the same issue.

So I have 2 servers on our test bench:

1x R620 with 2x E5-2697 v2 (total 24c/48t)
1x MCX455 100G network card switched to Ethernet mode

On the other side we have:

1x Dell R420 with 2x E5-2470 v2 (total 20c/40t)
1x MCX354A-FCBT with firmware switched to Ethernet mode

I just found that the servers have PCIe 3.0 x16 slots.

I linked both cards with 40G SR4 transceivers and fiber cable, and the link auto-negotiated to 40Gb on both ends.

While doing a bandwidth test, in TCP I can only max out at 9.9 Gbps TX and 4.8 Gbps RX; in UDP mode I can get 9.4 Gbps TX and 9.4 Gbps RX, with the bandwidth test set to both send and receive.

Could this be a limitation of the PCIe slot lanes on both servers? I have found online that PCIe 3.0 x16 is limited to a bandwidth of 16 GT/s; is this correct? If so, how come PCIe 3.0 fiber NICs such as the MCX354A and MCX455 can achieve 40 Gbps in Ethernet mode?

Another thing that confuses me is that the Mellanox datasheet states 40 Gbps for InfiniBand and 10 Gbps for Ethernet mode:

Ordering Part Number (OPN): MCX353A-QCBT (single-port card), MCX354A-QCBT (dual-port card)
Data Transmission Rate: InfiniBand QDR (40 Gb/s); Ethernet 10 Gb/s
Network Connector Types: single or dual-port QSFP+
PCI Express (PCIe) SerDes Speed: PCIe 3.0 x8, 8 GT/s


So I confess I am now confused... if the PCIe 3.0 x16 slot is hardware-limited to 16 Gb/s, how can we achieve 40 Gb/s on these cards?
 


P0rt4lN3T

Member
Nov 10, 2023
iPerf on Windows is not the same as iperf.

That's because the Windows implementation of iperf is not optimized.
Boot Linux from a live CD/USB on the Windows host and try iperf, or boot Windows on both machines and use ntttcp for network testing.
Hiya. Well, I am running the Linux-based RouterOS on both machines, and the bandwidth test cannot go over 16 Gbps aggregate on both sides, as if hardware were limiting it somewhere (pictures attached below). On TCP I cannot get past the 16 Gbps aggregate barrier, RX and TX together, but on UDP I can get over 19 Gbps aggregate, as in pic 2 below.
 


fohdeesha

Kaini Industries
Nov 20, 2016
fohdeesha.com
So I confess I am now confused... if the PCIe 3.0 x16 slot is hardware-limited to 16 Gb/s, how can we achieve 40 Gb/s on these cards?
PCIe 3.0 is 8 Gbps *per lane*. x8 (which these cards are) is 8 lanes, so that's 64 Gbps, and PCIe is full duplex, so that's 64 Gbps in and out simultaneously.
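
As a rough back-of-the-envelope (counting only the 128b/130b line encoding and ignoring other PCIe protocol overhead):

Code:
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding; these cards are x8
echo "scale=2; 8*128/130"   | bc    # ~7.87  Gbps usable per lane
echo "scale=2; 8*8*128/130" | bc    # ~63.01 Gbps per direction on an x8 link (full duplex)
# 40GbE on one port (40 Gbps) fits under ~63 Gbps; a PCIe 2.0 x4 link
# (5 GT/s, 8b/10b -> 4 Gbps per lane) would top out around 16 Gbps instead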
 

P0rt4lN3T

Member
Nov 10, 2023
Just a guess, you may have one card in a system's pci-e slot that's only gen 2 x 4 lanes which would net you a max of 16 Gb/s even though the 40Gb link is active.
Hiya

Thanks for your post. So I did some further testing: I removed all the other 10Gb fiber NICs from both machines and left just the 40Gb card and the 100Gb card in the two servers. Now I can get 32 Gbps aggregate on TCP, as shown in the picture below. So I guess this is the maximum I can get on PCIe 3.0 x16, right?
 


P0rt4lN3T

Member
Nov 10, 2023
PCIe 3.0 is 8 Gbps *per lane*. x8 (which these cards are) is 8 lanes, so that's 64 Gbps, and PCIe is full duplex, so that's 64 Gbps in and out simultaneously.
Hiya, strange... I can only get 32 Gbps now, only 16 Gbps down and 16 Gbps up at the same time, with only these cards plugged into PCIe 3.0 x16 and no other fiber NICs. I will try to play with the MTU now and raise it to 9000, just to check.
 

P0rt4lN3T

Member
Nov 10, 2023
PCIe 3.0 is 8 Gbps *per lane*. x8 (which these cards are) is 8 lanes, so that's 64 Gbps, and PCIe is full duplex, so that's 64 Gbps in and out simultaneously.
I am confused, because the docs I found only stated 16 Gbps. Is that per lane? Where do I find out how many lanes I have? Is it based on the CPU model, or on the server hardware's slot spec?
 

DaveLTX

Active Member
Dec 5, 2021
I am confused, because the docs I found only stated 16 Gbps. Is that per lane? Where do I find out how many lanes I have? Is it based on the CPU model, or on the server hardware's slot spec?
You are calculating based on GIGABYTES for the PCIe lanes; PCIe 3.0 is 8 GIGABITS per lane.
All networking equipment is rated in GIGABITS.
x16 is 16 GB/s; a dual-port 40GbE card needs at most an x8 slot. There is no limitation from the PCIe slots, it's something else.
Hope that actually clears it up for you.

The 2697 v2 has 40 PCIe lanes; how they're split up depends on the motherboard. The 2470 v2, on the other hand, has only 24 lanes. But both systems have plenty of PCIe lanes with dual CPUs (2x individual CPUs).
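
To see what a card actually negotiated (generation and lane width) on a Linux box, a quick check, assuming the Mellanox card sits at PCI address 04:00.0 (that address is a placeholder; the first command finds the real one):

Code:
lspci | grep -i mellanox                       # find the card's bus address
sudo lspci -vv -s 04:00.0 | grep -i lnksta     # the LnkSta line shows e.g. "Speed 8GT/s, Width x8"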
 

P0rt4lN3T

Member
Nov 10, 2023
You are calculating based on GIGABYTES for the PCIe lanes; PCIe 3.0 is 8 GIGABITS per lane.
All networking equipment is rated in GIGABITS.
x16 is 16 GB/s; a dual-port 40GbE card needs at most an x8 slot. There is no limitation from the PCIe slots, it's something else.
Hope that actually clears it up for you.

The 2697 v2 has 40 PCIe lanes; how they're split up depends on the motherboard. The 2470 v2, on the other hand, has only 24 lanes. But both systems have plenty of PCIe lanes with dual CPUs (2x individual CPUs).
Hi DaveLTX, thanks, I will give it another go tomorrow. I am using MikroTik RouterOS on both servers. I have made all the custom BIOS options available, upgraded to the latest BIOS on both servers, and removed all the other NICs, leaving only the 100G MCX455 installed in the R620 and the MCX354A in the other server, the R420.

But on TCP throughput, even with jumbo frames enabled and MTU 9000, it's still struggling to get past 35 Gbps.

That means 17.5 Gbps TX and 17.5 Gbps RX at the same time.

If I just test receiving data, the max I could get was 34 Gbps. I will try to find the bottleneck tomorrow. Could RAM interfere? I only have 16GB of RAM in each machine, with one memory slot populated per machine; I will try to put 64GB of RAM in each server tomorrow and try again.