[SOLVED] Mellanox ConnectX-3 can't get 40G, only 10G


BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
So I have just recently installed HP 649281-B21 cards in a Linux/Debian server and a FreeBSD server, directly linked with no switch.

I followed the flashing tutorial here: https://forums.servethehome.com/ind...x-3-to-arista-7050-no-link.18369/#post-178015

And I have the following cable: Mellanox MFS4R12CB-003 Infiniband Cables

But I only get 10G, not 40G :(
Is this a firmware config error, or did I get the wrong QSFP cable?

EDIT: The flashing tutorial above is outdated.
Use this instead: https://forums.servethehome.com/ind...net-dual-port-qsfp-adapter.20525/#post-198015
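For anyone following along, a quick way to confirm what firmware and PSID a card currently reports (a sketch, assuming the mstflint package is installed; the PCI address is the one from the lshw output further down):
Code:
# query the adapter directly over PCI; output includes FW version and PSID
sudo mstflint -d 82:00.0 query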
 
Last edited:

cactus

Moderator
Jan 25, 2011
CA
I can't find a source right now, but I was under the impression QDR didn't have enough bandwidth for 40GbE, and that is why there were no 40GbE/QDR cards (e.g. CX2).
Mellanox does sell an FDR cable, MC2207310-*. See http://www.mellanox.com/pdf/partners/Mellanox_PartnerFIRST_Cable_Guide.pdf

Edit:
Eh, seems I was wrong. The cable you have should support FDR10, which uses four 10.3125 GBd lanes, which is exactly what 40GbE requires (line rate: 4 x 10.3125 GBd = 41.25 GBd, full duplex). From InfiniBand - Wikipedia & 100 Gigabit Ethernet - Wikipedia
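For reference, the arithmetic (assuming the 64b/66b encoding that both FDR10 and 40GbE use):
Code:
4 lanes x 10.3125 GBd         = 41.25 GBd line rate
41.25 GBd x 64/66 (encoding)  = 40.00 Gbit/s usable data rate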
 
Last edited:
  • Like
Reactions: BackupProphet

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
Linux side:
Code:
sudo lshw -C net

  *-network
       description: Ethernet interface
       product: MT27500 Family [ConnectX-3]
       vendor: Mellanox Technologies
       physical id: 0
       bus info: pci@0000:82:00.0
       logical name: enp130s0d1
       version: 00
       serial: 00:02:c9:3a:7d:21
       size: 10Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msix pciexpress bus_master cap_list rom ethernet physical fibre autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=mlx4_en driverversion=2.2-1 (Feb 2014) duplex=full firmware=2.40.5030 ip=10.10.10.71 latency=0 link=yes multicast=yes port=fibre speed=10Gbit/s
       resources: irq:130 memory:fbc00000-fbcfffff memory:fa800000-faffffff memory:fbb00000-fbbfffff memory:f2800000-fa7fffff
FreeBSD:
Code:
ifconfig

mlxen3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
   options=ed07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
   ether 00:02:c9:37:ba:21
   hwaddr 00:02:c9:37:ba:21
   inet 10.10.10.1 netmask 0xff000000 broadcast 255.255.255.0
   nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
   media: Ethernet autoselect (10Gbase-CX4 <full-duplex,rxpause,txpause>)
   status: active
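One quick sanity check (a sketch; the interface names are the ones from the outputs above) is to list which link modes each side actually offers; if no 40G media shows up in the supported list, the limit is the card or firmware rather than the cable:
Code:
# Linux: supported and advertised link modes
sudo ethtool enp130s0d1

# FreeBSD: list every media type the interface supports
ifconfig -m mlxen3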
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
Okay, here is something interesting:

Linux
Code:
Settings for enp130s0d1:
   Supported ports: [ FIBRE ]
   Supported link modes:   1000baseKX/Full
                           10000baseKX4/Full
                           10000baseKR/Full
   Supported pause frame use: Symmetric Receive-only
   Supports auto-negotiation: Yes
   Advertised link modes:  1000baseKX/Full
                           10000baseKX4/Full
                           10000baseKR/Full
   Advertised pause frame use: Symmetric
   Advertised auto-negotiation: Yes
   Speed: 10000Mb/s
   Duplex: Full
   Port: FIBRE
   PHYAD: 0
   Transceiver: internal
   Auto-negotiation: off
   Supports Wake-on: d
   Wake-on: d
   Current message level: 0x00000014 (20)
                  link ifdown
   Link detected: yes
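Since no 40G link modes show up in the supported list at all, it is worth checking how the ports themselves are configured. A sketch, assuming the mstflint tools are installed (mstconfig is the open-source counterpart of mlxconfig) and using the PCI address from the lshw output above:
Code:
# show the current/next-boot port configuration, including LINK_TYPE_P1/P2
sudo mstconfig -d 82:00.0 query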
 

RageBone

Active Member
Jul 11, 2017
Your firmware is a bit older than mine: you have
2.40.5030

I have 2.42.5000 from the Mellanox page.
I have to reset the CMOS of my NAS to get it to boot up again. ETA ~10 min.
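A quick way to compare firmware and driver versions from the OS (a sketch; the interface name is whatever mlx4_en created on your box):
Code:
# prints driver, driver version, firmware-version and bus-info for the port
ethtool -i enp2s0d1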
 

acquacow

Well-Known Member
Feb 15, 2017
If you are using a ConnectX-3 in emulated Ethernet mode, 10GigE is all you get. You can only do 40Gig if you are doing InfiniBand.
 

RageBone

Active Member
Jul 11, 2017
Code:
  description: Ethernet interface
       product: MT27500 Family [ConnectX-3]
       vendor: Mellanox Technologies
       physical id: 0
       bus info: pci@0000:02:00.0
       logical name: enp2s0d1
       version: 00
       serial: 00:02:c9:ff:e1:31
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msix pciexpress bus_master cap_list rom ethernet physical fibre autonegotiation
       configuration: autonegotiation=off broadcast=yes driver=mlx4_en driverversion=4.0-0 duplex=full firmware=2.42.5000 ip=192.168.2.2 latency=0 link=yes multicast=yes port=fibre
       resources: irq:62 memory:fb800000-fb8fffff memory:d0800000-d0ffffff memory:fb700000-fb7fffff

Code:
[rage@RageStation rdma]$ ethtool enp2s0d1
Settings for enp2s0d1:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseKX/Full 
                            10000baseKX4/Full 
                            10000baseKR/Full 
                            40000baseCR4/Full 
                            40000baseSR4/Full 
                            56000baseCR4/Full 
                            56000baseSR4/Full 
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes:  1000baseKX/Full 
                            10000baseKX4/Full 
                            10000baseKR/Full 
                            40000baseCR4/Full 
                            40000baseSR4/Full 
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 40000Mb/s
    Duplex: Full
    Port: FIBRE
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: off
Cannot get wake-on-lan settings: Operation not permitted
    Current message level: 0x00000014 (20)
                   link ifdown
    Link detected: yes
 

RageBone

Active Member
Jul 11, 2017
@acquacow They are set to ETH mode, where they should be able to do up to 56GbE, since they are VPI (IB/ETH) cards.
You can take a look at the ethtool output.

I managed to get 27Gbit with iperf, limited by a single thread.

@BackupProphet acquacow is right in the sense that with IPoIB emulation you can't get THAT fast.
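For anyone benchmarking this, multiple parallel streams usually help when a single sender thread is the bottleneck. A sketch (IPs are the ones from the earlier outputs; iperf 2 runs one thread per stream with -P):
Code:
# server side
iperf -s

# client side: 4 parallel streams, 30 second run
iperf -c 10.10.10.1 -P 4 -t 30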
 

RageBone

Active Member
Jul 11, 2017
Code:
[rage@RageStation rdma]$ ibstat
CA 'mlx4_0'
    CA type: MT4099
    Number of ports: 2
    Firmware version: 2.42.5000
    Hardware version: 1
    Node GUID: 0x0002c90300ffe130
    System image GUID: 0x0002c90300ffe133
    Port 1:
        State: Down
        Physical state: Disabled
        Rate: 10
        Base lid: 0
        LMC: 0
        SM lid: 0
        Capability mask: 0x00010000
        Port GUID: 0x0202c9fffeffe130
        Link layer: Ethernet
    Port 2:
        State: Active
        Physical state: LinkUp
        Rate: 40
        Base lid: 0
        LMC: 0
        SM lid: 0
        Capability mask: 0x00010000
        Port GUID: 0x0202c9fffeffe131
        Link layer: Ethernet

His cards have the mlx4_en module loaded; I think they are running in ETH mode.
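One way to confirm that from the OS side (a sketch; mlx4_core driver assumed, PCI address taken from the earlier lshw output) is the driver's port-type entry in sysfs:
Code:
# show the current protocol per port (ib / eth / auto)
cat /sys/bus/pci/devices/0000:82:00.0/mlx4_port1
cat /sys/bus/pci/devices/0000:82:00.0/mlx4_port2

# switch a port to Ethernet at runtime (not persistent across driver reloads
# unless also set in the firmware configuration)
echo eth | sudo tee /sys/bus/pci/devices/0000:82:00.0/mlx4_port1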
 

zer0sum

Well-Known Member
Mar 8, 2013
Which exact firmware did you use?

If it was QCBT, you'll be stuck at 10G Ethernet.

You want the FCBT version, which will run 40/56G, and you should switch them into ETH mode as well:
Code:
/opt/mellanox/bin/mlxconfig -d mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
Grab the latest firmware here - http://www.mellanox.com/page/firmware_table_ConnectX3IB
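If the card turns out to be on the wrong image, a typical update sequence with the plain mstflint tools looks roughly like this (a sketch; the .bin name is a placeholder for whichever image matches your card, and cross-flashing to a different PSID additionally needs flint's -allow_psid_change option):
Code:
# check the currently burned firmware and PSID
sudo mstflint -d 82:00.0 query

# burn the downloaded image (placeholder filename), then power-cycle the host
sudo mstflint -d 82:00.0 -i fw-ConnectX3-FCBT-example.bin burn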

[attached image: upload_2018-11-16_16-26-13.png]
 
Last edited:

i386

Well-Known Member
Mar 18, 2016
Germany
Okey, here is something interesting

Linux
Code:
Settings for enp130s0d1:
   Supported ports: [ FIBRE ]
   Supported link modes:   1000baseKX/Full
                           10000baseKX4/Full
                           10000baseKR/Full
   Supported pause frame use: Symmetric Receive-only
   Supports auto-negotiation: Yes
   Advertised link modes:  1000baseKX/Full
                           10000baseKX4/Full
                           10000baseKR/Full
   Advertised pause frame use: Symmetric
   Advertised auto-negotiation: Yes
   Speed: 10000Mb/s
   Duplex: Full
   Port: FIBRE
   PHYAD: 0
   Transceiver: internal
   Auto-negotiation: off
   Supports Wake-on: d
   Wake-on: d
   Current message level: 0x00000014 (20)
                  link ifdown
   Link detected: yes
I always wondered whether QSFP+ transceivers would be able to run at 10Gbit (or slower), and it seems it's possible. Thanks! :D
 

svtkobra7

Active Member
Jan 2, 2017
1. Did this ...
don't follow that post, it has you compile an image using really old firmware sources (my fault, I need to remove that post or point it to the correct one)

use this, you can flash the latest image directly from mellanox: https://forums.servethehome.com/ind...net-dual-port-qsfp-adapter.20525/#post-198015

2. Assume this means 10G cap ...
If you are using a ConnectX-3 in emulated Ethernet mode, 10GigE is all you get. You can only do 40Gig if you are doing InfiniBand.

3. So not sure I get this ...
Thanks fohdeesha, that made a huge difference! :D Now I have 40G

>> 4. iperf results don't seem to indicate anywhere near 40G ??? <<
Code:
Test                                     Port 1 Gbps  Port 2 Gbps  Total  Comment
[1]  vmnic2 (Port 1 - 10G over switch)   8.4          N/A          8.4    Ubuntu 16.04
[2]  vmnic2 (Port 1 - 10G over switch)   8.8          N/A          8.8    FreeNAS 11.1U6
[3]  vmnic128 (Port 2 - 40G switchless)  N/A          14.4         14.4   FreeNAS 11.1U6
[4]  vmnic2 + vmnic128,  concurrent      7.4          13.2         20.6  Test 1 + 3
Comments: (1) Test 1 & 2 are duplicative and test the same NIC in different VMs, (2) VM MTU 9000, ESXi vSwitch MTU 9000, Jumbo Frames enabled on switch

(not complaining - it is quite a nice step up from where I was :) - just wondering where my missing Gbps are ;))



[attached images: ESXi-01 CPU and ESXi-02 CPU usage during the tests]


Port 1: ESXi-01 [2 x 2690 v2] | HP-649281-B21 => Mellanox QSFP-to-SFP-Adapter => 10G SFP+ Passive DAC => Brocade ICX6450-24 <= 10G SFP+ Passive DAC <= Mellanox QSFP-to-SFP-Adapter <= HP-649281-B21 | ESXi-02 [2 x 2680 v2]

Port 2: ESXi-01 [2 x 2690 v2] | HP-649281-B21 => 40G QSFP+ Passive DAC <= HP-649281-B21 | ESXi-02 [2 x 2680 v2]

Code:
Querying Mellanox devices firmware ...
Device #1:
----------
Device Type: ConnectX3
Part Number: MCX354A-FCB_A2-A5
Description: ConnectX-3 VPI adapter card; dual-port QSFP; FDR IB (56Gb/s) and 40GigE; PCIe3.0 x8 8GT/s; RoHS R6
PSID: MT_1090120019
PCI Device Name: mt4099_pci_cr0
Port1 MAC: 00..00
Port2 MAC: 00..01
Versions: Current Available
FW 2.42.5000 N/A
PXE 3.4.0752 N/A
Status: No matching image found

Code:
Querying Mellanox devices firmware ...
Device #1:
----------
Device Type: ConnectX3
Part Number: MCX354A-FCB_A2-A5
Description: ConnectX-3 VPI adapter card; dual-port QSFP; FDR IB (56Gb/s) and 40GigE; PCIe3.0 x8 8GT/s; RoHS R6
PSID: MT_1090120019
PCI Device Name: mt4099_pci_cr0
Port1 MAC: 00..90
Port2 MAC: 00..91
Versions: Current Available
FW 2.42.5000 N/A
PXE 3.4.0752 N/A
Status: No matching image found

Code:
Device #1:
----------
Device type: ConnectX3
Device: mt4099_pci_cr0
Configurations: Next Boot
SRIOV_EN True(1)
NUM_OF_VFS 16
LINK_TYPE_P1 ETH(2)
LINK_TYPE_P2 ETH(2)
LOG_BAR_SIZE 3
BOOT_PKEY_P1 0
BOOT_PKEY_P2 0
BOOT_OPTION_ROM_EN_P1 False(0)
BOOT_VLAN_EN_P1 False(0)
BOOT_RETRY_CNT_P1 0
LEGACY_BOOT_PROTOCOL_P1 None(0)
BOOT_VLAN_P1 1
BOOT_OPTION_ROM_EN_P2 False(0)
BOOT_VLAN_EN_P2 False(0)
BOOT_RETRY_CNT_P2 0
LEGACY_BOOT_PROTOCOL_P2 None(0)
BOOT_VLAN_P2 1
IP_VER_P1 IPv4(0)
IP_VER_P2 IPv4(0)
CQ_TIMESTAMP True(1)

Code:
Device #1:
----------
Device type: ConnectX3
Device: mt4099_pci_cr0
Configurations: Next Boot
SRIOV_EN True(1)
NUM_OF_VFS 4
LINK_TYPE_P1 ETH(2)
LINK_TYPE_P2 ETH(2)
LOG_BAR_SIZE 3
BOOT_PKEY_P1 0
BOOT_PKEY_P2 0
BOOT_OPTION_ROM_EN_P1 False(0)
BOOT_VLAN_EN_P1 False(0)
BOOT_RETRY_CNT_P1 0
LEGACY_BOOT_PROTOCOL_P1 None(0)
BOOT_VLAN_P1 1
BOOT_OPTION_ROM_EN_P2 False(0)
BOOT_VLAN_EN_P2 False(0)
BOOT_RETRY_CNT_P2 0
LEGACY_BOOT_PROTOCOL_P2 None(0)
BOOT_VLAN_P2 1
IP_VER_P1 IPv4(0)
IP_VER_P2 IPv4(0)
CQ_TIMESTAMP True(1)

Code:
Advertised Auto Negotiation: true
Advertised Link Modes: 1000None/Half, 1000None/Full, 10000None/Half, 10000None/Full, 40000None/Half, 40000None/Full, Auto
Auto Negotiation: true
Cable Type: DA
Current Message Level: -1
Driver Info:
Bus Info: 0000:82:00:0
Driver: nmlx4_en
Firmware Version: 2.42.5000
Version: 3.17.9.12
Link Detected: true
Link Status: Up
Name: vmnic128
PHYAddress: 0
Pause Autonegotiate: false
Pause RX: true
Pause TX: true
Supported Ports:
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: false
Transceiver: internal
Virtual Address: 00..90
Wakeon: None

Code:
[root@ESXi-02:/opt/mellanox/bin] esxcli network nic get -n vmnic128
Advertised Auto Negotiation: true
Advertised Link Modes: 1000None/Half, 1000None/Full, 10000None/Half, 10000None/Full, 40000None/Half, 40000None/Full, Auto
Auto Negotiation: true
Cable Type: DA
Current Message Level: -1
Driver Info:
Bus Info: 0000:82:00:0
Driver: nmlx4_en
Firmware Version: 2.42.5000
Version: 3.17.9.12
Link Detected: true
Link Status: Up
Name: vmnic128
PHYAddress: 0
Pause Autonegotiate: false
Pause RX: true
Pause TX: true
Supported Ports:
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: false
Transceiver: internal
Virtual Address: 00..9b
Wakeon: None
 

fohdeesha

Kaini Industries
Nov 20, 2016
fohdeesha.com
That looks about right for trying to push 40GbE on a hypervisor; it's extremely CPU intensive to push that much bandwidth, and most (smart) hypervisors aren't going to starve all the VMs of CPU resources in order to do it.

Even on bare metal, people usually don't get anything past 20Gbps without switching to RDMA etc.
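For reference, the usual way to see what the link itself can do without the TCP and CPU overhead is an RDMA bandwidth test from the perftest package (a sketch; the device name is the one from the ibstat output above, and on RoCE you may need to pick the right GID with -x or use rdma_cm with -R):
Code:
# server side
ib_write_bw -d mlx4_0

# client side, pointing at the server's IP
ib_write_bw -d mlx4_0 10.10.10.1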
 

fohdeesha

Kaini Industries
Nov 20, 2016
fohdeesha.com
and "assumes this means 10gb cap" is incorrect, he's talking about the older connectx3 cards that only supported 10gbE ethernet. the card + firmware you have indeed supports 40gbE, if it didn't it would not link up with your switch (those 6610 rear 40gb ports support 40gb link speed only)
 
  • Like
Reactions: svtkobra7

svtkobra7

Active Member
Jan 2, 2017
That looks about right for trying to push 40GbE on a hypervisor; it's extremely CPU intensive to push that much bandwidth, and most (smart) hypervisors aren't going to starve all the VMs of CPU resources in order to do it.

Even on bare metal, people usually don't get anything past 20Gbps without switching to RDMA etc.
Thanks for the confirm ...

I completely agree with your points (quite a noob at this, as you personally know), but in light of them I find it quite astounding that this chap got 37.3 Gbps, a whopping 24.1 Gbps faster. I suppose the clock on his E5-1650 v2 (3.5 GHz) may explain some of that vs the E5-2690 v2 (@ 3.0 GHz) / E5-2680 v2 (@ 2.8 GHz), but I found it impressive. If I knew what I was doing I might try to emulate it, but I'm quite happy with the speed as is. Thanks again for your help. :)

Speed testing 40G Ethernet in the Homelab | Erik Bussink