QSFP+ switch seems too good to be true

Jul 19, 2020
51
19
8
As far as I am aware, such modules require support from the switch to run the port at 1 Gbps; the autonegotiation extends all the way back to the switch ASIC. If the switch can't run the port at 1 Gbps, or doesn't support the required autonegotiation, this won't work.
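If you want to check what your particular switch will do, here's a rough sketch of how to look at and force the port speed, assuming Arista EOS syntax since that's the switch discussed later in this thread (other vendors' CLIs differ, "Ethernet48" is a placeholder port, and whether a given SFP+/QSFP+ port accepts 1G at all depends on the ASIC):
Code:
! Show the negotiated speed on every port (Arista EOS syntax assumed)
switch# show interfaces status

! Try forcing one port down to 1 Gbps; EOS should reject the command
! if the port/ASIC can't run at that speed
switch(config)# interface Ethernet48
switch(config-if-Et48)# speed forced 1000full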
 

AllenAsm

Member
Jul 30, 2018
55
3
8
Wanted to make a follow-up note on this. I have everything in the DC: I hooked up all the ConnectX-3 cards and the Arista 32-port switch, updated all of the drivers, and turned on RDMA and the other needed features. This thing works like a CHAMP. I'm getting almost full 40GbE on each port and RDMA is working nicely. Still tuning the Hyper-V side of it, but overall I just upgraded my little DC to 40G networking for under $2k. I appreciate all the help everyone gave to get me there.
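For anyone doing the same on the Hyper-V side, this is roughly how to confirm Windows actually sees the cards as RDMA-capable before tuning anything else (a minimal sketch; "40G-Port1" is a placeholder adapter name, and the cmdlets assume Windows Server 2016/2019 with the Mellanox WinOF driver installed):
Code:
# List all adapters and whether RDMA (Network Direct) is enabled on them
Get-NetAdapterRdma

# Enable RDMA on a specific adapter if it shows as disabled
# ("40G-Port1" is a placeholder adapter name)
Enable-NetAdapterRdma -Name "40G-Port1"

# SMB's view of the same thing: the interfaces SMB Direct can use
Get-SmbClientNetworkInterface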
 

AllenAsm

Member
Jul 30, 2018
55
3
8
Oh, also: the 10Gbps SFP+ transceivers I put in the SFP+ ports work at both 10G and 1G speeds, so I was able to use one of them to connect up to the router as well. The other 10Gbps SFP+ port goes to the QNAP for shared file storage and backups. All works great.
 

fossxplorer

Active Member
Mar 17, 2016
554
97
28
Oslo, Norway
I also have a couple of MCX314A cards flashed to MCX354A (I think; it's been a while). I'm trying to get RDMA working on RHEL 8.3, but it seems I can't.
Code:
[root@virt3 ~]# ib_send_bw -R -d mlx4_0 -i 1  --report_gbits -D10

************************************
* Waiting for client to connect... *
************************************
---------------------------------------------------------------------------------------
                    Send BW Test
Dual-port       : OFF        Device         : mlx4_0
Number of qps   : 1        Transport type : IB
Connection type : RC        Using SRQ      : OFF
PCIe relax order: Unsupported
ibv_wr* API     : OFF
RX depth        : 512
CQ Moderation   : 1
Mtu             : 2048[B]
Link type       : Ethernet
GID index       : 3
Max inline data : 0[B]
rdma_cm QPs     : ON
Data ex. method : rdma_cm
---------------------------------------------------------------------------------------
Waiting for client rdma_cm QP to connect
Please run the same command with the IB/RoCE interface IP
---------------------------------------------------------------------------------------
local address: LID 0000 QPN 0x021a PSN 0x37b10e
GID: 00:00:00:00:00:00:00:00:00:00:255:255:192:168:200:02
remote address: LID 0000 QPN 0x0219 PSN 0x941c2e
GID: 00:00:00:00:00:00:00:00:00:00:255:255:192:168:200:01


[root@virt0 ~]# rping -c -a 192.168.200.1
cma event RDMA_CM_EVENT_ADDR_ERROR, error -19
waiting for addr/route resolution state 1
Any help would be appreciated!
Output from lspci:
[root@virt0 ~]# lspci -vv| grep -i Mell
01:00.0 Ethernet controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
Subsystem: Mellanox Technologies Mellanox Technologies ConnectX-3 Pro Stand-up dual-port 40GbE MCX314A-BCCT

EDIT:
Found the command I used to flash:
flint -d /dev/mst/mt4099_pciconf0 -i fw-ConnectX3Pro-rel-2_42_5000-MCX354A-FCC_Ax-FlexBoot-3.4.752.bin -allow_psid_change burn
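For the rping failure above: RDMA_CM_EVENT_ADDR_ERROR with error -19 (ENODEV) usually means rdma_cm couldn't associate that IP address with a local RDMA device, so it's worth checking the RoCE GID table and making sure an rping server is actually listening on the far end first. A rough checklist, assuming the inbox rdma-core and infiniband-diags packages on RHEL 8:
Code:
# Confirm the mlx4 device is up and bound to the right netdev
rdma link show

# Dump the GID table; there should be a RoCE GID that embeds 192.168.200.x
ibv_devinfo -v -d mlx4_0 | grep -i gid

# Make sure the 192.168.200.x address really sits on the ConnectX-3 port
ip addr show

# Start a server on one node before running the client on the other
rping -s -a 192.168.200.1 -v          # on the node that owns .1
rping -c -a 192.168.200.1 -v -C 10    # from the other node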
 
Last edited:

Robert Townley

New Member
Dec 23, 2014
29
5
3
fossxplorer said:
I also have a couple of MCX314A cards flashed to MCX354A (I think; it's been a while). I'm trying to get RDMA working on RHEL 8.3, but it seems I can't. [...] Any help would be appreciated!
Does your switch support RDMA, and is it configured for it?
 

AllenAsm

Member
Jul 30, 2018
55
3
8
Robert Townley said:
Does your switch support RDMA, and is it configured for it?
It does, and we've got this working with S2D. It's screaming fast at 40G, and with RDMA the Hyper-V cluster is just super fast overall. Highly recommend this switch even though it's older technology.

Notably, we HAD to use ConnectX-3 Pro or newer cards to get RDMA support.
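If anyone wants to double-check that the S2D/SMB traffic is really going over RDMA rather than silently falling back to TCP, this is roughly what to look at on a cluster node (a sketch; assumes Windows Server 2016/2019, and the performance counter set name is an assumption):
Code:
# Every SMB connection between nodes should show both ends as RDMA capable
Get-SmbMultichannelConnection

# The server-side view of which NICs SMB Direct is allowed to use
Get-SmbServerNetworkInterface

# The SMB Direct RDMA performance counters (counter set name assumed)
Get-Counter -ListSet "SMB Direct Connection"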
 
Last edited:

tochnia

New Member
Mar 5, 2022
15
3
3
I have a 42U colo cabinet with at least 16U of space available, so the rear-to-front airflow won't really be a problem for me (maybe just longer cables).

I bought that switch and it should be delivered next week.

I found some QSFP+ to 4x RJ45 10GbE cables, which should solve my need for a few 10GbE ports super cheaply. The only downside is that I can't individually manage those 4 ports. Should be fine though, since I just need them to send data to/from an enterprise QNAP device.

Now I need to get 5 QSFP+ NICs and some QSFP+ cables to tie this all together. I'm assuming this all works on both Linux and Windows Server 2019.

Anything else I'm going to need for this? This will go into my personal Hyper-V lab, so everything will have to work with that.
Can you post the manufacturer/model of these QSFP+ to 4x RJ45 10GbE cables?
I can't seem to find anything other than QSFP+ to 4x 10G SFP+ optical...