[root@virt3 ~]# ib_send_bw -R -d mlx4_0 -i 1 --report_gbits -D10
************************************
* Waiting for client to connect... *
************************************
---------------------------------------------------------------------------------------
Send BW Test
Dual-port : OFF Device : mlx4_0
Number of qps : 1 Transport type : IB
Connection type : RC Using SRQ : OFF
PCIe relax order: Unsupported
ibv_wr* API : OFF
RX depth : 512
CQ Moderation : 1
Mtu : 2048[B]
Link type : Ethernet
GID index : 3
Max inline data : 0[B]
rdma_cm QPs : ON
Data ex. method : rdma_cm
---------------------------------------------------------------------------------------
Waiting for client rdma_cm QP to connect
Please run the same command with the IB/RoCE interface IP
---------------------------------------------------------------------------------------
local address: LID 0000 QPN 0x021a PSN 0x37b10e
GID: 00:00:00:00:00:00:00:00:00:00:255:255:192:168:200:02
remote address: LID 0000 QPN 0x0219 PSN 0x941c2e
GID: 00:00:00:00:00:00:00:00:00:00:255:255:192:168:200:01
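For reference, the matching client invocation is the same command with the server's RoCE interface IP appended, which is what the "Please run the same command with the IB/RoCE interface IP" line is asking for. A sketch, assuming virt3's 40GbE interface is 192.168.200.2 as the local GID above suggests:
[root@virt0 ~]# ib_send_bw -R -d mlx4_0 -i 1 --report_gbits -D10 192.168.200.2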
[root@virt0 ~]# rping -c -a 192.168.200.1
cma event RDMA_CM_EVENT_ADDR_ERROR, error -19
waiting for addr/route resolution state 1
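Error -19 is -ENODEV: rdma_cm could not resolve the address onto an RDMA-capable device. That usually means the route to 192.168.200.1 doesn't go out the mlx4 RoCE port, or the RoCE GID entries for that interface aren't populated; also, if virt0 is the .1 host shown as the remote peer above, rping -c should point at the other end's RoCE IP rather than a local address. A quick sanity check, assuming the 40GbE netdev on virt0 carries the 192.168.200.x address (GID index 3 is just what the server picked and may differ on virt0):
[root@virt0 ~]# rdma link show
[root@virt0 ~]# ip addr show | grep -B2 '192.168.200'
[root@virt0 ~]# ibv_devinfo -d mlx4_0 | grep -E 'state|link_layer'
[root@virt0 ~]# cat /sys/class/infiniband/mlx4_0/ports/1/gids/3
The first two confirm which netdev owns the IP and whether it is the one bound to mlx4_0; ibv_devinfo should report PORT_ACTIVE with link_layer Ethernet, and the sysfs GID entry should embed the interface's IPv4 address once RoCE is set up.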
Does your switch support and is it configured for RDMA?

I also have a couple of 314As flashed to 354As (I think, it's been a while). I'm trying to get RDMA working on RHEL 8.3, but it seems I can't.
Any help would be appreciated!
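For RHEL 8.3 specifically, it's worth confirming the user-space RDMA stack and the mlx4 IB module are in place before chasing anything else. A sketch of the usual packages and modules (standard RHEL 8 package names; rping comes from librdmacm-utils and ib_send_bw from perftest):
[root@virt0 ~]# dnf install rdma-core libibverbs-utils librdmacm-utils perftest infiniband-diags
[root@virt0 ~]# modprobe mlx4_ib
[root@virt0 ~]# modprobe rdma_ucm
[root@virt0 ~]# rdma link show
If rdma link show lists the mlx4_0 ports with their netdevs in LINK_UP state, the kernel side is wired up and the problem is more likely addressing/GID selection.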
Output from lspci:
[root@virt0 ~]# lspci -vv| grep -i Mell
01:00.0 Ethernet controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
Subsystem: Mellanox Technologies Mellanox Technologies ConnectX-3 Pro Stand-up dual-port 40GbE MCX314A-BCCT
EDIT:
found the command I used to flash:
flint -d /dev/mst/mt4099_pciconf0 -i fw-ConnectX3Pro-rel-2_42_5000-MCX354A-FCC_Ax-FlexBoot-3.4.752.bin -allow_psid_change burn
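If the cross-flash took, the card should now identify with the MCX354A firmware. A rough verification sketch using the same MST device path, including the mlxconfig knob that puts both ports into Ethernet mode so RoCE can run over them (on ConnectX-3, LINK_TYPE 2 = ETH):
[root@virt0 ~]# mst start
[root@virt0 ~]# flint -d /dev/mst/mt4099_pciconf0 query
[root@virt0 ~]# mlxconfig -d /dev/mst/mt4099_pciconf0 query
[root@virt0 ~]# mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
flint query should report the 2.42.5000 firmware and the FCC PSID; a reboot (or driver reload) is needed after changing the port type.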
Does your switch support and is it configured for RDMA?

It does, and we've got this working with S2D. It's screaming-fast 40G, and with RDMA the Hyper-V cluster is just super fast overall. I highly recommend this switch even though it's older technology.
Can you post the manufacturer/model of these QSFP+ to 4x RJ45 10GbE cables?

I have a 42U colo cabinet with at least 16U of space available, so the rear-to-front airflow won't really be a problem for me (maybe just longer cables).
I bought that switch and it should be delivered next week.
I found some QSFP+ to 4x RJ45 10GbE cables which should solve my need for some 10GbE ports super cheaply. The only downside is I can't individually manage those 4 ports. Should be fine though, since I just need them to send data to/from an enterprise QNAP device.
Now I need to get 5 QSFP+ NICs and some QSFP+ cables to tie this all together. I'm assuming this all works on both Linux and Windows Server 2019.
Anything else I'm going to need for this? This will go into my personal Hyper-V lab, so everything will have to work with that.