Mellanox ConnectX-3 Pro (MT27520) performance


uriunger

New Member
Sep 14, 2022
I have upgraded the NIC on my main Proxmox node (Supermicro X10DRi-T4+ with 2x E5-2699 and 512GB memory) to a Mellanox ConnectX-3 Pro (MT27520 - MCX314A-BCCT). My Windows desktop (i9-9920X with 64GB memory) has the same NIC. Both NICs are connected via QSFP+ to the 40GbE ports on a Brocade ICX6610 switch.

Running iperf3 tests between the two machines, I can only get ~11-12 Gbit/s. Core utilization of the server process reaches ~85-90%. However, running multiple concurrent iperf3 sessions does not yield any higher throughput, so this is not a CPU bottleneck.
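For reference, the multi-session test described above looks roughly like this (the server address is a placeholder; a matching `iperf3 -s -p <port>` listener must already be running on the server for each port):

```shell
# Placeholder address - replace with the actual server IP.
SERVER=192.0.2.10

# Single stream, 10 second run:
iperf3 -c "$SERVER" -t 10

# Multiple concurrent sessions: iperf3 needs a separate server
# process per port, so start `iperf3 -s -p 5201` and
# `iperf3 -s -p 5202` on the server first, then:
iperf3 -c "$SERVER" -p 5201 -t 10 &
iperf3 -c "$SERVER" -p 5202 -t 10 &
wait
```

Note that a single iperf3 process can also open parallel streams with `-P 4`, which is often simpler than juggling multiple server ports.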

Testing is done host-to-host (no VMs in this story). Jumbo frames are enabled.

I am using the default in-kernel drivers on Proxmox. I have tried to upgrade to the Mellanox drivers, but I am not sure how to do it correctly, as Mellanox's driver bundle does not support Proxmox.
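To see exactly which driver and firmware the card is currently running (useful before deciding whether a driver swap is even needed), standard Debian tooling works on Proxmox. The interface name below is a placeholder:

```shell
# Placeholder interface name - find yours with `ip link`.
IFACE=enp3s0

# Show the kernel driver and firmware version in use.
# A ConnectX-3 typically binds to the in-kernel mlx4_en driver.
ethtool -i "$IFACE"

# Confirm the negotiated link speed (should report 40000Mb/s):
ethtool "$IFACE" | grep -i speed
```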

Any recommendation on how to get higher throughput?

uriunger

New Member
Sep 14, 2022
I did that as well - testing with iperf2 in a Linux-to-Linux setup (same hosts). I got roughly the same performance (11-12 Gbit/s).

Stephan

Well-Known Member
Apr 21, 2017
Germany
@uriunger You have to try ntttcp as suggested, because it supports multiple TCP streams. A QSFP+ link is architecturally not 1x40 Gbps but 4x10 Gbps lanes. To saturate it, you need at least four streams. Alternatively, run iperf2 on four dedicated ports on the server and launch four iperf2 client instances as well.
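A sketch of the four-instance iperf2 approach described above (server address and port numbers are placeholders):

```shell
# Placeholder address - replace with the actual server IP.
SERVER=192.0.2.10

# On the server: start four iperf2 listeners, one per port.
for PORT in 5001 5002 5003 5004; do
    iperf -s -p "$PORT" &
done

# On the client: run four instances in parallel, one per port,
# then add up the four reported bandwidths by hand.
for PORT in 5001 5002 5003 5004; do
    iperf -c "$SERVER" -p "$PORT" -t 10 &
done
wait
```

iperf2 can also open parallel streams within one process via `-P 4`; the separate-port variant mainly helps rule out a per-connection bottleneck in the server process itself.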