Query about 32G QDR adapter speed.


Falloutboy

Member
Oct 23, 2011
Hi all, I finally have the ConnectX-2 IPoIB adapters working (at least it seems that way). I can ping and do everything else required, but it feels a bit slow. I could be wrong, but I'm getting about 261 MBytes per second in iperf3 under Windows, and the fastest transfer I've had was between a 31-drive RAID array and an SSD, which got up to about 360 MBytes per second.
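
For reference, a single iperf3 stream over IPoIB is often limited by one CPU core rather than by the link, so a parallel-stream run is worth comparing against. A minimal sketch, where 192.168.10.2 is just a placeholder for the other machine's address:

# On the receiving machine (run from PowerShell or cmd)
iperf3 -s

# On the sending machine: 4 parallel streams for 30 seconds
iperf3 -c 192.168.10.2 -P 4 -t 30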

They say this card is 40Gbps, but in actuality the usable rate is 32Gbps. What sort of speeds should I be getting?

It's a ConnectX-2 IPoIB card, model MHQH19B-XTR, and I am using the Mellanox 5.50.14740.0 driver under Windows 10 with 2.9.1000 firmware.

Cheers in advance.
 

necr

Active Member
Dec 27, 2017
Paste “ibstat” output here.
I recommend testing with ib_send_bw, which ships with the OFED package. On a vanilla Linux box without much tuning you can normally get ~26Gb/s on ConnectX-2; on Windows 10 it's slightly less, but the last time I tested was over a year ago. The biggest factors are PCIe lanes (is the card running at x8 in an x8/x16 slot?) and CPU performance.
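
For what it's worth, on the Linux side the perftest invocation is just a listener and a client; a rough sketch, with 192.168.10.2 standing in for the server's IPoIB address:

# Server: wait for a connection
ib_send_bw

# Client: connect to the server and report bandwidth
ib_send_bw 192.168.10.2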
 

Falloutboy

Member
Oct 23, 2011
Paste “ibstat” output here.
I recommend testing with ib_send_bw, which ships with the OFED package. On a vanilla Linux box without much tuning you can normally get ~26Gb/s on ConnectX-2; on Windows 10 it's slightly less, but the last time I tested was over a year ago. The biggest factors are PCIe lanes (is the card running at x8 in an x8/x16 slot?) and CPU performance.
Both boxes are running Windows 10 Pro, and the cards are currently in Ethernet mode, although I have learned a lot about IB and have had it functional. The cards are PCIe Gen 2 x8, in slots which are mechanically x16 but electrically x8. The computers are a dual-processor E5-2690 v3 box and an AMD Threadripper 2950X. I did manage to get things slightly better and am now getting 200MB/s on two-way copies between the two machines, or 400MB/s max one way, in the Ethernet configuration. This reports a 10G connection under the Information tab, whereas IB reported 40 under that tab and 4x10G using the IB tools.

I've noticed these cards seem to be set up by default for VMs, and I am wondering if turning a lot of that stuff off may help speed as well.

The question I ask myself at the moment is: do I need...

VMQ VLAN Filtering
VLAN ID
Virtual Switch RSS
Virtual Machine Queues
SR-IOV

I am still investigating all of these and am still learning. I did note a bump in speed when dropping IPv4 for IPv6, too.
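
If it helps, those virtualization features can be inspected and switched off from an elevated PowerShell prompt instead of the driver property pages. A sketch, assuming the adapter shows up as "Ethernet 2" (a placeholder name):

# List advanced properties (VMQ, VLAN ID, RSS, SR-IOV, etc.)
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

# Disable the VM-oriented features if no virtual machines run on this host
Disable-NetAdapterVmq -Name "Ethernet 2"
Disable-NetAdapterSriov -Name "Ethernet 2"

# RSS generally helps point-to-point throughput, so check it rather than disable it
Get-NetAdapterRss -Name "Ethernet 2"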
 

Falloutboy

Member
Oct 23, 2011
Paste “ibstat” output here.
I recommend testing with ib_send_bw, which ships with the OFED package. On a vanilla Linux box without much tuning you can normally get ~26Gb/s on ConnectX-2; on Windows 10 it's slightly less, but the last time I tested was over a year ago. The biggest factors are PCIe lanes (is the card running at x8 in an x8/x16 slot?) and CPU performance.
Oops, I forgot that little request at the top. Output as requested:
CA 'ibv_device0'
    CA type:
    Number of ports: 1
    Firmware version: 2.9.1000
    Hardware version: 0xb0
    Node GUID: 0x0002c903004aa0b2
    System image GUID: 0x0002c903004aa0b5
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 40
        Real rate: 32.00 (QDR)
        Base lid: 1
        LMC: 0
        SM lid: 1
        Capability mask: 0x90580000
        Port GUID: 0x0002c903004aa0b3
        Link layer: IB
        Transport: IB

By the way, I did a dir /s *send* and the only tools with "send" in the name were:
66,200 nd_send_bw.exe
61,592 nd_send_lat.exe

Am I missing something?
 

hk92doom

Member
Jun 4, 2020
It's funny how they deliberately advertise the 40Gb/s interface speed, knowing that PCIe 2.0 x8 only supports an effective data rate of 32Gb/s, and then turn around and put somewhere in the documentation (after the unit conversion): expect 3400MB/s = 27.2Gb/s throughput.
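
For anyone checking the arithmetic, the numbers work out roughly like this:

PCIe 2.0: 5 GT/s per lane with 8b/10b encoding = 4 Gb/s usable per lane
8 lanes x 4 Gb/s = 32 Gb/s of raw PCIe bandwidth
Quoted 3400 MB/s x 8 bits/byte = 27.2 Gb/s after protocol overhead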
 

necr

Active Member
Dec 27, 2017
nd_send_bw.exe is the one. One side’s a server, the other a client.
There’s a huge gap between the standard SMB transfer speed and the max throughput available, and there’s a thread about that too.
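
From memory the ND benchmarks take a server/client pair of switches; the exact flags below are an assumption, so check nd_send_bw.exe -h on your build (192.168.10.2 is a placeholder for the server's address):

# Server side (flag assumed; verify with -h)
nd_send_bw.exe -S 192.168.10.2

# Client side, connecting to the server (flag assumed)
nd_send_bw.exe -C 192.168.10.2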
 

Falloutboy

Member
Oct 23, 2011
nd_send_bw.exe is the one. One side’s a server, the other a client.
There’s a huge gap between the standard SMB transfer speed and the max throughput available, and there’s a thread about that too.
I ran nd_send_bw on both machines; unfortunately I got no output, so I'm confused. Being new to this, I have to ask: will I only get the full maxed-out speed if I am running one machine as an RDMA host, or can those sorts of speeds be achieved without RDMA?
 

necr

Active Member
Dec 27, 2017
High transfer speeds by themselves can be reached without RDMA; you can test with iperf or NTttcp. The "nd" in the utility's name probably hints that NetworkDirect has to be present and enabled on the adapter, which again means Windows Server or Windows 10 Pro for Workstations.
RDMA is mainly there to save CPU resources while doing file copies etc.
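
As a plain-TCP sanity check with Microsoft's NTttcp, something along these lines (thread count, duration and 192.168.10.2 are placeholders):

# Receiver: 8 threads bound to this machine's address, 30-second run
ntttcp.exe -r -m 8,*,192.168.10.2 -t 30

# Sender: 8 threads targeting the receiver's address
ntttcp.exe -s -m 8,*,192.168.10.2 -t 30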
 

Falloutboy

Member
Oct 23, 2011
High transfer speeds by themselves can be reached without RDMA; you can test with iperf or NTttcp. The "nd" in the utility's name probably hints that NetworkDirect has to be present and enabled on the adapter, which again means Windows Server or Windows 10 Pro for Workstations.
RDMA is mainly there to save CPU resources while doing file copies etc.
What do you consider "high speeds"? The most I seem to be able to get between two Windows 10 Pro machines (note: Pro, not Pro for Workstations or Server, just vanilla Pro) is 400MB/s one way or 200MB/s bi-directional. I have just upgraded the cards to the 2.10.710 firmware and that has made no difference either. The cards are currently in IB mode, but the results are very similar in Eth mode. In Eth mode, Network Direct shows as Enabled under Configuration, unless that setting is not referring to the adapter.
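
One way to see whether NetworkDirect/RDMA is actually in play, rather than just showing Enabled on the driver page, is from PowerShell on either box; a quick sketch:

# Is the adapter reported as RDMA-capable and enabled?
Get-NetAdapterRdma

# Does the SMB client see an RDMA-capable interface?
Get-SmbClientNetworkInterface

# Are the active SMB connections actually using RDMA?
Get-SmbMultichannelConnection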