Hi All,
After reading this forum I decided to try switching my home lab to 40Gb/s, since the adapters are cheaper than 10Gb/s ones. But at the moment I can't get full speed out of them.
Short story:
After updating the firmware and connecting them directly, I'm only getting ~20-23Gb/s in iperf/iperf3 with a single thread, running Ubuntu 18.10 bare metal:
t620:~$ iperf3 -s
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 2.63 GBytes 22.6 Gbits/sec
[ 5] 1.00-2.00 sec 2.69 GBytes 23.2 Gbits/sec
Update: multi-threaded performance seems to be fine:
t620:~$ iperf3 -P2 -c 10.10.10.2
Connecting to host 10.10.10.2, port 5201
[ 5] local 10.10.10.1 port 54416 connected to 10.10.10.2 port 5201
[ 7] local 10.10.10.1 port 54418 connected to 10.10.10.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 2.24 GBytes 19.2 Gbits/sec 0 907 KBytes
[ 7] 0.00-1.00 sec 2.24 GBytes 19.2 Gbits/sec 0 1005 KBytes
[SUM] 0.00-1.00 sec 4.48 GBytes 38.4 Gbits/sec 0
- - - - - - - - - - - - - - - - - - - - - - - - -
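A quick sanity check on why a single stream might top out around here: at these rates the per-packet cost on one CPU core starts to dominate. Rough back-of-the-envelope arithmetic (nothing measured, just frame-size math):

```python
def pps(gbits, frame_bytes):
    """Packets/sec a sender must sustain for a given rate and frame size."""
    return gbits * 1e9 / 8 / frame_bytes

print(round(pps(40, 1500)))   # ~3.33M pps at the default MTU
print(round(pps(40, 9000)))   # ~556k pps with jumbo frames
```

~3.3M packets/sec on one core is a lot even with TSO/GRO helping, which is why raising the MTU to 9000 on both ends is usually the first thing to try for single-stream tests.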
Long story:
I got a pair of what appears to be HP 649281-B21 Rev A5 (revision on sticker)
From lspci:
[PN] Part number: 649281-B21
[EC] Engineering changes: A5
[V0] Vendor specific: HP 2P 4X FDR VPI/2P 40GbE CX-3 HCA
I followed the instructions at https://forums.servethehome.com/ind...net-dual-port-qsfp-adapter.20525/#post-198015
and reflashed both cards successfully with the latest firmware:
Device #1:
----------
Device Type: ConnectX3
Part Number: MCX354A-FCB_A2-A5
*****
Versions: Current Available
FW 2.42.5000 2.42.5000
PXE 3.4.0752 3.4.0752
The cards are plugged into:
1) Dell T620 / dual E5-2670 v2 / 128GB RAM
2) Dell 5820 / W-2155 / 64GB RAM
The systems are connected directly with a Mellanox active cable (MC2206310-015).
The latest Mellanox OFED (MLNX_OFED_LINUX-4.5-1.0.1.0-ubuntu18.10-x86_64.tgz) is installed in the default configuration.
The first port on both cards is set to Ethernet mode:
$connectx_port_config -s
--------------------------------
Port configuration for PCI device: 0000:04:00.0 is:
eth
auto (ib)
Both cards negotiated the correct PCIe speed/width on the host side:
$lspci -vvv
b3:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
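For context, that 8GT/s x8 link should be comfortably above the 40Gb/s line rate. A quick check of the raw PCIe 3.0 link bandwidth (simple arithmetic, ignoring TLP/protocol overhead):

```python
def pcie3_raw_gbps(lanes):
    # PCIe 3.0: 8 GT/s per lane with 128b/130b line encoding
    return 8 * 128 / 130 * lanes

print(pcie3_raw_gbps(8))  # ~63 Gb/s raw for a x8 link
```

Even after protocol overhead (roughly 10-15% for typical payload sizes), a Gen3 x8 slot still clears 50Gb/s, so the LnkSta above rules out the slot as the bottleneck.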
The Ethernet link looks fine as well:
$ethtool enp67s0
Settings for enp67s0:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseKX/Full
10000baseKX4/Full
10000baseKR/Full
40000baseCR4/Full
40000baseSR4/Full
56000baseCR4/Full
56000baseSR4/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 1000baseKX/Full
10000baseKX4/Full
10000baseKR/Full
40000baseCR4/Full
40000baseSR4/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 40000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000014 (20)
link ifdown
Link detected: yes
I tried some tweaks:
1) changed the ipv4 and core sysctl settings to allow larger buffers
2) set the CPU governor to performance
3) set IRQ affinity to the correct NUMA node on the T620 (set_irq_affinity_bynode.sh)
4) ran iperf under numactl so it stays on the same NUMA node
Altogether this pushed throughput from ~20-21Gb/s to 22-23Gb/s, but that's still nowhere close to ~40.
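For anyone wanting to reproduce tweak 1), this is the shape of the sysctl fragment I mean (the exact values are illustrative, not a tuned recommendation):

```shell
# /etc/sysctl.d/90-40gbe.conf -- illustrative values only
net.core.rmem_max = 268435456
net.core.wmem_max = 268435456
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
# apply with: sudo sysctl --system
```

iperf3 also has `-Z` (zero-copy send) and `-A` (CPU affinity) flags that can shave per-byte copy cost in single-stream tests.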
Does anyone have ideas on what else I can try, or what could be wrong?
Did I miss any magic switches for the OFED installation script?
Does anyone get close to 40Gb/s from a single thread/process?
Thanks!