Mixing 10G and 2.5G: slow speed, high Retr


mach3.2

Active Member
Feb 7, 2022
143
104
43
Just got the GL-9630TQ SFP+ to RJ45 module in the mail.
There are multiple options on the Taobao listing; the one I ordered is the "万兆全速率自适应-Aquantia 低温" (10G full-rate adaptive, Aquantia, low temperature) option.

Edit (adding images): Both modules appear to use the same external case. I think someone mentioned earlier that "Guanglian" might be the OEM of these 10G SFP+ modules.
[Attached image: IMG_6186.jpg]

Both upload and download iperf3 tests at 2.5G and 5G can saturate the link with no retransmits (Retr) when flow control is enabled on my switch.

With flow control disabled, I started to see retransmits, but the throughput was still very close to the theoretical maximum.

Test setup:
Hypervisor
i7 4770
32GB RAM
Mellanox ConnectX-3 MCX354-FCBT, both ports used as uplinks to the switch, with "Route based on NIC load" teaming.

Ubuntu VM (iperf3 server)
2 vCPU
512MB of RAM
Ubuntu 20.04 LTS
VMXNET3 Paravirtual NIC


Windows 10 desktop (iperf3 client)
5900X
64GB RAM
Silicom X550-AT2 NIC

Desktop > Cat6 > 10GbE RJ45-to-SFP+ module > SFP+ slot on my switch > 2m DAC > Hypervisor


iperf3 is run with the reverse flag, sending from the Ubuntu VM (10G link speed) to my desktop (10G/5G/2.5G).
Since it is well established that uploading from a 2.5G/5G link to a 10G target won't cause any issues, I won't bother posting those results. I did run them, and I was able to saturate the link except at 10G; I think my hypervisor is struggling a bit with the 10G test, so I wasn't able to get the full 10G.
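For reference, the commands were roughly the following (the VM's address is a placeholder; 4 parallel streams and 60 seconds, matching the summaries below):
Code:
# on the Ubuntu VM (iperf3 server)
iperf3 -s

# on the Windows desktop (iperf3 client); -R reverses the direction so the VM sends
iperf3.exe -c <vm-ip> -R -P 4 -t 60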

iperf3 on 10G link speed
Code:
Ubuntu VM
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.04  sec  9.04 GBytes  1.29 Gbits/sec    0             sender
[  5] (receiver statistics not available)
[  8]   0.00-60.04  sec  8.99 GBytes  1.29 Gbits/sec    0             sender
[  8] (receiver statistics not available)
[ 10]   0.00-60.04  sec  16.9 GBytes  2.42 Gbits/sec    0             sender
[ 10] (receiver statistics not available)
[ 12]   0.00-60.04  sec  17.3 GBytes  2.48 Gbits/sec    0             sender
[ 12] (receiver statistics not available)
[SUM]   0.00-60.04  sec  52.3 GBytes  7.48 Gbits/sec    0             sender
[SUM] (receiver statistics not available)
CPU Utilization: local/sender 25.5% (1.1%u/24.3%s), remote/receiver 87.5% (20.8%u/66.7%s)
snd_tcp_congestion cubic
iperf 3.7
Linux [hostname] 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64


Windows desktop
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  9.04 GBytes  1.29 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  9.03 GBytes  1.29 Gbits/sec                  receiver
[  6]   0.00-60.00  sec  8.99 GBytes  1.29 Gbits/sec    0             sender
[  6]   0.00-60.00  sec  8.99 GBytes  1.29 Gbits/sec                  receiver
[  8]   0.00-60.00  sec  16.9 GBytes  2.43 Gbits/sec    0             sender
[  8]   0.00-60.00  sec  16.9 GBytes  2.43 Gbits/sec                  receiver
[ 10]   0.00-60.00  sec  17.3 GBytes  2.48 Gbits/sec    0             sender
[ 10]   0.00-60.00  sec  17.3 GBytes  2.48 Gbits/sec                  receiver
[SUM]   0.00-60.00  sec  52.3 GBytes  7.48 Gbits/sec    0             sender
[SUM]   0.00-60.00  sec  52.3 GBytes  7.48 Gbits/sec                  receiver
CPU Utilization: local/receiver 90.0% (17.6%u/72.4%s), remote/sender 25.5% (1.1%u/24.3%s)

iperf Done.
iperf3 on 5G link speed
Code:
Ubuntu VM
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.04  sec  8.26 GBytes  1.18 Gbits/sec    0             sender
[  5] (receiver statistics not available)
[  8]   0.00-60.04  sec  8.23 GBytes  1.18 Gbits/sec    0             sender
[  8] (receiver statistics not available)
[ 10]   0.00-60.04  sec  8.27 GBytes  1.18 Gbits/sec    0             sender
[ 10] (receiver statistics not available)
[ 12]   0.00-60.04  sec  8.18 GBytes  1.17 Gbits/sec    0             sender
[ 12] (receiver statistics not available)
[SUM]   0.00-60.04  sec  32.9 GBytes  4.71 Gbits/sec    0             sender
[SUM] (receiver statistics not available)
CPU Utilization: local/sender 22.1% (0.6%u/21.5%s), remote/receiver 0.0% (0.0%u/0.0%s)
snd_tcp_congestion cubic
iperf 3.7
Linux [hostname] 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64


Windows desktop
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  8.26 GBytes  1.18 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  8.26 GBytes  1.18 Gbits/sec                  receiver
[  6]   0.00-60.00  sec  8.23 GBytes  1.18 Gbits/sec    0             sender
[  6]   0.00-60.00  sec  8.22 GBytes  1.18 Gbits/sec                  receiver
[  8]   0.00-60.00  sec  8.27 GBytes  1.18 Gbits/sec    0             sender
[  8]   0.00-60.00  sec  8.27 GBytes  1.18 Gbits/sec                  receiver
[ 10]   0.00-60.00  sec  8.18 GBytes  1.17 Gbits/sec    0             sender
[ 10]   0.00-60.00  sec  8.18 GBytes  1.17 Gbits/sec                  receiver
[SUM]   0.00-60.00  sec  32.9 GBytes  4.72 Gbits/sec    0             sender
[SUM]   0.00-60.00  sec  32.9 GBytes  4.72 Gbits/sec                  receiver
CPU Utilization: local/receiver 78.7% (18.6%u/60.1%s), remote/sender 22.1% (0.6%u/21.5%s)

iperf Done.
iperf3 on 2.5G link speed
Code:
Ubuntu VM
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.05  sec  4.06 GBytes   581 Mbits/sec    0             sender
[  5] (receiver statistics not available)
[  8]   0.00-60.05  sec  4.06 GBytes   581 Mbits/sec    0             sender
[  8] (receiver statistics not available)
[ 10]   0.00-60.05  sec  4.06 GBytes   581 Mbits/sec    0             sender
[ 10] (receiver statistics not available)
[ 12]   0.00-60.05  sec  4.36 GBytes   624 Mbits/sec    0             sender
[ 12] (receiver statistics not available)
[SUM]   0.00-60.05  sec  16.6 GBytes  2.37 Gbits/sec    0             sender
[SUM] (receiver statistics not available)
CPU Utilization: local/sender 9.3% (0.5%u/8.8%s), remote/receiver 0.0% (0.0%u/0.0%s)
snd_tcp_congestion cubic
iperf 3.7
Linux [hostname] 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64


Windows desktop
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec    0             sender
[  4]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec                  receiver
[  6]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec    0             sender
[  6]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec                  receiver
[  8]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec    0             sender
[  8]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec                  receiver
[ 10]   0.00-60.00  sec  4.36 GBytes   624 Mbits/sec    0             sender
[ 10]   0.00-60.00  sec  4.36 GBytes   624 Mbits/sec                  receiver
[SUM]   0.00-60.00  sec  16.6 GBytes  2.37 Gbits/sec    0             sender
[SUM]   0.00-60.00  sec  16.5 GBytes  2.37 Gbits/sec                  receiver
CPU Utilization: local/receiver 39.4% (11.0%u/28.4%s), remote/sender 9.3% (0.5%u/8.8%s)

iperf Done.

I'd say we have a winner here: it's cheaper than the competition, and it actually works at 2.5G and 5G link speeds.



Have I got lucky with my particular combination of switch/SFP+ module or is it just not possible to reproduce the issue without a 10G host in the mix?
Probably just lucky. I have a 10GTek ASF-10G-T SFP+ module, and it struggles to exceed 500Mbps and 1Gbps at 2.5G and 5G link speeds respectively when data is downloaded from a 10G host. I run my NIC at 10G anyway, so it doesn't affect me.
 

foogitiff

Active Member
Jul 26, 2018
171
42
28
TL;DR: The Mikrotik S+RJ10 (v2) works too; enable flow control and disable/re-enable the interfaces.

I bought a used CRS309 along with five S+RJ10 v2.

I have multiple machines with dual AQC111 cards (which can do 1G/2.5G/5G) connected to the CRS309 via the S+RJ10 modules, and one server with a ConnectX-2 connected via a DAC (it was a ConnectX-3 before; I thought that card was the issue).

With iperf3 I managed to get ~4.6G between the AQC111 machines without any retransmits (or just one), but very poor speed when the 10G server was sending. Luckily I found this thread :)

Enabling flow control was not enough; I had to disable and re-enable the port to get it working. Now I get ~4.6G on all machines connected to the CRS309, and the server is able to handle two concurrent clients at ~9.6G combined.
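For anyone else on RouterOS, the change boils down to something like this (just a sketch; sfp-sfpplus1 is an assumed port name, use whichever port your S+RJ10 sits in):
Code:
# enable flow control on the SFP+ port (RouterOS CLI)
/interface ethernet set sfp-sfpplus1 rx-flow-control=on tx-flow-control=on
# bounce the port so the setting actually takes effect
/interface disable sfp-sfpplus1
/interface enable sfp-sfpplus1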
 

iceman_jkh

Member
Mar 21, 2023
44
18
8
TL;DR: The Mikrotik S+RJ10 (v2) works too; enable flow control and disable/re-enable the interfaces.

I bought a used CRS309 along with five S+RJ10 v2.
How warm/hot do those S+RJ10 v2 get?
It seems the v1 performed badly and got quite hot, while v2 might be quite different.
 

foogitiff

Active Member
Jul 26, 2018
171
42
28
The switch is in my basement (not sure about the temperature there, I need to check). I currently use three of them on my CRS309.

Without ventilation, two of them usually sit around 65°C (the ones with the shorter Ethernet cable runs), but the third one is cooler at around 60°C (yet it has a 30m Ethernet cable ?!).

I put a 12cm fan blowing on the side yesterday (not for the switch, but it does benefit from it), and now the temps are 49°C/54°C/54°C (the one at 49°C is closest to the fan).
 

prdtabim

Active Member
Jan 29, 2022
184
72
28
TL;DR: The Mikrotik S+RJ10 (v2) works too; enable flow control and disable/re-enable the interfaces.

I bought a used CRS309 along with five S+RJ10 v2.

I have multiple machines with dual AQC111 cards (which can do 1G/2.5G/5G) connected to the CRS309 via the S+RJ10 modules, and one server with a ConnectX-2 connected via a DAC (it was a ConnectX-3 before; I thought that card was the issue).

With iperf3 I managed to get ~4.6G between the AQC111 machines without any retransmits (or just one), but very poor speed when the 10G server was sending. Luckily I found this thread :)

Enabling flow control was not enough; I had to disable and re-enable the port to get it working. Now I get ~4.6G on all machines connected to the CRS309, and the server is able to handle two concurrent clients at ~9.6G combined.
Hi,
Where did you find these S+RJ10 v2 transceivers? I'm unable to find any Aquantia-based transceivers.
 

stich86

Member
May 24, 2023
40
14
8
hi guys,

any way to get an SFP with an AQC113? I'm currently running an S+RJ10 on my CRS305 but it's very hot (currently at 90°C).
I want to try another SFP to see if I can reach a lower temperature.

thx in advance!
 

Stephan

Well-Known Member
Apr 21, 2017
992
757
93
Germany
@stich86 Try

For Cisco SFP-10G-T-S 1.25/2.5/5/10G-T SFP+ to RJ45 CAT.6a Copper Transceiver | eBay, or
https://de.aliexpress.com/item/1005005080437128.html

Of course, check whether you really WANT to stay on 1/2.5/5 Gbps copper, or whether jumping up to 10 Gbps SFP+ wouldn't make more sense; there is way more choice in terms of hardware. The only people who really need 2.5 Gbps are, imho, fiber modem users who got a "Glasfaser Modem 2" (fiber modem) or similar with a 1/2.5 Gbps RJ45 port, AND whose fiber subscription is beyond 1 Gbps.
 

stich86

Member
May 24, 2023
40
14
8
@stich86 Try

For Cisco SFP-10G-T-S 1.25/2.5/5/10G-T SFP+ to RJ45 CAT.6a Copper Transceiver | eBay, or
https://de.aliexpress.com/item/1005005080437128.html

Of course, check whether you really WANT to stay on 1/2.5/5 Gbps copper, or whether jumping up to 10 Gbps SFP+ wouldn't make more sense; there is way more choice in terms of hardware. The only people who really need 2.5 Gbps are, imho, fiber modem users who got a "Glasfaser Modem 2" (fiber modem) or similar with a 1/2.5 Gbps RJ45 port, AND whose fiber subscription is beyond 1 Gbps.
I'm not so interested in multi-gig because I'm running XGS-PON; my main concern is heat dissipation.

What temperatures does an AQC113 stick reach?
 

mach3.2

Active Member
Feb 7, 2022
143
104
43
I'm not so interested in multi-gig because I'm running XGS-PON; my main concern is heat dissipation.

What temperatures does an AQC113 stick reach?
The one I have reads about ~60°C on the external case near the RJ45 port, with ambient at ~29°C.
I can't see any DDM info on my switch, so there's no digital readout.

It's comparable to the 10GTek ASF-10G-T module in heat output. The 10GTek uses the Marvell 88X3310.

Consider pointing a fan at the transceivers instead.
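For what it's worth, if the module sits in a Linux host rather than a switch, the DDM/DOM data (including module temperature, when the module reports it) can usually be read with ethtool; eth0 is just a placeholder, and plenty of copper modules don't expose DOM at all:
Code:
# dump the SFP+ module EEPROM / DOM data, if the driver and module support it
ethtool -m eth0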
 

stich86

Member
May 24, 2023
40
14
8
The one I have reads about ~60°C on the external case near the RJ45 port, with ambient at ~29°C.
I can't see any DDM info on my switch, so there's no digital readout.

It's comparable to my Marvell 88X3310 in heat output.

Consider pointing a fan at the transceivers instead.
Just a single S+RJ10 v2 on my CRS305 hit about 90-107°C :(
Now that I've put a fan on it, it seems to stay at 65-70°C, so I'm hoping this new chip generates less heat, also thanks to the production node change (28nm to 14nm).
 

dag

Member
Apr 23, 2020
27
41
13
The 10Gtek ASF-10G2-T with the Marvell AQR113C is readily available on Amazon.
After extensive testing, I can confirm the AQR113 works just as well as the AQR107, by introducing PAUSE frames to control the port while the host believes the port is synced at 10GbE.
*BUT*
All the switches I have on hand accept the AQR107 just fine, while some just won’t take the AQR113 no matter what I try.

In other words, the AQR107 remains a better option if you want wider compatibility, but if the AQR113 works with your switch, then by all means…
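If you want to check whether a Linux host is actually seeing and honouring those PAUSE frames, something along these lines works (eth0 is a placeholder; the statistic names vary by driver):
Code:
# negotiated pause (flow control) settings on the NIC
ethtool -a eth0
# many drivers also expose pause frame counters in the NIC statistics
ethtool -S eth0 | grep -i pause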
 

spektykles

New Member
Apr 2, 2023
13
8
3
Marvell recently EOL'd their old 88X3310P chip and released the new 3610 chip (5nm). Quite a jump; it should bring power consumption down to around 1W. They are still making the Aquantia chips as the mainstream option, while the 3610 will be the premium one.
 

mattlward

New Member
Jun 20, 2023
3
2
3
I can confirm that the current Amazon offering of the H!Fiber copper SFP+ works properly in a Mikrotik CSS-610, running as a 2.5-gig connection while communicating with another system on the same switch at 10 gig. iperf3 in bidirectional mode was used to confirm this.

However, the Mikrotik interface doesn't offer the lower rate as an option, nor does it seem aware that the module is not running at 10 gig. It also reports as an 850 nm SFP+ with no DOM data.
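For reference, the bidirectional test is roughly this (requires iperf3 3.7 or newer; the server address is a placeholder):
Code:
# run both directions at once against a plain "iperf3 -s" server
iperf3 -c <server-ip> --bidir -t 60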