Just got the GL-9630TQ SFP+ to RJ45 module in the mail.
There are multiple options on the Taobao listing; the one I ordered is the "万兆全速率自适应-Aquantia 低温" (10G all-rate adaptive, Aquantia, low temperature) option.
Edit (adding images): Both modules appear to use the same external case. I think someone mentioned earlier that "Guanglian" might be the OEM of these 10G SFP+ modules.
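If you want to check the OEM yourself, the module's EEPROM usually carries the vendor name and part number (plus temperature, if DDM is supported). A minimal sketch, assuming the module is sitting in a Linux box with an SFP+ cage; enp1s0 is just a placeholder interface name:
Code:
# Dump the SFP+ module EEPROM; shows Vendor name, Vendor PN and, with DDM, module temperature
# enp1s0 is a placeholder for whatever interface the module is plugged into
sudo ethtool -m enp1s0
Some managed switches expose the same fields through their own CLI, so a spare SFP+ NIC isn't strictly required.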
Both upload and download iperf3 tests at 2.5G and 5G can saturate the link with no retries when flow control is enabled on my switch.
With flow control disabled, I start to see retries, but throughput is still very close to the theoretical maximum.
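Flow control was toggled on the switch side (that part is vendor-specific), but for reference, the NIC side can be checked and set on a Linux host roughly like this; enp1s0 is again a placeholder interface name:
Code:
# Show the current pause-frame (802.3x flow control) settings
ethtool -a enp1s0

# Enable RX/TX pause frames on the NIC; the switch port has to allow them as well
sudo ethtool -A enp1s0 rx on tx on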
Test setup:
Hypervisor
i7 4770
32GB RAM
Mellanox ConnectX-3 MCX354-FCBT, both ports used as uplinks to the switch, with the "Route based on NIC load" load-balancing policy.
Ubuntu VM (iperf3 server)
2 vCPU
512GB of RAM
Ubuntu 20.04 LTS
VMXNET3 Paravirtual NIC
Windows 10 desktop (iperf3 client)
5900X
64GB RAM
Silicom X550-AT2 NIC
Desktop > Cat6 > 10GbE RJ45-to-SFP+ module > SFP+ slot on my switch > 2m DAC > Hypervisor
iperf3 is run with the reverse flag, sending from the Ubuntu VM (10G link speed) to my desktop (10G/5G/2.5G link speed).
Since it is well established that uploading from a 2.5G/5G link to a 10G target doesn't cause any issues, I won't bother posting those results, but I did run them and was able to saturate the link, except at 10G. I think my hypervisor is struggling a bit with the 10G test, so I wasn't able to hit the full 10G.
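For anyone wanting to repeat this, the runs below boil down to something like the following; the address is a placeholder, and the four parallel streams and 60-second duration are inferred from the summaries below:
Code:
# On the Ubuntu VM (server side)
iperf3 -s

# On the Windows desktop (client side); -R reverses the direction so the VM sends to the desktop
# 192.168.1.10 stands in for the VM's address, -P 4 gives the four streams, -t 60 the 60 s runs
iperf3 -c 192.168.1.10 -R -P 4 -t 60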
iperf3 on 10G link speed
Code:
Ubuntu VM
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.04  sec  9.04 GBytes  1.29 Gbits/sec    0   sender
[  5]   (receiver statistics not available)
[  8]   0.00-60.04  sec  8.99 GBytes  1.29 Gbits/sec    0   sender
[  8]   (receiver statistics not available)
[ 10]   0.00-60.04  sec  16.9 GBytes  2.42 Gbits/sec    0   sender
[ 10]   (receiver statistics not available)
[ 12]   0.00-60.04  sec  17.3 GBytes  2.48 Gbits/sec    0   sender
[ 12]   (receiver statistics not available)
[SUM]   0.00-60.04  sec  52.3 GBytes  7.48 Gbits/sec    0   sender
[SUM]   (receiver statistics not available)
CPU Utilization: local/sender 25.5% (1.1%u/24.3%s), remote/receiver 87.5% (20.8%u/66.7%s)
snd_tcp_congestion cubic
iperf 3.7
Linux [hostname] 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64
Windows desktop
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  9.04 GBytes  1.29 Gbits/sec    0   sender
[  4]   0.00-60.00  sec  9.03 GBytes  1.29 Gbits/sec        receiver
[  6]   0.00-60.00  sec  8.99 GBytes  1.29 Gbits/sec    0   sender
[  6]   0.00-60.00  sec  8.99 GBytes  1.29 Gbits/sec        receiver
[  8]   0.00-60.00  sec  16.9 GBytes  2.43 Gbits/sec    0   sender
[  8]   0.00-60.00  sec  16.9 GBytes  2.43 Gbits/sec        receiver
[ 10]   0.00-60.00  sec  17.3 GBytes  2.48 Gbits/sec    0   sender
[ 10]   0.00-60.00  sec  17.3 GBytes  2.48 Gbits/sec        receiver
[SUM]   0.00-60.00  sec  52.3 GBytes  7.48 Gbits/sec    0   sender
[SUM]   0.00-60.00  sec  52.3 GBytes  7.48 Gbits/sec        receiver
CPU Utilization: local/receiver 90.0% (17.6%u/72.4%s), remote/sender 25.5% (1.1%u/24.3%s)
iperf Done.
iperf3 on 5G link speed
Code:
Ubuntu VM
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.04  sec  8.26 GBytes  1.18 Gbits/sec    0   sender
[  5]   (receiver statistics not available)
[  8]   0.00-60.04  sec  8.23 GBytes  1.18 Gbits/sec    0   sender
[  8]   (receiver statistics not available)
[ 10]   0.00-60.04  sec  8.27 GBytes  1.18 Gbits/sec    0   sender
[ 10]   (receiver statistics not available)
[ 12]   0.00-60.04  sec  8.18 GBytes  1.17 Gbits/sec    0   sender
[ 12]   (receiver statistics not available)
[SUM]   0.00-60.04  sec  32.9 GBytes  4.71 Gbits/sec    0   sender
[SUM]   (receiver statistics not available)
CPU Utilization: local/sender 22.1% (0.6%u/21.5%s), remote/receiver 0.0% (0.0%u/0.0%s)
snd_tcp_congestion cubic
iperf 3.7
Linux [hostname] 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64
Windows desktop
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  8.26 GBytes  1.18 Gbits/sec    0   sender
[  4]   0.00-60.00  sec  8.26 GBytes  1.18 Gbits/sec        receiver
[  6]   0.00-60.00  sec  8.23 GBytes  1.18 Gbits/sec    0   sender
[  6]   0.00-60.00  sec  8.22 GBytes  1.18 Gbits/sec        receiver
[  8]   0.00-60.00  sec  8.27 GBytes  1.18 Gbits/sec    0   sender
[  8]   0.00-60.00  sec  8.27 GBytes  1.18 Gbits/sec        receiver
[ 10]   0.00-60.00  sec  8.18 GBytes  1.17 Gbits/sec    0   sender
[ 10]   0.00-60.00  sec  8.18 GBytes  1.17 Gbits/sec        receiver
[SUM]   0.00-60.00  sec  32.9 GBytes  4.72 Gbits/sec    0   sender
[SUM]   0.00-60.00  sec  32.9 GBytes  4.72 Gbits/sec        receiver
CPU Utilization: local/receiver 78.7% (18.6%u/60.1%s), remote/sender 22.1% (0.6%u/21.5%s)
iperf Done.
iperf3 on 2.5G link speed
Code:
Ubuntu VM
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.05  sec  4.06 GBytes   581 Mbits/sec    0   sender
[  5]   (receiver statistics not available)
[  8]   0.00-60.05  sec  4.06 GBytes   581 Mbits/sec    0   sender
[  8]   (receiver statistics not available)
[ 10]   0.00-60.05  sec  4.06 GBytes   581 Mbits/sec    0   sender
[ 10]   (receiver statistics not available)
[ 12]   0.00-60.05  sec  4.36 GBytes   624 Mbits/sec    0   sender
[ 12]   (receiver statistics not available)
[SUM]   0.00-60.05  sec  16.6 GBytes  2.37 Gbits/sec    0   sender
[SUM]   (receiver statistics not available)
CPU Utilization: local/sender 9.3% (0.5%u/8.8%s), remote/receiver 0.0% (0.0%u/0.0%s)
snd_tcp_congestion cubic
iperf 3.7
Linux [hostname] 5.4.0-147-generic #164-Ubuntu SMP Tue Mar 21 14:23:17 UTC 2023 x86_64
Windows desktop
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec    0   sender
[  4]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec        receiver
[  6]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec    0   sender
[  6]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec        receiver
[  8]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec    0   sender
[  8]   0.00-60.00  sec  4.06 GBytes   582 Mbits/sec        receiver
[ 10]   0.00-60.00  sec  4.36 GBytes   624 Mbits/sec    0   sender
[ 10]   0.00-60.00  sec  4.36 GBytes   624 Mbits/sec        receiver
[SUM]   0.00-60.00  sec  16.6 GBytes  2.37 Gbits/sec    0   sender
[SUM]   0.00-60.00  sec  16.5 GBytes  2.37 Gbits/sec        receiver
CPU Utilization: local/receiver 39.4% (11.0%u/28.4%s), remote/sender 9.3% (0.5%u/8.8%s)
iperf Done.
I'd say we have a winner here: it's cheaper than the competition, and it actually works at 2.5G and 5G link speeds. The 2.37 and 4.72 Gbits/sec totals are essentially line rate once Ethernet, IP, and TCP header overhead is accounted for.
Probably just lucky. I have a 10GTek ASF-10G-T SFP+ module, and it struggles to exceed 500Mbps and 1Gbps at 2.5G and 5G link speeds respectively when data is downloaded from a 10G host. I run my NIC at 10G anyway, so it doesn't affect me.

Have I got lucky with my particular combination of switch/SFP+ module, or is it just not possible to reproduce the issue without a 10G host in the mix?