Mixing 5GbE and 10GbE speeds - slower than expected


meyergru

New Member
Jul 12, 2020
16
1
3
Hi,

I am currently having a somewhat disappointing experience:

I have a Cisco switch (SG350X-24P) with 2x 10 GbE ports and 2x SFP+ slots, equipped with transceivers that support 2.5/5/10 GbE speeds.

Connected to the SFP+ transceivers are two Windows 10 clients with 2.5/5/10 GbE cards (one Intel X550 and one Buffalo LGY-PCIE-MG). These machines are ~50 feet away over Cat.5e cabling, so they auto-negotiate at 5 Gbps with the switch, and I cannot change the cabling.

Connected to one of the 10 GbE ports is a Linux server (Ubuntu 20.04) with an Intel X540 (10 GbE only) over a short patch cable, negotiating at 10 Gbps.

When I run "netio -t", I see rates of ~5 Gbps from the Windows clients to the Linux server, but only ~3 Gbps in the opposite direction. SMB file copies reach approximately the same speeds in both directions.

When I run netio between the two Windows machines, however, I reach the expected 5 Gbps in both directions.

I have tried almost everything I could think of to get 5 Gbps from the server to the clients, including interrupt moderation and the like, but without success. As a matter of fact, I had to reduce interrupt moderation to get 5 Gbps in the Windows-to-Linux direction.

My guess is that TCP simply cannot cope with this situation: the server can pump data into the network at 10 Gbps, but once the switch's buffers have filled, it cannot dispatch the packets to the client fast enough, because the downlink is only 5 Gbps. The result is lost packets and the buffering issues that follow. However, I doubt I could measure that with Wireshark or the like, because the timing resolution is probably too coarse.

If this is indeed the culprit, the effect should be much worse at a speed ratio of 2 (10 Gbps sender vs. 5 Gbps receiver) than at, say, a ratio of 10 (10 Gbps sender vs. 1 Gbps receiver). That may be why I have not found any information or discussion about this "mixing speeds" effect.

What I find disappointing is that I might as well use cheap Realtek 2.5 GbE adapters for the clients without much effective change in downlink speed. On the other hand, limiting the server to 5 Gbps just to eliminate the buffering issue seems equally dumb, especially since there is a 10 GbE backup NAS on the last of the four fast switch ports, which would then lose half of its bandwidth to the server.

I have not yet tried ECN or flow control. Does anyone have an idea how to solve this via Windows or Linux network settings? (BTW: wondershaper for rate-limiting is not the answer here; at 10 Gbps, merely enabling it reduces the speed by a factor of 10.)
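
In case it helps the discussion, this is roughly what I would try next on the Linux server; enp1s0 is just a placeholder for the X540 interface, and I have not tested any of this yet:

# enable ECN negotiation on the server (the Windows clients would need it too,
# and the switch would have to mark packets for it to do anything)
sysctl -w net.ipv4.tcp_ecn=1

# pace each flow leaving the 10 GbE NIC to just under the clients' 5 Gbps link rate
tc qdisc replace dev enp1s0 root fq maxrate 4500mbit

# remove the pacing again
tc qdisc del dev enp1s0 root

Unlike wondershaper, fq pacing should not cost anything noticeable at 10 Gbps.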
 

lowfat

Active Member
Nov 25, 2016
131
91
28
40
What 2.5/5/10 GbE transceivers are you using? I had a similar issue this week: a 2.5 GbE USB adapter on my Windows machine, a Mikrotik switch, an Ipolex transceiver, and a ConnectX-3 in my Ubuntu server. When testing via iperf3 with the Windows machine sending, results were good, but in receiver mode there were tens of thousands of retries, and the switch showed RX MAC errors on the Ipolex transceiver. The errors still appeared when I set the 2.5 GbE adapter to 1 GbE, and also when I used the onboard Intel NIC. As soon as I stopped using the Ipolex transceiver, I had zero issues.
 

meyergru

New Member
Jul 12, 2020
16
1
3
The transceivers are QSFPTEK: https://www.amazon.de/gp/product/B081YKGBR4

I had not looked before, but as could be expected from the fact that a Windows-to-Windows transfer runs at full speed in both directions, the switch shows no errors at all.

BTW: In order to rate-limit the downlink on ingress, I have now set flow control on the Linux server (and the corresponding switch port) to "manually enabled", but alas, it changed nothing.
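
For reference, on the Linux side that amounts to something like this (again with enp1s0 standing in for the X540 interface):

ethtool -A enp1s0 rx on tx on    # enable pause frame handling on the NIC
ethtool -a enp1s0                # show the resulting flow control settings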
 

meyergru

New Member
Jul 12, 2020
16
1
3
What I know by now is that the switch's buffer is apparently too small (even though it supposedly has 1.5 MByte of RAM). A Wireshark trace showed that the TCP window size sits at 128K, which causes a pumping effect and TCP retransmissions due to the packet drops. When I disabled TCP window scaling (effectively reducing the window size to 64K), the transfer became even slower.
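
In case anyone wants to reproduce that last experiment: on the Windows side, I believe the usual knob for this is receive-window auto-tuning, e.g. from an elevated prompt:

netsh int tcp set global autotuninglevel=disabled

which pins the receive window at its 64K default (i.e. no window scaling), and

netsh int tcp set global autotuninglevel=normal

to put it back afterwards.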

This effect only shows up under these specific conditions - the Cisco itself only knows about 1 Gbit and 10 Gbit, not the 2.5 and 5 Gbit speeds. For a 1/1 or a 10/10 transfer there are no packet losses at all, while at a 10/1 ratio the pumping effect is five times less noticeable and would probably go unnoticed.

Switches with a larger buffer size seem to be much more expensive (HP Aruba, Cisco Catalyst 9300). I wonder if anyone has tested such a scenario on a Mikrotik CRS328? It has 512 MByte RAM, but I do not know how much of it is dedicated to packet buffering.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I have encountered some major issues mixing speeds, specifically the Realtek 2.5 Gbps against 10 Gbps on a Trendnet TEG-7080ES switch.
Trendnet told me there is some issue with the chipset, and they are working on it.

I can tell you that a direct Ethernet connection between the Realtek 2.5 Gbps and an Aquantia 10 Gbps NIC works fine at full 2.5 Gbps rate.
Also works fine with a Netgear GS110MX.

I don't have any SFP+ equipment to test with.

Click on my username for other posts on this subject.

I returned my Realtek 2.5 Gbps NIC to Amazon because of this issue. I think the whole NBASE-T 2.5/5 Gbps ecosystem is a mess of compatibility problems. Go straight to 10GBASE-T and skip 2.5/5, unless you want headaches.
 

meyergru

New Member
Jul 12, 2020
16
1
3
I am stuck with old Cat.5 cabling in my house; I had 10 GbE before, but that proved unreliable.

With 5 GbE, the connection itself is fine (I once had problems with an Asus Aquantia-based card, probably because of early Windows drivers); however, TCP is too slow at these speeds if the switch buffer is 128K or less and the sender is at 10 GbE. When I tried the Realtek 2.5 GbE onboard card on the receiver, the resulting speed was much the same as with 5 GbE (at least in one direction).

I am quite sure that the combination of mixed speeds and small buffers is the problem. My observations are in line with my theoretical expectations; however, I do not know whether a larger buffer would actually fix it, because another window size limit could kick in afterwards. I do not have access to a better switch, so I thought somebody here might know.

Your Trendnet switch has a RAM size much like my Cisco's. Depending on how it is allocated for different purposes, you may be facing the exact same problem.

I have asked Mikrotik sales for information about the buffer size. The CRS328 is actually not too expensive to try out, if its buffer is big enough.
 
Last edited:

madbrain

Active Member
Jan 5, 2019
212
44
28
With 5 GbE, the connection itself is fine (I once had problems with an Asus Aquantia-based card, probably because of early Windows drivers); however, TCP is too slow at these speeds if the switch buffer is 128K or less and the sender is at 10 GbE.
How can you tell what the switch buffer size is? This is not typically disclosed in the specs, nor is it usually adjustable in managed switches.
And what makes you think buffer size is the issue?

FYI, the specs of my Netgear GS110MX switch say it has a 128KB packet buffer, and I was achieving the full 2.5 Gbps in both directions on that switch when I had the Realtek NIC, connecting to a server at 10 Gbps.


When I tried the Realtek 2.5 GbE onboard card on the receiver, the resulting speed was much the same as with 5 GbE (at least in one direction).
For me, 2.5 Gbps was slower than 1 Gbps in the receive direction with the Realtek NIC on the Trendnet switch; it was fine on the Netgear switch.

I am quite sure that the combination of mixed speeds and small buffers is the problem.
That only seems to be the case when one of the speeds in the mix is 2.5 or 5 Gbps; there is never any problem mixing 1 Gbps on one side and 10 Gbps on the other. To me it sounds like a switch implementation bug. Without access to the firmware source, I don't think I can say whether buffer size is the issue.

Your Trendnet switch has a RAM size much like my Cisco's. Depending on how it is allocated for different purposes, you may be facing the exact same problem.

I have asked Mikrotik sales for information about the buffer size. The CRS328 is actually not too expensive to try out, if its buffer is big enough.
In that test of mine you linked, you'll notice the tons of retries in iperf at 5 Gbps and 2.5 Gbps. It doesn't make sense to me that buffer size is the key to the mismatched-speed problem; if it were, there should also be issues with the client at 1 Gbps and the server at 10 Gbps, but there are none. I don't know why the buffer would only affect the 5-to-10 Gbps and 2.5-to-10 Gbps cases but not 1-to-10 Gbps. I think only the switch manufacturers can really answer this, and their firmware isn't open source, unfortunately. Even if and when they do fix the bugs, they may not disclose the nature of the fix. In the meantime, all we can really say is that some switches and SFP+ gear have issues with 2.5 and 5 Gbps.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
FYI, another test I just performed.

The client box has an Aquantia NIC and is attached to the Netgear GS110MX switch.
I forced the NIC down to 2.5 Gbps.

The cheap Netgear GS110MX switch is connected to the TEG-7080ES by about 100 ft of cable across rooms.

The server box also has an Aquantia NIC, left at 10 Gbps, and is attached to another port on the TEG-7080ES.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c server10g -i 10
Connecting to host server10g, port 5201
[ 4] local 192.168.1.38 port 56305 connected to 192.168.1.27 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.75 GBytes 2.36 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.75 GBytes 2.36 Gbits/sec sender
[ 4] 0.00-10.00 sec 2.75 GBytes 2.36 Gbits/sec receiver

iperf Done.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c server10g -i 10 -R
Connecting to host server10g, port 5201
Reverse mode, remote host server10g is sending
[ 4] local 192.168.1.38 port 56330 connected to 192.168.1.27 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 2.76 GBytes 2.37 Gbits/sec receiver

iperf Done.

As you can see, full 2.5 Gbps line rate is achieved. This is the best you can get without jumbo frames, which I have disabled on my LAN.

Repeat, but with the client NIC forced down to 5 Gbps instead:

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c server10g -i 10
Connecting to host server10g, port 5201
[ 4] local 192.168.1.38 port 56560 connected to 192.168.1.27 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.43 GBytes 4.67 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.43 GBytes 4.67 Gbits/sec sender
[ 4] 0.00-10.00 sec 5.43 GBytes 4.67 Gbits/sec receiver

iperf Done.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c server10g -i 10 -R
Connecting to host server10g, port 5201
Reverse mode, remote host server10g is sending
[ 4] local 192.168.1.38 port 56587 connected to 192.168.1.27 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.52 GBytes 4.74 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 5.52 GBytes 4.74 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 5.52 GBytes 4.74 Gbits/sec receiver

Again, full line rate, despite speed mismatch between the client and server. No retries.

In other words, the problem is only reproducible when one NIC is at 2.5 or 5 Gbps, and attached directly to the TEG-7080ES.

The link between the two switches is negotiated at 10 Gbps, though. I could try to force it down to 2.5 or 5 Gbps in the management UI of the Trendnet switch (Netgear is unmanaged).

Here we go :

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c server10g -i 10
Connecting to host server10g, port 5201
[ 4] local 192.168.1.38 port 56740 connected to 192.168.1.27 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.43 GBytes 4.67 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.43 GBytes 4.67 Gbits/sec sender
[ 4] 0.00-10.00 sec 5.43 GBytes 4.67 Gbits/sec receiver

iperf Done.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c server10g -i 10 -R
Connecting to host server10g, port 5201
Reverse mode, remote host server10g is sending
[ 4] local 192.168.1.38 port 56746 connected to 192.168.1.27 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.98 GBytes 1.70 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.98 GBytes 1.70 Gbits/sec 12562 sender
[ 4] 0.00-10.00 sec 1.98 GBytes 1.70 Gbits/sec receiver

This is with the client NIC at 5 Gbps, which is hooked up to the Netgear switch.
Then, the link between the two switches is forced to 5 Gbps.
Server NIC is at 10 Gbps on the Trendnet switch.

Notice the 1.7 Gbps rate on the receive side. And the 12562 retries.

Pretty clear to me that the $539 Trendnet switch's claims of NBASE-T compatibility are overstated.
 

meyergru

New Member
Jul 12, 2020
16
1
3
How do I know that the buffer size is small?

The memory size of my switch is 1.5 MByte (it's in the specs; your switch has 2 MByte). The buffer size is certainly one limiting factor for the usable TCP window size - that is just how window scaling works.

I can tell from the Wireshark traces that the TCP window size is 128K, so obviously the buffer size does not suffice to reach the next window size level of 256K.

I can see lost TCP packets (there must be some, because the switch simply has to drop packets once the buffer is full) and the corresponding TCP retransmissions. When I disable TCP window scaling (effectively limiting the window size to 64K), the resulting speed is even lower, because the number of TCP "handshakes" doubles from 5000 to 10000 per second.
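
If somebody wants to see those drops without Wireshark: the kernel counters on the Linux sender should show them as well, for example:

nstat -az TcpRetransSegs    # cumulative number of retransmitted TCP segments
ss -ti                      # per-connection cwnd, rtt and retransmission counters

Watching TcpRetransSegs climb during a transfer should make the losses obvious.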

I know there are no problems on either the 10 GbE link from the server or the 5 GbE link to the client (judging from the fact that with unmixed speeds, everything is fine). Given that, I consider a buffer size of 128K to just under 256K as proven in my case, and no other problems.
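
As a rough sanity check on those numbers (assuming the buffer really is around 128 KByte):

128 KByte x 8 bit/byte / (10 Gbps - 5 Gbps) ~ 1.05 Mbit / 5 Gbps ~ 0.2 ms

i.e. with the server injecting at 10 Gbps and the client draining at only 5 Gbps, the buffer is full after about 0.2 ms, which fits with the window never growing past 128K in the trace.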


As for the 10/1 case:

I made the same observation when mixing 10/1 speeds - only there, TCP copes better, because it only has to handle about 1000 "handshakes" per second instead of 5000 (those are the numbers that result on my switch at 1 and 5 GbE with a 128K window). You can picture the throughput loss as a latency that tears gaps into the transmission every time the buffer runs full and packets are lost.

Effectively, this happens with high speeds and small buffers if, and only if, packet losses occur at all (i.e. when a speed change forces the switch to store and forward, not with 5/5 or 10/10).


Jumbo frames might help, but they are not an option for me, as not all devices on my network support them.
 
  • Like
Reactions: madbrain

madbrain

Active Member
Jan 5, 2019
212
44
28
One more test. Both machines in this case are Win10, both on the same TEG-7080ES switch, both forced down to 5 Gbps.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c bumblebee -i 10
Connecting to host bumblebee, port 5201
[ 4] local 192.168.1.26 port 52512 connected to 192.168.1.37 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.14 GBytes 4.41 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.14 GBytes 4.41 Gbits/sec sender
[ 4] 0.00-10.00 sec 5.14 GBytes 4.41 Gbits/sec receiver

iperf Done.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c bumblebee -i 10 -R
Connecting to host bumblebee, port 5201
Reverse mode, remote host bumblebee is sending
[ 4] local 192.168.1.26 port 52517 connected to 192.168.1.37 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.23 GBytes 4.49 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 5.23 GBytes 4.49 Gbits/sec sender
[ 4] 0.00-10.00 sec 5.23 GBytes 4.49 Gbits/sec receiver

iperf Done.

This time everything looks OK. Bumblebee is an old PCIe 2.0 box with an AMD FX-8120 CPU, FYI.

Forcing client to 2.5 Gbps, server left at 5 Gbps.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c bumblebee -i 10
Connecting to host bumblebee, port 5201
[ 4] local 192.168.1.26 port 52669 connected to 192.168.1.37 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.69 GBytes 2.31 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.69 GBytes 2.31 Gbits/sec sender
[ 4] 0.00-10.00 sec 2.69 GBytes 2.31 Gbits/sec receiver

iperf Done.

C:\Users\Julien Pierre\Desktop\iperf3>iperf3.exe -N -c bumblebee -i 10 -R
Connecting to host bumblebee, port 5201
Reverse mode, remote host bumblebee is sending
[ 4] local 192.168.1.26 port 52682 connected to 192.168.1.37 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.17 GBytes 1.00 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.17 GBytes 1.00 Gbits/sec sender
[ 4] 0.00-10.00 sec 1.17 GBytes 1.00 Gbits/sec receiver

Here we do see the 2.5/5 Gbps speed mixing affecting things, in only one direction.
 
Last edited:

madbrain

Active Member
Jan 5, 2019
212
44
28
How do I know that the buffer size is small?

The memory size of my switch is 1.5 MByte (it's in the specs; your switch has 2 MByte). The buffer size is certainly one limiting factor for the usable TCP window size - that is just how window scaling works.

I can tell from the Wireshark traces that the TCP window size is 128K, so obviously the buffer size does not suffice to reach the next window size level of 256K.

I can see lost TCP packets (there must be some, because the switch simply has to drop packets once the buffer is full) and the corresponding TCP retransmissions. When I disable TCP window scaling (effectively limiting the window size to 64K), the resulting speed is even lower, because the number of TCP "handshakes" doubles from 5000 to 10000 per second.

I know there are no problems on either the 10 GbE link from the server or the 5 GbE link to the client (judging from the fact that with unmixed speeds, everything is fine). Given that, I consider a buffer size of 128K to just under 256K as proven in my case, and no other problems.
Thanks, that makes sense. I admit I had not taken a look at the Wireshark trace.

As for the 10/1 case:

I made the same observation when mixing 10/1 speeds - only there, TCP copes better, because it only has to handle about 1000 "handshakes" per second instead of 5000 (those are the numbers that result on my switch at 1 and 5 GbE with a 128K window). You can picture the throughput loss as a latency that tears gaps into the transmission every time the buffer runs full and packets are lost.

Effectively, this happens with high speeds and small buffers if, and only if, packet losses occur at all (i.e. when a speed change forces the switch to store and forward, not with 5/5 or 10/10).


Jumbo frames might help, but they are not an option for me, as not all devices on my network support them.
FYI, 9KB jumbo frames resulted in even lower speeds for me when I still had the Realtek at 2.5 Gbps: under 600 Mbps.

I wonder if using Ethernet flow control would help prevent the packet drops and forced TCP retransmits. The Aquantia NICs have a driver option for it, and so does the Trendnet switch in its admin UI. I have left those at the default, which is off.
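
If I end up scripting the Windows side, I believe it can also be toggled from PowerShell, roughly like this (the exact display name and value depend on the Aquantia driver, so check the output of the first command before setting anything; "Ethernet" is just a placeholder for the adapter name):

Get-NetAdapterAdvancedProperty -Name "Ethernet"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Flow Control" -DisplayValue "Rx & Tx Enabled"

The switch end would still have to be enabled separately in the Trendnet admin UI.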
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I wonder if using Ethernet flow control would help prevent the packet drops and forced TCP retransmits. The Aquantia NICs have a driver option for it, and so does the Trendnet switch in its admin UI. I have left those at the default, which is off.
The default on the Aquantia drivers for Windows was "Rx & Tx Enabled" for flow control. The default on the switch was off; I changed it to on. It made no difference, unfortunately.
 

meyergru

New Member
Jul 12, 2020
16
1
3
I tried flow control; it makes no difference. I also tried limiting bandwidth on ingress and egress, to no avail. I never tried jumbo frames, but I doubt they could fix the problem, as the number of TCP "handshakes" is not reduced.

It all seems to boil down to needing a larger buffer in the switch.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
So what's the fix, then? Do away with TCP? I can't return my switch anymore, and I doubt it will take any kind of RAM expansion.
But perhaps the manufacturer can increase the buffer size for such cases in the firmware.
2MB of RAM is not enough for 128KB in both directions on all 8 ports - that alone would be 8 x 2 x 128KB = 2MB, leaving no RAM for executing the code ...
The problem only shows up with mixed speeds, though, which means the increased buffer is needed on at most 7 ports, so perhaps there is hope. Also, most TCP applications aren't full-duplex, so the buffer allocation could be adaptive (more or fewer frames in one direction based on traffic). Certainly a complex fix to implement.

Which reminds me, I've got to try the forked version of iperf that does bidirectional TCP tests ...
 

madbrain

Active Member
Jan 5, 2019
212
44
28
BTW, I don't know how long all your CAT5E cable runs are, or what your electricity rates are, but you could use the Netgear GS110MX as a 10GBASE-T repeater, since it has two 10G ports. It doesn't have this buffer issue, and it was only $138 on Amazon a few weeks ago.
If you only have a few long CAT5E runs, this may help you.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I never tried jumbo frames, but I doubt they could fix the problem, as the number of TCP "handshakes" is not reduced.
They definitely didn't help me; they made things worse. I had started with 9KB jumbo frames in the first place, as I mentioned above.
 

meyergru

New Member
Jul 12, 2020
16
1
3
I have centralised structured cabling, with in-wall runs of around 20 meters from the cellar to the first floor. Splitting that up is not an option.

I would also have to write off my investment in the Cisco. But before that, I would like to confirm that the next switch does not have the same problem.

BTW: Yes, not all of the switch's memory is allocated for buffering; on the other hand, the buffer does not have to be split evenly between the ports. I doubt that it is on my Cisco anyway, since 1.5 MByte / 28 ports would be only about 55 KByte, less than the 128 KByte I am seeing.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I have centralised structured cabling, with in-wall runs of around 20 meters from the cellar to the first floor. Splitting that up is not an option.
Isn't CAT5E supposed to be good for 10GBASE-T up to 45 meters?

I realize there is the theory and there is practice ...

BTW: Yes, not all of the switch's memory is allocated for buffering; on the other hand, the buffer does not have to be split evenly between the ports. I doubt that it is on my Cisco anyway, since 1.5 MByte / 28 ports would be only about 55 KByte, less than the 128 KByte I am seeing.
Well, only if all devices on your network were running iperf3 over TCP at full speed on all 28 ports. Now, why would you do that? Because the switch specs quote a given switching rate. And if we can't run TCP, what are we going to run? NetBEUI? I bet no one has tried that on 10gig yet. Time to launch my bridged OS/2 VM and do it.
 

LodeRunner

Active Member
Apr 27, 2019
540
227
43
Isn't CAT5E supposed to be good for 10GBASE-T up to 45 meters?

I realize there is the theory and there is practice ...
It can technically do it, but I don't recall whether it's actually certified to. CAT6 and CAT6a are certified for it.

CAT6/6a have a plastic spacer between the pairs to further reduce crosstalk and, unless you are using the slim CAT6 cables (which are suitable only for in-rack lengths), are usually a larger gauge wire than 5e.
 

meyergru

New Member
Jul 12, 2020
16
1
3
Apart from that, I do not have Cat.5e but plain Cat.5 that is more than twenty years old.

Besides, the switching speed IS reached on all ports. The switch can do it at 10/10 ratio and at 5/5. What matters here is that some receivers cannot handle the to-be-expected drop of 50% at 10/5 ratio of the packets decently because of TCP shortcomings. The packet rate is always at maximum for any given port, regardless of the size of the buffer(s). A larger buffer just helps to mask TCP's slow reaction speed.