Hi,
I am currently having a somewhat disappointing experience:
I have a Cisco switch (SG350X-24P) with 2x 10 GbE ports and 2x SFP+ slots equipped with transceivers that support 2.5/5/10 GbE speeds.
Connected to the SFP+ transceivers are two Windows 10 clients with 2.5/5/10 GbE cards (one Intel X550 and one Buffalo LGY-PCIE-MG). These machines are ~50 feet away, connected over Cat 5e cabling that I cannot change. Consequently, they auto-negotiate at 5 Gbps with the switch.
Connected to one of the 10 GbE ports is a Linux server (Ubuntu 20.04) with an Intel X540 (10 GbE only) over a short patch cable, negotiating at 10 Gbps.
When I try "netio -t", I can see rates of ~5 Gbps from the Windows clients to the Linux server but only ~3 Gbps vice-versa. I can reach approximately the same speeds with SMB file copies in both directions.
When I do a netio between the two Windows machines, however, I can reach the expected 5 Gbps in both directions.
I have tried almost everything I could think of to get 5 Gbps from the server to the clients, including interrupt moderation and the like, but without success. As a matter of fact, I had to reduce interrupt moderation just to get 5 Gbps in the Windows-to-Linux direction.
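To be concrete about what I mean by "interrupt moderation and the like" - the knobs I have been playing with are roughly these (the adapter/interface names are placeholders, and the exact property name differs per driver):

  # Windows (PowerShell), advanced NIC property
  Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"
  # Linux, X540 interrupt coalescing
  ethtool -c enp1s0                      # show current coalescing values
  sudo ethtool -C enp1s0 rx-usecs 10     # lower the coalescing delay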
My guess is that TCP simply cannot handle the situation gracefully: the server can pump data into the network at 10 Gbps, but once the switch's buffers have filled, the switch cannot dispatch the packets to the client fast enough because the downlink is only 5 Gbps. This results in lost packets and buffering issues. However, I doubt that I could measure this via Wireshark or the like, because the timing resolution is probably too coarse.
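Even if per-packet timing is too coarse, the losses should at least show up in the TCP counters, so I could run something like this on the Linux server while a transfer to a client is going on (the destination address is just a placeholder):

  # overall retransmission counters - snapshot before and after a test run
  nstat -az TcpRetransSegs TcpExtTCPLostRetransmit
  # per-connection view: retransmits, cwnd, pacing rate
  ss -ti dst 192.168.1.20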
If this is indeed the culprit, the effect should be much worse with a speed ratio of 2 (10 Gbps sender vs. 5 Gbps receiver) than with, say, a ratio of 10 (10 Gbps sender and 1 Gbps receiver). That might be why I have not found any information or discussion about this "mixed speeds" effect.
What I find disappointing is that I might as well use cheap Realtek 2.5 GbE adapters for the clients instead, without much effective change in downlink speed. On the other hand, limiting the server to 5 Gbps in order to eliminate the buffering issue seems dumb as well - especially since there is another 10 GbE backup NAS on the last of the four fast switch ports, which would lose half of its bandwidth to the server by that move.
I have not yet tried ECN or flow control. Does anyone have an idea how to solve this via Windows or Linux network settings? (BTW: wondershaper for rate-limiting is not the answer here; at 10 Gbps, merely enabling it reduces the speed by a factor of 10.)
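For reference, the settings I have in mind (but have not verified in any way) would look roughly like this on the Linux server; enp1s0 and the 4800mbit value are placeholders:

  # request ECN for TCP connections
  sudo sysctl -w net.ipv4.tcp_ecn=1
  # enable 802.3x pause-frame flow control on the X540 (the switch port must honour it too)
  sudo ethtool -A enp1s0 rx on tx on
  # pace outgoing flows below the 5 Gbps downlink, instead of wondershaper
  sudo tc qdisc replace dev enp1s0 root fq maxrate 4800mbit

The fq maxrate variant caps every flow, though, so it would hit the 10 GbE NAS just as hard - a per-destination shaper would be needed to avoid that.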