How much difference does a managed rack switch make in terms of latency for VDI compared to a basic unmanaged switch?


bilbo1337

Member
Sep 18, 2020
Florida
I'm curious whether it's worth upgrading from this super cheap Netgear unmanaged 1GbE switch to a switch with some 10GbE SFP+ ports. From what I've been reading, SFP+ is way better than regular RJ-45, not even considering bandwidth but just latency. When I remote into my server I do notice it's not the most responsive system. I played around with increasing video memory to 32MB and set the CPU latency sensitivity to high in ESXi, but it's still somewhat laggy. Any tips before I spend money on potentially worthless upgrades?
 

kpfleming

Active Member
Dec 28, 2021
Pelham NY USA
I think you might be confusing some things :)

Whether a switch is managed or unmanaged should not have any effect on its packet-forwarding performance (latency). Granted, managed switches tend to be built on more capable hardware and will probably perform better in general, but that's not related to the presence of management features.

Differences in latency between 1G and 10G ports using copper (xBaseT), fiber, or DACs certainly exist, but they would be measured on the order of microseconds, not milliseconds or higher. For most applications I suspect the difference could be measured but would not be noticeable to a user in an interactive session.

Like most things of this type, addressing this should start out with measurements. What have you done to measure latency across each link in your network, and end-to-end between the server and the remote systems?
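For a quick number from the client side, timing TCP connection setup gives you a rough round-trip figure in a few lines of Python. A minimal sketch - the host and port here are placeholders, and it assumes the server has something listening on that port:

import socket
import statistics
import time

HOST = "192.168.1.50"  # placeholder: your server's IP
PORT = 443             # placeholder: any TCP port the server listens on

samples = []
for _ in range(20):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass  # the TCP handshake alone costs roughly one round trip
    samples.append((time.perf_counter() - start) * 1000.0)

print(f"min {min(samples):.2f} ms, "
      f"median {statistics.median(samples):.2f} ms, "
      f"max {max(samples):.2f} ms")

If that reports low single-digit milliseconds across your LAN, the switch is not what's making the session feel laggy.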
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Switch traffic does not pass through the management CPU, so managed vs. unmanaged is the wrong question to ask - it will not matter.

SFP+ at 10 Gbps is better than 10GBase-T because it avoids the modulation overhead required for twisted pair, which adds latency. It also draws less power. But the latency should still be below 1 ms, so imperceptible either way.

Try disabling any CPU power-saving in your virtualization stack and setting it to "high performance" to see if things improve. Lag like this is usually the time it takes for the CPU to ramp up its frequency when you click something and CPU processing is required.
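On ESXi you can set this from the host's Power Management page in the vSphere Client, or from the ESXi shell - if I remember the option name right, verify it on your build first:

esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"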

If this is RDP, disable everything on the "Experience" tab except font smoothing, desktop composition and visual styles.
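If you save the connection as a .rdp file, the same "Experience" toggles are stored as standard mstsc options. Something like this should match the above (for the "disable" keys, 1 turns the feature off):

allow font smoothing:i:1
allow desktop composition:i:1
disable themes:i:0
disable wallpaper:i:1
disable full window drag:i:1
disable menu anims:i:1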
 

jdnz

Member
Apr 29, 2021
the remote console on esxi is pushing screen deltas down the lan - same as Remote Desktop on windows / XRDP / VNC et al. This means it's not only bandwidth-limited by the lan, but also cpu-bound at each end processing the deltas (even down a 10gbe link vmrc isn't 'snappy')

Expecting it to be as 'responsive' as a local system outputting to a video card is totally unrealistic - it was never designed for that
 

bilbo1337

Member
Sep 18, 2020
Florida
Hmm. I was really hoping a switch that'd be DAC'd up would be fast. Right now I think everything's trying to communicate through my ISP's provided modem/router so I thought that'd be slowing everything down. Idk why LAN over motherboard isn't more common :/
 

kpfleming

Active Member
Dec 28, 2021
Pelham NY USA
It will be 'fast', but that doesn't matter if it's not the limiting factor in your connections. If you have 10ms of round-trip latency now, for example, reducing that by 40-50 microseconds isn't going to be an improvement you'll notice.

If the traffic between your server and 'remote client' is traversing your ISP modem/router, then that could very well be a bottleneck. You could consider restructuring your network so that those two nodes are on the same layer-2 network (which would be the fastest option), or so that their layer-3 networks are routed by the device they are directly connected to (no additional hops).
 

jdnz

Member
Apr 29, 2021
if everything is plugged into the ethernet ports on your ISP router/modem then there should be no difference between that and having it all plugged into an unmanaged switch - the ethernet ports are still attached to a hardware switch chip, so traffic runs at wire rate rather than going through the router cpu

if you're not sure, run iperf3 from your computer to the vm and check what speed you're getting
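something like this - the address is just a placeholder for the vm's IP:

iperf3 -s                  # run this inside the VM first
iperf3 -c 192.168.1.50     # then this on your computer, pointing at the VM

on a 1GbE link you should see roughly 940 Mbit/s if the network itself is healthy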
 

nabsltd

Active Member
Jan 26, 2022
jdnz said: "Expecting it to be as 'responsive' as a local system outputting to a video card is totally unrealistic - it was never designed for that"
I don't do any 3D on my VMs (hosted on ESXi). That said, normal 2D graphics over 1Gbit is exactly like being at the console, as far as speed is concerned.