Performance of shared 10GbE and Hyper-V


nanaya

New Member
Jan 21, 2020
First the setup:

  • Desktop:
    • CPU: Ryzen 3600
    • NIC1: Mellanox MCX311A-XCAT
    • NIC2: Realtek 8111C
    • OS: Windows 10
  • Server:
    • CPU: Xeon E3-1230v2
    • NIC1: Mellanox MCX311A-XCAT
    • NIC2: Intel onboard gigabit
    • OS: FreeBSD 12.1
There's no switch between the 10GbE cards; they're connected directly with a 3m DAC. The server's gigabit NIC is connected to a Wi-Fi AP and a switch.

A simple iperf test between the 10GbE cards, without any bridging or other configuration, showed:
  • MTU 1500: 5-7Gbps
  • MTU 9000: 8-10Gbps
At this point I concluded I should use jumbo frames and keep my gigabit and 10GbE networks separate (instead of bridging them under FreeBSD), as I don't think I can set up jumbo frames on the gigabit network.
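For reference, the test was roughly along these lines; the interface names ("Ethernet 2" on Windows, mlxen0 on FreeBSD) and the server address are just placeholders for my setup:

Code:
  # FreeBSD server: jumbo frames on the ConnectX-3, then listen
  ifconfig mlxen0 mtu 9000
  iperf3 -s

  # Windows desktop (elevated prompt): matching MTU, then run the client
  netsh interface ipv4 set subinterface "Ethernet 2" mtu=9000 store=persistent
  iperf3 -c 10.0.10.1 -t 30 -P 4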

So far, so good. The problem came when I added the NIC to a Hyper-V external switch with the "Allow management operating system to share this network adapter" setting enabled, hoping to have 10GbE for both the host and the VMs.
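In PowerShell terms that is roughly the following; the switch and adapter names are only examples, not necessarily what I used:

Code:
  # Create an external vSwitch bound to the 10GbE NIC and let the host share it
  New-VMSwitch -Name "External10G" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

  # Attach a VM's network adapter to that switch
  Connect-VMNetworkAdapter -VMName "vm1" -SwitchName "External10G"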

There's no problem when running iperf etc. inside a VM (7-10 Gbps), but if I test the network from the desktop (host) itself, things stop responding: slow mouse cursor movement, Task Manager hangs/stutters, music playback stutters. iperf still shows 8+ Gbps, but I can't use the desktop while it's running.

In the end, I currently have my desktop connected through the gigabit network with the 10GbE dedicated to the VMs. The desktop gets full gigabit speed and stays usable under load, and the VMs get a faster network between themselves and the server.

The question is: is there anything I can do to share the 10GbE between the desktop/host and the VMs without killing my desktop (and without resorting to an extra port and/or an actual switch)?

Or, if I need additional hardware, I could use some recommendations, especially something cheaper than two dual-port cards. Also, my server has no free PCIe slots, and the desktop only has a single spare PCIe x1 slot.
 
Last edited:

nanaya

New Member
Jan 21, 2020
Set a 10G link between the VM and the switch. Attach the switch to the 10G card.
You don't need to allow the VM to interface with the NIC and slow it down...
I'm not sure I understand this correctly. I don't think Hyper-V allows sharing a NIC directly with a VM, and in my setup the VMs have been connected through a (v)switch:

Code:
  (server) nic
      |
 (desktop) nic
      |
   vswitch
  |   |   |
 host vm1 vm2
And this is the problematic layout: any network load between the host/desktop and the server slows down the host/desktop itself. The goal is a 10G connection both for host/desktop ⇔ server and for VM ⇔ server.
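To be explicit about what the diagram means: with a shared external switch the host no longer uses the physical NIC directly but goes through a management-OS vNIC on the same vSwitch as the VMs. Something like this should list both sides (the switch name is just an example):

Code:
  # Host-side vNIC created by "Allow management operating system to share..."
  Get-VMNetworkAdapter -ManagementOS

  # VM adapters attached to the same switch
  Get-VMNetworkAdapter -VMName * | Where-Object SwitchName -eq "External10G"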
 

nanaya

New Member
Jan 21, 2020
Interestingly, I tried the same setup again just now and it seems fine :confused:
I don't think I changed anything since last night apart from disabling the gigabit NICs and finalizing the server config.

Take a look at this article
Hyper-V Virtual Networking configuration and best practices


Google hyper-v switch QOS Quality of Service
Also search hyper-v switch set bandwidth limit
Thanks for the link and hints. I'll check them later.
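From a quick look, the bandwidth-limit suggestion presumably boils down to something like this; the values and names are only examples, and I haven't verified that it fixes the stutter:

Code:
  # Cap a VM's adapter at roughly 4 Gbps so host traffic keeps some headroom
  Set-VMNetworkAdapter -VMName "vm1" -MaximumBandwidth 4000000000

  # Or give the host's own vNIC a guaranteed share (the vSwitch must have been
  # created with -MinimumBandwidthMode Weight for weights to apply)
  Set-VMNetworkAdapter -ManagementOS -Name "External10G" -MinimumBandwidthWeight 40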

Update 2020-02-02: the stutters are back
 
Last edited: