Hyper-V networking 'decent to best' practice for virtual switches?


frogtech

Well-Known Member
Jan 4, 2016
Hi,

Looking to get some input on how much it matters when creating virtual switches in Hyper-V.

I almost have my ScaleIO cluster set up. Each of my six servers has 2 GbE ports hooked up to a Cisco 3560E switch and 2 10GbE SFP+ ports hooked up to a Quanta LB6M.

My original plan was to give each server a different subnet and implement ACLs on the Cisco switch so that the servers didn't communicate via GbE; I want to statically enforce ScaleIO traffic over 10GbE. Then I got to thinking: what if I start creating VMs that need to interact with other VMs hosted on a different node (e.g. node 1 is 10.1.0.0, node 2 is 10.2.0.0, and each hosts a VM in the failover cluster)?

Creating virtual switches from the GbE interfaces would prevent those two VMs from communicating. The other, 'easy-way-out' method would be to create all of my virtual switches from the 10GbE network adapters (ConnectX-3 cards) and then trunk down to the TwinGig 10GbE X2 modules on my Cisco switch so that the VMs could get internet access.
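
Roughly speaking, that 'easy-way-out' option would just be something like this on each host (the adapter and switch names here are only placeholders for however the ConnectX-3 ports show up in Windows):

# One external vSwitch per host on a 10GbE port - names are placeholders
New-VMSwitch -Name "vSwitch-10G" -NetAdapterName "ConnectX-3 Port 1" -AllowManagementOS $true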

Any ideas?
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
Well now this is a big question :) I strongly suggest you do not try too hard to prevent the servers from communicating, especially as you may well want to cluster the hosts in the future (as that will allow you to have HA for your VMs).

Instead, perhaps consider following the "Converged Networking" best practice. In fact, I would probably suggest the following starting point:
  • Use the NetLBFO tools to create a 2-member 10Gbps team; LACP with L4 Src+Dst hashing would be ideal (New-NetLbfoTeam)
    • Create a VMNetworkAdapter on the team for host OS management (Add-VMNetworkAdapter -ManagementOS)
    • Create a VMNetworkAdapter on the team for ScaleIO (Add-VMNetworkAdapter -ManagementOS)
    • Create a VMNetworkAdapter on the team for Live Migration (Add-VMNetworkAdapter -ManagementOS)
    • Set VLANs on the VMNetworkAdapters (Set-VMNetworkAdapterVlan)
  • Use the NetLBFO tools to create a 2-member 1Gbps team; LACP with IP Src+Dst balancing would be ideal (New-NetLbfoTeam)
    • Bind a vSwitch to the team for VM connectivity (New-VMSwitch)
This should force ScaleIO and host access to go via the 10Gbps network. It will also permit your VMs to communicate across the 1Gbps NICs and let you use Live Migration, VM clustering etc.
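
Purely as a rough, untested sketch of the above: team names, switch names, physical adapter names and VLAN IDs below are all placeholders you'd substitute with your own, and note I've also created a vSwitch on the 10Gbps team, since the ManagementOS vNICs have to attach to a switch rather than to the team directly.

# Rough sketch only - all names and VLAN IDs are placeholders

# 10Gbps team on the ConnectX-3 ports: LACP with L4 (transport ports) hashing
New-NetLbfoTeam -Name "Team10G" -TeamMembers "10G Port 1","10G Port 2" `
    -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts

# The ManagementOS vNICs need a vSwitch bound to the team to attach to
New-VMSwitch -Name "vSwitch10G" -NetAdapterName "Team10G" -AllowManagementOS $false

# Host-side vNICs for management, ScaleIO and Live Migration, each on its own VLAN
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch10G" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch10G" -Name "ScaleIO"
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch10G" -Name "LiveMigration"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ScaleIO" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30

# 1Gbps team for guest VM traffic: LACP with IP src+dst hashing
New-NetLbfoTeam -Name "Team1G" -TeamMembers "1G Port 1","1G Port 2" `
    -TeamingMode LACP -LoadBalancingAlgorithm IPAddresses

# VM-facing switch bound to the 1Gbps team; the host keeps no vNIC on it
New-VMSwitch -Name "vSwitch1G" -NetAdapterName "Team1G" -AllowManagementOS $false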

You should put your DCs and Host Management either on the same VLAN or on VLANs which can route to each other; even if the hosts can see other machines on the 1Gbps NICs, this configuration ensures the hosts communicate only over 10Gbps, since they won't have IP addresses on anything connected at 1Gbps, be it team or NIC.

You can, if necessary, add 1Gbps ports as dedicated cluster-private networks for your VMs (create separate vSwitches for these). This sort of illustrates what I mean:

[Attachment: vSwitch Example.png]
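
If you do go down that path, each extra vSwitch is basically a one-liner per spare port, something like the following (again, the adapter and switch names are just placeholders):

# Dedicated cluster-private vSwitch on a spare 1Gbps port - names are placeholders
New-VMSwitch -Name "vSwitch-ClusterPrivate" -NetAdapterName "1G Port 3" -AllowManagementOS $false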
 