Private vmswitch or VLAN'd External vmswitch?


Dreece

Active Member
Jan 22, 2019
I have a few VMs that share a private network between themselves; currently I am using Hyper-V's Private virtual switch type to provide this private network.

I do have plenty of spare ports, so I could instead VLAN these VMs via an External vmswitch, all connected to a real switch.

Question is, should I? What is better practice?
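
For reference, this is roughly what the two options look like in PowerShell (switch names, adapter name and VLAN ID below are only placeholders, not my actual config):

# Option A: host-only private switch, no pNIC involved
New-VMSwitch -Name "PrivateNet" -SwitchType Private

# Option B: external switch bound to a pNIC, with the VM traffic VLAN-tagged
New-VMSwitch -Name "ExtNet" -NetAdapterName "Ethernet 2" -AllowManagementOS $false
Connect-VMNetworkAdapter -VMName "VM1" -SwitchName "ExtNet"
Set-VMNetworkAdapterVlan -VMName "VM1" -Access -VlanId 100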
 

fishtacos

New Member
Jun 8, 2017
If the VMs are all on the same host, use a private vmswitch; if they span more than one host, use a VLAN.

Ultimately they serve the same purpose, so use whichever gives you ease of use or better performance. No need to overthink it.
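
If they do end up spanning hosts, the tagging side is a one-liner per VM on each host (VM names and VLAN ID here are only examples); the physical switch ports carrying the pNICs just need to allow that VLAN:

# Put the relevant vNICs on the same isolated VLAN on every host
Set-VMNetworkAdapterVlan -VMName "VM1" -Access -VlanId 200
Set-VMNetworkAdapterVlan -VMName "VM2" -Access -VlanId 200
# Verify the tagging
Get-VMNetworkAdapterVlan -VMName "VM1"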
 

Dreece

Active Member
Jan 22, 2019
You're right, no need to overthink it.
I was just wondering about VMQ/vRSS/offloads/RDMA etc. provided by the pNICs reducing CPU usage?
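
This is the sort of thing I mean, i.e. checking what the physical adapters can actually offload (purely illustrative, not my actual output):

# Capabilities / enabled state on the pNICs
Get-NetAdapterRdma | Format-Table Name, Enabled
Get-NetAdapterVmq  | Format-Table Name, Enabled, NumberOfReceiveQueues
Get-NetAdapterRss  | Format-Table Name, Enabled, Profile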
 

cesmith9999

Well-Known Member
Mar 26, 2013
Nope, the private networks are all software based... no offloading on those... they aren't limited by anything except CPU.

If you do connect them to the outside world, the connections to the outside world will be offloaded as per the adapter settings...
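
For example, on the adapter backing the external vmswitch (adapter and VM names below are only placeholders):

# Per-pNIC offload settings
Enable-NetAdapterVmq -Name "Ethernet 2"
Enable-NetAdapterRdma -Name "Ethernet 2"
# Per-vNIC knobs on the VM side
Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 100 -VrssEnabled $true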

Chris
 

Dreece

Active Member
Jan 22, 2019
cesmith9999 said:
Nope, the private networks are all software based... no offloading on those... they aren't limited by anything except CPU.
If you do connect them to the outside world, the connections to the outside world will be offloaded as per the adapter settings...

Sorry, you got the wrong end of the stick...
pNIC is Microsoft's abbreviation for Physical Network Interface Card, their vocabulary, not mine.

They call a virtual NIC on the host a vNIC, and a virtual NIC inside a guest a vmNIC.

I'm not being nicpicky lol

Anyhow, I put my pNICs (physical NICs) to good use: I created an isolated VLAN on the physical switch itself and enabled all the RDMA/RSS/VMQ options etc. I went through PowerShell looking at all the stats and ran perfmon during NTttcp load tests, and compared the numbers to the private switch (the software-based, guest-only virtual switch). The difference was night and day: CPU cores were effectively flatlined when using the pNICs, and latency was far more consistent than over the private switch.
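
The comparison was roughly along these lines (thread count, duration and IP are placeholders; counter names may vary a little by Windows version):

# Inside the receiving VM
ntttcp.exe -r -m 8,*,10.0.100.11 -t 60
# Inside the sending VM
ntttcp.exe -s -m 8,*,10.0.100.11 -t 60

# On the host, watch vmswitch throughput and hypervisor CPU while the test runs
Get-Counter '\Hyper-V Virtual Switch(*)\Bytes/sec' -SampleInterval 2 -MaxSamples 30
Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' -SampleInterval 2 -MaxSamples 30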

I use the server as a render farm too, and the benefit of NUMA-aware network scaling is very advantageous in a heavily shared server where CPU cores are often maxed out. To minimise impact across all VMs, especially the router VM, I could have isolated cores per VM, but I prefer dynamic scaling, which just makes my life easier.
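
For the NUMA placement side of it, something like this shows where things land (adapter and VM names are placeholders again):

# Which NUMA node each pNIC hangs off
Get-NetAdapterHardwareInfo | Format-Table Name, NumaNode
# RSS processor range and profile for a given pNIC
Get-NetAdapterRss -Name "Ethernet 2"
# Keep vRSS spreading dynamic on a busy vNIC rather than pinning cores
Set-VMNetworkAdapter -VMName "RouterVM" -VrssEnabled $true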

Now I just hope some clever lady/gent comes along and creates a patch that lets us pass through consumer GPUs, bypassing Nvidia's and ATI's greedy SKU locks, like KVM already provides.

Actually, thinking about it, I might as well wipe it all clean and start with KVM as the host and then run Hyper-V as a guest... hmmm... (probably have to wait for a boring Sunday to come along)

(Though I don't think I can pass PCIe devices through a nested virtualisation setup; IOMMU cannot be nested, you only get to use it once, based on my Hyper-V-to-nested-KVM tinkering. Maybe KVM to Hyper-V might be different, I guess I shall have to wait/try and see. But it shouldn't impact running stock Windows as guests on KVM, only guests running on Hyper-V, the nested guest itself.)
 