Nope, the private networks are all software-based... no offloading on those... they're not limited by anything except CPU.
If you do connect them to the outside world, that traffic will be offloaded as per the physical adapter's settings...
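If you want to see what the physical adapter is actually offloading, something like this shows the usual suspects (the adapter name is a placeholder, substitute your own; Get-NetAdapter lists them):

```powershell
# Placeholder adapter name
$nic = "NIC1"

# Is RDMA enabled/operational on the adapter?
Get-NetAdapterRdma -Name $nic

# RSS and VMQ state (queues, processor ranges)
Get-NetAdapterRss -Name $nic
Get-NetAdapterVmq -Name $nic

# All the advanced offload knobs the driver exposes
Get-NetAdapterAdvancedProperty -Name $nic
```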
Chris
Sorry, you got the wrong end of the stick...
pNIC is Microsoft's abbreviation for Physical Network Interface Card, their vocabulary, not mine.
They call a virtual NIC on the host = vNIC
They call a virtual NIC inside a guest = vmNIC
I'm not being nic-picky lol
Anyhow, I put my pNICs (physical NICs) to good use and created an isolated VLAN on the physical switch itself, enabled all the RDMA/RSS/VMQ options etc... I went through PowerShell looking at all the stats and ran perfmon during ntttcp load tests, and compared the numbers to the private switch (the software-based, guest-only virtual switch)... the difference was night and day: CPU cores were effectively flatlined while using the pNICs, and latency was also much more consistent on the pNICs than on the private switch.
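For anyone wanting to repeat the comparison, a rough sketch of the sort of run I mean (IPs and the adapter name are placeholders; ntttcp is Microsoft's NTttcp load tool):

```powershell
# On the receiving VM: 8 threads, listening on its test IP (placeholder)
ntttcp.exe -r -m 8,*,192.168.10.2

# On the sending VM: same thread count, target the receiver, 60-second run
ntttcp.exe -s -m 8,*,192.168.10.2 -t 60

# Meanwhile on the host: watch overall CPU while the test runs
Get-Counter '\Processor(_Total)\% Processor Time' -SampleInterval 1 -MaxSamples 60

# And the adapter's own counters before/after, to sanity-check throughput
Get-NetAdapterStatistics -Name "NIC1"
```

Run it once over the private switch and once over the external (pNIC-backed) switch with identical parameters, and the CPU counter tells the offloading story on its own.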
I use the server as a render farm too, and the benefit of network NUMA scaling is very advantageous in a hugely shared server where CPU cores are often maxed out... to minimise the impact across all VMs, especially the router VM, I could have isolated cores per VM, but I prefer dynamic scaling, which just makes my life easier.
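If anyone wants to poke at the same thing, the VMQ queue-to-core spread is visible from PowerShell, and you can constrain which processors a pNIC's queues land on if you do want isolation (adapter name and processor numbers below are just examples, not a recommendation):

```powershell
# Which VM queues exist on the adapter and which processor each is serviced by
Get-NetAdapterVmqQueue -Name "NIC1"

# Optionally pin the adapter's queues to a processor range,
# e.g. start at core 2 and use at most 8 cores
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 8
```

Left at defaults, the queues get spread dynamically, which is exactly the behaviour I'm relying on.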
Now I just hope some clever lady/gent comes along and creates a patch which allows us to pass through consumer GPUs, bypassing NVIDIA's and ATI's greedy SKU locks, like KVM already provides.
Actually, thinking I might as well wipe it all clean and start with KVM as the host and then run Hyper-V as a guest... hmmm... (probably have to wait for a boring Sunday to come along)
(Though I don't think I can pass PCIe through a nested virtualisation stack; IOMMU cannot be nested, you can only do it the once it appears, based on my Hyper-V-to-nested-KVM tinkering. Maybe KVM-to-Hyper-V might be different, I guess I shall have to wait/try and see, but it shouldn't impact running stock Windows guests on KVM, just guests running on Hyper-V, the nested guest itself.)
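For reference, exposing the virtualisation extensions to a Hyper-V guest (so it can run KVM or Hyper-V itself) is a single cmdlet, but there's no equivalent for handing the IOMMU down a level (the VM name is a placeholder, and the VM must be powered off first):

```powershell
# Let the guest run its own hypervisor ("nested-host" is a placeholder name)
Set-VMProcessor -VMName "nested-host" -ExposeVirtualizationExtensions $true

# Discrete Device Assignment (PCIe passthrough) only works from the physical host:
# Dismount-VMHostAssignableDevice / Add-VMAssignableDevice have no nested analogue,
# which matches what I found tinkering above.
```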