Hyper-V 2016 + Server 2016 Essentials as VM = lower than expected host CPU usage

Discussion in 'Windows Server, Hyper-V Virtualization' started by Sielbear, Apr 22, 2017.

  1. Sielbear

    Sielbear Member

    Joined:
    Aug 27, 2013
    Messages:
    31
    Likes Received:
    2
    I installed the free Hyper-V Server 2016 about a month ago on a spare Dell C2100 chassis I had lying around. The original configuration had dual L5530 CPUs - nothing crazy. I installed Server 2016 Essentials for a home domain and a Plex server.

    Everything seemed to run ok (I had 16 vCPUs configured), but the dual L5530s were a little underpowered for some of the transcoding / DVR recording I wanted to do.

    So I upgraded the CPUs to ones I had used before in this chassis under VMware - dual X5570s. After installing the CPUs, I verified the physical host sees all 12 cores / 24 logical cores. I then increased the number of vCPUs to 24 and reset the NUMA parameters in the VM settings to match the physical hardware.

    When I max out the Essentials server with a transcode operation or other test (which hits all 24 cores), and the processor is showing 100% utilization in the VM, the host is only registering 60%-70% % Guest Run Time, with % Total Run Time 2%-3% higher. I've tried setting the host power profile to High Performance, but that made no change.
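    For anyone wanting to reproduce the measurement: these are the standard Hyper-V hypervisor performance counters, readable from the host with Get-Counter. A minimal sketch (run in an elevated PowerShell prompt on the Hyper-V host):

    ```powershell
    # Per-virtual-processor counters for all VMs on the host:
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time'

    # Host-wide logical processor utilization, for comparison against
    # what Task Manager inside the guest reports:
    Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'
    ```

    Note that Task Manager on the Hyper-V host itself does not show guest CPU time accurately; these hypervisor counters are the reliable view.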

    I know in the past, I've been able to utilize all of the available resources on a guest, so I'm a little stumped on this one.

    What am I missing?
     
    #1
  2. Sielbear

    Sielbear Member

    Joined:
    Aug 27, 2013
    Messages:
    31
    Likes Received:
    2
    OK - I did a lot more testing on this today. I have an identically configured host system to compare against. I started measuring CPU Wait Time Per Dispatch. On the system that runs great at 100% CPU load when tested, I average about 34,000 nanoseconds CPU Wait Time Per Dispatch. On my problematic server, I'm averaging 1,000,000 nanoseconds CPU Wait Time Per Dispatch.

    How do I troubleshoot this? I did some web searching, but I didn't find much in terms of resolving this.

    For comparison, this was with 1 VM running on each test system. The only difference is that test host #1 has less RAM. The VM at home is using a little over half of the RAM in the system, whereas on the other test system, the VM is using less than half the available RAM. With only 1 VM running, I wouldn't think this would be an issue.
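    The CPU Wait Time Per Dispatch comparison above can be sampled the same way. A sketch that averages the counter across all virtual processors over a minute (run on each host while the load test is going):

    ```powershell
    # Sample the wait-per-dispatch counter (nanoseconds) every 5 seconds,
    # 12 samples, then average across all virtual processor instances:
    Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\CPU Wait Time Per Dispatch' `
        -SampleInterval 5 -MaxSamples 12 |
      ForEach-Object { $_.CounterSamples } |
      Measure-Object -Property CookedValue -Average
    ```

    Comparing the averaged number between the healthy and problematic host makes the contention difference easy to see at a glance.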
     
    #2
  3. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,841
    Likes Received:
    423
    What happens when you drop 2 vCPUs off?

    In ESX this happens...

    The issue is that for the VM to execute on 24 cores, it needs all 24 'slots' free so they can all be scheduled at once; if anything else at all is running, it will wait.

    Basic example: with 8 physical cores, 2 x 4 vCPU VMs can be scheduled together in one clock cycle, but 2 x 6 vCPU VMs will never be able to be scheduled together in the same CPU clock cycle - it's not possible to schedule half cores.

    The ideal is always to keep the vCPU count as small as is sensible for your workload, e.g. 2 or 4 etc. The most efficient being 1.

    I don't know what happens in Hyper-V and maybe it's different, just a thought to try (makes note that I should play with Hyper-V again soon).
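    If you want to try dropping vCPUs in Hyper-V, the count can be changed with Set-VMProcessor while the VM is powered off. A sketch, where the VM name 'Essentials' is a placeholder for your actual VM:

    ```powershell
    # The vCPU count can only be changed while the VM is off.
    Stop-VM -Name 'Essentials'
    Set-VMProcessor -VMName 'Essentials' -Count 12
    Start-VM -Name 'Essentials'
    ```

    With 12 vCPUs on a 24-logical-core host, the scheduler has far more freedom to find free slots each dispatch.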
     
    #3
  4. Sielbear

    Sielbear Member

    Joined:
    Aug 27, 2013
    Messages:
    31
    Likes Received:
    2
    Thanks for the response, Evan. I was reading similar concerns. This seems to make sense.

    After doing more testing, I made some changes and rebooted the system. I'm not 100% certain which change did it; however, I reduced the amount of RAM assigned to the VM. I essentially followed the guidance in the Hyper-V settings for the maximum NUMA allocation available with the current configuration. I noticed that the well-performing system was under this threshold while the poorly-performing system was over it.

    After reducing the memory from 16 GB to 10 GB, my CPU Wait Time Per Dispatch has dropped to between 40,000 nanoseconds and 100,000 nanoseconds, depending on load. I also see that the % Guest Run Time is now hitting 97% - 99% whereas before, the maximum I could hit was approximately 75%.
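    For anyone hitting the same thing: you can check the per-NUMA-node memory on the host with Get-VMHostNumaNode and then size the VM's memory to fit within one node. A sketch, with 'Essentials' again a placeholder VM name:

    ```powershell
    # Show how much memory each physical NUMA node has, so you can see
    # the threshold above which a VM's memory spans nodes:
    Get-VMHostNumaNode | Format-Table NodeId, MemoryTotal, MemoryAvailable

    # Reduce the VM's static memory (VM must be off):
    Set-VMMemory -VMName 'Essentials' -StartupBytes 10GB
    ```

    Keeping the VM's memory within a single node avoids remote-node memory access, which lines up with the wait-time drop described above.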

    I'm going to test reducing the vCPUs; however, that may not be as much of an issue at this point. The other two VMs rarely do anything at all - one is an OpenVPN server and the other is a Windows 10 Home install I use for testing software. Rarely used.

    If I get anything more concrete I'll post back. Thanks again for chiming in.
     
    #4