Server 2019: How to get best ST performance inside guest VM


blublub

Member
Dec 17, 2017
Hi

We have a Hyper-V host running Server 2019 and guest VMs running Server 2019 as well.
In our main server VM, each job that is processed is single-threaded, and I am looking into improving things a little bit if possible.

Since it is Server 2019 it is using the core scheduler, and currently the VM in question has 24 vCPUs; inside the VM this shows up as 12c/24t, which makes sense.
In the machine settings there is an option called "hardware threads per core" which is set to 0 (default), and above it I can see that this equals 2.

So the question here is: would I gain anything by setting that to 1?
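For reference, that setting can be inspected and changed from PowerShell on the host. A sketch, assuming the VM is named "MainServer" (substitute your own VM name); the VM must be powered off to change it:

```powershell
# Check the current setting (0 = inherit from the host, i.e. 2 on an HT host)
Get-VMProcessor -VMName "MainServer" | Select-Object Count, HwThreadCountPerCore

# Expose one hardware thread per core to the guest (VM must be off)
Set-VMProcessor -VMName "MainServer" -HwThreadCountPerCore 1

# Revert to the default
Set-VMProcessor -VMName "MainServer" -HwThreadCountPerCore 0
```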

thx for any feedback
 

Marjan

New Member
Nov 6, 2016
Hi,

I never had any need to change this but here is what it looks like to me.
The default of 0 just means "use whatever the physical CPU on the host has", which is 2 threads per core. Setting it to 1 would, I guess, mean exposing only one thread per core.

Back to your question: it depends on what software is running in your VM. Some applications actually run a little faster with one thread per core. So, if I were you, I would try setting it to 1 and see whether the VM, and whatever is installed on it, runs a little more smoothly. Some testing is needed.

Generally, I would leave it on default setting unless I know for certain that setting of 1 will help.
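Whichever value you try, measure before and after rather than eyeballing it. A minimal sketch of a single-threaded, CPU-bound timing run inside the guest; the loop is just a placeholder for whatever your real jobs do:

```powershell
# Time a CPU-bound, single-threaded placeholder workload.
# Replace the loop body with a representative real job before drawing conclusions.
$elapsed = Measure-Command {
    $total = 0
    for ($i = 0; $i -lt 2000000; $i++) { $total += $i * $i }
}
"elapsed: $($elapsed.TotalSeconds) s"
```

Run it a few times at each setting and compare the averages; a single run can be noisy on a shared host.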

I hope someone has a more precise/better answer than mine.

Cheers
 

b-rex

Member
Aug 14, 2020
I would leave the thread setting alone. In a virtual machine it does little to nothing, because even though the pCPU is exposed via the vCPU, the hypervisor is actually controlling the scheduling of the processor. For example, if your VM is demanding 100% of a vCPU, the hypervisor takes that demand and schedules it on a pCPU (as one thread). If you're not CPU-bound, the mapping is usually one-to-one, but if you have oversubscribed your CPUs, as most do, then both threads in your VM are still competing against other tasks scheduled on the pCPU. The threading inside the VM doesn't matter (as much) because it is abstracted by the hypervisor.

Also, there are very few cases where reducing the number of threads from two to one will make a difference. In fact, in most virtualized cases it will hurt performance and restrict your VM's vCPUs' ability to schedule fully against the pCPU. One must consider what the server is doing: it may make sense to restrict the number of threads that can run on a core at one time if, for example, you have multiple CPU-bound applications running simultaneously on highly heterogeneous tasks.

Even though a core presents two threads, it can really only process one of them at a time. Intel uses SMT (HTT: hyperthreading), while most older CPUs end up using interleaving (although I believe this has changed for AMD, for example). Technically speaking, SMT should allow both threads to be processed simultaneously, yet the CPU is still bound by the number of reorder buffers, caches, ALUs, etc. Both threads share those parts of the processor, meaning that even with SMT you can hit a bottleneck at the CPU where each thread ends up being handled in a more interleaved fashion. NUMA and memory overhead also play a role in how these settings should be configured.

Suffice it to say, it's complicated. Leaving it at the default allows the operating systems of both the guest and the hypervisor to optimize compute. In general, with an Intel HT CPU, it makes sense to just leave it at two to take advantage of SMT when possible. On other processors, ones that are more time-divided, it might make sense to reduce it to one, but that is largely a thing of the past: most processors in use today use simultaneous multithreading.
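If you want to see what the guest was actually given, you can compare physical cores against logical processors from inside the VM with a standard CIM query (nothing specific to this thread, just the usual WMI processor class):

```powershell
# Inside the guest: with "hardware threads per core" at 2 (or 0 on an HT host),
# NumberOfLogicalProcessors will be double NumberOfCores; with 1 they should match.
Get-CimInstance Win32_Processor |
    Select-Object NumberOfCores, NumberOfLogicalProcessors
```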
 

ecosse

Active Member
Jul 2, 2013
Might be completely off topic, but Windows Server 2016 had poor storage performance unless DisableDeleteNotify was enabled. Interested to hear any disk performance results on 2019 if you did have time to test.
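For anyone who wants to check this on their own box: the flag can be queried and changed with fsutil from an elevated prompt (a value of 1 means delete/TRIM notifications are disabled, 0 means enabled):

```powershell
fsutil behavior query DisableDeleteNotify
fsutil behavior set DisableDeleteNotify 1
```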