CPU v. RAM Utilization


Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,802
113
I know this is going to sound like a funny topic, but does anyone have a virtualization server that looks like this in terms of CPU v. RAM utilization?
[Attached image: upload_2017-10-16_10-12-29.png (CPU vs RAM utilization graph)]

I know, it needs more RAM. That aside, does anyone else have a server with 30x RAM utilization v. CPU utilization?
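If you want to put a number on that ratio for your own host, here is a minimal sketch that samples CPU busy time and memory usage from procfs (Linux-only; the /proc/stat field handling is a simplification, and the ratio floor is just to avoid dividing by zero):

```python
# Rough sketch: RAM-to-CPU utilization ratio on a Linux host, read from /proc.
import time

def cpu_busy_fraction(interval=1.0):
    """Sample /proc/stat twice and return the busy fraction over the interval."""
    def snap():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]          # idle + iowait jiffies
        return idle, sum(fields)
    idle1, total1 = snap()
    time.sleep(interval)
    idle2, total2 = snap()
    return 1.0 - (idle2 - idle1) / (total2 - total1)

def ram_used_fraction():
    """'Used' fraction based on MemAvailable from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, val = line.split(":")
            info[key] = int(val.split()[0])   # values are in kB
    return 1.0 - info["MemAvailable"] / info["MemTotal"]

if __name__ == "__main__":
    cpu, ram = cpu_busy_fraction(), ram_used_fraction()
    print(f"CPU {cpu:.0%}  RAM {ram:.0%}  ratio {ram / max(cpu, 0.01):.1f}x")
```

A host like the one in the screenshot would print something on the order of a 30x ratio.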
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Not atypical at all, especially if you are doing straight virtualization of existing systems. There's a lot of memory overhead in carrying the guest OS, and lots of conservative planning in allocating RAM to the VMs. KVM/virtio balloon drivers don't really work all that well (especially with Windows guests, but it's true with Linux guests too).
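For reference, ballooning has to be enabled per guest; in libvirt that is the memballoon device in the domain XML. A sketch (the stats period is optional):

```xml
<!-- Sketch: enabling the virtio memory balloon for a guest in libvirt
     domain XML. The stats period makes the guest report memory usage
     to the host every 10 seconds. -->
<memballoon model='virtio'>
  <stats period='10'/>
</memballoon>
```

With the device present and the guest driver loaded, the host can shrink a running guest with something like `virsh setmem <domain> 4096M --live`; as noted above, how willingly the guest actually gives memory back varies.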
 

nkw

Active Member
Aug 28, 2017
136
48
28
I think this is a pretty common scenario. Isn't this the type of stuff that Intel and Micron are really trying to hit with Optane and QuantX? Big-memory-footprint VMs with low compute/IO utilization? I don't quite get what they are doing with the goofy consumer M.2 Optane stuff they are peddling, and the only other form of Optane I've seen so far is an NVMe SSD. I thought the real use for this stuff was supposed to be a layer between DRAM and NAND SSDs that can be addressed as DRAM.
 
  • Like
Reactions: Stux

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Concur; par for the course in the virtualization space, where memory and CPU consumption/allocation live in different worlds. We typically see approximately 20% CPU usage by the time our memory is 80% consumed, so a factor of 4 for us. Highly dependent, of course, on the CPU-to-memory balance/config and the workloads.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Same here on most hosts. I guess the ZFS ARC (when PVE runs on ZFS) and the usual caching the Linux kernel does, also within the VMs, are part of this picture.
 
  • Like
Reactions: Patrick

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Normal for general virtualization, unless you have some specific CPU heavy stuff (and we usually configure those hosts differently).

Quick rule for me: 16GB or more per core is a good ratio.
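As a back-of-the-envelope sketch of that rule of thumb (the 16 GB-per-core figure is from the post above; the host sizes below are made-up examples):

```python
# Toy host-sizing helper for the ~16 GB-per-core rule of thumb.
GB_PER_CORE = 16

def ram_for_host(cores: int, gb_per_core: int = GB_PER_CORE) -> int:
    """RAM (in GB) to pair with a given core count under the rule of thumb."""
    return cores * gb_per_core

print(ram_for_host(8))    # single-socket 8-core box  -> 128
print(ram_for_host(40))   # dual 20-core host         -> 640
```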
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
so many cycles left for mining :D

I'm currently upgrading to fewer cores but higher-frequency CPUs in 1P systems.
I need more nodes (maybe Ceph at some point), but not necessarily more CPU, NUMA, or significantly higher power consumption.
Hopefully the things that do hit the CPU will also be a bit snappier thanks to the faster clocks.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Similar sort of load to every company I've worked for that uses virtualisation; lucky if you've used half of your available CPU before your average host runs out of RAM for any more guests. You'll typically have fifty or so VMs per cluster that'll have a relatively high CPU requirement for some or all of the day, but they'll be dwarfed by the hundreds of other VMs with very low CPU and IO requirements.

Such setups are becoming more and more common I think, because core counts at a given price point seem to be going up, yet the cost of RAM seems to stay high.

<insert standard rant about the people who set up the chargeback model for lifecycle costs decided on CPU as the single metric to be used, when memory and IO and even storage are considerably more expensive by the back-of-a-fag-packet calculations of us in the trenches>
 
  • Like
Reactions: Stux

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
We do chargeback by memory and storage and just a base cost for VM that covers CPU.
Being a cost center, at the end of the day users pay our costs and no more, no matter what the charging model is.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Evan said:
Normal for general virtualization, unless you have some specific CPU heavy stuff (and we usually configure those hosts differently).

Quick rule for me is 16gb or more per core is a good ratio.
How convenient: my EXACT home lab/cluster config ratio per ESXi host (8 cores, 128GB memory, UP systems).

LOL... ssshhh, don't tell anyone the super 'secret sauce', but that does seem to be a good balance for general-purpose clusters, excluding anything insane like the SAP HANA clusters we virtualize or VDI for engineering.
 

ruffy91

Member
Oct 6, 2012
71
11
8
Switzerland
Just add a few VDI Desktops and workers on that host and CPU will be the limiting factor.
Browsers with JavaScript LOVE churning through these CPU cycles.


We have a client with about 1700 persistent VDI Desktops on 160 Cores. It all went well for a few months (<25% CPU load on a work day) until we convinced him to do Windows Updates on his Desktops. CPUs and Storage were 100% loaded for a few days straight.
He will hopefully change to non-persistent Desktops soon.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Today I ran pveperf on a production box, and it took nearly an hour :eek:
It made me nervous; I figured out that pveperf flushes the VM (virtual memory) system completely, which was not what I expected or intended to do.
(IMHO it should really warn and ask before proceeding.)

In the end, the flush eventually finished cleanly and released >60GB on that 192GB box.
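If you want to see how much of "used" memory is actually reclaimable cache before running something cache-destructive, a quick check with standard Linux tools (the drop_caches line mimics what the poster describes pveperf doing, and needs root):

```shell
# How much of "used" RAM is really just reclaimable page cache?
free -h

# Flush caches manually (roughly what pveperf appears to trigger, per the
# post above). Harmless to data, but performance dips until caches rewarm.
# Needs root.
sync
echo 3 > /proc/sys/vm/drop_caches
```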
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I would always keep VDI away from general virtualization clusters; it has such special needs in terms of CPU, IO, maybe video, and probably licensing that it makes sense not to mix it.
Likewise something like a bunch of SharePoint servers or SAP systems.

We don't generally do it, but you could also separate Linux and Windows into separate clusters just to save license costs.
 

ullbeking

Active Member
Jul 28, 2017
506
70
28
45
London
@Patrick, did you ever get a definite answer on this?

Normally people will say to add more memory when suffering from performance problems. While that is usually good advice, I have seen cases where adding more memory without correspondingly more CPU made the problem worse. I don't remember the details of the explanation, unfortunately.

Does anybody know of a serious explanation of the trade-off between adding more CPU and adding more memory, and whether it differs by hypervisor?
 

realtomatoes

Active Member
Oct 3, 2016
251
32
28
44
ullbeking said:
@Patrick, did you ever get a definite answer on this?

Normally people will say to add more memory when suffering from performance problems. While that is usually good advice, I have seen cases where adding more memory without correspondingly more CPU made the problem worse. I don't remember the details of the explanation, unfortunately.

Does anybody know of a serious explanation of the trade-off between adding more CPU and adding more memory, and whether it differs by hypervisor?
This is usually the case on our database VMs. When we add more RAM they can load more data to process, and if the vCPU count isn't sized appropriately alongside the RAM increase, we get that imbalance. That's why I always push them to test resource increases with their code on the UAT/load-test server before making any change.
 
  • Like
Reactions: T_Minus