Move from dual cpu server to single cpu to reduce power consumption

Neil Jefferies

New Member
Jun 28, 2019
12
0
1
UK
neilstechdocs.blogspot.com
You can also generally set power limits on the CPU in the BIOS, so going for a higher-powered CPU and limiting it is an option. Then you can fine-tune your performance/power balance. For example, I had a pair of E5-2643s which I limited to 95W with a 115W boost (as opposed to 130W/156W). In practice it didn't make much difference, even to benchmarks. I think, as processes improved, later processors had a real TDP some way below their rated figures.
 

Markess

Active Member
May 19, 2018
513
224
43
If I understand correctly, your primary concern is idle power consumption? If so, your gains will be limited with the Intel E5 v2 platform. There's a lot of good advice out there on how to limit power draw under load, but for idle consumption you have fewer options and it looks like most of them are already covered.

A couple things to keep in mind when considering a solution that reuses the DDR3 RAM (so keeping to E5 v2):

  • As others have noted: in general, E5 chips from the same generation (Ivy Bridge) will have similar idle power draw at similar base frequencies. The differences are more pronounced under load, but that doesn't seem to be the issue here. CPUs with a lower base frequency will draw less power at idle, but you're going to find the actual savings aren't a whole lot. If you buy new CPU(s) with a lower base frequency, you'll need to run them for a LONG time before they pay for themselves.
  • If you opt for a single-CPU Socket 2011 motherboard (X9SRxx), it's going to have the same Intel C6xx family chipset as your X9DRi-LNF+ board. You'll see some power savings because there's "less mass" in the board to push power through, and if you get one with fewer features (2 onboard NICs instead of 4, etc.) you'll see savings there too. But again, you'll find that all else being equal (same CPU, same add-in cards, etc.) you aren't going to see much savings over simply pulling one CPU from your current machine. At a savings of only a few watts, it will take a long time for the investment in a single-CPU motherboard to pay off.
  • Simply pulling one CPU will give you a noticeable reduction for "free", so you may want to try that just to see how it goes. You note that you are normally peaking at 20% CPU load, so it seems you have some headroom. Depending on your configuration, you won't be able to use all your RAM, but it would be the same case with a single CPU board.
  • Switching to a single CPU with higher core count may be an option too. But in this case, you're already at 10 Cores, so you might find the ROI on a 12 core isn't much.
As an example, my system with an X9DRT-F, dual E5-2628L v2, 128GB RAM, and 4x 3.5" SATA drives, powered by a Gold PSU in a "desktop" style case, drew ~77 watts at idle with ESXi and no VMs active. Changing to a single E5-2640 v2 (but keeping the 128GB of RAM) brought it down to ~55 watts, some of which was probably from one less CPU cooler and lower overall fan RPM due to less overall heat.
 

Macx979

New Member
Dec 26, 2019
11
2
3
You can also generally set power limits on the CPU in the BIOS, so going for a higher-powered CPU and limiting it is an option. Then you can fine-tune your performance/power balance. For example, I had a pair of E5-2643s which I limited to 95W with a 115W boost (as opposed to 130W/156W). In practice it didn't make much difference, even to benchmarks. I think, as processes improved, later processors had a real TDP some way below their rated figures.
That's a good idea. But I guess it doesn't make much of a difference when idling.


If I understand correctly, your primary concern is idle power consumption? If so, your gains will be limited with the Intel E5 v2 platform. There's a lot of good advice out there on how to limit power draw under load, but for idle consumption you have fewer options and it looks like most of them are already covered.

A couple things to keep in mind when considering a solution that reuses the DDR3 RAM (so keeping to E5 v2):

  • As others have noted: in general, E5 chips from the same generation (Ivy Bridge) will have similar idle power draw at similar base frequencies. The differences are more pronounced under load, but that doesn't seem to be the issue here. CPUs with a lower base frequency will draw less power at idle, but you're going to find the actual savings aren't a whole lot. If you buy new CPU(s) with a lower base frequency, you'll need to run them for a LONG time before they pay for themselves.
  • If you opt for a single-CPU Socket 2011 motherboard (X9SRxx), it's going to have the same Intel C6xx family chipset as your X9DRi-LNF+ board. You'll see some power savings because there's "less mass" in the board to push power through, and if you get one with fewer features (2 onboard NICs instead of 4, etc.) you'll see savings there too. But again, you'll find that all else being equal (same CPU, same add-in cards, etc.) you aren't going to see much savings over simply pulling one CPU from your current machine. At a savings of only a few watts, it will take a long time for the investment in a single-CPU motherboard to pay off.
  • Simply pulling one CPU will give you a noticeable reduction for "free", so you may want to try that just to see how it goes. You note that you are normally peaking at 20% CPU load, so it seems you have some headroom. Depending on your configuration, you won't be able to use all your RAM, but it would be the same case with a single CPU board.
  • Switching to a single CPU with higher core count may be an option too. But in this case, you're already at 10 Cores, so you might find the ROI on a 12 core isn't much.
As an example, my system with an X9DRT-F, dual E5-2628L v2, 128GB RAM, and 4x 3.5" SATA drives, powered by a Gold PSU in a "desktop" style case, drew ~77 watts at idle with ESXi and no VMs active. Changing to a single E5-2640 v2 (but keeping the 128GB of RAM) brought it down to ~55 watts, some of which was probably from one less CPU cooler and lower overall fan RPM due to less overall heat.
Going from 77 down to 55 watts is almost a 30% reduction. So that's actually quite some saving.


I just did the test and removed one CPU, measuring the power of my server in each state. The results are quite impressive.

                   ESXi only, 7 VMs   ESXi only, 2 VMs   Total rack, 7 VMs
2 CPUs             134W               114W               325W
1 CPU              84W                70W                278W

Two VMs (the database and Grafana VMs) was the minimum needed in order to measure power.

The delta varies from 44W to 50W, roughly 40% of the server's draw. I wasn't expecting such a big difference.
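For reference, the percentages follow directly from the measurements; a quick sketch with the numbers from the table:

```python
# Watt savings per measured state: (2-CPU draw, 1-CPU draw), from the table.
measurements = {
    "ESXi only, 7 VMs": (134, 84),
    "ESXi only, 2 VMs": (114, 70),
    "Total rack, 7 VMs": (325, 278),
}

def saving(dual, single):
    """Return (absolute delta in watts, relative saving in percent)."""
    delta = dual - single
    return delta, 100 * delta / dual

for state, (dual, single) in measurements.items():
    delta, pct = saving(dual, single)
    print(f"{state}: -{delta}W ({pct:.0f}% of the 2-CPU draw)")
```

The server-level deltas come out at 50W (~37%) and 44W (~39%); the rack-level 47W is a smaller relative share only because the rack total includes other gear.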

I could now leave it that way, but I guess I'm going to watch for a cheap Xeon v2 and sell my dual-CPU setup. It should be close to cost-neutral, and I can reuse the entire 128GB.
 

hmw

Active Member
Apr 29, 2019
235
77
28
Going from 77 down to 55 watts is almost a 30% reduction. So that's actually quite some saving.
In Northern California we pay 30 cents per kWh - those watts add up! You can also use ESXi's power management to change from Balanced to Low Power for VMs - assuming you have set up everything in the BIOS correctly, ESXi will enter deeper C-states and P-states more often, leading to the processor spending more time in idle and in deeper idle states.

Screen Shot 2020-07-09 at 1.09.11 PM.png


The other thing to note is that Skylake & Kaby Lake don't need that much OS support for power savings - one big reason why the Xeon E-2126 has such good idle power, for example, even when running under something like OPNsense/pfSense. If you use consumer CPUs and can live with less memory bandwidth and less memory, you can get really low idle power usage.

Here's idle power per socket from AnandTech's 7F52 article:

Screen Shot 2020-07-09 at 1.22.57 PM.png
 

Macx979

New Member
Dec 26, 2019
11
2
3
In Northern California we pay 30 cents per kWh - those watts add up! You can also use ESXi's power management to change from Balanced to Low Power for VMs - assuming you have set up everything in the BIOS correctly, ESXi will enter deeper C-states and P-states more often, leading to the processor spending more time in idle and in deeper idle states.

ACPI P-states and C-states were already activated. I even went with Low Power instead of Balanced, which gave me another couple of watts.

In Germany power is also ridiculously expensive: 29 euro cents per kWh, which is roughly 33 US cents.
That's actually the reason why I am optimizing my power consumption. And yes, I agree, a 30% reduction adds up to some good savings, and in my case moving to a single CPU is even more efficient. For a powerful ESXi machine, consuming something between 70-80W is a number I can live with.
Maybe a single-socket board draws even less than a dual board with only one CPU... we'll see.

Anyway, reducing the power consumption of this particular server is only one piece of optimizing my power bill. There are also other ideas, like replacing my two rack switches with a single one, deactivating unused switch ports, switching off power plugs during the night (automated, of course), and many more.
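The savings discussed in this thread translate into money straightforwardly; a small sketch using the ~47W rack-level delta measured earlier and the EUR 0.29/kWh rate mentioned above:

```python
# Yearly cost of a continuous load at a given electricity price.
def yearly_cost(watts, price_per_kwh, hours_per_year=24 * 365):
    kwh = watts * hours_per_year / 1000   # watt-hours -> kilowatt-hours
    return kwh * price_per_kwh

# ~47W saved around the clock at 0.29 EUR/kWh:
print(f"{yearly_cost(47, 0.29):.0f} EUR per year")  # -> 119 EUR per year
```

That yearly figure is the number to weigh a new motherboard or switch purchase against.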
 

Fritz

Well-Known Member
Apr 6, 2015
2,139
498
83
66
I just retired my 10G network to save power. I felt I wasn't getting enough benefit for the power consumed. My Quanta LB6M was both power-hungry and noisy as crap. Plus I had dual 10G NICs in all my servers.
 

Evan

Well-Known Member
Jan 6, 2016
3,128
522
113
While it's DDR4, there have been some 8-core Xeon D boards around that are nice and low power.
A D-1540/1541 with 4x 32GB, a couple of SSDs, and just an average PSU can idle at around or just under 30W.

With the exception of the EPYC 3000 SoC, Intel generally idles a good deal lower than AMD. It does really depend on priorities, but for an always-on system it should be easy enough to find something below 40W or so.
 

Macx979

New Member
Dec 26, 2019
11
2
3
I just retired my 10G network to save power. I felt I wasn't getting enough benefit for the power consumed. My Quanta LB6M was both power-hungry and noisy as crap. Plus I had dual 10G NICs in all my servers.
I haven't actually thought about the 10G network in terms of power consumption.
I had a look at the specs of my Intel X540-T2 - the max is 17.4W, but most likely the idle power is much lower:
Power consumption (max): 100 Mbps 6.6W, 1 Gbps 9.5W, 10 Gbps 17.4W. Operating voltage: 3.3V and 12V.

Retiring the 10G network is only the last resort. My NAS outperforms a 1G link, so there is some benefit. :)
 

kapone

Well-Known Member
May 23, 2015
796
388
63
Any chance of installing a few solar panels...? :)

p.s. I'm seriously considering that for the next house we buy (we're in the market right now). Solar prices are starting to get attractive, especially for always-on devices.
 
  • Like
Reactions: Tha_14

Macx979

New Member
Dec 26, 2019
11
2
3
Any chance of installing a few solar panels...? :)

p.s. I'm seriously considering that for the next house we buy (we're in the market right now). Solar prices are starting to get attractive, especially for always-on devices.
I agree.

:) Actually I had a look at this, especially those one-panel systems for balconies, since I'm renting. But even though it's called plug-and-play, we have too many regulations and too much bureaucracy over here, which pushes the return on investment too far into the future. And it could even come with insurance issues in case something happens. So unfortunately that's not an option.
 

Evan

Well-Known Member
Jan 6, 2016
3,128
522
113
10G copper especially is a power hog; I'm very surprised lower-power networking hasn't arrived (L3, I mean).
If you just want a small number of low-power ports, something like the 4-port Mikrotik and some SFP+ cards can keep power low for some basic L2 switching.
 

Rand__

Well-Known Member
Mar 6, 2014
4,589
912
113
p.s. I'm seriously considering that for the next house we buy (we're in the market right now). Solar prices are starting to get attractive, especially for always-on devices.
I love mine - wish I had more space or that they'd be more efficient though - still spending way too much on electricity (of course, running a slightly overpowered homelab does not help at all).
 
  • Like
Reactions: Evan

Macx979

New Member
Dec 26, 2019
11
2
3
10G copper especially is a power hog; I'm very surprised lower-power networking hasn't arrived (L3, I mean).
If you just want a small number of low-power ports, something like the 4-port Mikrotik and some SFP+ cards can keep power low for some basic L2 switching.
Currently I have a Mikrotik CRS326-24G-2S+RM and a CRS317-1G-16S+RM. I bought the latter since I was planning to use more than 4x 10Gb, and you hardly find any switch with more than 4 SFP+ ports besides the full-blown all-SFP models.
Since I don't need more than 4 SFP+ ports anymore, I'm going to replace both switches with a CRS328-24P-4S+RM and reduce power consumption a little further. A nice side effect will be that I reduce the complexity of my network, which sometimes drives me nuts. :)
 

111alan

Member
Mar 11, 2019
46
17
8
Haerbing Institution of Technology
No, a single socket won't save you any power if you want the same performance overall and per core. To drive the same number of cores at the same frequency you always need the same amount of power. In fact, dual socket means there are two VRM sets, so the current through each VRM is smaller, which actually increases power efficiency by up to 5-10%.

Of course there are two (integrated) north bridges, but their power consumption is rather small (unless it's Zen 2's huge IO die). There may also be more DIMMs, but memory performance also increases for 2S 8-channel over 1S 4-channel. You can use fewer DIMMs if DRAM performance isn't very important.

BTW, don't think that AMD Zen 2 can magically reduce power usage for nothing. The low TDP of EPYC 2 also means low frequency. For example, the EPYC 7702 runs at about 2.35GHz while rendering. If you don't need per-core performance, you can bring any CPU's perf/power ratio up by lowering the frequency.
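The frequency/efficiency point in the last paragraph follows from the classic dynamic-power relation P ≈ C·V²·f: within the DVFS range, voltage scales roughly with frequency, so power grows roughly with f³ while throughput grows only with f. A toy model; the coefficients are illustrative assumptions, not measurements of any real CPU:

```python
# Toy DVFS model: dynamic power ~ C * V^2 * f, with V assumed proportional
# to f inside the DVFS range. All constants are illustrative, not measured.
def dynamic_power(freq_ghz, volts_per_ghz=0.35, c=30.0):
    volts = volts_per_ghz * freq_ghz      # crude linear V-f relationship
    return c * volts ** 2 * freq_ghz      # grows with freq_ghz ** 3

def perf_per_watt(freq_ghz):
    # Throughput scales ~linearly with frequency, so perf/W ~ 1 / f^2 here.
    return freq_ghz / dynamic_power(freq_ghz)

for f_ghz in (3.5, 2.35, 1.75):
    print(f"{f_ghz} GHz: {dynamic_power(f_ghz):6.1f} W-units, "
          f"perf/W {perf_per_watt(f_ghz):.4f}")
```

In this model, halving the frequency cuts dynamic power by ~8x and roughly quadruples perf per watt, which is the effect described above.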
 

Evan

Well-Known Member
Jan 6, 2016
3,128
522
113
@111alan the topic here, and for most home labs, is 'idle' usage, or mostly idle, so peak performance is not usually a concern.
Essentially it's normally the case that the most efficient system is the one that finishes the work fastest and races back to idle.

I am clearly in the minority here, because I can't see why anybody would ever need a dual-socket home lab except for one reason: you want to test enterprise stuff, and used dual-socket gear is prevalent and cheap on the used market - and by extension also at home, where power costs are low and equipment is cheap.
With a few exceptions, I can't even see why anybody at home needs more than, say, 8 cores for actual home-server usage. I know there are a few doing masses of transcodes and some people doing heaps of video stuff (but I would farm that out to newer CPUs for sure), but for the most part I don't think the bulk of people need more. (Besides, some of us have 10,000s of cores to play with at work.)
 

Fritz

Well-Known Member
Apr 6, 2015
2,139
498
83
66
@111alan the topic here, and for most home labs, is 'idle' usage, or mostly idle, so peak performance is not usually a concern.
Essentially it's normally the case that the most efficient system is the one that finishes the work fastest and races back to idle.

I am clearly in the minority here, because I can't see why anybody would ever need a dual-socket home lab except for one reason: you want to test enterprise stuff, and used dual-socket gear is prevalent and cheap on the used market - and by extension also at home, where power costs are low and equipment is cheap.
With a few exceptions, I can't even see why anybody at home needs more than, say, 8 cores for actual home-server usage. I know there are a few doing masses of transcodes and some people doing heaps of video stuff (but I would farm that out to newer CPUs for sure), but for the most part I don't think the bulk of people need more. (Besides, some of us have 10,000s of cores to play with at work.)
It's not about need.
 

Evan

Well-Known Member
Jan 6, 2016
3,128
522
113
It's not about need.
I know. I don't 'need' all my storage as flash... there's not really any rational reason at all that old archives even sit on SSD. But yeah, it's cool to have.
Don't worry, I get that. I just wanted to point out that there were options, and also to acknowledge that if you have a workload, then fast is good.
 
  • Like
Reactions: Tha_14 and Fritz

Macx979

New Member
Dec 26, 2019
11
2
3
I know. I don't 'need' all my storage as flash... there's not really any rational reason at all that old archives even sit on SSD. But yeah, it's cool to have.
Don't worry, I get that. I just wanted to point out that there were options, and also to acknowledge that if you have a workload, then fast is good.
You could also argue that nobody really needs a home lab, or a car with more than 50 horsepower. ;) But having either of them is nothing particularly rational; it's rather about having fun using them. :)
I think it's all about the purpose. If you know your demands exactly, you can size the hardware to fit perfectly, with no headroom but minimum power consumption and a perfect cost/power ratio.
Another reason, as you mentioned as well, is that (~7-8 year) old enterprise hardware is damn cheap, and the only drawback is power consumption.
 
  • Like
Reactions: Tha_14 and Fritz

Fritz

Well-Known Member
Apr 6, 2015
2,139
498
83
66
My rack lies dormant most of the time, so power consumption isn't an issue, but noise is. That's why I got rid of my 10G.
 
  • Like
Reactions: Tha_14