Move from dual-CPU server to single CPU to reduce power consumption

Macx979

New Member
Dec 26, 2019
11
2
3
Hi,

as the title says, I want to reduce the overall power consumption of my server rack and its components.

Among other devices, I run a Supermicro X9DRi-LNF+ mainboard with 2× E5-2660 v2 and 128GB DDR3 ECC.
Basically that's too much power for my homelab, but I got this set some time ago on eBay for a decent price. Back then, power was not my concern. :)

Currently the system runs ESXi 7 with 7 VMs active, and it idles at around 5% CPU load with peaks to 20%. I also have some VMs with desktop OSes which I switch on from time to time. That means I need some CPU headroom, but the system idles most of the time. When idle, the system draws 110W from the wall.

Since I am trying to reduce my power bill, I thought about replacing it with a single-CPU mainboard. Since DDR4 ECC RAM is extremely expensive, I'd rather keep the DDR3 RAM I have. However, that limits my options.

I narrowed it down to a couple of CPUs that sell cheaply on eBay and could potentially reduce the power draw:

E5-2695 v2 - 12C - 115W TDP -> powerful CPU, but still a high TDP
E5-2648L v2 - 10C - 70W TDP -> looks quite promising to me - not sure though if it has enough power
E5-2650L v2 - 10C - 70W TDP
E5-2450L v2 - 10C - 60W TDP
E5-2448L v2 - 10C - 70W TDP

In terms of mainboard, I'd probably go for a Supermicro X9 board, depending on the CPU socket.


Do you guys think this approach makes sense in terms of reducing my power consumption? It should cut consumption by at least 40W, which equals roughly 100€/year on the power bill.
Does moving from a dual-CPU to a single-CPU setup actually make a big difference, especially when idling?
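For what it's worth, the arithmetic behind that estimate is simple; the electricity price used here is an assumption (~0.29€/kWh), roughly what backs out of the 40W ≈ 100€/year figure:

```python
# Annual cost of a constant power draw; the price per kWh is an
# assumed value (~0.29 EUR/kWh), not a quoted rate.
def annual_cost_eur(watts, price_per_kwh=0.29):
    kwh_per_year = watts * 24 * 365 / 1000  # 1 W continuous = 8.76 kWh/year
    return kwh_per_year * price_per_kwh

print(round(annual_cost_eur(40), 2))  # saving 40 W continuously -> 101.62 EUR/year
```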

Any other thoughts on my idea?

thx & Best
Macx
 

Rand__

Well-Known Member
Mar 6, 2014
4,593
912
113
Two main things to clarify:
1. How much memory do you need (32, 64, 128GB or more)?
2. What's the minimum clock speed you need (<2GHz, 2-2.4GHz, >2.4GHz)? (The latter matters for single-threaded/interactive processes.)

If those 7 VMs idle most of the time, you might get away with 4 cores / 8 threads and a Xeon E3 (v3/v5 depending on memory requirements). Check the current peak MHz consumed to get a feeling for the total performance you need.
Otherwise you could get a cheap single-socket E5 CPU (depending on your clock speed vs. core count requirements).
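To make the sizing idea concrete, here is a rough sketch; the per-VM peak figures and the candidate CPU are invented, purely illustrative:

```python
# Rough CPU sizing from observed peak MHz per VM (hypothetical numbers).
# Compare the summed peak demand against a candidate CPU's nominal capacity.
vm_peak_mhz = {"fw": 800, "nas": 1500, "dns": 200, "win10": 2400}  # example values

total_demand = sum(vm_peak_mhz.values())

# Candidate: 4C/8T Xeon E3 at 3.5 GHz base -> ~14000 MHz nominal capacity
cores, base_mhz = 4, 3500
capacity = cores * base_mhz

print(f"peak demand {total_demand} MHz vs capacity {capacity} MHz")
assert total_demand < capacity  # headroom check
```

Since the peaks of different VMs rarely coincide, summing them is a conservative (worst-case) estimate.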

Forget about the L CPUs - they don't use less power, they just limit the TDP artificially.
 

BeTeP

Well-Known Member
Mar 23, 2019
520
324
63
Forget about the L CPUs - they don't use less power, they just limit the TDP artificially.
All processors of the same family and generation use the same manufacturing process, so they use the same power at the same per-core frequency. You are right that the -L processors have their max frequency artificially limited to fit the TDP, but you are forgetting that they also have a lower base frequency. So they do use slightly less power while idling.
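For intuition: dynamic CPU power scales roughly with C·V²·f, so a lower base/idle frequency (usually paired with a lower voltage) means less power. A toy comparison with made-up voltage and frequency values:

```python
# Toy model of relative dynamic CPU power: P ~ C * V^2 * f,
# with the switched capacitance C held constant.
# The voltage/frequency pairs below are invented illustrations.
def rel_power(volts, freq_ghz):
    return volts ** 2 * freq_ghz

standard = rel_power(0.90, 1.8)  # hypothetical idle state, standard SKU
low_sku  = rel_power(0.85, 1.2)  # hypothetical -L SKU, lower base clock

print(f"{low_sku / standard:.2f}x")  # -> 0.59x
```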
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,269
430
83
Even if you want to stick with DDR3, moving to a single-CPU board that can run the Xeon v3 or v4's would give you substantial power savings; the v2 platforms all tended to be quite power hungry but this was much improved with the Haswell Xeons.

Buying a Supermicro 1P X10 board that can take a Haswell Xeon should allow you to re-use your DDR3 memory, and you won't be stuck with a motherboard where some of the PCIe slots don't work.

However, this is going to be dependent on how much CPU/memory resource you really need. If you're peaking at 20% on a 20-core setup you might be able to get away with a very cheap and very power-efficient E3 Xeon, but if you need more than four cores the E5-16xx v3's are very popular. With the CPU load so relatively low, are you actually using all of that 128GB of RAM? What PCIe IO are you using currently?

As others have said, if your system is mostly idle already, switching to another CPU of the same generation won't do much for your power consumption; removing one of the CPUs will be an improvement, but not as much as making something with low power usage in mind.
 

BlueFox

Well-Known Member
Oct 26, 2015
1,075
517
113
Buying a Supermicro 1P X10 board that can take a Haswell Xeon should allow you to re-use your DDR3 memory, and you won't be stuck with a motherboard where some of the PCIe slots don't work.
Haswell E5 means DDR4 (with some rare exceptions, none from Supermicro). If you mean E3 boards, ECC UDIMMs are just as expensive as DDR4 RDIMMs.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,269
430
83
Gah, sorry, yes - I was getting my generations confused >_<

Yes, the Haswell E3s are DDR3 UDIMM-only, and the Haswell-E/E5s are almost all DDR4, so neither would work with DDR3 RDIMMs.

Probably best at this point to evaluate how little CPU and RAM you can get away with and build accordingly...
 

hmw

Active Member
Apr 29, 2019
236
77
28
...

Do you guys think this approach makes sense in terms of reducing my power consumption? It should cut consumption by at least 40W, which equals roughly 100€/year on the power bill.

...

Does moving from a dual-CPU to a single-CPU setup actually make a big difference, especially when idling?

...
Any other thoughts on my idea?

thx & Best
Macx
You said the system consumes 110W when idle. There are loads of factors involved: PSU quality (Gold? Titanium?), number of fans and their idle speed, BIOS C-state and power settings.

As a general rule, if you go for more modern components you will see some power savings, because newer CPUs have more power-saving states and can shift in and out of them faster. However, this has to be enabled in the BIOS. If you have Titanium-rated server PSUs, you might also see lower idle power consumption.

And server fans are rated for 24/7 operation, so they run at higher speeds all the time, meaning more power.

Another culprit is add-in cards: 10GbE NICs and HBAs consume 10-15W each.

Hence, if you want to reduce idle power to 60W, you should look carefully at all the components, not just the CPU.
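As an illustration, a component-level idle budget for a build like this might look as follows; every wattage here is a rough guess, not a measurement:

```python
# Illustrative idle power budget per component.
# All wattages are guessed example values, not measurements.
idle_budget_w = {
    "CPU + board (single socket)": 30,
    "128GB DDR3 (16 DIMMs)": 16,
    "dual 10GbE NIC": 12,
    "HBA": 10,
    "6 fans": 8,
    "PSU overhead (Gold, ~90% eff.)": 9,
}

total = sum(idle_budget_w.values())
for part, w in sorted(idle_budget_w.items(), key=lambda kv: -kv[1]):
    print(f"{part:32s} {w:3d} W")
print(f"{'total':32s} {total:3d} W")
```

Laying the budget out this way makes it obvious that the CPU is only one line item among several of comparable size.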
 

kapone

Well-Known Member
May 23, 2015
796
388
63
You must have other things connected to the system.

I have multiple dual-Xeon systems of the same generation (2× E5-2650 v2 - same 95W TDP) with 128GB RAM, and even with a 10G NIC connected they idle at ~77W with ESXi loaded. This is with a Platinum-rated PSU and PWM fans.
 

Macx979

New Member
Dec 26, 2019
11
2
3
I have multiple dual-Xeon systems of the same generation (2× E5-2650 v2 - same 95W TDP) with 128GB RAM, and even with a 10G NIC connected they idle at ~77W with ESXi loaded. This is with a Platinum-rated PSU and PWM fans.
Additionally, there is a dual 10GbE NIC in there, as well as 6 Noctua fans of different sizes. I have already enabled every power-saving feature available.

I think you guys will agree that there is a difference between an enterprise server and a home lab server. In an enterprise environment you try to max out these servers to get the best efficiency in terms of revenue vs. cost.

In a home lab environment, the server idles most of the time, but we install new VMs to test things out, start and kill instances, etc. What I'm trying to say is that I cannot determine my maximum demands and size the server accordingly, simply because I don't know what I'm going to try out tomorrow. And of course I don't want to limit my options because of this.
That was why I specifically asked about idle power consumption.

So my feeling is I could live with a single 12-core/24-thread CPU if it reduces my idle power consumption noticeably. But based on your valuable comments, I understand that the Xeon v2 generation is probably not the best way to save money, and I should go for a v3 or v4 and maybe start with only 64GB of RAM and top it up if needed.

Do you guys have any recommendations for a good v3/v4 Xeon with at least 12 cores and a good performance-per-watt ratio?

Macx
 

Rand__

Well-Known Member
Mar 6, 2014
4,593
912
113
From what I've read, I'd almost recommend splitting up your builds: one low-power box for the always-on VMs and another one (same or more power) for when you want to play around.
 

hmw

Active Member
Apr 29, 2019
236
77
28
So my feeling is I could live with a single 12-core/24-thread CPU if it reduces my idle power consumption noticeably.

Macx
Why not go for an EPYC 7302P or similar and use the cTDP setting to push the TDP limit lower? The problem is that a Titanium-rated PSU, EPYC, NVMe/SSDs, DDR4, etc. all cost money, and while you will save energy, it will take some time before you see a return on investment, from a dollar perspective of course. Keep in mind that PCIe 4.0 also has higher energy usage than PCIe 3.0.
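The ROI trade-off can be sketched in a few lines; the hardware price, watts saved, and electricity rate below are all hypothetical examples:

```python
# Payback time for a hardware upgrade that cuts idle power draw.
# All input figures (cost, watts saved, price per kWh) are hypothetical.
def payback_years(upgrade_cost_eur, watts_saved, price_per_kwh=0.29):
    annual_savings_eur = watts_saved * 24 * 365 / 1000 * price_per_kwh
    return upgrade_cost_eur / annual_savings_eur

# e.g. a 1200 EUR EPYC platform saving 50 W at idle:
print(round(payback_years(1200, 50), 1))  # -> 9.4 years
```

With numbers in that ballpark, the payback period is long enough that cheap used hardware often wins on total cost.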

From https://www.servethehome.com/amd-epyc-7302p-review-a-category-killer/

Even with that here are a few data points using the AMD EPYC 7302P in this configuration when we pushed the sliders all the way to performance mode and a 155W cTDP:

  • Idle Power (Performance Mode): 99W
  • STH 70% Load: 169W
  • STH 100% Load: 198W
  • Maximum Observed Power (Performance Mode): 221W
The article states they used a Supermicro AS-1014S-WTRT with 2× Intel DC S3710 SSDs and 10Gbase-T enabled. If you replace the SSDs with a single NVMe drive, use a motherboard with 1GbE, lower the cTDP, and use slower RAM (as well as a 2U/3U chassis with slower, more energy-efficient fans), you should have a homelab that consolidates everything and still doesn't consume oodles of energy. This is exactly what I did with my homelab :)
 

hmw

Active Member
Apr 29, 2019
236
77
28
So what's your idle power draw with that combo?
With:
  • 1 x EPYC 7302p
  • 8 x 16GB DDR4 3200
  • 1 x GTX1080
  • 2 x Mellanox ConnectX-3/ConnectX-4 LX
  • 1 x LSI 2308
  • 5 x 4TB HDDs
  • 2 x 1TB NVMe
  • 5 x 80mm high static pressure Arctic Cooling fans
  • 1 x 92mm CPU fan
  • 1 x 750W Seasonic Platinum PSU (fan's set to always on)
I have 4-5 VMs on ESXi, and this gives me 110-125 watts idle. But it also consolidates 3 separate servers.

Update: Setting the cTDP to Auto (155W) instead of 180W brings the idle power down to 104W (according to my SMT1500 UPS).
 

Macx979

New Member
Dec 26, 2019
11
2
3
From what I've read, I'd almost recommend splitting up your builds: one low-power box for the always-on VMs and another one (same or more power) for when you want to play around.
That's actually something I've also thought about. I have an additional small server with a Celeron J4105, but as far as I know, ESXi doesn't work well out of the box on that machine. Docker could be an option, however.

Most likely the cheapest way to save some power would be to deactivate one of the CPUs. Of course, this halves the usable RAM to 64GB. Unfortunately there is no BIOS option to disable one CPU; instead, I would have to remove it physically.

In terms of dual vs. single CPU setups: does anyone know what power drop I could expect at idle, given the 110W baseline my dual-CPU setup currently has?

Why not go for an EPYC 7302P
I'd rather look for older CPUs, since these are quite cheap to buy. So basically it's about finding the sweet spot between hardware cost and power cost, as usual. :)
 

kapone

Well-Known Member
May 23, 2015
796
388
63
A Supermicro X9SRL-F (single LGA2011 socket) with an E5-2650 v2, 64GB RAM (8×8GB), a single CPU fan, a single boot SSD, no NICs connected, booted to Windows = ~42-43W idle.

This is also with a Platinum PSU (which has its own fan as well).
 

Macx979

New Member
Dec 26, 2019
11
2
3
A Supermicro X9SRL-F (single LGA2011 socket) with an E5-2650 v2, 64GB RAM (8×8GB), a single CPU fan, a single boot SSD, no NICs connected, booted to Windows = ~42-43W idle.

This is also with a Platinum PSU (which has its own fan as well).
That's extremely helpful. Thank you.