Xeon Scalable vs EPYC idle power consumption


graczunia

Member
Jul 11, 2022
45
21
8
I love playing with both dual-socket monsters and tiny mini micros. However, energy prices in Europe are not looking good, and I need something with more expansion capability than a TMM system can provide without breaking the bank. I'm looking for a replacement for my dual E5 v2 system as my main hypervisor/storage server: something with plenty of compute and PCIe, while also focusing on reducing idle/low-load power draw. I'm thinking of a single-socket LGA3647 or EPYC system. I haven't thought much about the specifics yet, as I'd like to hear your opinions on which one seems superior in terms of performance per watt as well as idle power draw.

If anything better comes to mind (e.g. newer Xeon-Ds?) I'm all ears. At the bare minimum I need enough SAS connectivity for at least 8 drives, x16 PCIe for NVMe drives, and 10GbE (with a potential upgrade to 25/40GbE); however, I'd rather avoid the latest and greatest due to budget constraints. I'm trying to stay under 500€ for just the CPU + board.

Thanks in advance
 

gb00s

Well-Known Member
Jul 25, 2018
1,204
605
113
Poland
SM H11SSL with a 7302P ... or choose another CPU depending on your workload, like a 7371 (Naples) with higher clocks (3.7 GHz on all cores) or a 7F32 (8 cores) at 3.7 GHz base and up to 4 GHz turbo. A combo of an LGA3647 board and a decent CPU below 500 EUR I still find hard to achieve.
 

gb00s

Well-Known Member
Jul 25, 2018
1,204
605
113
Poland
Those seem to idle around 100w for single socket. That's why I didn't get an epyc platform.
Puuuuh. I can't confirm that. My H11 with a 7371 averages around 128 W running 24/7, with all RAM and PCIe slots populated. I was able to consolidate three 2011-3 platforms into one. That's why I chose EPYC.

Edit: But EPYC gave me other headaches, since I always try to go cheap with older hardware that I can afford. That doesn't always work out well.
 

mach3.2

Active Member
Feb 7, 2022
138
95
28
I had a 100-000000054-04 (7502 QS), which idles at 30 watts in Windows (cores clock down to 1000-1200 MHz).
The processor is currently running Cinebench... look at the second value for min. power.
View attachment 27110
How about at the wall if you happen to have a kill-a-watt? I'd imagine ~100W to be pretty accurate for total system consumption measured at the wall.
 

RolloZ170

Well-Known Member
Apr 24, 2016
5,464
1,654
113
How about at the wall if you happen to have a kill-a-watt? I'd imagine ~100W to be pretty accurate for total system consumption measured at the wall.
What do you expect for a Xeon Scalable with a similar configuration?
By the way, the thread is about "EPYC idle power consumption".
 

heromode

Active Member
May 25, 2020
380
203
43
For reference: when I built my dual Xeon E5-2680 v4 (2x 14 cores) on an Asus Z10PA-D8 board with the absolute minimum of hardware, i.e. 2x CPU, 2x 32GB RDIMMs per socket, 2x be quiet! tower coolers, and one 140mm rear fan, I was just able to get idle down to 60 W at the wall after trying all the available CPU power options in the BIOS. The BMC/IPMI module alone adds 10 W at idle, plus about 6 W when sending a remote desktop image.

So that can serve as a reference for older hardware on a C612 chipset.

Edit: a single E5-2680 v4 pulls about 13 W idle; IIRC that same setup with only a single CPU was about 47 W.
 

RolloZ170

Well-Known Member
Apr 24, 2016
5,464
1,654
113
Motherboard: X11SPM-F
Memory: 6x 8GB RDIMM-2666
CPU: Platinum 8175M (24 cores)
SSD: ADATA NVMe 128GB
Keyboard/mouse: USB
PSU: MSI MPG A850GF
OS: Windows 10, power plan: Balanced
AC voltage: 230 V
Kill-a-watt reading at the wall at idle: 30-35 W

HWiNFO reports 17 W package power for the 8175M.
Note: if you compare this with an EPYC, you have to add the PCH power to the Scalable CPU, because EPYC is an SoC.
 

heromode

Active Member
May 25, 2020
380
203
43
My old 14-core Xeon is 13 W per package; your newer 24-core is 17 W at idle. The CPU is not the problem. Any motherboard, especially a server board, will add a lot; the fans add a lot; and then there are the PCIe devices, especially 10 Gbit NICs. Each HDD is at best 3-4 W idle without spindown. So it's very difficult to build a hypervisor/storage server that pulls less than 50 W at idle, and even a light load will push that closer to 100 W.

My hypervisor/storage server pulls 160 W at light load at the moment, with the intel_pstate governor set to powersave and most cores at 1200 MHz.

PSU: Corsair RM750
Mobo: Asus Z10PA-D8
CPU: 2x E5-2680 v4
RAM: 4x 2133 32GB Samsung ECC rdimm
Fans: 2x 140mm chassis fans @ 1000 rpm, 2x bequiet dark rock slim tower @ about 400rpm
1x Intel DC P3700 800GB NVMe SSD
1x Intel 910 400GB SSD (very old MLC that pulls 20 W at idle)
2x PNY Quadro P620 GPUs, both active in passthrough for 2 desktops with light load
1x Solarflare 7022 2x 10Gbit SFP+ NIC, light load
4x 14TB helium HDDs (connected to motherboard SATA)

Remove the GPUs and the power-hungry old 910 and you'd be at 120-130 W at best. But then add a SAS card, four more HDDs, and a few more NVMe drives, and you're back at 160 W idle.

If the OP figures out how to build a hypervisor with 8x HDDs, a 10 Gbit NIC and multiple NVMe drives that pulls less than 100 W idle, I'd consider that a great achievement.
 

heromode

Active Member
May 25, 2020
380
203
43
Typical helium-filled HDDs draw about 5 W each at idle, so 8x of those without spindown is already 40 W; add the SAS controller at, say, 10 W, and that's 50 W idle right there with no CPU or motherboard at all :)
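That back-of-the-envelope budget can be written as a quick shell calculation (the wattage figures below are the rough estimates from this post, not measurements):

```shell
#!/bin/sh
# Idle power budget for the storage side alone, using the rough
# per-component figures quoted in this thread (estimates, not measurements).
HDD_IDLE_W=5        # typical helium HDD at idle, no spindown
HDD_COUNT=8
SAS_HBA_W=10        # typical SAS controller

total=$(( HDD_IDLE_W * HDD_COUNT + SAS_HBA_W ))
echo "storage idle budget: ${total} W"   # prints: storage idle budget: 50 W
```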
 

heromode

Active Member
May 25, 2020
380
203
43
The OP asks for a CPU that pulls less at idle, but that will make little difference. Even if he pays 10 grand for the latest and greatest, he will save at most 10 W. Same with the motherboard: the saving will be at best 10 W even if he invests thousands.

It's all the other components that add up. First of all, the OP needs to implement a spindown script for the HDDs. If needed, they should be segregated so that, for example, the 2 HDDs that always need to be spinning are separated from the other 6 that are only used occasionally. Then there are the case and CPU fans: a SAS card and a 10 Gbit NIC will need at least some airflow, which means at least one intake fan, plus one or two CPU fans and an exhaust fan. Four fans is at least 4x 3 W = 12 W, at best.

The NIC and SAS card will have to be configured with the latest ASPM L0s/L1 functionality to minimize their idle power draw; well, good luck with that.

The NVMe drives will have to be the latest and greatest, carefully chosen for low idle power draw, and must support the latest ASPM standard. The hypervisor must then be tuned for that as well.

My point is: the OP wants a hypervisor/storage server with 8x HDDs, a SAS controller and 10 Gbit networking that pulls something like 50 W at idle.
He would have to pay thousands of dollars to make that happen, enough to cover the electricity bill of a 150 W system for years.

The better way is to reduce the number of additional components and use software solutions like hdparm HDD spindown and partitioning of NVMe devices.
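For the hdparm route, the -S timeout encoding is non-obvious (per hdparm(8), values 1-240 are units of 5 seconds and 241-251 are units of 30 minutes), so a small helper avoids mistakes. The device names below are placeholders for the "occasionally used" disks, not a real layout:

```shell
#!/bin/sh
# Convert a desired spindown timeout in minutes to an hdparm -S value.
# Per hdparm(8): values 1..240 mean N*5 seconds, 241..251 mean (N-240)*30 minutes.
spindown_value() {
  minutes=$1
  if [ "$minutes" -le 20 ]; then
    echo $(( minutes * 60 / 5 ))            # encode in 5-second units
  else
    echo $(( 240 + (minutes + 29) / 30 ))   # round up to 30-minute units
  fi
}

spindown_value 10    # prints 120, i.e. 120 * 5 s = 10 minutes

# Apply a 10-minute timeout to the six occasionally-used disks only
# (sd[b-g] are placeholder device names):
# for d in /dev/sd[b-g]; do hdparm -S "$(spindown_value 10)" "$d"; done
```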
 

Stephan

Well-Known Member
Apr 21, 2017
948
715
93
Germany
Priority #1 for improving power draw really is putting the disks into standby when not in use. Of course this depends on the usage profile: if VMs are constantly hitting the disks, or company users over SMB, etc., this will be hard. For the occasional movie or a daily backup at home, with rare use over the day, it's ideal.

The best tool for HDD spindown is hd-idle, in the upcoming 1.19 version (improved SAS standby, more disks, etc.): Commits · adelolmo/hd-idle. Many years ago, when 2 TB WD Green drives were the greatest, hdparm was the method of choice to make HDDs go into standby on their own. My modern HGST 14 TB drives no longer support APM, and also refuse to sleep when setting EPC (Enhanced Power Condition) timers. But hd-idle works well as a last resort, and will save ~30 W (5 W -> 1 W per disk) with 8 disks.
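As a sketch, a Debian-style hd-idle configuration for the "mostly asleep" layout discussed in this thread might look like the fragment below. The device names and timeouts are examples only, and the flag syntax should be double-checked against the hd-idle version actually installed:

```shell
# /etc/default/hd-idle (example; verify flags against your hd-idle version)
# -i 600        : default, spin down any disk after 10 minutes idle
# -a sdb -i 1800: give sdb (a busier, always-needed disk) 30 minutes instead
HD_IDLE_OPTS="-i 600 -a sdb -i 1800"
```

Per-disk `-a` options override the default `-i` that precedes them, so the frequently used disks can be kept spinning longer while the archive disks sleep aggressively.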

The other issue is package C-states. For a server, IMHO, forget anything deeper than C2: some NVMe, SAS or Mellanox card will prevent deeper sleep. For a laptop it's important: my Acer Swift 1 with SATA and Goldmont Plus will go into package C6/C7 sleep, and with the display off and just SSH and WiFi running it consumes ~500 mW, which means the battery lasts over a day.

To lower costs, look into solar. A Hoymiles HM-1500 with OpenDTU, four 410 W panels and some wiring runs ~900 EUR these days. In PL I'd expect 1500 kWh p.a. from that setup. In NL you can just feed it back into the grid as-is and lower your bill; not sure about the technicalities in PL. But 1500 kWh corresponds to a 170 W 24/7 load, which should be enough for any EPYC or Cascade Lake. At a nasty 40 ct/kWh it would pay for itself after only about 2 years.
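The payback estimate checks out arithmetically; the numbers below are the ones from this post (1500 kWh/year, 40 ct/kWh, ~900 EUR hardware cost):

```shell
#!/bin/sh
# Rough payback math for the ~900 EUR microinverter + 4-panel setup above.
KWH_PER_YEAR=1500      # expected annual yield in PL (estimate from this post)
PRICE_CT_PER_KWH=40    # the "nasty" 40 ct/kWh case
COST_EUR=900

savings_eur=$(( KWH_PER_YEAR * PRICE_CT_PER_KWH / 100 ))   # EUR saved per year
avg_watts=$(( KWH_PER_YEAR * 1000 / 8760 ))                # equivalent continuous load
payback_months=$(( COST_EUR * 12 / savings_eur ))

echo "saves ${savings_eur} EUR/year (~${avg_watts} W continuous)"
echo "payback in ~${payback_months} months"
# prints: saves 600 EUR/year (~171 W continuous)
#         payback in ~18 months
```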
 

jei

Active Member
Aug 8, 2021
156
82
28
Finland
If my notes are correct:

H12SSL-i + 2 sticks of 16GB 1xR8 RAM + Cooler + EPYC 7302.

68 W from the wall at idle
168 W under Prime95 (CPU-heavy)

The previous X11 systems with a Xeon 4210R and a Xeon 4208 were a bit lower, but nothing that would make a real-world difference in the power bill once accessories are included.

edit: Found 4210R measurements from the wall:

X11SPM-TPF + 1 stick of 16GB 1xR8 RAM + Cooler + Xeon 4210R

40 W idle
90 W under Prime95 (CPU-heavy)

edit2: That's maybe a 15-20€ difference in electricity per year here in Finland.
 

heromode

Active Member
May 25, 2020
380
203
43
Best tool for HDD spindown is hd-idle in the upcoming 1.19 version (improved SAS standby, more disks etc): Commits · adelolmo/hd-idle
Thanks for this. I didn't know anything about spindown, as I haven't actually dived into it yet; all I had were memories of hdparm.

I have yet to implement my spindown scheme, but this information is gold. I am in the process of buying cables and PCIe slot adapters to use external SAS instead of a separate fileserver.

Thanks to @jei I received the Icy Box 4x HDD enclosure, which I plan to use as an external 'fileserver' without the need for an extra case, ATX PSU, motherboard, RAM, CPU, NIC, and all the cables and money that entails. I'll just put some rubber feet under the box and then buy the cables and adapters.

I already have an LSI 3008 card, so now all I need is:

SFF-8088 to SAS HD SFF-8643 | eBay

External Mini SAS 4X SFF-8088 to Mini SAS HD SFF-8644 | eBay

High-density Mini SAS HD SFF-8643 to SAS SFF-8643 | eBay

and something like

https://www.aliexpress.com/w/wholes...ply+Adapter+H&spm=a2g0o.productlist.1000002.0
 

nutsnax

Active Member
Nov 6, 2014
262
101
43
IMO, if you're sweating 100 watts, EPYC and Xeon SP might not be the best route.

Maybe look at upgrading the Xeons? I have a dual 2696 v3 setup that peaks at 370 W with 8x 8GB DDR3 installed, so DDR4 should be lower power still. RAM type and quantity make a difference here too. I use it when I have a bucket of threads to throw at something and don't need huge IPC.