Xeon Scalable vs EPYC idle power consumption


i386

Well-Known Member
Mar 18, 2016
4,640
1,763
113
36
Germany
h12ssl, 7443p, 128gb ram, 17x 16tb hdds in a sm 846 with sas expander backplane and raid controller: idle ~190 watt* (windows server 2022 login screen, large movie file being read from the array)

*measured via UPS: the reading with the system running minus the reading with the system powered off
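For reference, a minimal sketch of that UPS-delta method, assuming a NUT-style UPS that reports load as a percentage of its nominal rating (the numbers below are made-up stand-ins, not actual readings):

```shell
# Estimate wall draw from a UPS's reported load percentage.
# On a real NUT setup the two load values would come from
# `upsc <upsname> ups.load` with the server on and off.
nominal_w=1500          # ups.realpower.nominal of the hypothetical UPS
load_on=14              # % load with the server running
load_off=1              # % load with the server powered off (rest of the rack)
idle_w=$(awk -v n="$nominal_w" -v a="$load_on" -v b="$load_off" \
    'BEGIN { printf "%.0f", n * (a - b) / 100 }')
echo "estimated idle draw: ${idle_w} W"
```

The subtraction matters because anything else on the same UPS would otherwise be counted against the server.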

@graczunia how do you define "idle"?
 

Markess

Well-Known Member
May 19, 2018
1,210
833
113
Northern California
IMO if you're sweating 100 watts Epyc and Xeon SP might not be the best route.

Maybe look to upgrading the Xeons? I have a dual 2696 v3 setup that peaks at 370w with 8x8gb DDR3 installed so DDR4 should be lower power still. RAM type and quantity makes a difference here too. I use it when I have a bucket of threads I need to throw at something and not huge IPC.
I don't think the OP, @graczunia, is specifically sweating 100w. That came into the discussion later. Reading their post, I think their main question was, could they replace a dual E5 v2 system with something newer and get both more compute and more efficiency? They specifically mention Scalable and Epyc but mention they are open to other ideas.

Since Sandy/Ivy Bridge are a dead end upgrade-wise, a system upgrade is going to require replacing the motherboard, CPU, and RAM. @graczunia , I'm not sure if your main criterion is reducing consumption or replacing a system that's on an almost 10 year old platform?

Reading everyone's replies, the consensus (which I agree with) is that it's the disks, HBA(s), 10G networking, maybe the PSUs if they are inefficient, and everything else that's attached that drive the higher consumption. If you replace a dual E5-26xx v2 motherboard and CPUs with a single early-generation Scalable or EPYC, but transfer over/reuse your other components in the same general configuration, you may get more compute and easier management, but it won't save you much power-wise.
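As a rough sketch of why that is, here's a back-of-the-envelope idle budget. Every per-component figure below is an assumption (typical anecdotal idle numbers), not a measurement - adjust for your own hardware:

```shell
# Rough idle budget for a storage-heavy box; all values are assumptions.
total=$(awk 'BEGIN {
    hdds   = 8 * 5      # ~5 W per spinning 3.5" HDD
    hba    = 10         # SAS HBA
    nic    = 10         # 10G NIC (SFP+ often a bit less, 10GBase-T more)
    cpu_mb = 40         # single modern CPU + board + RAM at idle
    psu    = 10         # conversion losses at low load
    print hdds + hba + nic + cpu_mb + psu
}')
echo "estimated idle total: ${total} W"
```

With numbers like these, the CPU/board/RAM is well under half the total, so swapping only the platform moves the needle less than you'd hope.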

There also seems to be a consensus that, for the early generations, EPYC drew more power. I don't have either one, so can't comment on that.

Since you'll need a motherboard, CPU, and RAM at a minimum, the 500 euro budget for the base components is going to be tight. You'll get newer, but not much better in the efficiency department.
 

i386

Well-Known Member
Mar 18, 2016
is not idle

idle (not sleep / power saving):
  • unemployed,
  • unoccupied,
  • free to start a job
Saw another post of yours in another thread and remembered that I wanted to reply to a certain post :D

In my case, even when nothing was read from the array, the difference was between 5 and 10 watts (usually closer to 5 watts)
 

ano

Well-Known Member
Nov 7, 2022
718
317
63
h12ssl, 7443p, 128gb ram, 17x 16tb hdds in a sm 846 with sas expander backplane and raid controller: idle ~190 watt* (windows server 2022 login screen, large movie file being read from the array)

*measured via UPS: the reading with the system running minus the reading with the system powered off

@graczunia how do you define "idle"?
No BIOS tuning, I'm guessing?
 

i386

Well-Known Member
Mar 18, 2016
Mostly default settings; the only things changed are related to Secure Boot and EFI (vs CSM and legacy stuff)
 

ano

Well-Known Member
Nov 7, 2022
They are very good at default settings; only all-flash (faster flash, not SATA) ZFS and Ceph require BIOS tuning IMHO, but for most stock stuff the defaults are plenty good.

It's quite easy to get it to use 270W vs the 200W you were at, for only marginal performance... and across servers and time this is $$$. I've been tuning 100G and Ceph stuff for hours today... an exercise in futility; I'm now back at the previous baseline we use for ZFS systems... (and really all performance systems)
 

graczunia

Member
Jul 11, 2022
45
22
8
IMO if you're sweating 100 watts Epyc and Xeon SP might not be the best route.

Maybe look to upgrading the Xeons? I have a dual 2696 v3 setup that peaks at 370w with 8x8gb DDR3 installed so DDR4 should be lower power still. RAM type and quantity makes a difference here too. I use it when I have a bucket of threads I need to throw at something and not huge IPC.
I do have a spare ASUS Z10PE-D16 WS, might play around with it and see how it goes - would fit perfectly in my Supermicro chassis.


@graczunia how do you define "idle"?
For the sake of the argument let's assume system is booted into Proxmox with a TrueNAS VM running, no reads/writes on the disks/network activity.


I don't think the OP, @graczunia, is specifically sweating 100w. That came into the discussion later. Reading their post, I think their main question was, could they replace a dual E5 v2 system with something newer and get both more compute and more efficiency? They specifically mention Scalable and Epyc but mention they are open to other ideas.

Since Sandy/Ivy Bridge are a dead end upgrade-wise, a system upgrade is going to require replacing the motherboard, CPU, and RAM. @graczunia , I'm not sure if your main criterion is reducing consumption or replacing a system that's on an almost 10 year old platform?

Reading everyone's replies, the consensus (which I agree with) is that it's the disks, HBA(s), 10G networking, maybe the PSUs if they are inefficient, and everything else that's attached that drive the higher consumption. If you replace a dual E5-26xx v2 motherboard and CPUs with a single early-generation Scalable or EPYC, but transfer over/reuse your other components in the same general configuration, you may get more compute and easier management, but it won't save you much power-wise.

There also seems to be a consensus that, for the early generations, EPYC drew more power. I don't have either one, so can't comment on that.

Since you'll need a motherboard, CPU, and RAM at a minimum, the 500 euro budget for the base components is going to be tight. You'll get newer, but not much better in the efficiency department.
Spot on - I'm looking for a replacement for the power hog that the dual E5 v2 system is; not really considering how to reduce its power draw, as it doesn't seem to be worth the effort. Something that is more 'worthy' to run 24/7 on a student budget, for lack of a better word :D
Not really sweating the 100w indeed, just trying to find something that would give me better performance per watt, although I am trying to reduce my lab's overall power consumption - and replacing this system would be a huge step towards that. One way or another, the system has to go, and if I'm going to shell out some money I'd rather do it right and pick up something that would last me a couple of good years - and while LGA2011-3 might be the way to go, I figured I might as well look into something newer, as I'm rather unfamiliar with anything beyond the E5 v4's.

Maybe I should specify more, here are the current system specs:

2x E5-2630L v2
Supermicro X9DRH-7F
12x8GB DDR3 ECC
2x Supermicro 750W Platinum
Intel 82599 (x520) NIC
2x 120GB SATA SSD
2x Samsung PM983 960GB
8x HGST 2TB

According to IPMI, it seems to idle in the neighborhood of 150W, with a very light load putting it at over 200W. If I could come up with something that gives me, say, twice the performance while cutting the idle power in half, I'd see that as a win. I might stretch the budget a little by selling off the CPU/MB/RAM combo, but the rest would ideally stay. There are for sure some further optimizations to be made (drive spindown, bigger but fewer drives, etc.), but the E5 v2 system doesn't seem to be a good base for them, as the hardware itself is simply inefficient compared to modern offerings.
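As a sanity check on that target, doubling performance while halving idle draw compounds nicely (the numbers below are illustrative placeholders, not benchmarks):

```shell
# Perf-per-watt for the stated target: ~2x compute at ~half the idle power.
# old_perf is normalized to 1.0; all figures are placeholders.
ratio=$(awk 'BEGIN {
    old_perf = 1.0; old_idle = 150   # dual E5-2630L v2, ~150 W idle per IPMI
    new_perf = 2.0; new_idle = 75    # hypothetical replacement target
    printf "%.1f", (new_perf / new_idle) / (old_perf / old_idle)
}')
echo "perf/W improvement: ${ratio}x"
```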
 

Primoz

New Member
Jun 30, 2025
5
1
3
I think this might be the best place for my peculiar question.

I'm currently running a TrueNAS Scale NAS on the 'consumer' platform: a Xeon E3-1225 v5 and 32 GB of ECC RAM, with 2x 20 TB SATA drives mirrored for data, 2 smaller drives that are spun down when not in use (they only receive snapshots once per hour from the main pool), and an M.2 NVMe drive for apps (Plex, Navidrome, Immich, etc., with more apps incoming in the future) and potentially VMs (I don't have any, and am unlikely to on this platform). This machine sits at about 30 W in idle, if my UPS is reporting the power draw correctly.

I also have a Xeon E5-2690 v2 system with 128 GB of RAM, 2 SSDs (SATA and M.2 NVMe) and a 1050 Ti GPU (it was lying around), set up as a workstation; it draws quite a lot more power even when idle, but it's only run as needed.

I've been thinking about accessing it via Parsec from my laptop and, as I wouldn't need it beside me (next to the monitor), thinking about virtualizing the workstation on the NAS (for the moment let's ignore TrueNAS Scale is not the best of hypervisors and focus on the HW). Ideally I would have it running all the time if it did not impact the power consumption (as the NAS would be running all the time anyway).

The question: what kind of power draw could I expect if I upgraded the NAS to handle the workstation duties too? 128 GB of RAM would for sure be enough for everything, but 6 SATA ports would be a bonus (currently I have 2 SATA SSDs for boot, 2 data and 2 backup drives), as I would prefer not to add an HBA. Which platform should I go forward with? AM5 does not really offer the expandability to cover all of that (very few boards offer 6 SATA ports), so what could I expect from an Intel Scalable, an EPYC, or maybe even a Xeon W on LGA-2066 (this could be very cost effective)? Power-wise the v2 Xeon is, at this moment, plenty. So a more modern platform with 8 to 10 cores would likely suffice, even handling NAS duties on the side.

Solar is currently not an option, if it was, I would not be asking these kinds of questions :)
 

Stephan

Well-Known Member
Apr 21, 2017
1,085
845
113
Germany
This machine sits at about 30 W in idle
I am biased towards off-roadmap Cascade Lakes. 24C/48T 8259CLs have hit 50 bucks on eBay. If you can get a cheap 3647 board and reflash the VRM and/or BIOS per the thread on STH, that would be a cheapish upgrade without the totally castrated expandability of the usual solutions. But this is no longer the rock-bottom low power of your E3: at least double the power, and with a few cards maybe triple, so 60-90W in idle. Also make sure to cool the chipset and RAM with some airflow. On the upside, for this generation pretty much zero bugs are left, or sufficient workarounds exist in Linux. The issue is that hyperscalers have been getting rid of parts of their x64 fleets and are switching to in-house ARM designs. So for Xeon Scalable 4+ or Xeon 6 or whatever, there is no longer a fleet of engineers and millions of engineer hours to iron out any remaining bugs.

On the other hand, the performance of the latest AMD chips with AVX-512 is also nice, but I don't know much about them. Maybe somebody else can chip in.

The question is: do you need this many cores? My desktop is an old Kaby Lake E3-1275 v6 on a Supermicro X11SAT board with 64 GB RAM. The X11 with an i3wm desktop got really snappy going from the 6.6 to the 6.12 kernel, even with the CPU in powersave mode instead of performance, and with all mitigations on. But I compile kernels etc. on the big machines. You could even still use your NAS machine for desktop work: with "modesetting: add support for TearFree page flips (!1006) · Merge requests · xorg / xserver · GitLab" and a proper wrapper around mpv that uses xrandr to switch modelines during a movie to an even multiple of the movie's FPS, you will never have seen such a smooth scrolling star field. I added an example for a 4K display below.

Bash:
#!/bin/bash

if [ $# -eq 0 ]; then
    echo "Syntax: $(basename $0) [file]"
    exit 1
fi

OPTS=""
CONF=$(mplayer -vo null -ao null -frames 0 -identify "$@" 2>/dev/null | \
    awk '
    BEGIN {
        w=-1
        h=-1
        f=-1
        FS="="
    }
    /^ID_VIDEO_WIDTH=/ { w=$2 }
    /^ID_VIDEO_HEIGHT=/ { h=$2 }
    /^ID_VIDEO_FPS=/ { f=$2 }
    END {
        if (w<0 || h<0 || f<0) {
            print "NOIDEA"
            exit
        }
        if (w>1920 && h>1080 && f==25) { print "UHD25"; exit; }
        if (w<=1920 && h<=1080 && f==23.976) { print "FHD23"; exit; }
        if (w<=1920 && h<=1080 && f==24) { print "FHD24"; exit; }
        if (w<=1920 && h<=1080 && f==25) { print "FHD25"; exit; }
        if (w<=1920 && h<=1080 && f==29.97) { print "FHD2997"; exit; }
        printf "NOIDEA (w=%s h=%s fps=%s)\n",w,h,f
    }')

if [[ "$CONF" =~ NOIDEA* ]]; then
    CONF="FHD50"
    OPTS="--deinterlace=yes --vf=yadif --hwdec=vaapi-copy"
    echo "Unknown video resolution and fps, guessing $CONF $OPTS"
fi

monitor=$(xrandr --listactivemonitors | awk '/[0-9]: \+?\*.*\/.*\/.*/ { print $NF }')
[ -z "$monitor" ] && {
    echo "Unknown display, bailing out." >&2
    exit 1
}

case $CONF in
    UHD25)
        MODELINE="433.356 3840 3848 3880 3920 2160 2197 2205 2211 +HSync -VSync"
        DPI=144
        ;;
    FHD50)
        MODELINE="226.6 1920 1928 1960 2000 1080 1119 1127 1133 +HSync -VSync"
        DPI=96
        ;;
    FHD2997)
        MODELINE="274.285 1920 1928 1960 2000 1080 1130 1138 1144 +HSync -VSync"
        DPI=96
        ;;
    FHD23)
        MODELINE="274.285 1920 1928 1960 2000 1080 1130 1138 1144 +HSync -VSync"
        DPI=96
        ;;
    FHD24)
        MODELINE="333.216 1920 1928 1960 2000 1080 1143 1151 1157 +HSync -VSync"
        DPI=96
        ;;
    FHD25)
        MODELINE="286.5 1920 1928 1960 2000 1080 1132 1140 1146 +HSync -VSync"
        DPI=96
        ;;
    *)
        echo "Unknown video mode $CONF, bailing out." >&2
        exit 1
        ;;
esac

# shellcheck disable=SC2086
xrandr --newmode MPV $MODELINE
xrandr --addmode "$monitor" MPV
xrandr --output "$monitor" --mode MPV --dpi "$DPI"

mpv --fullscreen $OPTS "$@"

xrandr --output "$monitor" --mode 3840x2160 --rate 60 --dpi 144
xrandr --delmode "$monitor" MPV
xrandr --rmmode MPV

exit 0
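One way to see what the script is doing: each custom modeline's refresh rate (pixel clock divided by total horizontal times total vertical timing) comes out to a whole multiple of the target film FPS, which is what kills the judder. A quick check using the UHD25 and FHD24 modelines from the script above:

```shell
# refresh = pixel_clock_Hz / (htotal * vtotal)
# UHD25: 433.356 MHz clock, 3920x2211 total timing -> 2 x 25 fps
# FHD24: 333.216 MHz clock, 2000x1157 total timing -> 6 x 24 fps
uhd25=$(awk 'BEGIN { printf "%.2f", 433.356e6 / (3920 * 2211) }')
fhd24=$(awk 'BEGIN { printf "%.2f", 333.216e6 / (2000 * 1157) }')
echo "UHD25 modeline refresh: ${uhd25} Hz (2 x 25 fps)"
echo "FHD24 modeline refresh: ${fhd24} Hz (6 x 24 fps)"
```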
 

FrankTL

Member
Aug 9, 2022
73
26
18
I love playing with both dual socket monsters and tiny mini micros. However the energy prices are not looking good in Europe, and I need something with more expansion capabilities than a TMM system can provide without breaking the bank. I'm looking for a replacement for my dual E5 v2 system as a main hypervisor/storage server. Something with plenty of compute and PCIe, while also trying to focus on reducing idle/low-load power draw. I'm thinking of a single socket LGA3647 or EPYC system. Haven't thought much about the specifics yet as I would like to hear your opinion on which one seems to be superior in terms of performance per watt as well as idle power draw.

If anything better comes to mind (e.g. newer Xeon-Ds?) I'm all ears - at the bare minimum I need enough SAS connectivity for at least 8 drives, x16 PCIe for NVMe drives, and 10GbE (with a potential upgrade to 25/40); however, I'd rather avoid the latest and greatest due to budget constraints. Trying to figure out something under 500€ for just the CPU+board.

Thanks in advance
Here's some first-hand experience:
* Xeon-D (at least Ice Lake) has relatively high idle power consumption (at least my 4C X12SDV does)
* On the AMD side, my 7950X has much higher idle draw than my 8600G. Maybe it's due to the Infinity Fabric of the multi-chiplet design; maybe the 8600G uses mobile (power-saving) design optimizations?
* Chipsets on AMD have high idle power draw. If you don't need the PCIe switch capabilities, a chipset-less board can save quite a bit of idle power. My 8600G on a chipset-less board goes down to 12W idle, incl. 64GB RAM, SSD and a WiFi/BT card.
* Check the motherboard BIOS support for deeper C-states on the AMD side - this doesn't tend to work as well as on the Intel side, especially on server boards.
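On Linux, a quick way to check what the BIOS actually exposes is the standard cpuidle sysfs tree (a sketch; output obviously varies per machine, and a VM may expose nothing at all):

```shell
# List the C-states Linux sees on cpu0 and their cumulative residency.
found=0
for d in /sys/devices/system/cpu/cpu0/cpuidle/state*/; do
    [ -d "$d" ] || continue
    printf '%s: %s us total residency\n' "$(cat "$d/name")" "$(cat "$d/time")"
    found=1
done
[ "$found" -eq 1 ] || echo "no cpuidle states exposed (VM, or cpuidle disabled)"
```

If only POLL and C1 show up on a board that should support deeper states, that's usually the BIOS setting mentioned above.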
 

Primoz

New Member
Jun 30, 2025
I am biased towards off-roadmap Cascade Lakes. 24C/48T 8259CL have hit 50 bucks on ebay. If you can get a cheap 3647 board and reflash the VRM and/or BIOS from the thread on STH, that would be a cheapish upgrade with not totally castrated expandability of the usual solutions. But this is no longer rock bottom low power of your E3. At least double the power, and with a few cards maybe triple so 60-90W in idle. Also make sure to cool the chipset and RAM with some airflow. But pretty much zero bugs left or sufficient workarounds are in Linux for this generation. The issue is that hyperscalers have been getting rid of parts of their x64 chips and are switching to in-house ARM designs. So for Xeon Scalable 4+ or Xeon 6 or whatever no more fleet of engineers and millions of engineer hours to iron out any remaining bugs.

On the other hand, performance of latest AMD chips with AVX512 also nice, but I know not much. Maybe somebody else can chip in.

The question is, do you need this many cores. My desktop is an old Kaby Lake E3-1275 v6 on a Supermicro X11SAT board with 64 GB RAM. X11 with i3wm desktop got really snappy going from 6.6 to 6.12 kernel even with CPU in powersave mode instead of performance. And with all mitigations on. But I compile kernels etc. on the big machines. You could even still use your NAS machine for desktop work and with modesetting: add support for TearFree page flips (!1006) · Merge requests · xorg / xserver · GitLab and a proper wrapper around mpv and using xrandr that switches modelines during a movie to an even ratio of movie's FPS, you will never have seen such a smooth scrolling star field. I added an example for a 4K display below.

FWIW, my workstation is a Windows machine, so there's the added complexity of that. As for power draw, obviously I'm not holding my breath on achieving 30 W idle, even more so with add-in cards. Modern GPUs do have very low screen-off idle consumption, but still: adding PCIe lanes and memory sticks raises the power draw.

And no, 24 cores are not needed; somewhere around 8 to 12 cores is likely plenty, given the core counts and loads I have today and taking into account architectural improvements with newer hardware (more performance from a given number of cores). And in either case, a higher-end (Xeon/EPYC) platform gives the option of increasing core counts very easily in the future as well.

Oh, care to share the STH thread on BIOS/VRM flashing?
 

Stephan

Well-Known Member
Apr 21, 2017
Oh, care to share the STH thread on BIOS/VRM flashing?
Sure: https://forums.servethehome.com/index.php?threads/vrm-modify-icc_max-to-run-high-tdc-oem-cpu.38686/

Some boards have x16 slots that are only x8 electrically, so study the manual closely.

I have an RTX A4000 with an aftermarket cooler, and it will eat a few dozen watts until I activate persistence mode - something to be aware of. After that, 8W according to nvidia-smi. I run a patched 550 driver, but that is really a pain to keep current on Arch because of all the rolling packages, like pytorch or ffmpeg. The card is in one of the Cascade Lake boards for experiments.
 

Primoz

New Member
Jun 30, 2025
Just as a quick check, how would an LGA-2066 system compare? Thinkstation P520s are dirt cheap around here as are Z4 G4 and Dell 5820 towers.
 

RolloZ170

Well-Known Member
Apr 24, 2016
8,075
2,529
113
germany
Just as a quick check, how would an LGA-2066 system compare? Thinkstation P520s are dirt cheap around here as are Z4 G4 and Dell 5820 towers.
It can be compared with LGA3647 Skylake/Cascade Lake (same silicon).
Data from my own systems (P520C, W10 Pro), just what I found; it can be lower/better:
W-2175 - 14W IDLE package power
W-2150B - 18W IDLE package power
W-2140B - 14W IDLE package power
W-2123 - 13W IDLE package power
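Those package-power numbers can be read on Intel via the RAPL energy counter in sysfs: sample the counter twice and divide the delta by the interval. A sketch with hard-coded stand-in samples (on real hardware the readings come from /sys/class/powercap/intel-rapl:0/energy_uj, usually as root):

```shell
# Package power from two RAPL energy samples (microjoules) one second apart.
e1=1000000000           # first energy_uj sample (stand-in value)
e2=1014000000           # second sample, taken dt seconds later (stand-in)
dt=1
pkg_w=$(awk -v a="$e1" -v b="$e2" -v t="$dt" \
    'BEGIN { printf "%.0f", (b - a) / 1e6 / t }')
echo "package power: ${pkg_w} W"
```

Note the counter wraps periodically (see max_energy_range_uj), so a real sampler has to handle rollover.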
 

Primoz

New Member
Jun 30, 2025
And I'm guessing LGA-2066 might idle a bit lower due to fewer components (fewer PCIe lanes, fewer memory controller channels, etc.) compared to LGA3647? And the idle package power is CPU only; any info on complete system power (I'm aware it will be highly dependent on add-in cards, memory, hard drives, etc.)?

High power as in supporting higher-TDP CPUs, which aren't really of interest to me at this point. Something along the lines of a 150ish W TDP and 12 cores sounds plenty for now.