LGA2011 power consumption (Sandy Bridge vs Ivy Bridge, single versus dual socket)


Fryguy8

New Member
Jul 27, 2017
So it's time to upgrade my home server a bit. It's currently an AMD Phenom II X6 1090T w/ 16GB of RAM. RAM is becoming a bit of a bottleneck for what I'm doing, so I want to expand.

It's a 24/7 general-purpose (primarily NAS) server located in somewhat shared living space (home office), so power consumption and noise are important. Since it's a NAS, support for LFF SATA drives (min 8, ideally 12) is important.

Cost is a concern, but mostly from a value perspective. I don't mind paying an extra $100-200 or more for something substantially better.

After catching up on some research, I was leaning pretty heavily towards getting an R510 w/ L5640s. After seeing some numbers for power usage of those, though (150W+), I decided it would be worthwhile to step up to the next generation and get something on LGA2011.

Making this decision has opened up 2 somewhat orthogonal questions:

1. Is Ivy Bridge worth the price premium over Sandy Bridge? The E5-2670 seems pretty comparable performance-wise to the E5-2650 v2, which carries a ~50% price premium ($100ish versus $150ish). That seems like it's probably worth it, but is it?

2. Dual socket versus single socket. The last time I upgraded this CPU was back in 2011, so I tend to hold onto things for some time. Going a little overkill now with something like dual 8-cores will hopefully prevent the need to make another purchase for a few years.

If a single Ivy Bridge (like the E5-2650 v2) idles in the 50-75W range (is this reasonably accurate?), what could I expect a dual-socket system to idle at? If I'm only paying a 15-20W penalty to have all of that extra power on tap when I need it, that seems worth it. If it's more like another 30-50W, then I might start considering a 2nd machine that isn't powered on 24/7.
 

Fryguy8

New Member
Jul 27, 2017
I haven't measured in a long time, and a few things have changed (different/more hard drives, different HBA), but it was around 130-140W IIRC.
 

cactus

Moderator
Jan 25, 2011
CA
My single E5-2650 v2 (8x DDR3-1333) was closer to 100W idle IIRC, but it has a PCIe SSD, a 10GbE NIC, and two LSI 2008s. I'll have access to a single 2620 v1 (4x DDR3-1333) with 6x 7K3000 SAS to check power this weekend.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
I am currently building new servers to replace a pair of old Dell PE2900s, so I've gone down the same path. Here's some of my data from my notes:

Common among all the info below:
- Chassis = Supermicro 846 with an 846A backplane and 2x 920W "SQ" PSUs, but testing was done with 1 PSU disconnected.
- Tests below were conducted with the standard fans for the 846; this is something I'll probably mod later on.

1. I initially wanted to go with the Westmere platform, so I got these:

a) X8DTH-6F, 2x L5640, 128GB (8x 16GB PC3L-8500R 1.35v 4Rx4 VLP-DIMMs) - idle power around 140W
b) X8DT6-F, 2x L5640, 128GB (8x 16GB PC3L-8500R 1.35v 4Rx4 VLP-DIMMs) - idle power around 135W

2. I then decided, like you, to consider the Sandy/Ivy Bridge platform:

a) X9DR3-LN4F+, 2x E5-2630, 256GB (16x 16GB PC3L-12800R 2Rx4 RDIMMs), 2x S3700 SSDs, 1x 9211-8i, 1x USB 3.0 card - idle power around 118W

At that point, I was considering trying an Ivy Bridge CPU, but couldn't find something that satisfied my price/performance target. Then I happened on a deal for a set of 4x E5-2660 (v1) for $100 ($25 each), so I went with that:

b) X9DR3-LN4F+, 2x E5-2660, 256GB (16x 16GB PC3L-12800R 2Rx4 RDIMMs), 2x S3700 SSDs, 2x 9211-8i, 1x USB 3.0 card, 1x dual-port Mellanox 10Gb SFP+ card - idle power around 138W / under 100% CPU load 280W

The differences from (a) to (b) were: the CPUs, added 1x 9211-8i, and added 1x dual-port 10Gb card. These changes added 20W. I didn't test the components individually to see what the breakdown of that 20W was. Sadly, I'm about where I was with the Westmere platform in terms of power consumption at idle, although I guess I get to power a dual-port 10Gb NIC and an extra HBA for "free", plus more cores and faster CPUs.

Some interesting side notes:

3. Using a dual-PSU configuration adds about 10W as measured at the wall outlet. In other words, if you don't need a redundant PSU, you can save 10W.

4. 4TB HGST UltraStar (7200RPM) SATA drives add about 7.5W each at idle.

5. As far as I could measure at idle, going between 4x 16GB DIMMs, 12x 16GB DIMMs, and 16x 16GB DIMMs showed zero difference in idle power consumption. In the past, on older systems, I thought each DIMM added about 4-5W, maybe more. For whatever reason, in my setup above, I measured absolutely zero difference going from 4x DIMMs to 16x DIMMs. I don't know enough about how this works electrically, but maybe the DIMMs don't consume much power unless you start reading/writing to them a lot? I did not run any RAM benchmarks - I suppose that is something I could test to see what kind of difference it makes. I do know these DIMMs run a LOT cooler than my old PC2-5300F stuff (40°C vs 90°C).

6. With the system off and the BMC powered on, I measured about 8.8W. If you pick a system without a BMC, you might save almost 9W.

7. A 1W difference running for an entire year costs me an extra $1.75 (my local rate is about $0.20/kWh). If I had a 100W difference, that would justify a $175 expense over 1 yr (see the quick sketch below).
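If anyone wants to plug in their own numbers, here's a minimal Python sketch of that math, assuming 24/7 uptime; the $0.20/kWh rate is just my local rate from above, so swap in your own:

# Back-of-envelope cost of a constant extra power draw, assuming 24/7 operation.
HOURS_PER_YEAR = 24 * 365       # 8760 h
ELECTRICITY_RATE = 0.20         # $/kWh - my local rate; adjust for yours

def annual_cost(extra_watts):
    """Dollars per year for a constant extra draw of `extra_watts` watts."""
    kwh_per_year = extra_watts * HOURS_PER_YEAR / 1000.0
    return kwh_per_year * ELECTRICITY_RATE

for delta in (1, 20, 50, 100):
    print(f"{delta:>4} W extra -> ${annual_cost(delta):.2f}/yr")
# 1 W -> $1.75/yr and 100 W -> $175.20/yr, matching the figures above.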

Since I never found a deal on Ivy Bridge that I liked, I don't know how all that would compare to (b) above. When I was comparing the cost of the E5-2660 vs the E5-2660 v2, it was $25 vs $140 each, and I needed 4 of them. The numbers didn't make sense for me to spend on Ivy Bridge at this time. When v2 CPUs come down in price, I can upgrade later.

All of the above was measured with the standard temperature-controlled fan profile. The fans can consume quite a bit of power if, for example, you have them set to full speed. Going with slower fans might also reduce noise and power.
 

Evan

Well-Known Member
Jan 6, 2016
With regards to memory power consumption, it differs under load. When idle, with auto-refresh or whatever they call it, memory is not a massive consumer of power; when busy, depending on the workload, you will, as you thought, see a few watts per DIMM.
 

Fryguy8

New Member
Jul 27, 2017
BLinux, that is super helpful. Did you ever get a chance to measure power on any of those setups with a single CPU? Or did you go straight to dual CPUs on all of them?
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
@Patrick thanks, this is what I guessed. Kind of surprised how big the difference from v1 to v2 actually is. Hopefully higher-spec v2 chips come down in price soon... IMHO still not really worth upgrading.
 

james23

Active Member
Nov 18, 2014
Does anyone have numbers going from Haswell/v3 to Broadwell/v4?
It would be interesting to see how big the gap between 22nm and 14nm is.
This is not a very good gauge of a single E5 v3, as I'm using an X10DRU-i+ board, which is a pretty power-hungry board with a proprietary layout, and I have a 12-bay SAS3 backplane connected (but I removed all drives for the idle tests).
(copy/paste from my notes):

Using:

CPU: 1x E5-2620 v3 - HT, 6c at 2.4GHz base / 3.2GHz turbo, 85W (rev. SR207 - MB microcode patch = xx), CPU bench = 10020m / 1696s - about the lowest-end v3 CPU
MB: X10DRU-i+ rev 1.02A (read from the MB PCB) (BIOS = 3.1, latest as of Dec 2018)
RAM: 4x 8GB ECC (QVL)
2x PSUs connected and running
1x LSI 9285CV-8e to the SAS3 expander 12-bay backplane (no disks connected unless specified)

TOTAL IDLE (Ubuntu 18 live, with the config above, top load = 0.15): pulls 74.6W
(via my wall power monitor)


TOTAL, FULL POWER OFF, no vKVM connected: 20.9-21.8W !! (via ePDU)
(NB: one of the onboard 10G copper NICs shows a link light even when powered off.)


With 1x NVMe and 10x HDDs (all 3TB HUS drives): all HDDs idle = 208-212W; with full load via hddsent (10 min after the load started) = 224-236W.

With 6x HGST 200GB SAS SSDs + 4x 3TB HUS (both on the LSI 9285) + 2x NVMe disks, 3x VMs running but idle (ESXi 6.5u2), system mostly idle throughout: 166W (input power). With load on the 6x SSD RAID 6 array: 211W when writing to the array, 195W when reading from it.

EDIT: the board's BIOS power setting is at the default (not sure what that is, but I have not changed it).