My humble 10 Watt server put AWS EC2 to shame!

Discussion in 'Linux Admins, Storage and Virtualization' started by MrCalvin, Feb 13, 2020.

  1. MrCalvin

    MrCalvin IT consultant, Denmark

    Joined:
    Aug 22, 2016
    Messages:
    45
    Likes Received:
    10
Headlines are supposed to be dramatic and only half-true, right :p

After some talk on other threads about power consumption, AMD, Intel and servers, I thought I'd get some real power numbers on the on-prem servers I usually set up at my customers.
Some questioned the power numbers I've been claiming on those threads, so I got a new power meter to double-check, and then I also did a quick benchmark against a midsize AWS VM.
Who cares about low power consumption if the system doesn't have the power to do any work!?

    Server-configuration:
    Mobo: SM X11SSM-F with IPMI enabled
    CPU: Intel i3-7100 (idle)
    RAM: 2 x ECC modules
    Fans: 1 x CPU(1000rpm), 1x80mm fan (1000rpm)
    Storage: 1 x NVMe SSD
    PSU: SM PWS-203-1H, 200W
    srv.jpg

    Power-meter reading, whole system:
    Volt: 235,2, Current: 0,09, Power-factor: 0,47 = 9,95 watt
    With additional 4 x Seagate Barracuda Pro 1TB 2.5", spinning, not sleeping = 14,18 watt
    PF.jpg A.jpg Volt.jpg
    (My 2nd and old meter shows the same, I double checked V and A with my Fluke, tested power-factor on non-SMPS equipment etc., I believe the numbers)
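
    The meter reading is easy to sanity-check: real power is apparent power (volts times amps) multiplied by the power factor. A quick check of the figures above:

    ```python
    # Real power = apparent power (V * A) * power factor.
    volts = 235.2    # V, from the meter
    amps = 0.09      # A
    pf = 0.47        # power factor
    apparent_va = volts * amps
    real_watts = apparent_va * pf
    print(f"{apparent_va:.2f} VA, {real_watts:.2f} W")  # 21.17 VA, 9.95 W
    ```

    So the ~21 VA drawn from the wall corresponds to just under 10 W of real power, matching the meter.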

    So what, an i3 can only be used to run a clock-radio?

    Benchmark:
    A quick and dirty VM benchmark, Win Srv 2016, using WinSat tool:
    VM 1: My above humble server, KVM VM with 2 CPU assigned
    VM 2: AWS, t2.medium, Xeon CPU E5-2676 v3 @ 2.40GHz, 2 vCPUs

    CPU LZW compression:
    VM1: 319,78 MB/s
    VM2: 232,00 MB/s

    CPU AES256 Encryption:
    VM1: 1.604,07 MB/s
    VM2: 607,83 MB/s

    Uniproc CPU AES256 Encryption:
    VM1: 801,37 MB/s
    VM2: 358,09 MB/s

    Memory performance:
    VM 1: 29.168 MB/s
    VM 2: 24.921 MB/s
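
    For easier comparison, here is a small sketch (numbers copied from the results above) that turns the WinSat figures into VM1/VM2 ratios:

    ```python
    # WinSat results from the post: (VM1 = i3-7100 KVM guest, VM2 = AWS t2.medium).
    results = {
        "LZW compression": (319.78, 232.00),
        "AES256 encryption": (1604.07, 607.83),
        "Uniproc AES256": (801.37, 358.09),
        "Memory bandwidth": (29168, 24921),
    }
    for name, (vm1, vm2) in results.items():
        print(f"{name}: VM1 is {vm1 / vm2:.2f}x faster")
    ```

    The i3 guest wins every test, by up to roughly 2.6x on AES encryption.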

    I was only running default out-of-the-box KVM/QEMU CPU/memory settings, so there should be some room for improvement. You can bet that AWS's hypervisors are highly optimized.
    On the other hand, I admit I didn't manually apply any Intel CPU vulnerability mitigations (host running Debian 10; I don't know if some/all are enabled by default).

    Conclusion:
    I can only repeat myself from other threads: if your server use-case is a standard business server (KVM host, Win AD, storage, web, mail, SQL, etc., even with multiple Win10 VMs for RDP MS-Office sessions), don't ditch "small" CPUs just because. This CPU has never come up short in any of my installations.
    What I see out there is many people buying overkill CPUs for their servers, and it's a shame. It hurts the power bill and the environment for no reason. Of course it depends on the use-case!
    But how many servers in the world are using overkill CPUs, wasting tremendous amounts of power? One can only guess, but I think it's a BIG problem that should be addressed.
    Not to mention those power-hungry SAS controllers. They use almost the same power at idle as my whole server (mobo, chipset, PSU). That's insane! Again, it all depends on the use-case, but don't use one just because. mdadm RAID will do the job fine in many cases, and in some even better.
    Intel CPU vulnerabilities (Spectre, Meltdown etc.) are another story. But the newer i3s (8100, 8300, 9100, 9100F) are without Hyper-Threading, which should help some, I guess.
     
    #1
    Last edited: Feb 13, 2020
    Evan, itronin and niekbergboer like this.
  2. SRussell

    SRussell Active Member

    Joined:
    Oct 7, 2019
    Messages:
    207
    Likes Received:
    110
    14.18W for 4x platters is pretty damn good.
     
    #2
  3. WANg

    WANg Active Member

    Joined:
    Jun 10, 2018
    Messages:
    501
    Likes Received:
    193
    Yeah, but is that number taken at full power idle...?

    Re-run those power readings with the disks at:
    Full disk reads across 4 platters
    Full disk writes across 4 platters
    With at least a 4.0, 8.0 or 10.0 system load.
    With one network port lit, with both network ports lit, with the machine running a stock Linux kernel, or with specific tuning done for power efficiency, or improved latency, etc.

    We'll need to see how that 10w figure holds up when the machine is actually performing various tasks. Claiming a best-case/idle figure for one condition, and then claiming performance figures without the power consumption at that load...is unhelpful.
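
    A minimal way to generate the kind of sustained CPU load being asked about here (so the meter can be read under load) is to spin one busy-loop worker per core. This is just an illustrative sketch, not a methodology anyone in the thread actually used:

    ```python
    # Pin every core at ~100% for a fixed window while you read the power meter.
    import multiprocessing as mp
    import time

    def burn(seconds: float) -> None:
        """Busy-wait on one core for the given number of seconds."""
        end = time.monotonic() + seconds
        while time.monotonic() < end:
            pass

    if __name__ == "__main__":
        duration = 0.5  # raise to 60+ seconds for a meaningful meter reading
        workers = [mp.Process(target=burn, args=(duration,))
                   for _ in range(mp.cpu_count())]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print("load window finished")
    ```

    For the full-disk read/write cases, a tool like fio or dd against the array would be the usual choice.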
     
    #3
    Last edited: Feb 14, 2020
    T_Minus likes this.
  4. kapone

    kapone Well-Known Member

    Joined:
    May 23, 2015
    Messages:
    683
    Likes Received:
    297
    I'm not surprised.

    The "cloud" is more about scalability (from the provider's point of view) and ease of use (from the consumer's point of view). Neither of those two things are directly correlated with efficiency and cost to the consumer. Think about it:

    - Your test was run on specific hardware. You had to know which hardware to choose. Not everybody can.
    - You put together the whole system in probably 10 minutes. Do you have any idea how many so-called techies have no clue where a connector goes on the motherboard? Or even what a motherboard is? :)
    - You said "IPMI enabled"!!! You'd be surprised how many techies couldn't spell it or explain what it is.

    Now, tell them to spin up an AWS instance...and voila! they know that in a heartbeat!

    Now try cramming 100,000 of what you have on your bench, in a data center. :) Can you? Should you? or are there better approaches to scalability?
     
    #4
    T_Minus and tsteine like this.
  5. Hrast

    Hrast Member

    Joined:
    Oct 5, 2013
    Messages:
    31
    Likes Received:
    9
    Your methodology is a little flawed, because you are testing against a burstable instance type (t2 or t3). There's a whole CPU-credit mechanism that, depending on how you launched the instance, has a wildly different impact. The T family has some other behaviors that are configured for its intended use case (intermittent workloads). If you want a more apples-to-apples test, use an M, R, or C instance family ("large" being the smallest instance size). And while the instance size name is "medium", you have to understand the whole scale to see what that means. With M, R, and C instances, each new instance size is double (or so) the previous. For example, the M5 runs from "m5.large" (2 vCPUs) to "m5.24xlarge" (96 vCPUs). I wouldn't consider something a midsize instance until m5.4xlarge, which has 16 vCPUs.
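
    The CPU-credit point can be sketched with a toy model: T-family instances earn credits at a fixed hourly rate and spend roughly one credit per vCPU-minute of full utilization, so sustained 100% load drains the balance. The rates and cap below are illustrative placeholders, not official t2.medium figures:

    ```python
    # Toy model of T-family CPU credits (illustrative numbers, not AWS's exact ones).
    def credit_balance(hours: int, earn_per_hour: float = 24.0, vcpus: int = 2,
                       util: float = 1.0, start: float = 0.0,
                       cap: float = 576.0) -> float:
        balance = start
        for _ in range(hours):
            balance += earn_per_hour           # credits accrued this hour
            balance -= util * vcpus * 60.0     # spent: 1 credit per vCPU-minute at 100%
            balance = min(max(balance, 0.0), cap)
        return balance

    # At 100% load on 2 vCPUs, a full balance drains in ~6 hours with these rates:
    print(credit_balance(6, start=576.0))  # 0.0
    ```

    Once the balance hits zero, the instance is throttled to its baseline, which would make a short benchmark like the one in the first post look very different depending on when it was run.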

    There's also a possible "noisy neighbor" effect on the AWS side, where another instance has consumed enough CPU time to start stealing from other instances (there's a steal percentage in top).
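
    That steal percentage comes from the guest kernel's CPU accounting; on Linux it is the 8th value on the `cpu` line of /proc/stat (the `st` column in top). A sketch of reading it, using an illustrative sample line rather than a real reading:

    ```python
    # Steal time is the 8th value on the "cpu" line of /proc/stat: time the
    # hypervisor spent running someone else while this guest was runnable.
    def steal_fraction(stat_line: str) -> float:
        fields = [int(x) for x in stat_line.split()[1:]]
        steal = fields[7] if len(fields) > 7 else 0
        return steal / sum(fields)

    # Illustrative sample (on a real guest: open("/proc/stat").readline()):
    sample = "cpu 10132153 290696 3084719 46828483 16683 0 25195 175000 0 0"
    print(f"steal: {steal_fraction(sample):.1%}")  # steal: 0.3%
    ```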
     
    #5
  6. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,930
    Likes Received:
    460
    10W is amazingly low power; it seems really too low actually, but close enough to the 18-20W idle I would have expected without the drives.
    People laugh at even the E-2288 being only 8 cores in the day of 64-core AMD chips, but most people really run a lot on 8 cores!
     
    #6
    T_Minus likes this.
