Wow, this thread is still going.
The platforms are often what consume most of the power, not the CPUs, unless under load. Nowadays we have 50-100 core server CPUs rated to pull a ton of wattage each - hence more heat and larger power supplies - but annoyingly, for most non-parallel workloads, today's server CPUs are not really faster than CPUs from a decade ago. Clock for clock, yes, but at the extreme end, if I spin up a machine in Azure or AWS, which carries load from other customers and doesn't seem to offer noticeable turbo speeds, the thing runs at a pathetic 2.3GHz - a Nehalem can do a similar amount of work for lightly-threaded workloads. Even better if we bump up to Sandy or Ivy Bridge, which is still more than 10 years old. Yes, there are lower-core, higher-clocking server CPUs today too, but they also have a higher TDP than the CPUs we used 10-15 years ago.
Most "server" boards, controllers and all, seem to idle in the 150-200W range, whether new, or anything going back to the Core2 era. That's why I try to stick to ATX boards in supermicro cases
I do have a couple of lower-power prebuilt 1U machines from Dell and Lenovo with Xeon E3 v5/v6 CPUs that idle in the 40-50W range, but those are more like workstation components stuck into a rack chassis - the same type of thing I do in the Supermicro cases.
For my Supermicro-chassis builds, MB + CPU + ATX PSU idles at 20W. Switch to a single 920W Supermicro PSU, now at 33W. Add a second for redundancy, then 42W. Add a RAID controller - up to 50W. Plug in the backplane - 63W idle with just a single M.2 SSD. Add powered drives and it goes up further. Then there are prebuilts using proper server components, buffered RAM, etc., that idle at the 150W+ mark without drives. That adds up to a lot more heat, and of course more load on the circuit. Multiply that by numerous servers and it becomes annoying in a homelab. Some efficiency can be had by going 240V.
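To put rough numbers on how that idle draw stacks up over a year, here's a quick back-of-the-envelope tally in Python using the deltas I measured above. The electricity rate and 24/7 uptime are assumptions for illustration, not measurements.

```python
# Idle-power deltas from my meter readings on one Supermicro-chassis build.
IDLE_DELTAS_W = {
    "MB + CPU + ATX PSU":         20,  # baseline
    "swap to 1x 920W SM PSU":     13,  # 20W -> 33W
    "second PSU (redundancy)":     9,  # 33W -> 42W
    "RAID controller":             8,  # 42W -> 50W
    "backplane + 1x M.2 SSD":     13,  # 50W -> 63W
}

RATE_PER_KWH = 0.15        # assumed $/kWh, adjust for your utility
HOURS_PER_YEAR = 24 * 365  # assumes the box idles 24/7

total_w = sum(IDLE_DELTAS_W.values())
kwh_per_year = total_w * HOURS_PER_YEAR / 1000
print(f"Idle draw: {total_w} W")
print(f"Per year:  {kwh_per_year:.0f} kWh (~${kwh_per_year * RATE_PER_KWH:.0f})")

# Multiply by several servers (or a 150W+ prebuilt) and the heat and circuit load add up fast.
for n in (1, 3, 5):
    print(f"{n} server(s) at idle: {n * total_w} W, ~${n * kwh_per_year * RATE_PER_KWH:.0f}/yr")
```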
I've been frustrated for a long time that software bloat has been far outpacing per-core and IPC performance improvements. So much isn't multi-threaded well, and the software we use, workstation or server, is getting frustrating to use. It wasn't until recently, with AMD's Zen 3 and Zen 4, that we started to see some tangible gains again. From Skylake until about 3 years ago, it felt like we saw nothing tangible! Six years of stagnation, other than core counts increasing.
Just some random tangential writing to stop the circular argument above.