Yeah, my lab keeps getting bigger, and so does the electric bill. I often wonder if it wouldn't be cheaper in the long run to pick up ~5 R630s and get rid of my 7 desktops/servers.
To echo @infoMatt, this was one of the primary reasons I consolidated into a single BFM (Big Fracking Machine). All of the machines I had running were pulling something like 400 to 500W at idle, and dumping a LOT of heat into the closet. I'm on my third build of the BFM right now and finally got the power levels decent (~75W).
The other two primary reasons were heat generation and exploit/attack surface (I'm in CyberSec, and I just can't stand running insecure hardware/ports, no matter how "low" the threat model).
Previously, I built an Intel 2690 v3 (12C) engineering sample system with 128 GB of Buffered ECC DDR4. It idled at 75W with 14 HDDs, thanks to the 9W LSI card and the 12W backplane in the Supermicro SC846 chassis. It just didn't have enough cores.
Then I went with dual 2640L v4 ES chips (14C each, 28C/56T total) in a Supermicro board with no IPMI (to remove those CVEs from my scans; I rigged up an RPi with a serial console instead to power it on/off remotely over an air-gapped VLAN). Got the CPUs cheap on eBay since they were ES. Same 128 GB of Buffered ECC DDR4. Idle was now 81W due to the chipset and dual CPUs, along with a Q2000 Nvidia card for hardware encoding/decoding of movie streams. If I removed one CPU, I could get it down to around 70W idle. That led me to my collection of UP Board SBCs, which now handle most of my home infra and app needs.
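For anyone curious about the no-IPMI console trick: a minimal sketch of what the RPi side can look like, assuming a USB serial adapter on the board's COM header and the classic ser2net config format (the TCP port and device path are just example values, not my actual setup):

```
# /etc/ser2net.conf on the RPi (classic one-line format)
# Exposes the server's serial console on TCP port 3001,
# reachable only from the air-gapped management VLAN.
3001:telnet:600:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
```

Remote power on/off needs a separate hookup on top of this, e.g. an RPi GPIO pin driving a relay or optocoupler across the front-panel PWR_BTN header.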
Finally, I got sick of the Intel exploits, re-disabling Intel ME after each BIOS update, etc., and went all-in with AMD.
The current server is an AMD Threadripper 2950X (16C/32T), more than double the MHz of the engineering sample Intel chips above (and more than double, sometimes 3x, the CPU Mark ratings per core!). I replaced the port-expander backplane in the SC846 chassis with the JBOD version (saving 12W and giving 24 direct SATA hookups), dumped the LSI card (~9W saved) in favor of 10 onboard SATA ports, and replaced the 12x 4TB drives with 8x 10TB and 12TB drives (the most expensive upgrade of them all; I've done it piecemeal over the last two years, slowly...). I also had to switch to 64 GB of Unbuffered ECC DDR4, as the AMD X399 chipset only supports Unbuffered, not the Buffered ECC I had in the Intel boards.
But I did notice I wasn't using that much RAM any longer, with all the UP Boards I have running in clusters.
I'm now down to ~75W idle with that beast; AMD Threadripper and X399 aren't as power efficient at idle as the Intel chipsets. In a few years I may downscale to a 16C Ryzen consumer model, as this was kind of overkill and the newer 3000 series is just about as powerful, with a much lower TDP.

Each 4-board cluster of the UP Squared Pentium versions hovers around ~8W at idle. But they're hardly ever idle: always processing some NZB, running a Tor relay and services, plus constant metric/log/SNMP collection. All in all, averaged over 24 hours, I'd say each cluster draws around 15 to 20W. Remember, that's running a couple dozen Docker images and services, log collection and parsing, nightly metric compaction, etc. They get heavy, heavy use. I have two clusters I switch between, as I'm constantly tearing down and rebuilding the k8s clusters in my homelab for work. I have a pretty good immutable setup now using PXE boot images created with Packer, so I can easily replace a single bare-metal UP Board at any time (I really need to blog this setup).
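The Packer side of an immutable PXE setup like that can be sketched roughly as follows. This is a hedged example assuming the QEMU builder and an Arch ISO; every name, path, and provisioning step here is a placeholder, not the actual config:

```hcl
source "qemu" "upboard" {
  iso_url          = "https://example.org/archlinux.iso"  # placeholder URL
  iso_checksum     = "none"                               # use a real checksum
  disk_size        = "8G"
  format           = "raw"
  ssh_username     = "root"
  ssh_password     = "packer"                             # build-time only
  shutdown_command = "poweroff"
}

build {
  sources = ["source.qemu.upboard"]

  # Bake in everything a node needs so it boots ready to work.
  provisioner "shell" {
    inline = ["pacman -Sy --noconfirm docker"]
  }
}
```

The idea is that the baked image is what the PXE server hands out, so replacing or rebuilding a board is just a netboot into a known-good state instead of a manual reinstall.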
I run the latest Xen on Arch Linux on everything for the latest CVE fixes.
Which leads me to this thread about Brocade switches... I need to get a handle on my VLANs and add PoE for new security cameras, as well as shave off even more wattage. Thanks to this thread, I'm now building up a single ICX 7250-48P for that task, to replace a pile of PoE, managed/unmanaged, and dumb switches.
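For the camera VLAN on the ICX 7250, the FastIron config should look something along these lines (the VLAN ID and port ranges are made up for illustration; adjust for your own layout):

```
configure terminal
vlan 30 name cameras by port
 untagged ethernet 1/1/1 to 1/1/8
 tagged ethernet 1/1/48
exit
interface ethernet 1/1/1 to 1/1/8
 inline power
exit
write memory
```

`inline power` enables PoE per port, and `show inline power` is handy afterwards to confirm what the cameras are actually drawing.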