Like many others, I built a dual Xeon E5-2670 system on an Intel S2600CP a few years back.
I have 128 GB of RAM (16 x 8 GB),
3 LSI SAS2308-based 8i SAS controllers connected to a backplane that hosts 24 3.5-inch HDDs,
an Intel P3605 SSD,
an Nvidia Quadro P2000,
and an Intel X710 network card.
I run about 30-40 Docker containers on an Ubuntu system. They include Plex, Radarr, Sonarr, Readarr, Prowlarr, Lidarr, Sabnzbd, qBittorrent, TeamCity, servers for SVN and Git, and some others that don't do much unless interacted with.
On top of that I also run 3 VMs using KVM: a Windows Server and two Windows 10 machines used by TeamCity as build agents. These all sit idle 99% of the time.
The current setup regularly hovers around 300-400 W at "idle" with these containers running.
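For context, here is the back-of-the-envelope math behind my worry about that idle draw. The price per kWh and the ~150 W target for a hypothetical newer build are placeholders, not measured numbers; plug in your own figures:

```python
# Rough yearly electricity cost for a server that idles at a constant wattage.
# All wattages and the price per kWh below are assumptions for illustration.

HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float, price_per_kwh: float) -> float:
    """Yearly cost of a constant draw of `watts`, at `price_per_kwh`."""
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * price_per_kwh

price = 0.30  # placeholder electricity price per kWh

current = annual_cost(350, price)  # midpoint of my 300-400 W idle range
target = annual_cost(150, price)   # hypothetical idle figure for a newer build

print(f"current:  ~{current:.0f} per year")
print(f"target:   ~{target:.0f} per year")
print(f"possible saving: ~{current - target:.0f} per year")
```

At those assumed numbers the difference works out to several hundred per year, which is why the efficiency question matters as much as the performance one.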
Since it has started to show its age and energy costs have gone up, my first thought was to reuse my 3950X machine as the server once I get a 7950X to replace it with. But a consumer-grade platform has such limited PCIe connectivity that it would be hard to plug everything into the motherboard. I almost started contemplating an M.2-to-PCIe adapter to connect one of the LSI cards that way, and then mayyyybbbeeee it would work. But it felt like a hack and not a real solution to upgrading my server.
Then I stumbled across the myriad of 2nd-gen EPYCs available on eBay and started feeling that this is the way to go. I would retain an abundance of PCIe connectivity and could easily transplant all of my peripherals over.
But the question is: what would the difference in performance be, and would I save any energy if I switched to something like an AMD EPYC 7542 on an H12SSL-i? My plan is to go with 8 x 32 GB sticks of memory.
I'm not sure how well they conserve power under low load, or if another model like the 7402 or 7502 would be better suited.
I would like to have the cores and power to last some years into the future, but also end up with a system that is more efficient than what I have today.