Looking for an efficient CPU+MB


Stephan

Well-Known Member
Apr 21, 2017
Germany
Since there's a constant stream of people landing on this post (wazzup Google...) a bit of a followup. Meanwhile I built a fourth LGA 3647 system with the same 8259CL CPU but an X11SPL-F mainboard, switched to Micron DDR4-2666, and stayed with the ConnectX-3. Idle power is more like 60 watts.

The good: Inexpensive board with enough PCIe slots. My custom kernel compiles in 11 minutes. Supermicro keeps producing BIOS and IPMI updates; ASRock Rack, while its board works, produced nothing. Stable platform, I couldn't crash it. Supermicro and Intel go way back and you can tell. You do have to keep an eye on temperatures though, especially the RAM and the CPU voltage regulator (VRM). I have two 140mm PWM fans blowing straight over the PCIe cards and RAM and under the CPU heatsink towards the VRM, plus one more 140mm Noctua industrial at 2000rpm on the back of the tower case. No longer a job for quiet 1000rpm Noctuas, I'm afraid. But the PWM fans are ramped nicely by the IPMI: the machine is very quiet when doing nothing and sounds like a mid-sized colony of angry bees when doing something intensive. The VRM peaks at around 75 deg C in European summer heat. I tested one Optane PMem 100 module on the board, also working. Two Intel i210 ethernet chips; you don't even have to turn off half the offload/acceleration features on these to keep connectivity stable.
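If you want to keep those temperatures in view without staring at the IPMI web page, here is a minimal monitoring sketch. It assumes ipmitool is installed and run with BMC access (e.g. as root); the sensor-name substrings and the 75 deg C threshold are my assumptions, check `ipmitool sdr type Temperature` output for your board's actual sensor names.

[CODE]
#!/usr/bin/env python3
"""Minimal temperature watchdog sketch using ipmitool.
Sensor-name substrings and the alert threshold are assumptions;
adjust them to the output of `ipmitool sdr type Temperature`."""
import subprocess
import time

WATCH = ("VRM", "DIMM", "CPU")   # substrings of sensor names to watch (assumed)
LIMIT = 75.0                     # alert threshold in deg C (assumed)

while True:
    out = subprocess.run(["ipmitool", "sdr", "type", "Temperature"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # typical line: "VRM Temp | 31h | ok | 7.17 | 64 degrees C"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 5 or "degrees C" not in fields[4]:
            continue  # skip sensors with no numeric reading
        name, reading = fields[0], float(fields[4].split()[0])
        if any(w in name for w in WATCH) and reading >= LIMIT:
            print(f"WARNING: {name} at {reading:.0f} C")
    time.sleep(30)
[/CODE]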

The bad: While it only saves a few percent in cost, there's no x16 slot for a GPU. The VRM needed the MCP2221A I2C reprogramming treatment to run a 255A TDC CPU, and the BIOS also needed a TDP patch; I knew that before buying. Not sure what super-important secret code Intel is running in the C62x series of chipsets, but they're hot and wasteful space heaters.

The ugly: Nothing yet.

I also benchmarked a non-Pro ConnectX-3 using Linux network-namespace tricks to force traffic out of one physical port and back in through the other (sketch below). With an FDR (56 Gbps capable) cable and four threads, ntttcp reports 44-48 Gbps throughput. If you read old articles from 2011 like "Mellanox forges switch-hitting ConnectX-3 adapters" you will see them talking about "below 3 watts" per port. Recent 2.5 Gbps cards struggle to achieve that efficiency 12 years later: a huge heatsink and a fan for only 2.5 Gbps. For anything beyond 1 Gbps, if you can, skip twisted-pair ethernet and go straight for SFP+, QSFP, etc. and 25/40/56/100/200/400 Gbps.
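For anyone wanting to reproduce the loopback benchmark, here is a minimal sketch of the namespace trick. It assumes iproute2 and root; the interface names, the namespace name, and the addresses are placeholders for whatever `ip link` shows on your system.

[CODE]
#!/usr/bin/env python3
"""Minimal sketch of the namespace loopback trick: hide one NIC port in
its own network namespace so traffic between the two ports must leave
the host over the cable instead of being short-circuited by the kernel.
Interface names, namespace name, and addresses below are assumptions."""
import subprocess

NS = "bench"                             # hypothetical namespace name
PORT_A, PORT_B = "enp1s0", "enp1s0d1"    # the two ConnectX-3 ports (assumed)
ADDR_A, ADDR_B = "10.99.0.1/24", "10.99.0.2/24"

def sh(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Move port B into its own namespace; with the two ports in different
# namespaces, the kernel has to route between them over the physical wire.
sh(f"ip netns add {NS}")
sh(f"ip link set {PORT_B} netns {NS}")

# Address and bring up both ends.
sh(f"ip addr add {ADDR_A} dev {PORT_A}")
sh(f"ip link set {PORT_A} up")
sh(f"ip netns exec {NS} ip addr add {ADDR_B} dev {PORT_B}")
sh(f"ip netns exec {NS} ip link set {PORT_B} up")

print(f"Start the ntttcp/iperf3 receiver inside '{NS}' "
      f"(ip netns exec {NS} ...) and run the sender against "
      f"{ADDR_B.split('/')[0]} from the default namespace.")
[/CODE]

Tear down afterwards with `ip netns del bench`; deleting the namespace returns the physical port to the default namespace.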
 

UnknownPommes

Active Member
Aug 28, 2022
Since there's a constant stream of people landing on this post (wazzup Google...) a bit of a followup.
:p

OP here, a little update from my side that might help some people. I ended up getting a Supermicro X11DPI-NT board for around 270€ and the cheap Xeon 4108 I mentioned (65€), everything powered by a 750W Corsair RMx. It worked perfectly and draws around 65W idle in Proxmox and around 75W with a couple of VMs running, measured at the PDU. Just keep in mind that since only one CPU is populated, only the first three PCIe slots are usable (2x x8 and 1x x16).

A couple of months later I got a nice deal on two Xeon Platinums, so that server switched roles to a render server and I needed a replacement for my low-power endurance system. After a bit of looking I settled on a Supermicro Xeon D embedded board (X10SDV-TLN4F): 8c/16t, full IPMI and RDIMM capability, dual 10Gb NIC, 20 usable PCIe Gen3 lanes, and 6 SATA ports, all at 25W idle and around 40W under medium load at the wall. 10/10 would recommend if you get a decent deal on one. There is also a 4-core version that is currently serving as my NVR.

Also a little note here: look into solar / a small PV setup, it can offset a lot of the power costs.
 