Can't decide - Converge into bigger server(s) or keep multiple smaller ones?


kapone

Well-Known Member
Folks, I'm in the middle of redesigning my home (and production) lab, and having a tough time deciding on a convergence strategy. The goal (like everyone else's) is maximum performance at the lowest power consumption, while spending the least amount of money.

I can "converge" multiple, physical server(s) into bigger boxes with dual CPUs, but after going blind on Intel's ARK site, comparing things, I'm still at a loss. If I go for higher core count CPUs, I lose clock speed, need (potentially) more expensive motherboards, and the power savings may not be worth it. This is purely from a compute perspective, features (like VT-d, AES-NI, vPro etc) play very little part in this.

What would you suggest is the best "bang for buck" compute-only motherboard/CPU combo in 2018? I'm not limited by rack space at all, although somewhat dense would be nice. I can even jury-rig DIY chassis for custom board form factors.

Many el-cheapo, small compute nodes or a few big ones?
 

Netwerkz101

Active Member
You may want to post the hardware you're currently working with and what your production/lab workloads look like.
 

Rand__

Well-Known Member
A use-case description is key - some systems need many boxes (vSAN/Ceph etc.), some just need RAM and/or tight integration...
 

kapone

Well-Known Member
Well, this is going to be a web cluster. Its only function is to serve web pages. The only functionality required on the boxes is one or two 10G NICs, which can be added.

My initial capacity planning says to start with 100 cores and ~4GB of RAM per core, leaving some room for expansion. Like I said, nothing more is required from these boxes/servers.

I can build/buy 50 boxes with a dual-core i3 in them, or I can build/buy 6 boxes with 16 cores each. The question is: which is more bang for the buck?
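
To make the comparison concrete, here's a quick back-of-the-envelope script for the two layouts; the per-box prices are placeholder assumptions for illustration, not quotes:

```python
# Back-of-the-envelope for the two layouts above.
# Per-box prices are placeholder assumptions, not quotes.
RAM_PER_CORE_GB = 4

layouts = [
    # (description, boxes, cores_per_box, assumed_cost_per_box_usd)
    ("dual-core i3 nodes", 50, 2, 70),
    ("16-core E5 v2 boxes", 6, 16, 900),
]

for desc, boxes, cores, cost in layouts:
    total_cores = boxes * cores
    total_cost = boxes * cost
    print(f"{desc}: {boxes} boxes x {cores} cores = {total_cores} cores, "
          f"{cores * RAM_PER_CORE_GB}GB RAM/box, "
          f"~${total_cost} total (~${total_cost / total_cores:.0f}/core)")
```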
 

Evan

Well-Known Member
Pure bang for buck at 100 cores (real cores or virtual cores?) would probably be E5 v1/v2,
16-20 cores per dual-CPU box.

That's a lot of compute power!!!
 

kapone

Well-Known Member
Pure bang for buck at 100 cores (real cores or virtual cores?) would probably be E5 v1/v2,
16-20 cores per dual-CPU box.

That's a lot of compute power!!!
That's what I'm leaning towards. I even have a bunch of these Tyan S7067 boards coming in. The uncertainty came when I started costing out the CPUs and RAM for them.

Getting 16-20 cores of E5-xxxx v2 is easy; getting 16-20 *fast* cores is a lot trickier and more expensive. A 3.3GHz dual-core i3 can be had for ~$20, while a 3.3GHz 8-core E5 costs well north of ~$80.
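
Rough per-core math on those two (the i3 price is the one above; the E5-2667 v2 is just one example of a 3.3GHz 8-core part, and its price here is an assumption):

```python
# Cost per core and per GHz-core for the parts discussed above.
# The i3 price is from the thread; the E5-2667 v2 price is assumed.
cpus = {
    "i3-3220 (2c @ 3.3GHz)":    (2, 3.3, 20),
    "E5-2667 v2 (8c @ 3.3GHz)": (8, 3.3, 250),  # assumed price
}
for name, (cores, ghz, price) in cpus.items():
    print(f"{name}: ${price / cores:.0f}/core, "
          f"${price / (cores * ghz):.2f}/GHz-core")
```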
 

Rand__

Well-Known Member
I originally had a post waving the flag for many smaller boxes, before I realized that we are looking at ~100 cores.

In this case, I think the question is whether you can cope with the overhead of many boxes, both hardware (network, power supplies, UPS) and software (updating, deploying new versions).
If that's no trouble (existing hardware and automation capabilities), then go for the best bang for the buck.

If you don't have that, then it's probably better to go for a few (but not too few) bigger boxes to reduce the overhead (at the cost of a higher impact per box).
 

kapone

Well-Known Member
See, the strategy that _alex mentioned is exactly why the decision-making process is so difficult. That Supermicro box is amazing for the density it offers, but it is certainly not cheap. Even at the fire-sale price it's being sold for, it's $1800 without any CPUs/RAM.

I can outfit an el-cheapo i3-based node with motherboard, CPU (i3-3220), RAM (8GB), heatsink, and PSU for about $60-70. That's nowhere near as dense or power-efficient as the Supermicro, but I can still DIY two of these into 1U of space.

For $1800, I can already deploy about 25 of these boxes (which gives me 50 cores), whereas with the Supermicro I still need to buy CPUs and RAM. The 10G NICs are needed in both cases, so that's a wash. But 25 of these boxes will certainly use more power than a converged solution like the Supermicro.

I don't need things like SAS3/hot-swap etc. on these nodes. They are diskless: they PXE boot and go on their merry way. Software is all open source, so no licensing costs. Management is certainly more difficult (no IPMI etc.), and switch costs go up as the node count increases (hello, Arista 7050-QX...), but the flip side is that these nodes will be more performant than an equivalent number of converged cores, since fewer cores share each network pipe.
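
A quick sanity check on those numbers; the node cost and chassis price are the figures above, while the per-node wattage is an assumption:

```python
# Sanity check on the DIY-node math above. Node cost and the $1800
# chassis price are from the thread; per-node power draw is assumed.
CHASSIS_PRICE = 1800   # bare Supermicro, still needs CPUs/RAM
NODE_COST = 70         # i3-3220 + board + 8GB + heatsink + PSU
NODE_CORES = 2
NODE_WATTS = 35        # assumed average draw per i3 node

nodes = CHASSIS_PRICE // NODE_COST   # nodes for the same outlay
rack_units = (nodes + 1) // 2        # two DIY nodes per 1U
print(f"{nodes} i3 nodes: {nodes * NODE_CORES} cores, "
      f"~{nodes * NODE_WATTS}W, ~{rack_units}U of rack space")
```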
 

_alex

Active Member
With 'el cheapo' vintage i3 boxes for web hosting, I'd wait a bit until the dust around Meltdown/Spectre settles.
It might turn out that deploying anything below Haswell is no longer a good idea at all.
 

Rand__

Well-Known Member
Unless you use mainboards with long-term support, whose vendors will implement the necessary microcode patches in a new BIOS...
 

_alex

Active Member
Assuming there will be microcode updates for the CPU in question, and the CPU doesn't lack PCID/INVPCID (which help limit the performance impact), the vendor still needs to supply a new BIOS on top of that.
Personally, I wouldn't buy anything at the moment until it's clear what will work with a reasonable impact on performance (and therefore power draw) and what won't.
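
For what it's worth, checking a given box for those flags is easy on Linux; note this only reports CPU capability, and says nothing about whether a microcode/BIOS update will actually ship:

```python
# Check /proc/cpuinfo (Linux, x86) for the PCID/INVPCID flags that
# reduce the performance cost of the Meltdown (KPTI) patches.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()

for flag in ("pcid", "invpcid"):
    print(f"{flag}: {'present' if flag in flags else 'missing'}")
```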