First post here, but long time lurker. This entire build was motivated by the excellent Open Compute guide provided by NorCalm over in deals.
Fair warning: I'm a developer, not an ops guy, so apologies for improper terminology and a slightly janky setup.
Background
My primary development machine for the last few years has been a MacBook Air, which is decidedly underpowered. It was fine when I first joined the company, but as I've moved on to new projects it struggles with our builds/tests/benchmarks. We make a clustered search/analytics engine, so a lot of our tests involve spinning up multiple nodes and can be pretty resource-intensive. To date I've been renting a beefy server at Hetzner for these kinds of tasks.
I was recently given budget to update my setup. Most people get a MacBook Pro or build a desktop. I opted for a third option: build a mini cluster based on E5-2670s.
The Build
Given how cheap E5-2670s are, the challenge was finding cheap motherboards. I opted for the Open Compute route and essentially followed this STH guide. The chassis + 2x motherboards + 4x heatsinks + power supply all together was cheaper than most LGA2011 motherboards alone, which made it a hard deal to pass up.
Final price came out to $504 per node ($469 without shipping).
Since I don't really have any home infra other than a FreeNAS tower, I wasn't worried about the non-standard rack size. I'll probably build a true homelab some day, but my current living circumstances (renting, moving soonish) prevent me from expanding. I'll deal with other rack sizes later.
I'm currently using one node as a desktop machine and leave the other three powered off unless I need them. The BIOS claims to support suspend-to-RAM, but I'm still trying to get that to work. For now they are either physically off or suspended to disk. I'm also working to get SOL (Serial over LAN) and WOL (Wake-on-LAN) going so that cluster booting is a bit more automated.
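In the meantime, here's roughly what I have in mind for waking the cold nodes from the desktop node: a small script that broadcasts a Wake-on-LAN magic packet. Just a sketch for now; the MAC addresses are placeholders, and it assumes WOL is actually enabled in the BIOS and on the NIC (e.g. `ethtool -s eth0 wol g`), which I haven't fully verified on these boards yet.

```python
# wake_nodes.py - send Wake-on-LAN magic packets to the powered-off nodes.
# MAC addresses below are placeholders; assumes WOL is enabled in the BIOS
# and on the NIC.
import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    # A magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
    finally:
        sock.close()

if __name__ == "__main__":
    # Placeholder MACs for the three nodes that usually stay off.
    for mac in ("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:03"):
        wake(mac)
```

If that pans out, kicking off an overnight run could just be "wake the three nodes, wait for SSH, start the test suite" instead of walking over and poking power buttons.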
Node specs:
- 2x Xeon E5-2670
- 96GB DDR3-1333 ECC RAM (12 of the 16 slots used)
- 1TB 7200 RPM Seagate HDD
- Ubuntu 14.04.4 LTS server / desktop
Cluster stats:
- 64 cores, 128 threads with hyperthreading
- 384GB RAM
- 4TB HDD
Practical considerations
- Temperatures idle around 32-35C. At 100% utilization they crank up to ~93C before the fans kick into high gear, then settle down to ~80C (there's a small monitoring sketch after this list).
- Noise is a pleasant 35dB idle, which honestly is quieter than the Thinkserver on the floor. At 100% utilization the fans go into "angry bee" mode and noise rises to around 56dB. It's loud, but not unbearable, and I'll only be using the full resources for long tests, likely overnight. While idling, the 60x60x25 fan in the PSU is far louder than the node fans and gives off a bit of a high-pitched whine due to its smaller size.
- I'm unsure of power consumption since I don't own a Kill A Watt (yet). The spec sheet claims 90-300W per node depending on activity, so probably around 360-1200W when the full cluster is running, plus some overhead from the step-up transformer.
- As someone who has never worked with enterprise equipment before... these units were a pleasure to assemble! The only tool required was a screwdriver to install the heatsinks; everything else is finger-accessible. Boards snap into place with a latch, baffles lock onto the sushi-boat, the PSU pops out by pulling a tab, and hard drive caddies clip into place. Just a really pleasant experience; I kept saying "oh, that's a nice feature" while assembling the thing.
- The only confusing aspect was the boot process and what the lights mean. As NorCalm discovered, blue == power, followed after a few seconds by yellow which equals HDD activity. If you boot and it stays blue, something is wrong. In my case it was a few sticks of bad RAM.
- Nodes are configured to boot in a staggered sequence: the PSU turns on, the fans slow down, the first node powers on, and about 30 seconds later the second node powers on.
- There are two buttons on the board: red (power), grey (reset). I think... it's confusing because the boards like to reboot themselves if they lose power, even when you manually switch them off. The power-on preference can be changed in the BIOS; the default is "last state". I haven't played with it yet.
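On the temperatures above: since the long runs will mostly happen overnight, I want a cheap way to log how hot the packages actually get while I'm not watching. Below is a minimal polling sketch that just reads whatever sensors Linux exposes under /sys/class/hwmon; it's generic (labels and paths vary by board and driver), not anything OCP-specific.

```python
# poll_temps.py - periodically print CPU/package temperatures from hwmon.
# Generic Linux sketch: it reports every temp*_input it can find, since
# sensor names and paths differ between boards and kernel drivers.
import glob
import time

def read_temps():
    readings = {}
    for path in sorted(glob.glob("/sys/class/hwmon/hwmon*/temp*_input")):
        try:
            with open(path) as f:
                value_c = int(f.read().strip()) / 1000.0  # values are in millidegrees C
        except (IOError, ValueError):
            continue
        # Use the driver-provided label if there is one (e.g. "Core 0").
        label = path
        try:
            with open(path.replace("_input", "_label")) as f:
                label = f.read().strip()
        except IOError:
            pass
        readings[path] = (label, value_c)
    return readings

if __name__ == "__main__":
    while True:
        temps = read_temps()
        if temps:
            hottest = max(t for _, t in temps.values())
            summary = "  ".join("{0}={1:.0f}C".format(l, t) for l, t in temps.values())
            print("max {0:.0f}C | {1}".format(hottest, summary))
        time.sleep(5)
```

Piping that into a file during an overnight run should be enough to confirm the ~80C steady-state number instead of eyeballing it.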

The components all laid out.
- PCI USB hub
- Radeon HD6350
- 4x Seagate 1TB 7200 RPM HDD (ST1000DM003)
- TP-LINK TL-SG108 8-Port unmanaged GigE switch
- SanDisk Ultra II 120GB SSD
- 50x Kingston 8GB 2Rx4 PC3L-10600R
- 8x Xeon E5-2670
- ELC T-2000 2000-Watt Voltage Converter Transformer

One of the OCP nodes, unpacked before installation of components.

The final setup, including the second node. Setup is temporary for now, until I can build a rack.
The first node sits on three strips of neoprene rubber. The second node sits on top of the first, also separated by a layer of neoprene. The top is covered by some cardboard (temporary). It isn't needed for airflow, since the nodes have plastic baffles... It's just for my peace of mind, so I don't accidentally drop/spill something.

128 threads burning power
More photos of the build: