One of the servers I use to run my home projects started failing recently, and being a complete broke-ass, I spent a while trying to find a super-cheap replacement that might also be a slight upgrade. The overall goal is to be competitive with my current setup, which is a dual E5-2660 V1 system. I'm going to reuse the RAM from that machine once I get this running, and the machine will eventually run Proxmox. I have no real requirement for rack compatibility, so I wound up deciding to buy a Quanta "Winterfell" Open Compute node (there's a long thread on them here). One of the issues I had when considering buying this thing was the lack of decent pictures of them, so here we are (plus bringup).

If you haven't heard of them, "Winterfell" is the name of the second-generation Intel-based Open Compute Project server nodes. It's a highly unusual chassis design done by Facefuck for their internal use that they wound up releasing, presumably in hopes that they'd get cheaper servers by virtue of more people buying them. The Open Compute Project has substantial online documentation. I went with Winterfell because I want E5 V2 CPUs; in this case, I'm using two E5-2650 V2s (mostly because they're also super cheap).

Here's the server itself:

It's a 4" x 7" x 35" (!!!) box. Yes, they're enormously long. They're designed to slot into a custom rack, where the end of the server shoves onto a set of bus-bars that provide power. The server itself therefore requires 12V at lots of amps, and nothing else. Considering the mobos are specced to support two CPUs at either 95W or 135W TDP (depending on the manual), the overall dissipation is in the range of 250-400W, which translates to roughly 20-33 amps at 12V (back-of-envelope sketch below). The power cabling is unsurprisingly ridiculous.

The overall chassis:

Power input. The whole server is cooled by two 60mm fans powered by the power input board. They run surprisingly quiet; the loudest part of the whole thing is actually the power supply I'm using for the 12V.

These servers have a mezzanine 10Gb SFP+ module. They also have two PCIe slots that can be set up as either one x8 and one x1, or two x4 slots. Physically, it's one x16 slot and one open-ended x8. There's also a location for tool-lessly mounting one 3.5" HDD; I'm going to stick two SSDs in there. The mobo itself has two SATA ports and two headers for a custom power cable to run storage devices. Fortunately, the cable is pretty simple (it's basically a floppy power connector -> SATA power connector), so it should be trivial to make one (see the pinout sketch below).

The bus-bar connector for the chassis is frankly kind of ridiculous and massively overkill here (they're rated to 105 amps!!!!!). One nice thing about the whole "Open" bit of the Open Compute Project is that you can actually find documentation (here is the connector documentation). It's designed for blind-mating.

The server should have an airflow duct, but in my case I had to specifically ask the seller to include it. I have no idea what they think you'd do without it.

My solution to getting 12V at all the amps is pretty simple: just use bitcoin miner crap! It turns out you can buy breakout boards for common power supplies for ~$10. In this case, I'm using a Supermicro PWS-1K21P-1R, mostly because I have it and it's 80+ gold rated.
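Since the node wants nothing but 12V, sizing the supply is just dividing watts by volts. Here's a quick sketch of the numbers above (the 250-400W range is my estimate from the TDP specs, not a figure from the manual):

```python
# Back-of-envelope: current draw at 12V for the estimated dissipation range.
def amps_at_12v(watts, volts=12.0):
    """Current (A) needed to deliver `watts` at `volts`."""
    return watts / volts

for watts in (250, 400):
    print(f"{watts}W -> {amps_at_12v(watts):.1f}A at 12V")
# 250W -> 20.8A at 12V
# 400W -> 33.3A at 12V
```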
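For the storage power cable, this is the mapping I'd expect, assuming the mobo header really does follow the standard 4-pin Berg (floppy) pinout like it appears to. Treat it as a sketch and verify the header with a meter before plugging a drive in:

```python
# Expected wiring for a floppy-power -> SATA-power adapter cable.
# Berg pin numbers follow the standard floppy connector; that the mobo
# header matches it is an assumption -- check with a multimeter first.
BERG_TO_SATA = {
    1: ("+5V",  "SATA power pins 7-9"),    # red wire on a stock cable
    2: ("GND",  "SATA power pins 4-6"),    # black
    3: ("GND",  "SATA power pins 10-12"),  # black
    4: ("+12V", "SATA power pins 13-15"),  # yellow
}
# SATA pins 1-3 (+3.3V) are left unconnected, as on most such adapters;
# the vast majority of drives don't use the 3.3V rail.
```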
I powered the thing up in several steps. First, I checked the polarity of the cables, as I'm using leads from broken power supplies (whenever I have a power supply fail, I open the chassis, cut the leads off, and save them). I then disconnected the power supply board from the main mobo and powered that alone (see the next picture). Finally, I just booted the whole thing. Fortunately, no smoke was emitted.

The next challenge was the fact that it booted, but didn't *do* anything. There's a 1GbE NIC on the mobo, and when connected it appeared to come up (the link lights illuminate, and my switch saw some traffic from it), but plugging it directly into a test machine and trying to tshark the interface or arp-scan it yielded nothing (there's a sketch of that kind of sweep at the end of the post). Either the onboard NIC isn't fully configured, or it's configured to not talk to anything. I stuck a video card in the thing, but that didn't do anything either, so I suspect that either the PCIe connector is configured wrong (there are a *lot* of jumpers everywhere for it), or the crappy old 1-wide graphics card I have is dead (not unlikely; it's been floating about on my workbench for a while).

Now, again, there's a nice thing about the whole "open" bit, as the spec dictates a diagnostic header for the mobo. It has an 8-bit POST status code output and a TTL serial tx/rx pair. Reading the POST code manually yielded 0xAA (decoding sketch at the end of the post), which the manual claims means the boot sequence has jumped to the BMC, but I'd expect the BMC to be the thing that talks on the 1GbE NIC, and it's not talking. Anyways, let's see if the serial port does anything.

Some really horrible dangly wiring (it's a 2mm header, and I only have 0.1" header sockets on hand):

Hmm, it looks like it's talking 57600 baud. Let's hook up a serial interface (there's a minimal reader for it at the end of the post):

It boots! And is generating terminal control codes that confuse PuTTY. I only have RX connected because of the nightmare headers at the moment. I'll grab some 2mm female headers tomorrow at work (fortunately, we use them extensively for hardware there. Convenient!).

My long-term goal here is to assemble a rack chassis that fits two of these plus a power supply in a traditional 3U rack-mount chassis. I'm actually working up a 3D-printable bus-bar support so I can use the node completely stock, without having to do any adapting of the wiring. More to come!
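For reference, here's the sort of ARP sweep I was throwing at the onboard NIC. This is a sketch using scapy rather than the arp-scan tool itself; the interface name and subnet are placeholders for whatever your test setup uses:

```python
# Minimal ARP sweep of a directly-attached interface using scapy.
# Run as root. "eth0" and 192.168.1.0/24 are placeholders.
from scapy.all import ARP, Ether, srp

broadcast = Ether(dst="ff:ff:ff:ff:ff:ff")
answered, _ = srp(broadcast / ARP(pdst="192.168.1.0/24"),
                  iface="eth0", timeout=2, verbose=False)

for _, reply in answered:
    print(reply.psrc, reply.hwsrc)  # IP and MAC of anything that answered
# In my case, nothing answered at all.
```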
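Reading the POST code "manually" just means probing the eight status pins and assembling the bits. Trivial, but here's the arithmetic behind the 0xAA I got (the bit order is my assumption; check the spec for the actual pin-to-bit mapping):

```python
# Probed logic levels on the 8 POST status pins, MSB first (bit order
# assumed -- the OCP spec defines the real pin-to-bit mapping).
bits = [1, 0, 1, 0, 1, 0, 1, 0]

code = 0
for bit in bits:
    code = (code << 1) | bit

print(hex(code))  # 0xaa -> "jumped to BMC" per the manual
```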
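And the serial side: a minimal pyserial reader at the 57600 baud the port turned out to be talking, with a crude regex to strip the terminal control codes that were confusing PuTTY. The device path is a placeholder for whatever your USB-TTL adapter enumerates as:

```python
# Read the diagnostic header's TTL serial output at 57600 baud and strip
# ANSI escape sequences. Requires pyserial; /dev/ttyUSB0 is a placeholder.
import re
import serial

ANSI_ESCAPE = re.compile(rb'\x1b\[[0-9;?]*[A-Za-z]')  # crude, catches most CSI codes

with serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=1) as port:
    while True:
        line = port.readline()
        if line:
            print(ANSI_ESCAPE.sub(b"", line).decode("ascii", errors="replace"), end="")
```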