Compact rack shelf system build


vrod

Active Member
Jan 18, 2015
Greetings everyone. I am currently starting a company and of course that involves MONEY! That's why I decided to take a less traditional approach to building and designing my servers. It's a hosting business, so compute power is greatly needed, together with some sort of density to minimize the cost of the equipment.

Back in the day when I lived in Denmark, I had a company as well but shut it down when I moved to Germany. I used to purchase 1U/2U cabinets for my boards, but this time I decided to be a little more creative.

Just the mandatory specs stuff:

Build’s Name: 1337-Shelf (Version 1)
Operating System/ Storage Platform: Any OS
CPU: E5-2670 (2 pieces)
Motherboard: S2600CP
Chassis: None :D
Drives: Single 950 PRO SSD
RAM: 128G (256G is not an issue)
Add-in Cards: 2 x QLE2460's
Power Supply: 2U Zippy Emacs
Other Bits: 2 x 80mm temp controlled fans


Meet the 1337-Shelf! (yes I just made up that name):

The shelf is made up simply of the board, a plate underneath the board, a single TFX or 1/2U EPS PSU and the shelf itself. Of course I have attached fans as well to move the cold air from the front of the rack to the rear of the system.

This shelf is based on an S2600CP4 board with dual E5-2670s. As PCIe devices I have a couple of QLE2460s as well as a Samsung 950 PRO to host my pfSense firewall instance. The host is supposed to run virtual machines residing on a SCST SAN, hence no drives. ESXi boots off a USB stick, which is attached behind all the cabling to the "inside" USB plug on the board.

So... back to the shelf...

The idea behind this design is to:

1) Minimize costs - With this I don't need to purchase an expensive, unnecessary rack cabinet plus accessories. I got 2 of these 2U shelves for 40€.
2) Increase density - This design is supposed to be modular as well and to hold 2 of these systems within the same rack units (more on that later).
3) Be geeky - Of course why not

Apologies for the lack of illustrations as of now; I'm working on that. The shelf you see here is the front-version of the 1337-Shelf. This version is mounted with normal screws (like you would mount any shelf) on the front of the rack, so cables will be going to the front as well. However, the only cables I have running are a couple of network cables and the mandatory Fibre Channel cables for my ESXi storage.

The shelf is not too deep, which makes it possible to mount a second system on the rear of the rack. This is where the "rear-version" comes into play. The board and PSU are positioned the same way as on the front, but the CPU and SYS fans are turned the other way to let the airflow continue through and out of the rack at the rear. This makes it possible to have 2 dual-2670 systems WITH 6 low-profile PCIe cards in just 2U of space! (awesome...)

In the first samples I am using some kind of "sawable" plate. It may look like wood but it's some artificial stuff that (luckily) can't catch fire. However, my goal is to get some custom thin plastic plates made for the boards, especially because sawing those plates is not much fun :)

The plate is cut to the edge of the system board I put in, plus half a centimeter of space to avoid any sides touching the metal shelf or PSU. I have drilled holes as well to be able to zip-tie the board onto the plate. Afterwards I simply place the plate with the board onto the shelf, with the PSU on the side, and connect the components. None of my 1337-Shelves needs any fastening between plate and shelf; they stay in place very well.

To cool the components other than the CPUs (pretty much only the RAM in my case, since I disable the HDD controllers and other unused stuff), there are two 80mm fans on the front of the shelf. I use Arctic F8 TC fans, which have a built-in temperature sensor.

I place these on the board itself, in front of the components (or behind them for the rear-version), so they regulate their speed depending on what their temperature sensor tells them. This has given me pretty decent results for the cooling. Of course there is a minor hot spot in the middle between the 2 shelves, but nothing that could cause issues. If you don't have any PCIe cards which need a cable connected, you can cover that area with even more fans!

Currently I am working with the 2U version as in the picture, but in the future I will try to make a 1U version, which would technically double the density! There are just some things I need to figure out, especially the front fans, since I don't want those crazy little 40mm fans. :)

Operation and power consumption have been very positive. With the dual CPU setup I've hit about 70W of idle power (with the hungry 2U PSU). If I add the 950 PRO and the fibre channel cards, I land at about 85W of usage. This is when ESXi has fully loaded. Under full load I've seen around 230W, but I guess this is to be expected since the system is packed with 2 x 115W TDP processors. :)
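
For anyone curious what those numbers mean over a year, here is a quick back-of-the-envelope sketch in Python. The wattages are the ones measured above; the electricity price and the idle/load split are placeholder assumptions of mine, so plug in your own.

```python
# Rough yearly energy figures per node, using the wattages measured above.
# The electricity price and the idle/load split are placeholder assumptions,
# not measurements -- adjust them for your own colo contract.

IDLE_W = 85               # ESXi loaded, 950 PRO + FC cards installed
LOAD_W = 230              # fully loaded, 2 x 115W TDP CPUs
PRICE_EUR_PER_KWH = 0.30  # assumed electricity price
LOAD_FRACTION = 0.25      # assume the node is busy 25% of the time

avg_w = LOAD_W * LOAD_FRACTION + IDLE_W * (1 - LOAD_FRACTION)
kwh_per_year = avg_w * 24 * 365 / 1000
cost_per_year = kwh_per_year * PRICE_EUR_PER_KWH

print(f"average draw: {avg_w:.0f} W")            # ~121 W
print(f"energy/year : {kwh_per_year:.0f} kWh")   # ~1062 kWh
print(f"cost/year   : {cost_per_year:.0f} EUR")  # ~319 EUR
```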

Stability-wise I have nothing to complain about. It just runs really well and uses very little power. Since these are 24/7 systems, it's especially nice to save some extra bucks with a "green" system which uses significantly less power than most OEM servers. I saw some people who liked to use a full plate (going from front to rear) for multiple systems, but unfortunately that removes the "modularity" which we have with this shelf. Another good thing is that the IO connections face outwards, so it's easy to connect/disconnect cables.

The design could definitely be used for other boards as well, like a quad Mini-ITX setup with some i7s. :)

I guess there's not much else to say, other than that I think I'll be going forward with this design for my future nodes. I'm currently looking to get a 20U rack at Interxion; if I subtract 8U for storage/network, then I'm looking at being able to pop 12 dual systems into the remaining 12U, and 24 if I finish up the 1U version. :)
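
To make the density math explicit, here's a tiny Python sketch of that calculation. The rack size, the 8U reserved for storage/network and the two-systems-per-shelf trick are the ones from this thread; nothing else is assumed.

```python
# Systems per rack for the front+rear shelf trick described above.

RACK_U = 20                # planned rack at Interxion
RESERVED_U = 8             # storage + network
SYSTEMS_PER_SHELF = 2      # one front-version + one rear-version per shelf

for shelf_u in (2, 1):     # current 2U shelf, planned 1U shelf
    shelves = (RACK_U - RESERVED_U) // shelf_u
    systems = shelves * SYSTEMS_PER_SHELF
    print(f"{shelf_u}U shelves: {shelves} shelves -> {systems} dual-CPU systems")
# -> 2U shelves: 6 shelves -> 12 dual-CPU systems
# -> 1U shelves: 12 shelves -> 24 dual-CPU systems
```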

I might just build a couple more 2U shelves, then move on to the 1U version. Let me know your thoughts and comments, anything is welcome. :D
 

YetAnotherMinion

New Member
Mar 22, 2016
You might want to look at the Open Compute Windmill nodes (Potential Deal: 2 x Dual 2011 nodes @$199, Quanta Openrack). You can probably get twice the compute density of your setup. You do have to make a custom blade-style chassis to fit them into a regular 19" rack, but if you do, you can fit 12 system boards into 11U. Each board has 2 LGA2011-R sockets and can take 256GB in 16GB DIMMs. You can also cram 2 3.5" drives per system board, and slip a switch into that same 11U as well. A more conservative and cheaper per-node setup would be 2x E5-2670, 128GB DDR3 ECC, and 2x 2TB drives per node, which works out to $646 per node. Including the switch and fabrication of the bracket, it comes out to about $9.5k for the whole 11U drop-in unit: 192 physical cores, 1.5TB of RAM, 24TB of mirrored slow disk, and 10G networking.

The power supplies are platinum, with >94% at 50% load, and >91% at full load.
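
If anyone wants to sanity-check those totals, the arithmetic works out as in the sketch below. The node price and per-node specs are the ones quoted above; the switch/bracket figure is only a rough assumption chosen to land near the quoted ~$9.5k, not a real quote.

```python
# Sanity check of the 11U drop-in unit totals above.

NODES = 12
NODE_PRICE_USD = 646           # 2x E5-2670, 128GB DDR3 ECC, 2x 2TB per node
SWITCH_AND_BRACKET_USD = 1750  # assumed 10G switch + chassis fabrication

cores = NODES * 2 * 8                 # two 8-core E5-2670s per node
ram_tb = NODES * 128 / 1024           # 128GB per node
mirrored_disk_tb = NODES * 2 * 2 / 2  # 2x 2TB per node, mirrored

total_usd = NODES * NODE_PRICE_USD + SWITCH_AND_BRACKET_USD
print(f"{cores} cores, {ram_tb:.1f}TB RAM, {mirrored_disk_tb:.0f}TB mirrored disk")
print(f"total cost: ${total_usd}")    # -> $9502, close to the quoted ~$9.5k
```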
 

vrod

Active Member
Jan 18, 2015
Definitely looks interesting... However, the system lacks IPMI and video, and there are too few PCIe slots for my needs.

As mentioned in my thread, I could potentially double the density by leaving out the PCIe adapters (for a pfSense router, for example), which would then technically give me up to 24 systems in 12U ;)

One of the major advantages of this setup is avoiding being bound to proprietary stuff. The Quanta gear may be a little cheaper, but here I have the freedom to put in any boards I like.

There are also people who prefer FC over NFS/iSCSI, and I'm one of them, so I am looking to keep my adapters for now. However, for dedicated server hosting that does not need that, it might be beneficial to go the 1U route!
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
Greetings everyone. I am currently starting a company and of course that involves MONEY! That's why I decided to take a less traditional approach to building and designing my servers. It's a hosting business, so compute power is greatly needed, together with some sort of density to minimize the cost of the equipment.
Well, it worked for Google, so you're in good company!
The plate is cut to the edge of the system board I put in, plus half a centimeter of space to avoid any sides touching the metal shelf or PSU. I have drilled holes as well to be able to zip-tie the board onto the plate. Afterwards I simply place the plate with the board onto the shelf, with the PSU on the side, and connect the components. None of my 1337-Shelves needs any fastening between plate and shelf; they stay in place very well.
I'd make a suggestion for mounting the motherboard - get rid of the cardboard and the zip ties, and use hex standoffs for all of the motherboard mounting holes. Modern motherboards have large BGA components and the combination of heat, weight and time can cause all sorts of strange problems if the motherboard isn't adequately supported. As you point out, the shelf is inexpensive and you can probably re-use the same holes / standoffs if you change motherboards but stay in the same form factor. Otherwise just drill new holes and move the standoffs. I suggested male / female standoffs like the ones used in a traditional case. You can use female / female standoffs if you prefer, but make sure you have long enough standoffs / short enough screws that the screws don't meet in the middle and prevent tightening them all the way.

I linked to Newark / Element14 above, but this type of standoff should be widely available from distributors throughout Europe.
 

vrod

Active Member
Jan 18, 2015
You are right about that. I could maybe even turn some of the screws in the motherboard mounting holes around so the heads point upwards. I just need to make sure they don't cause any kind of trouble for the electrical circuits. :)

Some updates on the 1U version: I bought some 1U riser cards and will test them out. So far I estimate I can fit 2 PCIe cards, and if some future motherboard supports an onboard M.2 slot, it will be perfect because I will still be able to have a local SSD in place. :)
 

Deslok

Well-Known Member
Jul 15, 2015
Are these running anything locally, or is the SSD just for boot? You could boot over Fibre Channel, use cheaper SM951s, or even boot from USB.
 

vrod

Active Member
Jan 18, 2015
No, ESXi boots off USB. The local M.2 is just for my pfSense firewall, in case I have to take down my storage and work on it. I did consider boot-from-SAN, but I don't know if I want my entire infrastructure to depend on one device. :) In the future I might use the M.2 SSDs for some caching.