Greetings everyone. I am currently starting a company, and of course that involves MONEY! That's why I decided to take an untraditional approach to building and designing my servers. It's a hosting business, so compute power is greatly needed, along with a degree of density to minimize equipment costs.
Back when I lived in Denmark I had a company as well, but I shut it down when I moved to Germany. I used to purchase 1U/2U chassis for my boards; this time, however, I decided to be a little more creative.
Just the mandatory specs stuff:
Build’s Name: 1337-Shelf (Version 1)
Operating System/ Storage Platform: Any OS
CPU: E5-2670 (2 pieces)
Motherboard: S2600CP
Chassis: None
Drives: Single 950 PRO SSD
RAM: 128 GB (256 GB is not an issue)
Add-in Cards: 2 x QLE2460's
Power Supply: 2U Zippy Emacs
Other Bits: 2 x 80mm temp controlled fans
Meet the 1337-Shelf! (yes I just made up that name):
The shelf consists simply of the board, a plate underneath the board, a single TFX or 1U/2U EPS PSU, and the shelf itself. Of course I have attached fans as well, to move cold air from the front of the rack to the rear of the system.
This shelf is based on an S2600CP4 board with dual E5-2670s. As PCIe devices I have a couple of QLE2460s as well as a Samsung 950 PRO to host my pfSense firewall instance. The host is meant to run virtual machines residing on an SCST SAN, hence no regular drives. ESXi boots off a USB stick attached, behind all the cabling, to the internal USB port on the board.
So... back to the shelf...
The idea behind this design is to:
1) Minimize costs - With this I don't need to purchase an expensive, unnecessary rackmount chassis plus accessories. I got 2 of these 2U shelves for 40€.
2) Increase density - This design is supposed to be modular as well and to hold 2 of these systems within the same rack units (more on that later).
3) Be geeky - Of course why not
Apologies for the lack of illustrations for now; I'm working on that. The shelf you see here is the front version of the 1337-Shelf. This version is mounted with normal screws (like you would mount any shelf) at the front of the rack, so cables run to the front as well. However, the only cables I have running are a couple of network cables and the mandatory Fibre Channel cables for my ESXi storage.
The shelf is not too deep, which makes it possible to mount a second system at the rear of the rack. This is where the "rear version" comes into play. The board and PSU are positioned the same way as at the front, but the CPU and system fans are turned the other way so the airflow continues through and out of the rear of the rack. This makes it possible to have two dual-2670 systems WITH six low-profile PCIe cards in just 2U of space! (awesome...)
For the first samples I am using some kind of "sawable" plate. It may look like wood, but it's some artificial material that (luckily) can't catch fire. My goal, however, is to get custom thin plastic plates made for the boards, especially because sawing those plates is no fun.
The plate is cut to the edge of the system board plus half a centimetre of clearance, to avoid any side touching the metal shelf or the PSU. I have drilled holes as well so I can zip-tie the board onto the plate. Afterwards I simply place the plate, with the board on it, on the shelf next to the PSU and connect the components. None of my 1337-Shelves need any fastening between plate and shelf; they stay in place very well.
To cool the components other than the CPUs (pretty much only the RAM in my case, since I disable the HDD controllers and other unused devices), there are two 80 mm fans at the front of the shelf. I use Arctic F8 TC fans, which have a built-in temperature sensor.
I place their temperature sensors on the board itself, in front of the fans (or behind them for the rear version), so the fans regulate their speed based on what the sensor reads. This has given me pretty decent cooling results. Of course there is a minor hot spot in the middle between the two shelves, but nothing that causes issues. If you don't have any PCIe cards that need a cable connected, you can cover that area with even more fans!
Currently I am working with the 2U version shown in the picture, but in the future I will try to make a 1U version, which would technically double the density! There are just a few things I need to figure out, especially the front fans, since I don't want those crazy little 40 mm fans.
Operation and power consumption have been very positive. With the dual-CPU setup I've hit about 70 W of idle power (with the hungry 2U PSU). If I add the 950 PRO and the fibre cards, I land at about 85 W once ESXi has fully loaded. Under full load I've seen around 230 W, but I guess that's to be expected since the system packs 2 x 115 W TDP processors.
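For anyone curious what those wattages mean on the power bill, here's a rough back-of-the-envelope sketch. The duty cycle and electricity price below are my assumptions for illustration, not measured billing data:

```python
# Rough annual power-cost estimate for one shelf.
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts: float, duty: float = 1.0) -> float:
    """Energy in kWh for a load drawing `watts` for `duty` fraction of the year."""
    return watts * duty * HOURS_PER_YEAR / 1000.0

# Assumption: the node sits near the ~85 W idle figure 90% of the time
# and near the ~230 W full-load figure the remaining 10%.
total_kwh = annual_kwh(85, duty=0.9) + annual_kwh(230, duty=0.1)

PRICE_EUR_PER_KWH = 0.30  # assumed rate, adjust for your contract
print(f"~{total_kwh:.0f} kWh/year, ~{total_kwh * PRICE_EUR_PER_KWH:.0f} EUR/year")
```

Under those assumptions a shelf lands somewhere in the high-800s of kWh per year, which is the kind of number that makes "green" builds pay off at colo power rates.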
Stability-wise I have no complaints. It just runs well and uses very little power. Since these are 24/7 systems, it's especially nice to save some extra money with a "green" setup that uses significantly less power than most OEM servers. I've seen people use a full plate (going from front to rear) for multiple systems, but unfortunately that removes the modularity we have with this shelf. Another good thing is that the I/O connections face you, so it's easy to connect and disconnect cables.
The design could definitely be used for other boards as well, like a quad Mini-ITX setup with some i7s.
I guess there's not much else to say, other than that I think I'll be going forward with this design for my future nodes. I'm currently looking to get a 20U rack at Interxion; if I subtract 8U for storage/network, I should be able to fit 12 dual systems into the remaining 12U, and 24 if I finish the 1U version.
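The rack math above can be sanity-checked in a couple of lines. The unit counts are the ones from my plan (20U rack, 8U reserved, two systems per shelf); the function itself is just illustrative:

```python
# Sanity check of the rack-density math:
# two systems share each shelf, so nodes = (usable U / shelf height) * 2.
def nodes(rack_u: int, reserved_u: int, shelf_u: int, per_shelf: int = 2) -> int:
    usable_u = rack_u - reserved_u
    return (usable_u // shelf_u) * per_shelf

print(nodes(20, 8, shelf_u=2))  # 2U shelves -> 12 dual-CPU systems
print(nodes(20, 8, shelf_u=1))  # 1U version -> 24 systems
```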
I might just build a couple more 2U shelves, then move onto 1U version. Let me know your thoughts and comments, anything is welcome.