Blue Sky Mining Cluster


RBE

Member
Sep 5, 2017
60
34
18
Several months ago I found myself between work projects, and during this time I accidentally caught the Monero mining bug from the STH forums. It turned out to be quite a virulent strain, and in the end it took the purchase of 12 Wiwynn SV7210 servers to get it under control. Each server has since been fitted out with four Intel Xeon E5-2660 CPUs, four 4GB RDIMMs, and two small capacity HDDs.

Ubuntu Server 16.04 LTS has been installed on each node, and together they form a 24-node Docker swarm that runs one of @Patrick's many Monero mining images. The 12 servers are currently housed in an industrial shelving unit that I salvaged, but now the time has come to do things properly.

To this end I have designed a custom 20-unit rack to house the 12 Wiwynn servers, two Ubiquiti network switches, HP 32A PDU core, and four intelligent extension bars that currently comprise the Blue Sky Mining Cluster (BSMC). The rack is being manufactured at the moment, and will look something like this:

Assembly.JPG

As you can see, it's nothing fancy - a welded SHS frame that will be powder coated matte black, along with several electrogalvanised CRS panels. These panels are riveted to the frame and not only hold the servers, but also provide anchor points for tying down the many power and network cables the cluster requires.

Left Server Panel.JPG

Cable Panel.JPG

Once I take possession of the frame, self-levelling feet will be inserted into the outer SHS members, and castors will be bolted into the holes in the SHS (just visible in the render above) to make the rack easy to move into its final position.

For those of you wondering why the rack is 20 units in height, it is so I can add an additional six Wiwynn SV7210 servers in the future. This will take the total number of servers in the BSMC to 18, and the total number of nodes to 36 - one of which will have to be designated as a cold spare. This is to ensure that the maximum current rating of the PDU is never exceeded. Once the power consumption of the two Ubiquiti network switches is taken into consideration, the 35-node BSMC will consume 7.2kW or thereabouts.
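The PDU headroom reasoning above can be sketched in a few lines of Python. Only the 35-active-node count, the 32A PDU rating, and the ~7.2kW total come from this post; the 230V single-phase mains voltage and the ~200W per-node wall draw are my assumptions for illustration, not measured figures.

```python
# Sanity check on the 32 A PDU budget. The per-node wall draw (~200 W)
# and NZ single-phase mains voltage (230 V) are assumptions for this
# sketch; the 35-active-node count and ~7.2 kW total come from the post.
ACTIVE_NODES = 35       # 36 nodes minus one cold spare
WATTS_PER_NODE = 200    # assumed: two 95 W TDP CPUs plus RAM and PSU losses
SWITCH_WATTS = 200      # assumed combined draw of both network switches
MAINS_VOLTS = 230       # NZ single-phase supply (assumed)
PDU_AMPS = 32

total_watts = ACTIVE_NODES * WATTS_PER_NODE + SWITCH_WATTS
pdu_capacity_watts = MAINS_VOLTS * PDU_AMPS

print(total_watts)                        # 7200
print(pdu_capacity_watts)                 # 7360
print(total_watts <= pdu_capacity_watts)  # True -> inside the PDU rating
```

Under these assumptions the fully populated rack sits just inside the PDU's rating, which is why one node has to stay a cold spare.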

The two network switches are mounted in a common sled, and occupy a single rack unit. Only the larger of the two (a 48-port switch) features active cooling, and so it will be mounted at the back of the sled to prevent it blowing hot air over the passively-cooled 24-port switch. Again, the network switch sled and the PDU core sled are currently being manufactured, and should look something like this:

Network Switch Sled.JPG

PDU Core Sled.JPG

For now, all I can do is wait for the rack and sleds to arrive...
 

RBE
I picked up the network switch and PDU core sleds late last week. Both look good, but unfortunately there is an issue with the former, and it is entirely my fault. See if you can spot what it is - here is a shot from the front:

ocr-network-sled-populated-front.jpg

And here is a shot from the back:

ocr-network-sled-populated-back.jpg

The problem is that I forgot to account for the increased depth of the ES-48-LITE compared to the ES-24-LITE when designing the sled, so the ES-48-LITE extends beyond its back edge. The sled still fits into the rack just fine, but the switch will now look out of place, as everything else is flush with the back edge of the rack. This would annoy me forever, so the only solution is to revise the design of the network switch sled and have it remade.

To avoid any more unforced errors, I am going to wait until the right-angle C13 to C14 power leads I have ordered for the network switch sled arrive. That way I can shift the ES-48-LITE just enough to keep its power lead out of sight, whilst still preserving as much room as possible between the two switches for cabling.

I also heard from the fabricator making the rack late last week. It seems their powder coating division messed up and coated not only the SHS frame, but the CRS panels as well. As a result, the panels are having to be sent out to be dip stripped to remove the unwanted powder coat. This will delay the arrival of the rack by several days.
 

RBE
Now we are starting to get somewhere. I have just returned from the fabricator, having paid for the custom rack and arranged delivery. Here are a couple of photos I took of the rack sitting on the factory floor. The first one is a shot of the rack from the back. Note the full-length tabs designed to stop the servers from sliding out the back of the rack. The cut-outs in these tabs are needed to accommodate a particular feature of the Wiwynn servers - they each have a small hem above the fans that will poke through these cut-outs. I hope.

ocr-back.jpg

Next is a shot of the rack from the side. Here you can see the two different types of cable panel. The ones closest to the camera are for the network cables, whilst the ones furthest from the camera are for the PDU and associated power cables. I should probably have added more slots to both types of panel for cable management, but it's a bit late now. Something for the next rack perhaps ;).

ocr-side.jpg

The horizontal slots you can see in the server panels furthest from the camera near the cable panels are for the spring retention clip on each Wiwynn server. These are designed to stop the servers from sliding out the front of the rack.

I am still awaiting completion of the revised network switch sled, and once this has been delivered I will begin populating the rack and sorting out the cabling. In order to keep things neat and tidy, I will be making liberal use of hook-and-loop ties and cable combs. Given the horrendous price of cable combs and how simple they are to produce, I think I will be making my own.
 

RBE
I went and picked up the revised network switch sled this week. As you can see from the following shot, the ES-48-LITE no longer extends past the back edge of the sled. I also took the opportunity to change the material thickness from 1mm to 1.6mm CRS, and to move the cable slot forward a bit. Network cables from the ES-48-LITE will pass through this slot and forward to the rack cable channels. Note the strip of white plastic that is used around the edge of the slot to prevent the network cables from abrading.

ocr-network-sled-revised-populated-back.jpg

The next shot shows the network slice positioned in the rack.

ocr-network-slice-in-situ.jpg

I am just about to start running the network cabling, and I have made myself a variety of different cable combs in preparation to help keep things tidy. These combs are shown in the shot below, with those at the back yet to have the protective film removed from either side. The combs were cut from 4.5mm acrylic using an industrial laser cutter.

cable-combs.jpg
 

SGN

Member
Oct 3, 2016
36
11
8
Looks great! I love the details and all those small improvements.
How do you plan to cool this beast?
 

RBE
Good question. Once I have finished running the network cabling, and have installed the power slice and associated intelligent extension bars, the rack will be moved into an industrial space. Based on the E5-2660 V1's TDP of 95W, the rack will throw out approximately 6.6kW of waste heat when operating at full capacity. I have an HVAC engineer currently checking the air conditioning in the space to ensure it will be able to handle the load.
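For the curious, the ~6.6kW figure falls straight out of the node count and the CPU TDP quoted above:

```python
# CPU-only waste heat estimate for the fully populated rack, matching
# the ~6.6 kW figure above: 35 active nodes, each with two E5-2660 v1
# CPUs at a 95 W TDP apiece.
ACTIVE_NODES = 35
CPUS_PER_NODE = 2
TDP_WATTS = 95

heat_watts = ACTIVE_NODES * CPUS_PER_NODE * TDP_WATTS
print(heat_watts)  # 6650 -> reported as roughly 6.6 kW
```

Note this counts CPU TDP only; RAM, drives, and PSU losses push the total wall draw higher, which is where the 7.2kW figure elsewhere in the thread comes from.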
 

RBE
Not at all. Having designed the rack, I made sure that the rivets would be able to handle the applied shear stress, and added a decent factor of safety to account for the fact that the rack will be located in an earthquake zone. Speaking of which, the castors you can see in post #8 are a temporary measure to enable me to move the rack around. Once the rack is in the industrial space, the castors will be removed and it will be bolted to a plinth that is anchored to the concrete floor.
 

RBE
The network cabling continues apace. Rather than make up 72 cables from scratch, I went and purchased a whole heap of black and blue Cat5e patch cables instead. By cutting these 5m long patch cables in two, I have been able to reduce the number of RJ-45 plugs that I have to crimp by half. As you can see from the shots below, I chose to use the moulded plugs at the network switch end of the cable rather than the server end.

network-cabling-1.jpg

The pieces of card you can see twist-tied to the cables on the left carry the port number that the cable is attached to - 1 to 48 for the rear switch, and 49 to 72 for the front one. Once the cables are cut to length and terminated, I will print out the port numbers and wrap them around the cables.
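The port numbering scheme above is simple enough to generate programmatically. Here is a quick sketch of the label list; the rear/front naming is mine, purely for illustration:

```python
# Generate the 72 port labels described above: ports 1-48 belong to the
# rear (48-port) switch, and ports 49-72 to the front (24-port) switch.
labels = [
    (port, "rear" if port <= 48 else "front")
    for port in range(1, 73)
]

print(len(labels))   # 72
print(labels[0])     # (1, 'rear')
print(labels[48])    # (49, 'front')
print(labels[-1])    # (72, 'front')
```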

network-cabling-2.jpg

As you can see, the cable combs that I laser cut came in very useful. They certainly help to keep the cable runs neat and tidy.

network-cabling-3.jpg

The next step is to terminate all those cables, test them, and then begin the installation of the PDU and intelligent extension bars.
 

Joel

Active Member
Jan 30, 2015
850
191
43
42
For those of you wondering why the rack is 20 units in height, it is so I can add an additional six Wiwynn SV7210 servers in the future.
Like that would need explaining! What does need explaining is why you didn't build 10 of these! :)

If you don't mind me asking, what's the power cost and profitability like?
 

jamesy_1988

New Member
Oct 17, 2016
9
2
3
36
As you can see, the cable combs that I laser cut came in very useful. They certainly help to keep the cable runs neat and tidy.
Awesome work. Can't wait to see the end system

Any chance you would be willing to share the design of the cable combs?
 

RBE
@Joel I did consider making the BSMC a multi-rack system early on, and went so far as to get quotes for five racks to be manufactured. I used to day-dream about being able to walk in and see these five racks humming away, power LEDs blinking. Looking back, I am glad that I did not pursue a multi-rack solution further, as the cost of getting just one rack up and running is already far more than I had anticipated.

Furthermore, there is also much to recommend prototyping any item you intend to manufacture in volume. This allows design defects to be identified and corrected, as well as for design improvements to be incorporated before batch/mass production begins. In the case of the BSMC, there are plenty of both.

Once the rack is up and running, I will provide a line-by-line breakdown of the cost so that any STH readers who are considering doing something similar know what they are in for. Suffice to say that I would not recommend my approach to those for whom profit is the only motive. Speaking personally, I always try to follow through on my original vision for a project, so that is what I have done.

I want to be able to look at the BSMC and feel a sense of pride in my work, whether it makes financial sense or not. Don't get me wrong - I sincerely hope that the cluster yields a healthy ROI, but that is not its only purpose. It will also be my home lab - one that I intend to use to learn the rudiments of system administration and computer networking.

With regards to power and profitability, there is not much more that I can say. Total power consumption is likely to be 7.2kW, with a forecast XMR hash rate of around 30kH/s. How much I will end up paying for electricity has yet to be determined...
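The raw energy numbers above work out as follows. The electricity price in this sketch is a purely hypothetical placeholder, since my actual tariff has yet to be determined; only the 7.2kW draw and 30kH/s forecast come from the post.

```python
# Rough energy figures from the numbers above. PRICE_PER_KWH is a
# hypothetical placeholder, not a quoted tariff.
POWER_W = 7200            # total cluster draw (W)
HASH_RATE = 30_000        # forecast XMR hash rate (H/s)
PRICE_PER_KWH = 0.25      # NZD -- hypothetical, for illustration only

kwh_per_day = POWER_W * 24 / 1000
hashes_per_watt = HASH_RATE / POWER_W
daily_cost_nzd = kwh_per_day * PRICE_PER_KWH

print(kwh_per_day)                 # 172.8 kWh per day
print(round(hashes_per_watt, 2))   # 4.17 H/s per watt
print(daily_cost_nzd)              # 43.2 NZD/day at the assumed rate
```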
 

RBE
@jamesy_1988 Here you go. Note that these combs have been designed for UTP Cat5e network cable with an outside diameter of 5mm. If you are laser cutting the combs yourself, be sure to run a heat gun over both sides of each one afterwards to ensure there are no sharp edges that could ultimately abrade through the outer layer of the network cable.
 

Attachments

RBE
Progress on the BSMC has temporarily stalled due to the late arrival of the black network cable boots I ordered several weeks ago. In the meantime, let's talk money. The following table provides a high-level overview of the costs involved in setting up the BSMC. Note that it includes the cost of items that have yet to arrive (specifically the plinth for the rack and the last batch of CPUs), and that it uses a nominal 0.7 NZD/USD exchange rate.

bsmc-provisional-costing.jpg

If this seems like a lot of money to you, welcome to the club. As I mentioned in the first post of this thread, the BSMC started out as a way to fill in the time between work projects. Since then, however, it has become a work project. Why? So that I am able to claim back local taxes and depreciate the cost, thus reducing my taxable income. Putting the BSMC on the books does mean that any profit it makes is taxable, but I am fine with that.

Being located in New Zealand, a large part of the cost of getting the BSMC up and running has been freight related. How much? About $4,269 NZD, or $2,988 USD. Suffice to say that paying off the hardware investment is likely to take quite some time.
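As a quick cross-check, the freight conversion at the nominal 0.7 NZD/USD rate mentioned earlier in the thread:

```python
# Cross-check the freight figure using the nominal 0.7 NZD/USD exchange
# rate used for the costing table above.
FREIGHT_NZD = 4269
NZD_TO_USD = 0.7

freight_usd = FREIGHT_NZD * NZD_TO_USD
print(round(freight_usd))  # 2988
```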