Let's watercool a 42U Rack and some servers

s0lid

Active Member
Feb 25, 2013
259
34
28
Tampere, Finland
The plan is plain and simple: watercool 2 C6100 nodes, my 2U fileserver, and my main PC.

Goals:
Shared watercooling with two 9x120mm rads, yes, two Phobya 1080s. Mounted with custom-made aluminium brackets, 9U high. 6mm thick stuff :p
Going to need to manage water distribution somehow. Custom parts, probably.
Quick-release couplings will be used pretty much everywhere possible.
Blocks will be rather massive GPU blocks with custom-made 3mm-thick aluminium LGA1366 brackets.

Not known yet:
Pump? This has to be some sort of beast.

Pictures!


Radiator mounts, 9U high, made from 6mm thick aluminium. Got 2 pairs of these. Rack holes align pretty well, slight design flaw there :eek:


Waterblocks, custom made brackets, 3mm thick aluminium. I got 8 of these already :cool:
 
Last edited:

RimBlock

Active Member
Sep 18, 2011
838
28
28
Singapore
I have been considering this for a while but have not seriously worked on a design.

My angle was more about providing cooled air to the servers rather than cooling the server processors directly, thereby keeping the servers in their original state so that, as with the C6100, nodes could be removed and replaced as needed.

To that end I would have some sort of rad at the top, with the cabinet's top fans pulling air through it and out rather than venting out of the back. The rack top could also provide space for a pump, res, etc., but it would have to be set up safely to prevent any leaks dripping down onto the server equipment.

Cooling the water might also be aided by routing the flow down the front of the cabinet through piping to the bottom of the rack, with the hot water returning up the rear, where it absorbs some more of the heat vented out of the back of the servers before being delivered to the top.

There would need to be a heat exchanger at the bottom of the cabinet to remove heat from the air (via the cold water) and then vent the cooled air to the front of the servers at different U heights. Think of a series of spray bars on the front door, but spraying cool air rather than water at the front of the servers, through vents that can be shut when that section is not in use.

Asetek produce a number of watercooling solutions for servers and racks so you may want to take a look at what they are doing.

There is also a patent granted to Hitachi for a water cooled server rack. Lots of info in it.

Hot water cooling for datacenters IBM paper (pdf). There are also some pictures of the microchannel water cooling IBM are using on the SuperMUC supercomputer. Interesting stuff.

Hot water cooling.

Oil cooling ??. Ok, that is probably going a bit too far ;).

RB
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You know, the Tripp Lite portable AC unit is pretty quiet at low speed. The fan gets quite noisy, but the compressor is pretty damn quiet in a rack of servers, and it has a blow-hose, so you could focus the air on your intercooler/radiator. Are you using an air-to-water intercooler?
 

mattr

Member
Aug 1, 2013
120
11
18
You wouldn't need multiple loops, just a radiator before each block. Depending on how high you need to go, one pump should be fine for 2 CPUs. Those rads are way overkill though; they won't do any better than a 2x120. A 9x120 is for cooling something like 4 overclocked GPUs, a CPU, chipset, memory and mosfets all on 1 rad.
 
Last edited:

mattr

Member
Aug 1, 2013
120
11
18
Pfft overkill is cool ;-)
Well... it'll do more harm than good. With a somewhat lengthy vertical run you don't want an unnecessarily large rad decreasing the pressure and flow rate.

If OP wants to do it just to do it then I'd suggest putting 1 pump right before and 1 pump right after the rad.
 
Last edited:

mattr

Member
Aug 1, 2013
120
11
18
How does a loop work without a pump?
It wouldn't. I was just saying that without that huge rad, 1 pump would probably be fine. With that rad, 2 would be the minimum, and placing them right before and right after it would be ideal.
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
You wouldn't need multiple loops, just a radiator before each block. Depending on how high you need to go, one pump should be fine for 2 CPUs. Those rads are way overkill though; they won't do any better than a 2x120. A 9x120 is for cooling something like 4 overclocked GPUs, a CPU, chipset, memory and mosfets all on 1 rad.
I like the idea of a common infrastructure for the whole rack, doing a loop for each server has been done. 2x 9x 120s is still a lot for that. IIRC a single 120 is good for about 100W with low speed fans, so at least 5x120.

A lot of pump is going to be needed for the rise and to get through the rads. Maybe look into a larger pond pump for the return and another on the cold side if doing a single loop. I would go with larger supply and return lines, like 3/4in. I think a single loop with valves on the hot side to control flow to each system would work out.
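For a rough sense of what the tubing alone costs the pump, here is a Darcy-Weisbach friction-head sketch in Python. Every input (flow rate, bore, tubing length) is an assumed illustration value, not a measurement, and it ignores the rads and blocks entirely:

```python
import math

# Friction-head estimate for the tubing alone (Darcy-Weisbach).
# All inputs are assumed values for illustration, not measurements.
flow_lpm = 4.0      # assumed flow rate, litres per minute
bore_m = 0.010      # assumed 10 mm inner-diameter tubing
length_m = 4.0      # assumed round-trip tubing length for a 42U rack
nu = 1.0e-6         # kinematic viscosity of water at ~20 C, m^2/s
g = 9.81            # gravitational acceleration, m/s^2

q = flow_lpm / 1000 / 60               # volumetric flow, m^3/s
area = math.pi * (bore_m / 2) ** 2     # tube cross-section, m^2
v = q / area                           # mean velocity, m/s
re = v * bore_m / nu                   # Reynolds number
# Blasius smooth-pipe correlation, valid for turbulent flow (Re > ~4000)
f = 0.316 / re ** 0.25
head_m = f * (length_m / bore_m) * v ** 2 / (2 * g)
print(f"velocity {v:.2f} m/s, Re {re:.0f}, friction head {head_m:.2f} m")
```

With these made-up numbers the tubing costs roughly half a metre of head; the rads and blocks add much more. Worth noting: in a filled, closed loop the static height of the rise is largely offset by the falling return leg, so it is friction losses, not the 42U of elevation itself, that the pump mainly has to overcome.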

If you haven't bought the Phobya yet, look into doing 3x 3x120s in parallel. That will get you much less pressure drop across the rads.
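The parallel-rads point can be illustrated with a toy square-law loss model (assuming fully turbulent flow, where pressure drop scales roughly with flow squared; the loss coefficient k is arbitrary):

```python
def dp_series(n_rads, q, k=1.0):
    """Pressure drop of n identical rads in series: full flow through each."""
    return n_rads * k * q ** 2

def dp_parallel(n_rads, q, k=1.0):
    """Pressure drop of n identical rads in parallel: flow splits n ways."""
    return k * (q / n_rads) ** 2

q = 1.0                   # arbitrary flow units
print(dp_series(3, q))    # 3.0
print(dp_parallel(3, q))  # ~0.111, roughly 27x less drop than series
```

Under this (idealised) model, three rads in parallel present about 1/27th the restriction of the same three in series, which is why splitting into 3x 3x120s is so much kinder to the pump.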
 

mattr

Member
Aug 1, 2013
120
11
18
Pump 1 > 2x120 Rad 1 > CPU > Res > Pump 2 > 2x120 Rad 2 > Pump 1 would be my suggestion. Simplest most efficient setup.

When I'm talking 2x120mm rad, I'm talking about one like this: RX240 Dual Fan Radiator V2 - XSPC - Performance PC Water Cooling, that is 60mm+ thick. These are plenty for cooling a single CPU. The 3x120 will look cooler though. You could also run two reservoirs mounted on each side of the rack with some kind of cool-looking flow meter in there; running in opposite directions, that would look pretty cool.
 

s0lid

Active Member
Feb 25, 2013
259
34
28
Tampere, Finland
Well, I've put some thought into this... especially about pumps and how to do the loop in general.

Why am I going to use 2x 1080mm rads?
I already have one in use with 1 CPU block and 1 highly restrictive GPU block (used 2 earlier); a single DDC pump handles that loop just fine. I've never seen GPU temps go over 45C or the CPU go over 50C, and the fans on the rad run at 5V.

The pumps.
Going to need 1+n pumps, n being the number of servers/cooling blocks. The loops would look something like this:
Code:
reservoir => rad1 => pump1 => rad2 => || => pumpn1 => blockn1 => reservoir =>
                                      || => pumpn2 => blockn2 => reservoir => 
                                      || => pumpnX => blocknY => reservoir =>
                                      || => bypass valve => reservoir =>
Why like this:
Easy maintenance: I can entirely remove a cooling block from the loop by turning off its pump and disconnecting the tubing. If I need to remove all of the cooling blocks, I can open the bypass valve, so I don't have to turn off pump1. I don't need to worry about pressure changes too much either.
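The manifold idea above can be sketched as a toy model: the main pump holds some pressure across the distribution manifold, and each open branch passes flow Q = sqrt(dP/R) under square-law losses. All pressures and resistances here are made-up illustration values, not real component figures:

```python
import math

def branch_flow(dp_manifold, branch):
    """Flow through one manifold branch under a square-law loss model.

    A branch pump, when running, adds its head to the manifold pressure.
    All values are arbitrary illustration units.
    """
    if not branch["connected"]:
        return 0.0
    dp = dp_manifold + (branch["pump_dp"] if branch["pump_on"] else 0.0)
    return math.sqrt(dp / branch["resistance"])

# Hypothetical branches: two cooling blocks and the bypass valve.
branches = {
    "block1": {"connected": True,  "pump_on": True,  "pump_dp": 0.3, "resistance": 2.0},
    "block2": {"connected": True,  "pump_on": False, "pump_dp": 0.3, "resistance": 2.0},
    "bypass": {"connected": False, "pump_on": False, "pump_dp": 0.0, "resistance": 0.2},
}
dp_manifold = 0.5  # assumed pressure held by pump1 across the manifold
flows = {name: branch_flow(dp_manifold, b) for name, b in branches.items()}
print(flows)
```

One thing this toy model highlights: with pump1 still pressurising the manifold, a branch whose own pump is switched off still sees some flow (block2 above), so disconnecting the tubing, not just stopping the branch pump, is indeed what takes a block out of the loop.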

Other things.
I'll need to order new mounting brackets for those CPU blocks; nothing wrong with the current ones, but I need ones where the block is rotated about 20 degrees CCW, just to make running the tubing in the C6100 nodes easier and simpler.
This is about cooling 3 servers and a single PC in total, 2 of which share a single chassis and thus count as a single cooling block. In general the cooled stuff will be: 5x L5630s, 1x highly overclocked 3820 and 1x highly overclocked HD7970. Not much thermal output there, but it's a future-proof solution.
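For scale, the nominal stock TDPs of the parts listed can be totted up (manufacturer figures; sustained overclocked draw on the 3820 and 7970 will be well past these):

```python
import math

# Nominal stock TDPs in watts; overclocking pushes real draw higher.
tdp_w = {
    "Xeon L5630 x5": 5 * 40,
    "Core i7-3820": 130,
    "Radeon HD 7970": 250,
}
total_w = sum(tdp_w.values())
# Using the ~100 W per 120 mm section (low fan speed) rule of thumb
# quoted earlier in the thread:
sections_needed = math.ceil(total_w / 100)
print(total_w, sections_needed)  # 580 W stock, about six 120 mm sections
```

So even with generous overclocking headroom, a single Phobya 1080 (nine sections) already covers the whole load at low fan speeds; the second one is pure future-proofing.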

@mrkrad:
Can't get those AC units on this side of the planet, and they'd pretty much mean more €€€ on the electricity bill.
 

S-F

Member
Feb 9, 2011
148
5
18
I have a sort of centralized water cooling setup. I have yet to connect my new server to it, but my desktop is. I have a couple of heater cores and a huge 5' hydronic radiator, about 40' of tubing currently, and it's driven by an Iwaki MD-20 (Japanese motor). Personally I wouldn't screw around with a bunch of pumps and loops; just use manifolds, as is done for hydronic heating. The technology for this has been around for a long time in the HVAC industry.