Server build for passively cooled compute cards


proxima

New Member
Sep 15, 2016
Greetings,

I recently obtained two Tesla K20 compute cards and could use a little help putting together a server to host them. I'm looking for case/motherboard/CPU suggestions. Cost is a priority; parts that can be found used on eBay are a big plus.

What I had in mind:
  • 4U case -- it will live in my basement, so low noise is a concern.
  • Memory bandwidth and PCIe lanes are the priority -- the CPU doesn't have to be amazing, just fast enough to shuttle data to and from the cards.
  • Motherboard: would like IPMI and enough slots to host the two K20s plus an SFP+ card.
  • What all is needed to handle the passive cooling of the cards?
  • OS needs to be CentOS 7 so I can integrate it into my SLURM cluster.
I'm primarily a machine learning researcher and am not as up to date as you guys on hardware and admin-type stuff. I've been running a FreeNAS server and a small SLURM cluster in my basement for over a year now, so I think if you can point me in the right direction I should be good!
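On the SLURM side, this is roughly what I'd expect to add for the new node, going by my existing cluster config (the node name, core count, memory, and device paths below are placeholders, untested on this box):

    # gres.conf on the GPU node (check /dev for the actual device files)
    Name=gpu Type=k20 File=/dev/nvidia0
    Name=gpu Type=k20 File=/dev/nvidia1

    # slurm.conf additions (hostname/CPUs/memory are placeholders)
    GresTypes=gpu
    NodeName=k20box Gres=gpu:k20:2 CPUs=12 RealMemory=64000 State=UNKNOWN

Jobs would then request the cards with something like sbatch --gres=gpu:k20:1.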
 

proxima

New Member
Sep 15, 2016
Trying to get a little more specific in hopes of garnering some feedback: what about something like this?

Motherboard: Supermicro X10SRA-F-O
CPU: E5-2620 v3
Case: Something along the lines of the Rosewill RSV-L4000

The biggest question for me is still whether the motherboard can read the temperature of the K20 cards and cycle the case fans up/down as needed.
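If the board can't do that on its own, my fallback is a small script in the OS: poll the GPU temperatures with nvidia-smi and push a fan duty cycle to the BMC with ipmitool. A rough, untested sketch is below -- the raw 0x30 0x70 0x66 bytes are the ones people report working for the fan zones on Supermicro X9/X10 boards, so I'd verify them on the X10SRA-F before trusting it:

    #!/usr/bin/env python3
    # Untested sketch: map GPU temperature to a Supermicro fan duty cycle.
    # The raw IPMI bytes are the ones commonly reported for X9/X10 boards
    # (zone 0x00 = CPU/system, 0x01 = peripheral); verify on your own BMC.
    import subprocess
    import time

    def gpu_temps():
        # Query all GPU temperatures as plain numbers, one per line
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"], text=True)
        return [int(t) for t in out.split()]

    def set_fan_mode_full():
        # Put the BMC fan mode on "Full" first so it doesn't fight the manual duty
        subprocess.check_call(["ipmitool", "raw", "0x30", "0x45", "0x01", "0x01"])

    def set_fan_duty(percent, zone=0x00):
        subprocess.check_call(
            ["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
             "0x%02x" % zone, "0x%02x" % percent])

    set_fan_mode_full()
    while True:
        t = max(gpu_temps())
        # Crude linear ramp: 30% duty at 40 C and below, 100% at 65 C and above
        duty = min(100, max(30, int(30 + (t - 40) * 70 / 25)))
        set_fan_duty(duty)
        time.sleep(10)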
 

i386

Well-Known Member
Mar 18, 2016
Germany
Supermicro 745/747 chassis are optimized for GPU workstations. Not sure if these are still "low noise" with data center GPUs, but they have the necessary fans for cooling them, and there are optional kits for cooling the PCIe slots.
 

proxima

New Member
Sep 15, 2016
i386 said:
Supermicro 745/747 chassis are optimized for GPU workstations. Not sure if these are still "low noise" with data center GPUs, but they have the necessary fans for cooling them, and there are optional kits for cooling the PCIe slots.
Thanks for the advice -- though I wasn't really able to find anything that I liked at what I thought was a reasonable cost.

I ended up going more towards the original list. I found that IPMI has no mechanism to read the temperatures of the cards and spin the fans up/down accordingly, so I set the fans to run at full speed by default; the 120mm fans really aren't too loud at all.

For cooling the compute cards, I broke out the hot glue gun and constructed a duct. It's much more DIY than I would like, but it keeps the cards pretty cool (idle: 33 C, load: ~60 C). 60 C is well under the rated 70 C max operating temperature, so far so good.
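In case it's useful to anyone else, a quick way to log the temps during a run (nvidia-smi's built-in -l flag can do the same in one line):

    # Log both cards' temperatures every 30 s; redirect to a file as needed
    import subprocess
    import time

    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=index,temperature.gpu",
             "--format=csv,noheader"], text=True)
        print(out.strip(), flush=True)
        time.sleep(30)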

[Attached image: cooling.jpg (the DIY cooling duct)]