SuperMicro X9DRG-HF 2U Barebones Server


Havs

New Member
Sep 23, 2015
The model number in the title is for the motherboard, not the actual server model number.

SuperMicro X9DRG-HF 2U Barebones Server Just Add CPU/RAM LGA2011

The listing is just a wee bit vague. I spoke with the vendor and they confirmed that it is the entire server (2 heatsinks, riser boards, motherboard, and fans were specifically confirmed), not just the motherboard. They even reiterated that "all you need to do is add cpu and ram." They accepted a $400 offer.

Looking at 2U model numbers compatible with the X9DRG-HF, all seem to work with Xeon Phis. The possible model numbers would be:
SYS-2027GR-TRF (global SKU #)
SYS-2027GR-TRFH
SYS-2027GR-TRF-FM475
SYS-2027GR-TRF-FM409

Dual E5-2600 v1/v2 family. All have at least 4x PCIe 3.0 x16. Unless I'm mistaken, the dual 1800W PSUs auto-switch down to 1200W at 120V.
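The wall-outlet arithmetic backs that up: at full output, an 1800W supply would pull an entire 15A/120V circuit and then some once efficiency losses are counted. A rough Python sketch; the 80% continuous-load factor and the ~94% efficiency figure are my assumptions, not spec-sheet values:

```python
# Rough wall-outlet math for an 1800W PSU on a 120V circuit.
# The 80% continuous-load rule and 94% efficiency are illustrative
# assumptions; check the PSU spec sheet for real derating behavior.

def usable_watts(volts, breaker_amps, continuous_factor=0.80):
    """Continuous wattage available from a circuit under the 80% rule."""
    return volts * breaker_amps * continuous_factor

print(usable_watts(120, 15))                 # 1440.0W -> well under 1800W output
print(1800 / 0.94 / 120)                     # ~16A at the wall for full 1800W out
print(usable_watts(120, 15) >= 1200 / 0.94)  # True -> a 1200W derate fits
```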
 

azev

Well-Known Member
Jan 18, 2013
Good price for a barebones system, though; I am so tempted to get one.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
Those are very nice. Once again I am saved from myself by a lack of cash. Otherwise I would probably buy 3.

There are too many opportunities to spend money on this forum. 4 Phis or Tesla K2s, a couple of 2670s, 128GB of RAM, a couple of fast SSDs, and you have a very nice little compute node. x8 plus QDR InfiniBand and you have a small supercomputer.
 

Havs

New Member
Sep 23, 2015
Those look like GPU compute platforms to me.
Yeah, all 4 x16 slots are internal. I know some people have been having issues cooling the passive Phis in some of the more open cases; this case puts them in two wind tunnels. I've heard 1U Supermicro GPU SuperServers running and they sound like they have 8-10 hair dryers inside them; I'm guessing the 2U versions aren't much quieter.

Looking a bit closer, it would almost have to be the SYS-2027GR-TRF or SYS-2027GR-TRFH, since the other SKUs list 4x on-board Fermis. The TRFH adds 2 more PCIe 3.0 x16 slots in the back, for a total of 6. I'm not sure how that works out, since the X9DRG-HF only supports 4 x16.

I'm guessing the x16 slots aren't restricted to GPUs, are they? Cooling an oddball card shouldn't be an issue, but you'd have to get creative with any cable routing ;) For $400 it'll be a nice way to play with the extra 2670s I've got lying around.
 

Boddy

Active Member
Oct 25, 2014
RobertFontaine said:
Those are very nice. Once again I am saved from myself by a lack of cash. Otherwise I would probably buy 3.

There are too many opportunities to spend money on this forum. 4 Phis or Tesla K2s, a couple of 2670s, 128GB of RAM, a couple of fast SSDs, and you have a very nice little compute node. x8 plus QDR InfiniBand and you have a small supercomputer.
Wouldn't we all love a supercomputer?

I'd like to build a computer to make the world a better place. :cool:
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
You would definitely need to put this beast in the garage... Here, where the temperature is currently -30°C, condensation might be a problem. Water cooling is an option with a drill and a Dremel in the 2U case; it would be a lot easier in the 4U chassis that holds 8 GPUs in a more standard configuration, but I haven't seen one of those for sale for $400 :) I have one X9DRG-QF (workstation board) that I got for about $300 US.

Patrick hasn't said anything about the Phi group buy in a couple of weeks, and I haven't been reminding him while I wait for my B0s to go out the door, but I suspect we are overdue for a reminder email. I know I will buy 1. I want to buy 2... It would be great to pick up 10 for an even dozen, but my wife wants us to make our mortgage payments for some odd reason.

A 3-node, 3-cards-per-node compute box is just about the perfect "n"-node sandbox for working on SMP programming. That you can buy this and set it up in your basement lab as a learning tool is nothing short of amazing: unlimited compute time on a fully functional supercomputer platform.

I've started relearning linear algebra, calculus, and statistics (Coursera, MIT OCW, Khan Academy) so that I can study machine learning, data science, and heterogeneous parallel computing. It's a pretty exciting time when this kind of compute is available to the hobbyist/SOHO user. Tools like R, Octave, and Python have embraced the SMP libraries, as have gcc and gfortran. Intel also makes its compilers available for academic use, and there are countless presentations on OpenCL/OpenACC/OpenMP/MPI using the Phi, or CUDA if you are an NVIDIA fan. Kaggle contests and other data science communities have gamified a great deal of this learning, making it significantly more fun than playing video games.
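To make that concrete, here's a toy of the SMP pattern those tools expose: a Monte Carlo estimate of pi spread across local cores with plain Python multiprocessing. It's a generic sketch, not anything Phi-specific; the same split-work/merge-results shape is what OpenMP and MPI codes scale up.

```python
# Toy SMP example: Monte Carlo estimate of pi, split across local cores.
# Generic Python multiprocessing; the same split/merge pattern underlies
# OpenMP and MPI codes on bigger hardware.
import random
from multiprocessing import Pool

def count_hits(samples):
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()  # fresh RNG per worker so forks don't share a seed
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total = 4_000_000
    workers = 4                         # one chunk per core
    chunk = total // workers
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [chunk] * workers))
    print("pi ~=", 4.0 * hits / total)  # converges on ~3.1416
```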

The MOOC movement, in combination with accessible hardware, really creates a unique entry point into a previously inaccessible and exciting area of computer science. It's a shame I'm not 20 again.
 

TuxDude

Well-Known Member
Sep 17, 2011
RobertFontaine said:
Here, where the temperature is currently -30°C, condensation might be a problem.
Condensation is not typically an issue with computers, since almost every part generates at least a little heat (or is close enough to something hot). Condensation will only form on surfaces that are cooler than the dew point of the surrounding air; as long as the server is on, it will be (significantly) warmer than ambient. Air that cold will also be VERY dry and will suck up any moisture it can. We're talking dry to the point where ice will sublimate, evaporating without melting first.

The day you decide to work on the server, though, is when you'll have problems. The moment you move it into your warm, humidified house (assuming you don't want to work in the garage), it will get a layer of condensation faster than you can put it down.

Condensation is mainly just a problem for people with extreme cooling setups for OC'ing and whatnot. If you're trying to cool your CPU below ambient temp (phase-change coolers, liquid nitrogen, etc.) then it is very likely you are going to get condensation somewhere, which could be directly on electronic parts or could drip down onto electronics.
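For the curious, that failure mode is easy to put numbers on: condensation forms when a surface sits below the dew point of the surrounding air. A quick sketch using the standard Magnus approximation; the 21°C / 40% RH room figures are just example values:

```python
# Back-of-envelope condensation check. Condensation forms when a surface
# is colder than the dew point of the surrounding air. Magnus-formula
# constants are standard approximations; treat results as estimates.
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) via the Magnus formula."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# A -30C cold-soaked server carried into a 21C house at 40% RH (example values):
chassis_c = -30.0
dp = dew_point_c(21.0, 40.0)
print(f"indoor dew point ~ {dp:.1f} C")            # ~6.9 C
print("condensation on chassis?", chassis_c < dp)  # True: chassis far below it
```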
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
3 nodes drawing 1600 watts each, plus a fast file/database server and a workstation, can be done in the home.
Better to run on 220V for more efficiency, but a couple of 110V circuits should support it just fine.
 

TuxDude

Well-Known Member
Sep 17, 2011
RobertFontaine said:
3 nodes drawing 1600 watts each, plus a fast file/database server and a workstation, can be done in the home. Better to run on 220V for more efficiency, but a couple of 110V circuits should support it just fine.
A standard 15A 110V circuit only has ~1650W available (15A × 110V), and really you should only plan to use 80% of that (~1320W) for any kind of sustained load. For 3 nodes actually drawing 1600W each, I would recommend 6 regular 110V circuits, with each node having its dual PSUs spread over 2 circuits; and if you trip a breaker or a PSU dies, when it all fails over to the other side, that breaker will probably trip shortly after too. So let me rephrase: I wouldn't recommend running these on regular 15A 110V circuits. Use 20A 110V circuits if you really need to stay on 110V for some reason, or, much preferably, move to 220V. Just a pair of 220V 30A (or 40A/50A) circuits like what your stove/oven plugs into will run all that stuff easily.
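To put that in script form (the 80% figure is the usual continuous-load derate; the 1600W node draw comes from the posts above):

```python
# Quick circuit-planning math for 1600W compute nodes (illustrative only).
BREAKER_DERATE = 0.80                 # standard 80% rule for continuous loads

def usable_watts(volts, amps):
    """Continuous wattage a circuit can safely supply."""
    return volts * amps * BREAKER_DERATE

node_draw = 1600                      # watts per node, fully loaded
for volts, amps in [(110, 15), (110, 20), (220, 30)]:
    cap = usable_watts(volts, amps)
    print(f"{volts}V/{amps}A -> {cap:.0f}W usable, "
          f"fits {int(cap // node_draw)} node(s)")
# 110V/15A -> 1320W usable, fits 0 node(s)   (hence six circuits / spread PSUs)
# 110V/20A -> 1760W usable, fits 1 node(s)
# 220V/30A -> 5280W usable, fits 3 node(s)
```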
 

RobertFontaine

Active Member
Dec 17, 2015
663
148
43
57
Winterpeg, Canuckistan
Thanks for the clarification... One compute node with 4 Phis or GPUs, when fully spun up, draws in the neighborhood of 1500W (14-16) actual power depending on efficiency/RAM/CPU/peripherals. My current planned ATX hack is to run 1 PSU for the motherboard and a second PSU for the 4 cards, which of course is the opposite of redundant power supplies. My current config (2 compute cards and 1 video card in a workstation configuration) will fit into the 15A 110V envelope fairly well, unless you are playing video games on the workstation at the same time as spinning up the compute cards.

The brother-in-law wire-pulling visit is likely unavoidable. 220V 30A will save some money on the power bill as well once the lab is finally assembled. It would be very nice to have 2000W SQ redundant power supplies drawing less than 80% at peak. Setting the basement up like a data center with separate circuits for failover seems a bit overkill, but 2 cables isn't much more work than 1. Happily it's all in the basement and nothing is closed in. I have to wait for paychecks to start arriving for this part of the adventure, however.
 

frogtech

Well-Known Member
Jan 4, 2016
This is still a solid dual-socket server with 4 expansion slots, is it not? I mean, the power draw is almost a non-issue if you don't use Xeon Phis or other compute cards.