Ultra Hot - 2U 4 socket SWATX case with power supplies $160!!!!!


dba

Moderator
Nice find! This is the same Supermicro case used in the 2042G quad-AMD server and, with a fan transplant, should fit the new quad Xeon E5 board.

I really shouldn't have, but I bought one. Perhaps it'll be my first Xeon E5 server when fitted with an X9QR7-TF+ motherboard.

I have owned several quad-AMD SM servers with this case and I like it. It is massive and loud, but you'd expect that in a 2U designed to cool four fast CPUs. Cooling for CPU1 is a bit weak, but that's not an issue unless you run all-out 24x7. I recommend picking up a spare power supply if you can - I have had two blow up before their time, both of which SM replaced under warranty without a hassle.

Supermicro case. Fits 4-socket AMD server boards -- SWATX -- redundant power supplies. $99 obo + $60 shipping

Supermicro CSE 828TQ R1200LPB Case Refurbished | eBay

omg
 

vv111y

Member
I'm thinking of a GPU workstation - any idea if those PSUs can be modded to feed video cards? That alone would cut the PSU cost in half, and then whatever for the rest of the case.
 

dba

Moderator
The output from the chassis includes two eight-pin plugs that normally attach to the motherboard. I suppose that you could bifurcate those to provide power to the GPU.

If you do, there should be enough power. The PSU is rated for something like 100A at 12V, and I know that one motherboard supported by this chassis allows, via BIOS, up to 150W to each x16 slot without external plugs - in addition to four 115W CPUs and 32 DIMMs.
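
For a rough sanity check on that headroom, here is a minimal back-of-the-envelope sketch. The 100A @ 12V rating and the four 115W CPUs come from the numbers above; the per-DIMM, overhead, and GPU wattages are purely illustrative assumptions.

```python
# Back-of-the-envelope 12V power budget. Figures marked "assumed"
# are illustrative guesses, not vendor specifications.

PSU_12V_AMPS = 100                    # ~100A @ 12V, per the post
PSU_12V_WATTS = PSU_12V_AMPS * 12     # ~1200W available on the 12V rail

CPU_WATTS = 4 * 115                   # four 115W CPUs (from the post)
DIMM_WATTS = 32 * 5                   # 32 DIMMs at an assumed ~5W each
OVERHEAD_WATTS = 100                  # assumed fans, drives, VRM losses

headroom = PSU_12V_WATTS - CPU_WATTS - DIMM_WATTS - OVERHEAD_WATTS
print(f"12V headroom: ~{headroom} W")                    # ~480 W

GPU_WATTS = 225                       # assumed board power for one GPU
print(f"GPUs that could fit: {headroom // GPU_WATTS}")   # ~2
```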

Then again, those are low-profile PCIe slots unsuitable for most GPUs...

...Of course you could always "go big" with a Cell C410x sub-chassis with space and power for 16 GPUs:
PowerEdge C410x PCIe Expansion Chassis Details | Dell

These are surprisingly cheap on eBay:
Dell B02S001 PowerEdge C410X Computational Power | eBay

I'm thinking of a GPU workstation - any idea if those PSUs can be modded to feed video cards? That alone would cut the PSU cost in half, and then whatever for the rest of the case.
 

vv111y

Member
Holy smokes, what a beast. I'm looking up how the GPUs connect and to what mobos... this could be an incredible deal, thanks dba! :thumbsup:

PCIe 2.0, not 3, but still for the price...

For the OP's box, one could just not put the top on, or hack some kind of cover I suppose.
 

dba

Moderator
The GPUs live in trays which plug into the C410x. Finding trays might take some time - you either have to buy GPUs from Dell or find a source of empty trays. Dell sells empty trays under part number 331-1709, but it appears that these sleds support only half-height GPUs, whereas the ones that come with their Tesla cards support full-height GPUs.

The C410x chassis has 16 GPU slots and eight external PCIe x16 connectors. You configure the C410x to divide those sixteen GPU slots among the connected PCs - two GPUs to each of eight computers, eight GPUs to each of two computers, etc.
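
As a quick illustration of those splits, a tiny sketch (assuming only the whole-number divisions mentioned above; exactly which configurations the C410x management interface allows isn't verified here):

```python
# Even splits of the C410x's 16 GPU bays across its external x16 host
# connections. Which splits are actually selectable is an assumption
# based on the examples in the post.

TOTAL_GPU_BAYS = 16

for hosts in (8, 4, 2):
    gpus_per_host = TOTAL_GPU_BAYS // hosts
    print(f"{hosts} hosts x {gpus_per_host} GPUs each")
# 8 hosts x 2 GPUs each
# 4 hosts x 4 GPUs each
# 2 hosts x 8 GPUs each
```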

The external PCIe connectors on the 410x connect via fat cables to PCIe extension cards that have the same connector. Basically, you buy a special PCIe card and cable that "extends" a motherboard x16 PCIe connection to the 410x chassis. Here is one such card and cable for $100: NVIDIA Tesla P797 HIC Host Interface Card x16 PCI Express Cable S1070 S2050 GPU | eBay

If you can find the GPU trays then you have the makings of the greatest home folding rig the world has ever seen.

Also, take a look at the "little brother" of the Dell C410x - the NVidia S1070. Here is one with cards and everything: http://www.ebay.com/itm/151007825231 I have seen others sell for as little as $150.

Holy smokes, what a beast. I'm looking up how the GPUs connect and to what mobos... this could be an incredible deal, thanks dba! :thumbsup:

PCIe 2.0, not 3, but still for the price...

For the OP's box, one could just not put the top on, or hack some kind of cover I suppose.
 

Scout255

Member
How does the expansion chassis work exactly? Does it just split the one x16 PCIe 2.0 slot among 2-4 PCIe slots in the expansion module (and hence lower the number of lanes available per card), or is it doing something else to limit the bandwidth reduction?
 

vv111y

Member
clear as mud.
Neat idea and it's geeky cool, but now it doesn't seem like such a deal. Looks like you're paying for density - going from 16U down to 11U, but paying ~$136/GPU for the privilege. Or... if you don't need much bandwidth from the GPUs to the mobos, then this could save cash (fewer servers). I'm pretty sure I need the bandwidth though - it's for neural nets.

RimBlock, thanks for the links - those Dell videos are actually useful and good! Wow, from Dell. Go figure.
And I get more of the value of your C6100 thread now.
 

Patrick

Administrator
Staff member
dba - on the MrRackables Tesla chassis, only 208-220V input. For those looking in the US, this is not standard.
 

dba

Moderator
Ultra-geeky, I admit. I was evaluating this solution to increase IO when using high-density servers with few (but fast) PCIe slots. My idea was to extend x16 slots from multiple servers to a C410x chassis into which I'd install lots of inexpensive LSI HBAs. I imagined a 2U Dell C6145 server with two motherboards, each of which has all four of its x16 slots extended to the C410x. That's eight CPUs (96 or 128 cores total) and 16 LSI HBAs in 5U, which is pretty darn good. My current solution has four CPUs and 11 HBAs in 4U. In the end I did not find reliable sources for either the C6145 servers or the C410x sleds. Also, I found out that the C6145s only have two IO controllers, so I'd be wasting half of the available disk bandwidth.

While you are right that it's not the cheapest way to run one or two GPUs, I have to disagree about the bandwidth. With eight x16 slots' worth of host connectivity and bandwidth, that's 640 gigabits per second of raw bandwidth and around 48 gigabytes per second of real-world disk IO.
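
The arithmetic behind those figures, as a short sketch (PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding; the ~75% real-world efficiency factor is an assumption chosen to match the ~48 GB/s estimate above):

```python
# PCIe 2.0 bandwidth math for eight x16 host links.

SLOTS = 8
LANES_PER_SLOT = 16
GT_PER_LANE = 5.0                      # PCIe 2.0: 5 GT/s per lane

raw_gbit = SLOTS * LANES_PER_SLOT * GT_PER_LANE
print(f"raw: {raw_gbit:.0f} Gb/s")     # 640 Gb/s

payload_gbyte = raw_gbit * 8 / 10 / 8  # 8b/10b overhead, then bits -> bytes
print(f"theoretical payload: {payload_gbyte:.0f} GB/s")   # 64 GB/s

EFFICIENCY = 0.75                      # assumed real-world efficiency
print(f"real-world estimate: {payload_gbyte * EFFICIENCY:.0f} GB/s")  # ~48
```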

clear as mud.
Neat idea and it's geeky cool, but now it doesn't seem like such a deal. Looks like you're paying for density - going from 16U down to 11U, but paying ~$136/GPU for the privilege. Or... if you don't need much bandwidth from the GPUs to the mobos, then this could save cash (fewer servers). I'm pretty sure I need the bandwidth though - it's for neural nets.

RimBlock, thanks for the links - those Dell videos are actually useful and good! Wow, from Dell. Go figure.
And I get more of the value of your C6100 thread now.
 

dba

Moderator
The 1200 and 1400 watt power supplies are interchangeable - you can even use one of each at the same time. One of my 1200 watt supplies died recently and SM replaced it, under warranty, with the 1400 watt version.

Jeff

Re-listed with 10+ qty available: Supermicro CSE 828TQ R1200LPB Case Refurbished | eBay

Looks like SM changed these to 1400W PSUs in a later rev. Dunno if that matters.
 

vv111y

Member
... I was evaluating this solution to increase IO when using high-density servers with few (but fast) PCIe slots. My idea was to extend x16 slots from multiple servers to a C410x chassis into which I'd install lots of inexpensive LSI HBAs. I imagined a 2U Dell C6145 server with two motherboards, each of which has all four of its x16 slots extended to the C410x. That's eight CPUs (96 or 128 cores total) and 16 LSI HBAs in 5U, which is pretty darn good. My current solution has four CPUs and 11 HBAs in 4U....

While you are right that it's not the cheapest way to run one or two GPUs, I have to disagree about the bandwidth. With eight x16 slots' worth of host connectivity and bandwidth, that's 640 gigabits per second of raw bandwidth and around 48 gigabytes per second of real-world disk IO.
Apologies dba, I was going to get to this but got sidetracked. You're right - I meant it in terms of reducing the number of x16 server slots for the $, with the sleds/cards then sharing slots. That's some serious storage throughput you're working on - I've never had to look at something that big. Nice.