Supermicro SuperBlade System with GPU Blade Thread


Patrick

Administrator
Staff member
I just finished physically installing a 7U Supermicro SuperBlade in the Sunnyvale datacenter test lab. It is a big, heavy monster. At this point we "only" have 4x dual Xeon E5-2600 nodes in the chassis, each supporting dual GPUs.

Here is a pic with two other multi-node enclosures: a 3U MicroBlade (top) and a 2-node 7U SuperBlade (middle).

[Image: Supermicro SuperBlade GPU x2 and MicroBlade]

Just to power the monster: 4x 3kW power supplies. Here is one next to the 1kW 80+ Titanium PSU from our Windows Server 2012 R2 machine in the lab:

[Image: Supermicro SuperBlade 3kW power supply]

The IPMIview Management Interface for the CMM (which is onboard the 10Gb SFP+ switch):
[Image: IPMIview management interface for the CMM]

SuperBlade BIOS via IPMI - very similar to a normal Supermicro motherboard, just less cabling!
[Image: SuperBlade BIOS setup via IPMI]
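Since the CMM exposes standard IPMI alongside the IPMIview GUI, it should also be scriptable with ordinary IPMI tooling. Here is a minimal sketch using Python to drive ipmitool over the network; the CMM address and credentials are hypothetical placeholders, so substitute your own:

```python
import subprocess

CMM_HOST = "10.0.0.10"   # hypothetical CMM address - substitute your own
IPMI_USER = "ADMIN"      # hypothetical credentials
IPMI_PASS = "ADMIN"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the CMM over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", CMM_HOST, "-U", IPMI_USER, "-P", IPMI_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "status"))    # power state and fault flags
    print(ipmi("sdr", "type", "Fan"))   # fan readings from the sensor data repository
```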

More to come in a few days.
 

Evan

Well-Known Member
That's funny, the HP c7000 is rather light once you remove the PSUs and fans. Makes for a reasonably easy install, actually.
If you have the need, I love blades in the enterprise space. You can't beat the density or the ease of management and cabling (aside from OOB management, it is nearly always just fiber).
 

Patrick

Administrator
Staff member
@Evan these are possible to install with one person. It took a forklift/pallet to load it into a cargo van with all the shipping materials, rails, etc. I did have to get some help with the unboxing (pics when I get on a faster Internet connection). I was OK moving it with the blades removed, but even with four blades in I thought it was best not to try it solo.
 

TuxDude

Well-Known Member
I've racked c7000s solo before - not recommended, but not too difficult if you remove everything possible and aren't sticking it up too high in the rack. I do like the ease of management of blades, and that I haven't had to touch a cable on any of the enclosures since install, despite making quite a few changes since then. But they are actually lower density than the many 4-node 2U systems on the market: a c7000 gets you 16 two-socket nodes in 10U, while the same 10U of quad-node 2U boxes holds 20 nodes, with approximately the same DIMM/IO slot restrictions.
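The density comparison is easy to sanity-check. A quick sketch with the numbers from the post above:

```python
# Sanity-checking the density comparison, using the numbers in the post.
c7000_nodes, c7000_u = 16, 10   # HP c7000: 16 two-socket blades in 10U
quad_nodes, quad_u = 4, 2       # 2U four-node box: 4 nodes in 2U

print(f"c7000 density: {c7000_nodes / c7000_u:.1f} nodes/U")   # 1.6
print(f"2U4N density:  {quad_nodes / quad_u:.1f} nodes/U")     # 2.0
print(f"2U4N nodes in 10U: {(10 // quad_u) * quad_nodes}")     # 20
```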
 

Patrick

Administrator
Staff member
@TuxDude - I have been leaving the bottom portions of the rack open for these just because I know they are so big! Thank goodness I did not try to put this in the top 20U. I think we should also be getting a MicroBlade platform, which would be nice since those have higher density. The GPU blade system we have would get you 20x double-slot GPUs and 20x CPUs with 4 DIMMs per CPU in a fully populated chassis. I am working on getting 8x GPUs from NVIDIA, but that is... fun.
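For reference, those chassis totals work out from the per-blade figures. A small sketch, assuming the 20-GPU/20-CPU totals imply ten dual-GPU, dual-CPU blades in the 7U chassis:

```python
# Tallying a fully populated GPU SuperBlade from the per-blade figures.
# The blade count of 10 is inferred from the 20-GPU / 20-CPU totals above.
blades = 10
gpus_per_blade, cpus_per_blade, dimms_per_cpu = 2, 2, 4

print(f"GPUs:       {blades * gpus_per_blade}")                   # 20 double-slot GPUs
print(f"CPUs:       {blades * cpus_per_blade}")                   # 20 CPUs
print(f"DIMM slots: {blades * cpus_per_blade * dimms_per_cpu}")   # 80
```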
 

badskater

Automation Architect
I know the feeling with blades. I install UCS on a weekly basis, and it's "fun" to do solo. ;) Waiting on the review for these, as I may move to Supermicro blades at home (they would be far more compact than the multiple 2Us I have right now...).
 

Patrick

Administrator
Staff member
Updated this thread with the Supermicro IPMIview chassis management module screenshot.

BTW @badskater I think we need a UCS review for STH! :)
 

Patrick

Administrator
Staff member
Awesome @badskater ! One thing I would want to be sure of is that we do not release any information related to your work. Happy to chat via PM on this, but likely your boss will have a similar concern. Best to be up front about that.
 

badskater

Automation Architect
Patrick said:
Awesome @badskater ! One thing I would want to be sure of is that we do not release any information related to your work. Happy to chat via PM on this, but likely your boss will have a similar concern. Best to be up front about that.
Not a problem, I will contact you when I am home to check what might be possible.
 

Patrick

Administrator
Staff member
Quote:
Hello

I own UCS (blades, racks, other). What is interesting to you?
Awesome.

I would love to have a piece that talks about management, power consumption, etc. Maybe even some cool pictures of them.
 

modder man

Active Member
We have a bit of UCS here at work - a few Vblocks and CI racks full of UCS. I am supposed to be getting training soon; perhaps after that I could write up a little bit. As of right now I don't have much experience with it.

I can't claim to have ever written anything formal for a forum, though.
 

Patrick

Administrator
Staff member
Here is a combined power graph of the 2-node dual E5-2690 V3 blade chassis running Linux-Bench on node 1:

[Image: SuperBlade chassis power consumption graph]

Under 700 watts for:
  • 1x dual E5-2690 V3 node running benchmarks
  • 1x dual E5-2690 V3 node at idle
  • 1x 10GbE switch
  • 1x chassis management module
  • 1x chassis running a TON of fans, including those cooling empty slots 3-10.
Not too bad, since we normally see a standalone 2P E5 system like that draw 500W alone.
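A quick back-of-envelope sketch on what that implies for chassis overhead, using only the figures above:

```python
# Back-of-envelope: if a standalone 2P E5 box under load draws ~500W,
# the switch, CMM, fans, and the idle node together add under ~200W here.
chassis_draw_w = 700     # observed upper bound from the power graph
standalone_2p_w = 500    # typical standalone dual E5-2690 V3 under load

print(f"Everything besides the loaded node: <{chassis_draw_w - standalone_2p_w}W")
```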