I just finished physically installing a 7U Supermicro SuperBlade in the Sunnyvale datacenter test lab. It is a big, heavy monster. At this point we "only" have four dual Xeon E5-2600 nodes in the chassis, each of which supports dual GPUs.
Here is a pic with two other multi-node enclosures: a 3U MicroBlade (top) and a 2-node 7U SuperBlade (middle).
Just to power the monster, it takes 4x 3kW power supplies. Here is one next to the 1kW 80+ Titanium PSU from our Windows Server 2012 R2 machine in the lab:
The IPMIView management interface for the CMM (which is onboard the 10Gb SFP+ switch):
SuperBlade BIOS via IPMI - very similar to a normal Supermicro motherboard, just with less cabling!
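If you would rather script against the BMC than use the IPMIView GUI, a minimal ipmitool sketch looks like the following. The host address and username here are placeholders, not values from this lab; substitute your own CMM/BMC address and credentials.

```shell
#!/bin/sh
# Placeholder BMC/CMM address and user (assumptions, not from this build).
CMM_HOST=10.0.0.50
IPMI_USER=ADMIN

# Compose the ipmitool invocation that reaches the blade's BMC over the
# chassis network using the IPMI 2.0 "lanplus" interface.
IPMI_CMD="ipmitool -I lanplus -H $CMM_HOST -U $IPMI_USER chassis status"
echo "$IPMI_CMD"

# Uncomment to actually run it against real hardware
# (-a prompts interactively for the password):
# ipmitool -I lanplus -H "$CMM_HOST" -U "$IPMI_USER" -a chassis status
```

From there, `ipmitool ... sol activate` gives you a Serial-over-LAN console, which is how you would reach the BIOS screen shown above without the Java KVM viewer.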
More to come in a few days.