Intel Xeon D 3U MicroBlade

Patrick

Administrator
Staff member
I managed to get these powered on in the datacenter yesterday. SM let me borrow 4x of their MicroBlade sleds. Two are dual Xeon D node sleds with one SSD per node. The other two are single Xeon D node sleds, but with spots for 4x SSDs per node.

Here are the nodes getting ready for installation:

Supermicro 3U MicroBlade Xeon D Sleds Pre-Install.JPG

And during the install (sorry these pics stink, lighting in the datacenter is... sub-optimal for phone cameras.)
Supermicro 3U MicroBlade Xeon D nodes racked.JPG

Hoping these become a Docker Swarm cluster soon.
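
If anyone is curious, the actual clustering step is only a couple of commands with Docker's built-in swarm mode; a rough sketch, assuming Docker is already installed and using 10.0.0.1 as a placeholder for the first node's IP:

# on the first node, which becomes the swarm manager
docker swarm init --advertise-addr 10.0.0.1
# on each of the other nodes, using the worker token that init prints out
docker swarm join --token <worker-token> 10.0.0.1:2377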

The really cool bit about this system is that we have 6 nodes in 4 of the 14 blade spots. I think you can get up to 28 Xeon D nodes in the 3U enclosure and there are 4x 40GbE uplinks in the rear of the chassis.
 

Patrick

Administrator
Staff member
For this clustering stuff, here are the chassis involved (the middle one someone else is using for a review):

Supermicro SuperBlade GPU x2 and MicroBlade.JPG

They are not even close to filled, but enough that I have multiple nodes in multiple chassis to work with. Now just waiting to get GPUs!
 

Ramos

Member
That is probably the optimal hardware for a 10+ node Hadoop cluster right now because of the superb performance per watt, while still having enough power per node for each one to matter on its own. And in an enterprise environment, they are dirt cheap. Hell, you might even get D-1587s out of the box, or D-1567s if those are available in enterprise nodes like these.

I have worked on three clusters so far:
- Two of them are test clusters of 1+3 and 2+3 nodes (masters + slaves), each with 2x E5-2620s and 128/64 GB RAM (masters/slaves)
- The 3rd is a prod cluster of 1 edge + 3 + 3 nodes with 2x E5-2690 v3 and 256 GB RAM each.

Oh man, to have a 4+24 node cluster of D-15xxs. In 3U. Including a 48+ port 10 Gbps switch!

Jeezus. Maybe a dual E5 with 256 GB for master-1, but the rest could easily be D-15xxs.

And 10 Gbps would be SO welcome. Hell, if we ever need more, add more links and bond them; that is the easy part.
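
For reference, a bond over a pair of 10 Gbps ports on the Ubuntu side is only a few lines of ifenslave config; a rough sketch, assuming interfaces named eth0/eth1, a switch that speaks LACP, and a placeholder address:

# /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100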

The HP Mellanox 40/56 Gbps cards were such a <female dog> to get working with drivers. They worked with CentOS 6.5, not with 6.7, and not with 7.1 (weird OS kernel error on boot). Even when run as 2x 40 GbE broken out to 4x 10 Gbps SFP+ (the looong plugs, with converters) and then bonded, they were a hassle. Also, every NIC cost $1,500 and our switches for the Mellanoxes were $15,725 each. I am OVER those cards until they start making sense after install... /rant :)
 

Patrick

Administrator
Staff member
@Ramos Yeah, I noticed that the ConnectX-3 Pros do not come standard with FlexBoot, so PXE booting from them is not possible. The MicroBlades are all equipped with Intel NICs.

The really annoying thing I am learning is that MAAS 1.8.3 still seems to use Ubuntu 14.04, and the 14.04 commissioning image does not have drivers for these Intel NICs, so they are not supported. I spent from 12 AM to 3 AM this morning between the 14.04 issue and the ConnectX-3 FlexBoot.
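
For anyone hitting the same wall, a quick sanity check is to get a shell on the 14.04 environment on one of the nodes and see whether its kernel driver even claims the onboard NICs' PCI IDs; this assumes the ports are the Xeon D's integrated 10GbE handled by ixgbe:

lspci -nn | grep -i ethernet       # PCI vendor:device IDs of the onboard NICs
modinfo ixgbe | grep -i version    # ixgbe driver version shipped with that kernel
modinfo ixgbe | grep -i alias      # PCI IDs the driver actually claims to support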