Jan 26, 2011
Love it. Best part is they did this with 100% off-the-shelf orderable parts. No custom packaging or anything. Very nice.


Well-Known Member
Mar 26, 2013
from the article:
(In case you missed it, support for Storage Spaces Direct clusters with just two servers was announced at Ignite!)



Well-Known Member
Sep 17, 2011
Love it. Best part is they did this with 100% off-the-shelf orderable parts. No custom packaging or anything. Very nice.

Notably, Kepler-47 does not use traditional Ethernet networking between the servers, eliminating the need for costly high-speed network adapters and switches. Instead, it uses Intel Thunderbolt™ 3 over a USB Type-C connector, which provides up to 20 Gb/s (or up to 40 Gb/s when utilizing display and data together!) – plenty for replicating storage and live migrating virtual machines.

To pull this off, we partnered with our friends at Intel, who furnished us with pre-release PCIe add-in-cards for Thunderbolt™ 3 and a proof-of-concept driver.
and lower down

Finally, the Thunderbolt™ 3 controller chip in PCIe add-in-card form factor was pre-release, for development purposes only. It was graciously provided to us by our friends at Intel. They have cited a price-tag of $8.55 for the chip, but not made us pay yet.
Of course, you could build the same thing using a pair of ConnectX-2 EN cards and a single SFP+ direct-attach cable for about the same price, or many other options that don't require a switch.

But their end price of $1100 per node is not really accurate. More like $1100 + some kind of networking + a bunch of drives + licensing for Server 2016 Datacenter.
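To put rough numbers on that, here is a back-of-the-envelope tally. Every figure below except the $1100 base is an assumed placeholder, not a sourced quote:

```python
# Back-of-the-envelope per-node cost for a realistic 2-node S2D build.
# All figures except the $1100 base are assumed placeholders, not quotes.
base_node = 1100        # per-node hardware cost quoted in the article
networking = 50         # e.g. a used ConnectX-2 EN card; DAC cable amortized
drives = 4 * 100        # four mid-sized drives per node (assumed $100 each)
licensing = 6155        # Windows Server 2016 Datacenter 16-core list price (approx.)

per_node_total = base_node + networking + drives + licensing
print(f"Realistic per-node cost: ${per_node_total}")  # -> $7705
```

The exact licensing figure depends on the agreement, but the point stands either way: the Datacenter license dwarfs the hardware.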


Staff member
Dec 21, 2010
Good to see. I was reading this and thought to myself "I wonder what they have MSFT deal wise for 2x Datacenter licenses plus hardware at $1100 per node."

I then realized that basically they have a cheap motherboard, case, power supply, and CPU, with minimal RAM included.

Also, the Thunderbolt chip price is not really accurate, because an add-in card will cost a bit more, as would having it integrated on the motherboard.

I tend not to promote these types of builds on the STH site as it is unlikely to scale beyond 2-3 nodes. Maybe I should do more of them?


Well-Known Member
Sep 17, 2011
It won't scale beyond 2 nodes if you stick with direct network connections like that - but that doesn't mean there isn't a use for it. For some sales/marketing types, a cluster with enough performance to show off all the advanced features while fitting into a carry-on sized package is very useful and doesn't ever need to scale. It's a cool build, just not one destined to ever see production workloads - the same could be said about many of our home labs.


May 12, 2013
For the past week, I've been working on automated deployment for a two-node hyper-converged S2D Hyper-V cluster.
I usually spend an enormous amount of time upfront building the process for automated deployment.

For my homelab, within 6 months, 99% of my systems will be torn apart and rebuilt. Microsoft eval trial software is perfect for my lab.

For now, I am experimenting on a pair of Intel 1U E3 v1 systems as well as a pair of HP DL120e G7 E3 v1 systems (the HP systems have two PCIe slots).

The final goal is to deploy an S2D cluster on a pair of low-power E3 v3 ITX boards with 2 Intel NICs + a dedicated IPMI port, plus Mellanox ConnectX-3 cards for a point-to-point high-speed RDMA interconnect, without a noisy, power-hungry switch. Hopefully the entire S2D cluster would come in under 100 watts.
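As a sanity check on that 100 W target, a quick sum of assumed draws; every wattage below is a rough guess, not a measurement:

```python
# Rough power budget for the proposed 2-node switchless S2D cluster.
# Every wattage below is an assumed ballpark figure, not a measured value.
node_watts = {
    "E3 v3 ITX board + CPU, lightly loaded": 30,
    "Mellanox ConnectX-3 NIC": 8,
    "RAM, SSDs, and fans": 8,
}
per_node = sum(node_watts.values())   # 46 W
cluster_total = 2 * per_node          # 92 W -- under the 100 W goal
print(f"Per node: ~{per_node} W, cluster: ~{cluster_total} W")
```

With no switch in the loop, the two-node total stays under budget only if each node idles in the mid-40s, which seems plausible for Xeon E3 ITX boards but would need to be verified at the wall.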

I already have the ITX boards at $50 each and Mellanox CX-3 cards at $50 each, RAM on hand, and a few Intel i5 CPUs, but no E3 v3 CPUs yet.


Active Member
Dec 27, 2015
Interesting ... wish I could have attended to see it in person.

I can think of some alternatives based on items listed with prices:
2 x Supermicro 5028D-TN4T Mini Tower Intel Xeon processor D-1541 8-Core System-on-Chip
with 4 x 6TB or 4 x 8TB

Maybe a performance hit with 4 vs. 8 drives ... but compute-node wise, the potential kills that ASRock build.
That would be $1200 per node without disks ... 'cause we have 4 x 8TB drives laying around :)

=== === === === ===
or.... for $900 per node and some creative tinkering ...'cause we love to tinker:

Motherboard: Supermicro X10SDV-TP8F

CPU: Xeon-D 1518 4c/8t (SoC)

Memory: 32GB (1x32GB) Hynix DDR4-2133 32GB/4Gx72 ECC/REG CL15 Chip Server Memory HMA84GR7MFR4N-TF

Boot Device: Supermicro SSD-DM032-PHI 32GB SATA DOM

Networking Cable: Cable Matters 0.5m 20 Gb/s USB Type-C Thunderbolt™ 3

SATA Cables: 8 x SuperMicro CBL-0481L

Chassis: Supermicro Superchassis Cse-504-203B 200W Mini 1U Rackmount Server Chassis (Black)

Power Supply: 200w 80+ Certified - Included with Supermicro chassis above

Heatsink: Supermicro SNK-C0057A4L

In this case, we just happen to have 8 x 2TB SSDs laying around per node, along with an 8-port HBA.

This won't be all that quiet ... but we just happen to have some noise-cancelling headphones laying around too. ;)
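To see how the parts above might add up to the quoted ~$900 per node, here is one possible tally; every per-item price is an illustrative guess, not a sourced quote:

```python
# Hypothetical per-node BOM for the X10SDV-TP8F build sketched above.
# Per-item prices are illustrative guesses; verify current pricing before buying.
bom = {
    "Supermicro X10SDV-TP8F (Xeon D-1518 SoC)": 450,
    "32GB DDR4-2133 ECC RDIMM": 150,
    "Supermicro 32GB SATA DOM": 60,
    "Thunderbolt 3 USB-C cable (half of shared cost)": 15,
    "8x Supermicro CBL-0481L SATA cables": 25,
    "CSE-504 chassis with 200W PSU": 180,
    "SNK-C0057A4L heatsink": 20,
}
per_node = sum(bom.values())
print(f"Per-node total: ${per_node}")  # -> $900
```

The SSDs and HBA "laying around" are the real trick, of course - priced in, they would easily double the per-node figure.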