https://blogs.technet.microsoft.com/filecab/2016/10/14/kepler-47/


PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Love it. Best part is they did this with 100% off-the-shelf orderable parts. No custom packaging or anything. Very nice.
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
from the article:
(In case you missed it, support for Storage Spaces Direct clusters with just two servers was announced at Ignite!)

Chris
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Love it. Best part is they did this with 100% off-the-shelf orderable parts. No custom packaging or anything. Very nice.
Almost....

Notably, Kepler-47 does not use traditional Ethernet networking between the servers, eliminating the need for costly high-speed network adapters and switches. Instead, it uses Intel Thunderbolt™ 3 over a USB Type-C connector, which provides up to 20 Gb/s (or up to 40 Gb/s when utilizing display and data together!) – plenty for replicating storage and live migrating virtual machines.

To pull this off, we partnered with our friends at Intel, who furnished us with pre-release PCIe add-in-cards for Thunderbolt™ 3 and a proof-of-concept driver.
and lower down

Finally, the Thunderbolt™ 3 controller chip in PCIe add-in-card form factor was pre-release, for development purposes only. It was graciously provided to us by our friends at Intel. They have cited a price-tag of $8.55 for the chip, but not made us pay yet.
Of course, you could build the same thing using a pair of ConnectX-2 EN cards and a single SFP+ direct-attach cable for about the same price, or with many other options that don't require a switch.

But their end price of $1100 per node is not really accurate. It's more like $1100 + some kind of networking + a bunch of drives + licensing of Server 2016 Datacenter.
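
A rough sketch of the host-side config for that kind of switchless link, with made-up adapter aliases and addresses; if the cards do support RDMA, SMB Direct will pick it up automatically:

# Give each node one end of the point-to-point link (alias/IPs are placeholders)
New-NetIPAddress -InterfaceAlias "10GbE-Direct" -IPAddress 172.16.1.1 -PrefixLength 24   # node 1
New-NetIPAddress -InterfaceAlias "10GbE-Direct" -IPAddress 172.16.1.2 -PrefixLength 24   # node 2

# Check whether the NICs expose RDMA and whether SMB will use it (SMB Direct)
Get-NetAdapterRdma -Name "10GbE-Direct"
Get-SmbClientNetworkInterface | Where-Object RdmaCapable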
 

tjk

Active Member
Mar 3, 2013
481
199
43
Clickbait, considering the OS cost for S2D, which requires Datacenter Edition, is many times the cost of the hardware they provided.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Good to see. I was reading this and thought to myself, "I wonder what kind of deal they have with MSFT for 2x Datacenter licenses plus hardware at $1100 per node."

I then realized that basically they have a cheap motherboard, case, power supply, CPU, and a small amount of RAM included.

Also, the Thunderbolt chip price is not really accurate, because an add-in card would cost a bit more, as would having it integrated on the motherboard.

I tend not to promote these types of builds on the STH site as they are unlikely to scale beyond 2-3 nodes. Maybe I should do more of them?
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
It won't scale beyond 2 nodes if you stick with direct network connections like that, but that doesn't mean there isn't a use for it. For some sales/marketing types, having a cluster with enough performance to show off all the advanced features while fitting into a carry-on-sized package is very useful and doesn't ever need to scale. It's a cool build, just not one destined to ever see production workloads - the same could be said about many of our home labs.
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
For the past week, I have been working on automated deployment of a 2-node hyper-converged S2D Hyper-V cluster.
I usually spend an enormous amount of time upfront building the process for automated deployment.
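
The core of that kind of deployment ends up being just a handful of cmdlets; a minimal sketch, assuming placeholder node/cluster names and a file share witness (a 2-node cluster needs some kind of witness for quorum):

# Validate and build the cluster (names and addresses are placeholders)
Test-Cluster -Node "node1","node2" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2D-LAB" -Node "node1","node2" -NoStorage -StaticAddress 192.168.1.50

# Two nodes can't maintain quorum alone - add a file share (or cloud) witness
Set-ClusterQuorum -Cluster "S2D-LAB" -FileShareWitness "\\witness\share"

# Enable Storage Spaces Direct, then carve a mirrored CSV volume out of the pool
Enable-ClusterStorageSpacesDirect -CimSession "S2D-LAB"
New-Volume -CimSession "S2D-LAB" -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs" -FileSystem CSVFS_ReFS -Size 1TB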

For my homelab, within 6 months, 99% of my systems will be torn apart and rebuilt. Microsoft eval trial software is perfect for my lab.

For now, I am experimenting with a pair of Intel 1U E3 v1 systems as well as a pair of HP DL120e G7 E3 v1 systems (the HP systems have two PCIe slots).

The final goal is to deploy the S2D cluster on a pair of low-power E3 v3 ITX boards with 2 Intel NICs + a dedicated IPMI port, and Mellanox ConnectX-3 cards for a point-to-point high-speed RDMA interconnect, without a noisy, power-hungry switch. Hopefully the entire S2D cluster would come in under 100 watts.
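
Roughly what the host-side config for that interconnect would look like - a sketch only, with a placeholder adapter alias and subnet:

# Enable RDMA on the ConnectX-3 ports (alias is a placeholder)
Enable-NetAdapterRdma -Name "CX3-Direct"

# Live migrate over SMB (and therefore SMB Direct/RDMA), restricted to the direct-link subnet
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
Add-VMMigrationNetwork 172.16.1.0/24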

Already have the ITX boards at $50 each, Mellanox CX-3 cards at $50 each, and RAM on hand; I have a few Intel i5 CPUs already but no E3 v3 CPUs yet.
 

Netwerkz101

Active Member
Dec 27, 2015
308
90
28
Interesting ... wish I could have attended to see it in person.

I can think of some alternatives based on the items listed with prices:
2 x Supermicro 5028D-TN4T Mini Tower, Intel Xeon processor D-1541 8-core System-on-Chip
with 4 x 6TB or 4 x 8TB drives

Maybe a performance hit with 4 vs. 8 drives ... but compute-node-wise, the potential kills that ASRock build.
That would be $1200 per node without disks ... 'cause we have 4 x 8TB drives laying around :)

=== === === === ===
or.... for $900 per node and some creative tinkering ...'cause we love to tinker:

Motherboard: Supermicro X10SDV-TP8F
$488
http://www.compsource.com/ttechnote.asp?part_no=MBDX10SDVTP8FO&vid=428&src=F

CPU: Xeon-D 1518 4c/8t (SoC)

Memory: 32GB (1x32GB) Hynix DDR4-2133 32GB/4Gx72 ECC/REG CL15 Chip Server Memory HMA84GR7MFR4N-TF
$160
https://www.amazon.com/Hynix-DDR4-2133-Server-Memory-HMA84GR7MFR4N-TF/dp/B00WSW6TEE


Boot Device: Supermicro SSD-DM032-PHI 32GB SATA DOM
$60
SSD-DM032-PHI Supermicro 32GB SATA DOM - Server Accessory Other - SuperBiiz.com


Networking Cable: Cable Matters 0.5m 20 Gb/s USB Type-C Thunderbolt™ 3
$21
https://www.amazon.com/USB-IF-Certified-Cable-Matters-Thunderbolt/dp/B01AS8U7GU


SATA Cables: 8 x SuperMicro CBL-0481L
$14
Supermicro SATA Flat Straight-Straight Cable 81cm (CBL-0481L)


Chassis: Supermicro Superchassis Cse-504-203B 200W Mini 1U Rackmount Server Chassis (Black)
$99
Supermicro Superchassis Cse-504-203B 200W Mini 1U Rackmount Server Chassis (Black) - Newegg.com

Power Supply: 200w 80+ Certified - Included with Supermicro chassis above

Heatsink: SNK-C0057A4L
$37
Supermicro SNK-C0057A4L Accessories



Now...in this case, we just happen to have 8 x 2TB SSDs laying around per node along with an 8 port HBA.

This won't be all that quiet ... but we just happen to have some noise cancelling heaphones laying around too. ;)