Storage Spaces Direct platform


DavidRa

Infrastructure Architect
Aug 3, 2015
330
153
43
Central Coast of NSW
www.pdconsec.net
S2D requires a minimum of 2 nodes. It was announced more than two months ago and is finally available in TP5, released today.
The Storage Spaces Direct in Windows Server 2016 Technical Preview page still says,
Microsoft said:
Storage hardware: The storage system consisting of a *minimum of four storage nodes* with local storage. Each storage node can have internal disks, or disks in an external SAS connected JBOD enclosure. The disk devices can be SATA disks, NVMe disks or SAS disks.
and is marked as updated yesterday. Emphasis mine.

Do you have a source for the two node configuration? It seems strange that two would be the supported minimum after TP4 and all the documentation showing 4.
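
As a rough illustration of the quoted hardware requirements, here is a minimal Python sketch that checks a made-up node list against the documented minimums; the node names and per-node disk bus types below are assumptions for the example, not anything from the article.

```python
# Illustrative only: check a hypothetical deployment plan against the quoted
# requirements (minimum of four storage nodes, local SATA/NVMe/SAS disks).
MIN_NODES = 4
ALLOWED_BUS_TYPES = {"SATA", "NVMe", "SAS"}

# Hypothetical node inventory (names and bus types are made up).
nodes = {
    "node1": ["SATA", "NVMe"],
    "node2": ["SATA", "NVMe"],
    "node3": ["SAS"],
    "node4": ["SAS"],
}

def check_plan(plan):
    problems = []
    if len(plan) < MIN_NODES:
        problems.append(f"only {len(plan)} nodes; documentation says {MIN_NODES} minimum")
    for name, buses in plan.items():
        unsupported = set(buses) - ALLOWED_BUS_TYPES
        if unsupported:
            problems.append(f"{name}: unsupported bus type(s) {sorted(unsupported)}")
    return problems or ["plan meets the quoted requirements"]

for line in check_plan(nodes):
    print(line)
```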
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,422
478
83
What you are missing is the hybrid solution: multiple converged SOFS/Hyper-V clusters. Then your VHDXs can span these clusters and you get the coarse tiering you need. S2D is not designed to scale up; it is designed to scale out.

There may be other options to use S2D $$$-wise. But I am an ops guy... I have no crystal ball into the future.

Chris
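
To make the "coarse tiering across clusters" idea above concrete, here is a small illustrative Python sketch; the cluster names, VHDX names, and heat labels are all invented for the example and are not from this thread.

```python
# Rough sketch of coarse tiering: treat each converged SOFS/Hyper-V cluster
# as one tier and place each VHDX on the cluster that matches its workload.
clusters = {
    "cluster-nvme": "hot",     # all-flash cluster for latency-sensitive VHDXs (assumed)
    "cluster-hybrid": "warm",  # SSD cache + HDD capacity (assumed)
    "cluster-hdd": "cold",     # bulk/archive VHDXs (assumed)
}

vhdx_heat = {
    "sql-data.vhdx": "hot",
    "fileserver-d.vhdx": "warm",
    "backup-archive.vhdx": "cold",
}

placement = {
    vhdx: next(name for name, tier in clusters.items() if tier == heat)
    for vhdx, heat in vhdx_heat.items()
}

for vhdx, cluster in placement.items():
    print(f"{vhdx} -> {cluster}")
```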
 

KamiCrazy

New Member
Apr 13, 2013
23
3
3
I just want to add that I never considered the cost of Windows licensing because I'm on SPLA, and Datacenter edition is something I am already paying for. It costs nothing extra to go to S2D.

I do not agree that S2D is not scalable. To me, the cluster becomes the building block. 16 nodes in a cluster is fine in my opinion.

If a massive number of nodes in a cluster is required, then the non-converged model is appropriate and all resources in the S2D cluster can be put towards storage performance. RDMA will handle the compute-to-storage traffic, just like what we have with SOFS right now.
 
  • Like
Reactions: felmyst
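
For a sense of what a 16-node "building block" could hold, here is a back-of-envelope Python sketch; drives per node, drive size, and the 3-copy mirror choice are assumptions for illustration, not figures from this thread.

```python
# Back-of-envelope sizing for the "cluster as building block" view.
nodes_per_cluster = 16   # the comfortable maximum mentioned above
drives_per_node = 10     # assumed
drive_tb = 4             # assumed raw TB per drive
mirror_copies = 3        # assumed 3-copy mirror resiliency

raw_tb = nodes_per_cluster * drives_per_node * drive_tb
usable_tb = raw_tb / mirror_copies

print(f"raw capacity per building block: {raw_tb} TB")
print(f"usable with {mirror_copies}-copy mirror: {usable_tb:.0f} TB")
```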

felmyst

New Member
Mar 16, 2016
27
6
3
28
DavidRa said:
Storage Spaces Direct in Windows Server 2016 Technical Preview still says, and is marked as updated yesterday. Emphasis mine.

Do you have a source for the two node configuration? It seems strange that two would be the supported minimum after TP4 and all the documentation showing 4.
If I'm not mistaken, that was mentioned by A. Kibkalo in his webinar about storage in WS2016.

KamiCrazy said:
I just want to add that I never considered the cost of Windows licensing because I'm on SPLA, and Datacenter edition is something I am already paying for. It costs nothing extra to go to S2D.

I do not agree that S2D is not scalable. To me, the cluster becomes the building block. 16 nodes in a cluster is fine in my opinion.

If a massive number of nodes in a cluster is required, then the non-converged model is appropriate and all resources in the S2D cluster can be put towards storage performance. RDMA will handle the compute-to-storage traffic, just like what we have with SOFS right now.
If you don't need to pay for licenses, S2D becomes a very cost-effective solution.
It doesn't scale well because of the high amount of redirected I/O and the lack of data locality, just like VSAN: no matter what, it's still just network RAID. I'm pretty sure this technology will become mature enough for deployments at any scale (like Nutanix, for example) in the future, but today it will not work for high-load, latency-sensitive datacenter applications.
You can work around these limitations by using the disaggregated scenario.
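
A toy Python model of the redirected-I/O point: with no data locality, most reads pick up a network round trip on top of the media latency. The latency figures and the local-read fractions below are illustrative assumptions, not measurements.

```python
# Toy model: average read latency as a function of how many reads are local.
local_read_us = 100   # assumed local flash read latency (microseconds)
network_rtt_us = 50   # assumed RDMA round trip (microseconds)

def effective_latency(local_fraction):
    """Average read latency given the fraction of reads served locally."""
    remote_fraction = 1.0 - local_fraction
    return (local_fraction * local_read_us
            + remote_fraction * (local_read_us + network_rtt_us))

# No data locality (reads spread evenly over a 4-node cluster) vs. a
# hypothetical locality-aware design keeping ~90% of reads on the local node.
print(f"no locality (25% local):    {effective_latency(0.25):.0f} us")
print(f"locality-aware (90% local): {effective_latency(0.90):.0f} us")
```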
 

KamiCrazy

New Member
Apr 13, 2013
23
3
3
Really liking the new change in TP5 where you can now have 3 tiers of storage.
All I need to take advantage of that in my design is to add a single-controller JBOD and fill it up with 3.5" drives.
 
  • Like
Reactions: Chuntzu
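
As a rough sketch of what a three-tier layout could look like once a 3.5" JBOD is added, here is some simple per-node capacity math in Python; the drive counts and sizes are assumed for illustration only.

```python
# Per-node capacity per tier for a hypothetical three-tier layout.
tiers = {
    "nvme (cache)":               {"drives": 2,  "tb_each": 1.6},   # assumed
    "ssd (performance)":          {"drives": 4,  "tb_each": 3.84},  # assumed
    "hdd (capacity, 3.5in JBOD)": {"drives": 12, "tb_each": 8.0},   # assumed
}

for name, tier in tiers.items():
    total = tier["drives"] * tier["tb_each"]
    print(f"{name:28s} {tier['drives']:2d} x {tier['tb_each']:.2f} TB = {total:6.1f} TB per node")
```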

FrankvanLight

New Member
Dec 15, 2015
1
0
1
52
Starting with Windows Server 2016 Technical Preview 5, Storage Spaces Direct can be used in smaller deployments with only 3 servers.

Deployments with fewer than four servers support only mirrored resiliency. Parity resiliency and multi-resiliency are not possible, since those resiliency types require a minimum of four servers. With a 2-copy mirror the deployment is resilient to 1 node or 1 disk failure, and with a 3-copy mirror the deployment is resilient to 1 node or 2 disk failures.
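
To put numbers on the mirror guidance above, here is a small worked example in Python; the raw capacity per server is an assumption, while the copy counts and fault-tolerance statements come from the post.

```python
# Usable capacity under mirror resiliency: raw capacity divided by copy count.
raw_tb_per_server = 10  # assumed raw capacity per server (TB)
servers = 3             # the TP5 small-deployment minimum mentioned above

for copies, tolerates in [(2, "1 node or 1 disk failure"),
                          (3, "1 node or 2 disk failures")]:
    raw = servers * raw_tb_per_server
    usable = raw / copies
    print(f"{servers} servers, {copies}-copy mirror: "
          f"{raw} TB raw -> {usable:.1f} TB usable, tolerates {tolerates}")
```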