Performance Storage Spaces 2-way mirror with SAS SSD JBODs


Stril

Member
Sep 26, 2017
Hi!

I need a plan for a new, high-performance database failover cluster.

My current idea is:

- 2x Windows 2016 heads
- 2x SAS JBODs with 10 SAS SSDs each
- Storage Spaces as a 2-way mirror

Has any of you ever used Storage Spaces in a small but high-performance setup? What do you think?

My alternative would be StarWind...

Thank you and best wishes
 

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
Assuming 2-way mirror Clustered Storage Spaces (C/S/S) vs. StarWind vSAN.

StarWind will be faster on reads (because it reads from two sets of data, aggregating I/O), but Clustered Storage Spaces will be faster on writes (because they don't need to send a second copy of the data to a remote tier over Ethernet; the overall write path is shorter, so latency will be lower). That's the worst-case scenario with a 100% cache miss... With some reasonably high percentage of cache hits, StarWind will complete writes faster as well, because it uses NVMe and DRAM for write-back caching, while C/S/S can only cache on much slower SAS SSDs.
 

Stril

Member
Sep 26, 2017
Hi!

Sorry for my late answer - I had to read a lot of documentation about CSS.

Now I am totally confused :)

Is this really still possible on Windows 2016? Every document on TechNet talks about S2D (Storage Spaces Direct), but says nothing about Clustered Storage Spaces. Is it still supported?

Thank you for your help!
 

cesmith9999

Well-Known Member
Mar 26, 2013
It is still supported. It is just not the cool new technology ...

Just make sure that both nodes see all of the disks.

Chris
 

Stril

Member
Sep 26, 2017
Hi!

Sounds good. I do not need to take the "cool way" :)

One other question:
I found a strange hint in the docs about Windows 2012:
http://social.technet.microsoft.com...ow_does_Storage_Spaces_decide_how_many_to_use


--> You only get enclosure awareness with three enclosures.

Can you explain to me why this should be true?
A two-way mirror should keep a copy of the data in each of the two enclosures. Why doesn't that handle the failure of one enclosure?

Thank you for your help!
 

LaMerk

Member
Jun 13, 2017
Hi!

(quoting Stril's original post)
It is recommended to have at least 3 nodes in S2D to get acceptable redundancy and stability.
Also, S2D was removed from Windows Server 2016 for a while in release 1709. In release 1803 it was back in action with some improvements, but it still has a lot of problems with ReFS, Cluster-Aware Updating, caching, hashing (checksumming), etc.
In terms of support, StarWind provides full support for the entire setup, with a maximum response time of 1 hour for Premium clients, while Microsoft support is pretty rigid and can spend a whole day (or even multiple days) just proposing that you install a hotfix for Microsoft products (a well-known pattern). So with StarWind you get a solution, not a problem.
 

Stril

Member
Sep 26, 2017
I had a very big problem with StarWind... There was full data corruption after a problem with the Intel NIC on the replication path. Support was/is absolutely great, but I am worried about continuing to use StarWind. S2D is not an option either - that's too complex for only ONE service.

What do you think about Clustered Storage Spaces? Have you ever used it? It seems to be simple.
 

cesmith9999

Well-Known Member
Mar 26, 2013
It works. I used to run a 3-node system and it worked fine.

Just use the 2012 guides with 2016. Nothing has really changed in that space.

Chris
 

Stril

Member
Sep 26, 2017
Good news!
Did you use it in a "high-performance" environment with SAS SSDs?

Would you prefer ReFS over NTFS in that setup?

Thank you for your help!!
 

Jeff Robertson

Active Member
Oct 18, 2016
Chico, CA
I run a few 2-node clusters using both S2D and StarWind. My preference is towards S2D.

I have a cluster with 4 SATA SSDs per node and it performs well (3 GB/s read, 1+ GB/s write, up to about 50k IOPS; it varies depending on what is happening on the cluster, and the high read figure is due to the memory cache). 10 SAS SSDs ought to get you a very performant system.

I've been using S2D since it came out and have been unable to kill it (and I've REALLY tried). It always bounces back, even with just two nodes.

Something to consider is adding 2 NVMe SSDs to each node to act as a cache; probably not necessary, but you might eke out a bit more performance.

One bit of advice: when creating the volume, make the size N-1 (so the size of 9 of your 10 SSDs; 9 TB if they are 1 TB drives, as an example). This allows the cluster to rebuild into the free space you left if a single drive fails, without having to replace the failed drive first. Good luck!
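The N-1 sizing advice above can be sketched as a quick calculation. This is just an illustration with a hypothetical helper name and example values, and note that the mirror's own capacity overhead still applies on top of this headroom:

```python
# Sketch of the "N-1" volume-sizing rule: reserve one drive's worth of
# capacity so the pool can rebuild into free space after a drive fails.
# `volume_size_tb` is a hypothetical helper, not a Storage Spaces API.

def volume_size_tb(drive_count: int, drive_size_tb: float, spares: int = 1) -> float:
    """Largest volume size that leaves `spares` drives' worth of raw
    capacity free as rebuild headroom."""
    return (drive_count - spares) * drive_size_tb

# 10x 1 TB SSDs, keeping one drive's capacity free for rebuilds:
print(volume_size_tb(10, 1.0))  # -> 9.0
```

With two spares reserved instead of one, the same 10x 1 TB pool would size the volume at 8 TB.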
 

Evan

Well-Known Member
Jan 6, 2016
(quoting Jeff Robertson's post above)
Talking about S2D, it's really good to see somebody say it works well with 2 nodes. I'm tempted to try a config like that myself and see what I can get it to do.

The one question I have: is there any way to use it without $10k worth of MS Server Datacenter licensing?
Even for a home-lab type setup, the MS Action Packs don't include that licensing, as I understand it now.