Storage Spaces mirror fault tolerance vs. RAID 10


OnSeATer

New Member
Jul 31, 2016
I've been considering a move to Windows Storage Spaces (single box, not S2D), with 8 HDDs in a 4-column mirror configuration.

My thought had been that this would provide fault tolerance comparable to RAID 10: worst case tolerating 1 failure, best case tolerating 4. But in doing more reading, it seems that no matter how many columns you use, mirrored Storage Spaces won't tolerate more than one drive failure.

See for example: Windows Storage Spaces - 4+ drive mirror not really RAID 10? : sysadmin

Is that correct? If so, how does anyone get comfortable with that? I know rebuild time should be shorter than in a RAID 5 array, but it still seems like playing with fire. I've seen posts on this site from folks who have used Storage Spaces in massive deployments, so there must be a way. Parity space performance is poor enough that it doesn't really seem like an option.

Any advice would be much appreciated.
 

cesmith9999

Well-Known Member
Mar 26, 2013
The issue is that it is mirror-like and not a true mirror. Mirrored Spaces just makes sure that each slab is on 2 (or 3) different drives, and there will be differences depending on whether you are thin or thick provisioned. You only tolerate the loss of 1 (or 2) disks in the whole pool, not one disk per mirror set like you are accustomed to with traditional HW RAID.
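
To put it in PowerShell terms (pool and disk names below are just placeholders, so treat this as a sketch, not a recipe): the redundancy of a mirrored space comes from -PhysicalDiskRedundancy, not from the column count. A 4-column two-way mirror is still only guaranteed to survive one failed disk.

# Hypothetical pool built from 8 poolable HDDs.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HddPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Two-way mirror, 4 columns: data is striped across 4 slabs and each slab has 2 copies.
# -PhysicalDiskRedundancy 1 means the space is only guaranteed to survive ONE failed disk,
# regardless of how many columns (or how many disks) the pool has.
New-VirtualDisk -StoragePoolFriendlyName "HddPool" -FriendlyName "Mirror4Col" `
    -ResiliencySettingName Mirror -NumberOfColumns 4 -PhysicalDiskRedundancy 1 `
    -ProvisioningType Fixed -UseMaximumSize

# Confirm what you actually got.
Get-VirtualDisk -FriendlyName "Mirror4Col" |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns, PhysicalDiskRedundancy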

Chris
 

OnSeATer

New Member
Jul 31, 2016
Chris, thanks very much for replying -- it was one of your posts that I was actually referencing when I mentioned "massive deployments"! If I remember correctly, you said that you've managed Storage Spaces servers with 100 disks.

How did you get comfortable with only one drive's worth of failure tolerance? Am I being overly paranoid to worry about this for the configuration I described?

Thanks again.
 

cesmith9999

Well-Known Member
Mar 26, 2013
How I did it was to cheat, and it is technically not supported.

I had standard HW RAID on the system, then spanned the logical disks with simple Storage Spaces. This allowed my tier 1 people to replace disks with little to fear.
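
Roughly, the layering looks like this (names are placeholders, and again, this layering is not supported): the RAID card presents each array as one big "physical disk" to Windows, and a simple (non-redundant) space is laid across those LUNs, since the controller underneath already provides the redundancy.

# The HW RAID LUNs show up to Windows as ordinary poolable physical disks.
$luns = Get-PhysicalDisk -CanPool $true    # e.g. two or more arrays presented by the controller
New-StoragePool -FriendlyName "RaidBackedPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $luns

# Simple space (no Storage Spaces redundancy) spanning the LUNs; the HW RAID
# underneath is what tolerates the disk failures.
New-VirtualDisk -StoragePoolFriendlyName "RaidBackedPool" -FriendlyName "SimpleOverRaid" `
    -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize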

Chris
 

OnSeATer

New Member
Jul 31, 2016
That's a super clever idea. There's no reason that wouldn't work with tiered storage spaces, right? I'm hoping to have an SSD hot tier as well.

Sorry to keep asking questions, but if I wanted to stick with just software RAID, would it be a mistake to trust Mirrored Spaces' one-drive failure tolerance on an 8x HDD array? Or is that just asking for trouble?
 

cesmith9999

Well-Known Member
Mar 26, 2013
If you are using a HW RAID card, there is really no need to have an SSD tier.

Chris
 

LaMerk

Member
Jun 13, 2017
I recently built tiered Storage Spaces based on SSD and HDD RAIDs. Tiering works like a charm!
 

OnSeATer

New Member
Jul 31, 2016
That's awesome, thanks very much both of you!

Two quick follow-up questions if you don't mind:

- Any recommendations on which LSI chip generation (and other features) I should be looking for in my eBay shopping? I'm guessing I want 1-2 GB of cache, but do I need a battery backup as well so that Storage Spaces isn't paranoid about power-loss protection?

- I know it is important to use PLP SSDs with Storage Spaces to get decent performance, but does that matter with an HDD array as well? I think I've read that even with HDDs, Storage Spaces will disable the drive cache unless it is a PLP drive (like the HGST He4/6/8). Is that correct?

Thanks very much again
 

cesmith9999

Well-Known Member
Mar 26, 2013
Both questions are answered with the same advice: get a battery- or capacitor-backed cache board. Any HW RAID card will really do.
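
Once the non-volatile cache is in front of the disks, you can check what Windows thinks and, if appropriate, tell the pool about it. Pool name below is a placeholder, the advanced-property cmdlet needs a newer OS build, and you should only flip the flag if the cache really is battery/capacitor backed.

# On newer builds (Server 2016 / Windows 10 era and later) you can see how Windows
# views the device caches and power protection:
Get-PhysicalDisk | Get-StorageAdvancedProperty

# If the pool sits behind a battery/capacitor-backed controller cache, you can mark
# it as power protected so Storage Spaces doesn't force conservative write behavior.
Set-StoragePool -FriendlyName "RaidBackedPool" -IsPowerProtected $true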

Chris
 

Myth

Member
Feb 27, 2018
LaMerk said: "I recently built tiered Storage Spaces based on SSD and HDD RAIDs. Tiering works like a charm!"

I've done some baseline testing, and it seems the SSD tier fills up first, then Windows fills up the HDD tier. I didn't see it move any data from the HDD back to the SSD, and I couldn't find any documentation on how it moves data around.

My array was 8 TB of SSD and 70 TB of HDD; I created the mirror with tiering and ended up with something like 39 TB total. I then copied 16 TB onto the array and noticed the performance dropped after the first 4 TB, which I assume is when it started writing to the HDDs. After the copy finished, nothing else happened. So I'm curious: is the tier just going to sit there with all its original data, or will it move some data around after a month or two?
 

cesmith9999

Well-Known Member
Mar 26, 2013
Storage Spaces uses defrag to move data between tiers on a scheduled task, usually every 6 hours.

Look at increasing the WB cache to help with writes.
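
If you want to check the job or kick it off yourself, something like the following works (task path and name are from memory, so verify them on your box; D: is a placeholder for the tiered volume):

# The tier optimization job is a scheduled task:
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiering\"

# Run it on demand, or run tier optimization directly with defrag's /G switch:
Start-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiering\" -TaskName "Storage Tiers Optimization"
defrag D: /G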

Chris
 

gregsachs

Active Member
Aug 14, 2018
LSI 9265CV-8e cards are really cheap now, even new, and work in JBOD mode really well. Make sure you are using the latest firmware; some of the older firmware had issues with JBOD.
Note that I think JBOD mode disables the cache, but I haven't done a tiered space yet.
 

OnSeATer

New Member
Jul 31, 2016
I was thinking of an LSI 9270CV-8i card + an Intel RES2SV240 SAS expander controlling:
- 8x WD Red drives in RAID 6, and
- 4-8x Samsung Cloudspeed (or Intel S3500 or similar) SSDs in RAID 5

Will then use Storage Spaces as discussed above for tiered storage. I'm thinking about splitting the SSD array and using part of it for a pure flash pool as well, but that will depend on what performance I get out of the tiered setup and what size SSDs I end up getting.

Mixed workload of Hyper-V VM storage, a read-intensive DB, a high-performance file share (4K raw video editing, which consists of 24 x 8 MB image files for each second of video), and a low-performance file share / archive. The reason I want the SSD hot tier is read IOPS ... I suspect that, as cesmith says above, the RAID cache will be enough to meet my needs on write performance even with just the HDD array, but the video files are random-IOPS intensive and too large to keep on a dedicated SSD pool.

I'm thinking I can get away with the one LSI card because it gives me 8 x 6 Gbps = 48 Gbps of lane bandwidth, and the max from the drives in sequential read would be 8 x 4 Gbps = 32 Gbps from the SSDs plus 8 x 1.6 Gbps = 12.8 Gbps from the HDDs, or roughly 45 Gbps theoretical max in sequential reads. And of course in practice very little of the read traffic will be sequential, and random IOPS will be a fraction of that number.
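
Quick sanity check of that math (the per-device throughput figures are my assumptions, not measurements):

# Rough bandwidth check; per-device figures are assumptions, not benchmarks.
$laneGbps = 8 * 6      # 8 SAS lanes at 6 Gbps = 48 Gbps to the expander
$ssdGbps  = 8 * 4      # ~500 MB/s per SATA SSD ~= 4 Gbps each
$hddGbps  = 8 * 1.6    # ~200 MB/s per HDD      ~= 1.6 Gbps each
"{0:N1} Gbps of drive throughput behind {1} Gbps of controller lanes" -f ($ssdGbps + $hddGbps), $laneGbps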

Any feedback would be very much appreciated!! Am planning on testing all this with a small number of drives first of course.
 

gregsachs

Active Member
Aug 14, 2018
I would do RAID with the controller's built-in SSD caching, or Storage Spaces tiering with SSD, not both.
 

OnSeATer

New Member
Jul 31, 2016
Thanks very much for the reply!

I know combining RAID and Storage Spaces is an unusual and unsupported configuration, but earlier in the thread both cesmith and LaMerk said that they had had success doing so (although cesmith advised that an SSD tier isn't really needed in that mix). If it is workable, it really seems like it could be a good solution for my particular needs, but I also recognize that I have no idea what I'm doing with this stuff compared to 99% of the people on the forum, so I'm very grateful for any feedback, even if it is "don't do that".

Just to clarify one place where my original post might have been unclear ... I wasn't planning on using the RAID card's SSD caching functionality. Just defining a (RAID 5) volume on the LSI card using the SSDs, defining a separate (RAID 6) volume on the LSI card using the HDDs, and then presenting both volumes to Storage Spaces. I realize that probably won't change your feedback, but I wanted to clarify my earlier explanation.
 

OnSeATer

New Member
Jul 31, 2016
I suspect it probably won't, but I was planning on overriding that via:
Set-PhysicalDisk -FriendlyName [disk name] -MediaType [SSD or HDD]

One or two of the tiered Storage Spaces tutorials note that it's sometimes necessary to do that even with a more standard configuration.
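
Roughly what I have in mind, based on those tutorials (all friendly names, tier sizes, and the write-back cache size below are placeholders, and I haven't tested any of this yet):

# Tell Storage Spaces what each RAID-backed LUN actually is (names are hypothetical).
Set-PhysicalDisk -FriendlyName "LSI-RAID5-SSD-LUN" -MediaType SSD
Set-PhysicalDisk -FriendlyName "LSI-RAID6-HDD-LUN" -MediaType HDD

# Define one tier per media type in the pool, then carve a tiered virtual disk from them.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "RaidBackedPool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "RaidBackedPool" -FriendlyName "HDDTier" -MediaType HDD

# Simple resiliency because the HW RAID underneath provides the redundancy;
# adjust tier sizes and write-back cache to taste.
New-VirtualDisk -StoragePoolFriendlyName "RaidBackedPool" -FriendlyName "TieredSpace" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 1TB, 10TB `
    -ResiliencySettingName Simple -WriteCacheSize 8GB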

I'm in the process of assembling the pieces to do a proper test with a few drives, but it will probably take me another week or two to finish my eBay shopping and build up a test server, so for now I'm just reading as much as I can.
 

Myth

Member
Feb 27, 2018
That would be very interesting to try. Say two RAID controller cards, one with SSDs and the other with HDDs, both in RAID 6 on the controller, then import them into Storage Spaces as two independent drives (simple / RAID 0-style), one marked as SSD and the other as HDD, and create a tiered drive managed by Storage Spaces.

The tiering would then transfer data back and forth using the defrag job as previously noted. I wonder what kind of caching performance this would give for, say, a SAN server.
 

OnSeATer

New Member
Jul 31, 2016
I've ordered the parts for a proof of concept test! Should be here in the next week or so, and hoping I will be able to post an update shortly thereafter.

Want to make sure that the basic concept works OK before going too far down this road, so the first test may be a little bit limited in terms of drive count and RAID modes tested. But it should be enough to establish feasibility and maybe some preliminary benchmarks.

Will keep everyone posted, and would be grateful for any feedback once I start testing!