Server 2012 R2, Storage Spaces and Tiering


PigLover

Moderator
Jan 26, 2011
Unfortunately no - I never did figure out exactly what was up with the "missing" space. I tried for a while but sorta lost interest. The array was meeting my needs for performance and I wasn't under pressure for capacity so I just dropped it.

Sorry. If you do find out please post. I - and I am sure others - would be very interested.

PigLover

Moderator
Jan 26, 2011
You'll get them faster from frozencpu. When I find a vendor with great selection and service, I'll buy from them even at a small price premium. It's worth it to me to help keep them around.

chinesestunna

Active Member
Jan 23, 2015
PigLover said:
You'll get them faster from frozencpu. When I find a vendor with great selection and service, I'll buy from them even at a small price premium. It's worth it to me to help keep them around.
Yes, I agree; just throwing out another option I found for folks who might need these in bulk :)

JSchuricht

Active Member
Apr 4, 2011
I am fighting the same issue. I don't think it is a limitation of using the GUI vs. PowerShell. Using the same pool and only deleting and recreating the virtual disk, 12x 6TB with a 100GB cache loses three drives' worth of space in dual parity, versus two in single parity x2 (6 columns). Also, 3x journal drives for cache gives me horrible write performance; 2x or 4x give about the same performance of 400-500 MB/s, but that may be the limit of the Intel 530s and the Avoton C2750 I am using.
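For comparison, this is roughly how the same disk can be created from PowerShell with the columns, redundancy, and write cache spelled out explicitly. A minimal sketch; the pool and disk names are placeholders, not my actual setup:

Code:
# Dual parity: PhysicalDiskRedundancy 2 (use 1 for single parity).
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "DualParityVD" `
    -ResiliencySettingName Parity `
    -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 12 `
    -WriteCacheSize 100GB `
    -UseMaximumSize

# Compare usable size against raw space consumed from the pool.
Get-VirtualDisk -FriendlyName "DualParityVD" |
    Format-List FriendlyName, NumberOfColumns, PhysicalDiskRedundancy, Size, FootprintOnPool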

JSchuricht

Active Member
Apr 4, 2011
After many hours of searching I think I found the answer, and I don't like it. It's very misleading compared to the N-2 I expected, but if it can survive a triple failure it may have some uses. I'll do some testing tonight pulling drives and see what happens.

"Storage spaces uses erasure coding for its dual parity scheme, which optimizes recovery for the common case (single disk failure). This comes at the cost of higher overhead, which is 3 columns of “parity” information instead of the traditional 2. So for a 7 disk, 7 column dual parity space the amount of usable capacity is (7-3)*disk size, so you get 8TB with 7x2TB disks." source

Edit:
Found this from MS. It kind of sounds like the 3rd drive is for global parity, which would be really cool with more drives. For instance, 25 drives at 12 columns could be 10+2 parity x2 plus a global parity to step in if one dies, if I am reading it correctly anyway. Now I kinda wish I hadn't decided on a modular approach with 12 drives per 1U, each with its own motherboard.
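If that explanation is right, the expected capacity can at least be sanity-checked against what the pool reports. A quick sketch using the (columns - 3) figure from the quote above; the disk count and sizes match my 12x 6TB layout, and note that Windows displays binary TB, so the numbers come out smaller than the drive labels:

Code:
# Per the quoted explanation, dual parity keeps 3 columns of parity,
# so usable space is (columns - 3) / columns of the raw capacity.
$cols = 12; $disks = 12
$diskSize = 6e12            # 6 TB decimal, as the drives are labelled
$raw = $disks * $diskSize
$expected = $raw * ($cols - 3) / $cols

# Divide by 1TB (binary) to match how Windows displays capacity.
"Raw: {0:N1} TB, expected usable: {1:N1} TB" -f ($raw / 1TB), ($expected / 1TB)

# What the virtual disk actually reports:
Get-VirtualDisk | Format-Table FriendlyName, NumberOfColumns, Size, FootprintOnPool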

JSchuricht

Active Member
Apr 4, 2011
Well, it can't tolerate 3 disks being disconnected, so it looks like dual parity is just wasting one extra drive's worth of space for a supposedly faster recovery.
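For anyone repeating the pull-a-drive test, these are the health checks worth watching while disks are out; nothing here is specific to my pool:

Code:
# Any physical disks Storage Spaces no longer sees as healthy?
Get-PhysicalDisk | Where-Object { $_.HealthStatus -ne 'Healthy' } |
    Format-Table FriendlyName, OperationalStatus, HealthStatus

# Is the virtual disk degraded, detached, or still OK?
Get-VirtualDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus

# Repair/regeneration jobs after the disks are reattached:
Get-StorageJob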

PigLover

Moderator
Jan 26, 2011
So did you confirm that available space for a single-parity drive is N-1 (vs the N-3 that appears consumed for dual parity)?

JSchuricht

Active Member
Apr 4, 2011
I am still testing and searching through the extremely limited documentation, but I have a few things to add on the performance side.

Parity and dual parity spaces use a journal even when no SSD is set as a journal disk. All incoming writes go to the journal before parity is calculated and written to the data disks, which is why the slow write speed ends up roughly equal to the speed of a single drive.
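For reference, journal duty is assigned per physical disk before the virtual disk is created; a minimal sketch, with placeholder disk names:

Code:
# Dedicate specific SSDs to the journal; Usage Journal takes them
# out of the capacity side of the pool.
Set-PhysicalDisk -FriendlyName "PhysicalDisk12" -Usage Journal
Set-PhysicalDisk -FriendlyName "PhysicalDisk13" -Usage Journal

# Confirm the assignment:
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, Usage, Size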

The dedicated journal does not completely fix write speed. I am using 120GB Intel 530s, so I see 400-500 MB/s write speed over the network; the HDDs are 12x 6TB Hitachi in dual parity. After ~120GB has been written with a 100GB journal configured, file copies pause briefly while parity data is written out. No gap shows up in the file-copy progress window, which is a bit misleading. Still, it's not a deal breaker for me; it's rare that I move more than 40GB at a time.

Now I am testing how the pool handles being full. So far 10TB copied, 40TB to go. There is a lot of mixed information about a 75% usage limit where the pool can go offline, preventing you from deleting data and leaving adding more disks as the only way out. Not a situation I want to be in with a Supermicro 5018A-AR12L.
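While the fill test runs, pool allocation is easy to watch from PowerShell; a small sketch with a placeholder pool name:

Code:
# Report how much of the pool's raw space is allocated.
$pool = Get-StoragePool -FriendlyName "Pool1"
$pct = [math]::Round(100 * $pool.AllocatedSize / $pool.Size, 1)
"{0}: {1:N1} TB of {2:N1} TB allocated ({3}%)" -f $pool.FriendlyName,
    ($pool.AllocatedSize / 1TB), ($pool.Size / 1TB), $pct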

JSchuricht

Active Member
Apr 4, 2011
Yes and no. With an SSD journal drive you can specify up to a 100GB write-back cache, which is how I see 400-500 MB/s write speed at the beginning. The problem is that once the cache is used up, it drops back down to single-drive speeds or pauses for a few seconds to flush. I have a huge 40TB transfer going now, slightly hindered by a patrol read on the source server, but it does 300 MB/s for 5-10 minutes, pauses for 2-5 seconds, and resumes. Other times it drops to 90-140 MB/s and maintains that speed for a while. It's not horrible for my uses. The initial load will take several days, but I am mainly storing video in 8-20GB segments, so after a file or two are written there is plenty of time for the pool to flush the cache to the spinning disks.

Morgan Simmons

Active Member
Feb 18, 2015
I've been following this thread as I've been considering using Windows 2012 R2 as my storage server OS. Your results have had me reconsidering my plan. I found a website that talks about this, where another person is doing tests similar to yours. He found a different solution to the problem without using an SSD cache.

Storage Spaces and Parity – Slow write speeds | TecFused

Thought it might help. Please ignore and delete if this is inappropriate for this forum.

Thanks
Marshall Simmons

PigLover

Moderator
Jan 26, 2011
Morgan Simmons said:
I've been following this thread as I've been considering using Windows 2012 R2 as my storage server OS. Your results have had me reconsidering my plan. I found a website that talks about this, where another person is doing tests similar to yours. He found a different solution to the problem without using an SSD cache.

Storage Spaces and Parity – Slow write speeds | TecFused

Thought it might help. Please ignore and delete if this is inappropriate for this forum.

Thanks
Marshall Simmons
Setting "IsPowerProtected" on a Storage Spaces parity volume that is not actually "power protected" is a REALLY BAD idea. It puts your data at significant risk. And it only gets performance to a fraction of what you achieve with an SSD journal/cache. If you don't value your data it can be an OK approach - until the day it bites you - and rest assured that it will.
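If you've experimented with that flag, it's worth double-checking it is actually off; a quick sketch (the pool name is a placeholder):

Code:
# Show the current setting on all pools...
Get-StoragePool | Format-Table FriendlyName, IsPowerProtected

# ...and turn it off unless the whole write path really is
# battery/UPS protected.
Set-StoragePool -FriendlyName "Pool1" -IsPowerProtected $false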

JSchuricht

Active Member
Apr 4, 2011
Like PigLover said, IsPowerProtected is dangerous. I used it for testing the system and will likely use it again for the initial data load when I make the system live, because the 100GB cache runs out, but it will be disabled again for production use.

As for loading the pool, capacity is 48.9TB with 12x 6TB drives in dual parity, and I took that down to 9GB free. Performance dropped to 80-110 MB/s towards the end with IsPowerProtected enabled. I could not find any errors from maxing the space out, so that's one less thing to worry about.

Now I just need to make up my mind on taking this setup live. My big hangups right now are losing that 3rd 6TB drive to erasure coding and the lack of notifications. I have found a list of event logs which I can set up to forward by email, but it's not as foolproof as a buzzer. In my situation the servers sit a few feet away from my desk behind a solid door, but any time there is an issue with my current arrays the LSI controllers have a piezo that goes off, which I can hear well enough to annoy me into looking at the issue. With an email alert I have to worry about something breaking and the email not getting sent from the server, the email server going down, or a spam filter ignoring its whitelist. Has anyone come up with a good monitoring solution for Storage Spaces?
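Lacking a buzzer, the closest thing I can think of is polling the Storage Spaces driver event log and mailing anything that shows up, run every few minutes from Task Scheduler. A rough sketch; the SMTP details are placeholders and carry the same delivery caveats I mentioned:

Code:
# Check the last 5 minutes of the Storage Spaces driver log for
# critical/error/warning events and mail them if any are found.
$events = Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-StorageSpaces-Driver/Operational'
    Level     = 1, 2, 3          # critical, error, warning
    StartTime = (Get-Date).AddMinutes(-5)
} -ErrorAction SilentlyContinue

if ($events) {
    $body = $events | Format-List TimeCreated, Id, Message | Out-String
    Send-MailMessage -From 'storage@example.com' -To 'me@example.com' `
        -Subject "Storage Spaces alert on $env:COMPUTERNAME" `
        -Body $body -SmtpServer 'smtp.example.com'
}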