2012 R2 SS, Parity Mode + SSD config build log


ninja6o4

Member
Jul 2, 2014
92
22
8
45
This thread has turned into a personal migration log for me, so I'll update the first post here. Hopefully it will help someone in the future that is planning a similar migration.

This is a great thread to follow if you are:
a) wanting to give 2012 R2 Storage Spaces a try and
b) have disks with data on them already

So I currently have a 2008 R2 server running FlexRAID's tRAID storage pooling solution. In a word, it's... troublesome.

Currently set up with 10x 3TB Red drives - 8x data, 2x parity. Using a single SSD "landing disk" as a cache for writes to the pool. It is strictly a media server for only 2-3 devices max - music, TV shows, movies. While I don't need high IOPS (just good write performance when ripping new content), I used to run a hardware based RAID6 and the disk write performance was excellent. On tRAID it is pitiful: 17-20MB/sec without SSD cache, or 40-42MB/sec with SSD. Combine that with general instability and questionable parity reliability and I've had it.

Anyway, I'm reading up on PigLover's research on SS and dual parity and it sounds promising. The questions I had earlier have now been answered, so I've revised it more into a FAQ style format.

Can I set up Storage Tiering and set up an SSD Tier to make things super fast?
I didn't try this, because you cannot use Tiering on a Thin Provisioned vdisk. The best option for Parity vdisks is Journaling and WriteBackCache.

So will this migration take a long time without Tiering?
Yes. Once your WriteBackCache fills up (quickly), expect write speeds of less than 40MB/sec.

Does the number of columns affect how many disks I would need in order to expand the pool? So if I use 8 columns, I need 8 more disks; for 4 columns, 4 more disks?
Yes

Would the number of columns have any effect on the overall performance, considering my relatively low demand usage?
Yes, very much so. Read more here: Why Column Size does matter with Storage Spaces ← MIRU.CH

Is there any benefit for me to create smaller volumes to separate the various media catalogs vs. one large pool with subfolders for each?
None that I have found.

Should I use ReFS instead of NTFS?
No. Stick to NTFS. ReFS is not ready for prime time. It lacks many enterprise features, which you can read about elsewhere. But it also does not currently support metadata tags (for music). I originally believed it was unable to shrink its volume usage, but it seems that my thin provisioned temp ReFS disk is, in fact, shrinking itself as I move data off of it.

Finally, 6 of the 10 disks have data on them already. What would be the best way to migrate this data? Here is the process I would follow:
  1. Create temporary SS Simple thin vdisk in 6 columns:6 disks configuration (Be aware of the risks when using a simple vdisk - NO REDUNDANCY)
  2. Fill this pool with the data from the full drives
  3. Delete the old volumes from the now empty drives
  4. Create a second final SS Dual Parity thin vdisk, 12 column:12 disk configuration
  5. Move small portions of the data from temp vdisk to final vdisk - the final vdisk will run out of space as it is sharing physical space with the temp vdisk
  6. Use defrag /k and defrag /x to shrink the temp vdisk, giving the final vdisk room to continue growing.
  7. Repeat 5-6 until temp vdisk is empty.
  8. Remove temp vdisk, and we're done!
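Some rough arithmetic (my own model, not Storage Spaces itself) for why steps 5-7 move data in small portions: thin vdisks do not shrink on their own, so without the defrag passes the temp vdisk keeps its full raw footprint even after files move off it. Assumes 12 x 3 TB disks, a simple vdisk at 1.0 TB raw per TB of data, and a 12-column dual-parity vdisk at 12/10 = 1.2 TB raw per TB.

```python
POOL_RAW = 12 * 3      # 36 TB of raw pool capacity
DATA = 18              # six full 3 TB source drives

for moved in range(DATA + 1):
    without_shrink = DATA * 1.0 + moved * 1.2          # temp never reclaimed
    with_shrink = (DATA - moved) * 1.0 + moved * 1.2   # temp shrunk as you go
    assert with_shrink <= POOL_RAW                     # staged plan always fits
    if without_shrink > POOL_RAW:
        print(f"without shrinking, the pool overflows after ~{moved} TB moved")
        break
```

So the defrag step is not optional housekeeping: skipping it overflows the raw pool after roughly 16 TB of the 18 TB has been copied.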
Thank you to cesmith9999, PigLover and dba for your input in this sub btw - it has been invaluable for my research.
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
For dual redundant parity space you will need at least 3 SSDs.

4 columns is a 2 data : 2 parity configuration with dual redundancy. You may need to start a vdisk with single redundancy, then add space... then do a second copy to a new vdisk that has dual redundancy. It all depends on how you can add space to this layout.
 

ninja6o4

For dual redundant parity space you will need at least 3 SSDs.

4 columns is a 2 data : 2 parity configuration with dual redundancy. You may need to start a vdisk with single redundancy, then add space... then do a second copy to a new vdisk that has dual redundancy. It all depends on how you can add space to this layout.
Is your suggestion based on the amount of data I have vs. amount of available space I will have when starting out? Let me check that I did my math right.
If I go and top up to 12 disks, I will have 6 full disks (18TB), and 6 empty disks.
  1. In 4:4 configuration, I will use 4 empty disks to create 6TB of space available, and 2 empty disks not utilized yet (and 6 full disks.)
  2. Empty 2 of the 6 full disks into the pool, now leaving me with 4 empty disks, and 4 full disks.
  3. Expand to 4:8, which will add 12TB of space (I don't need to add more parity since I've already got 2 from step 1, right?). This point is where I'm not 100% certain.
  4. Empty the remaining 4 full disks (12TB) into the pool, now leaving me with 8 disk pool, and 4 empty disks.
  5. Expand to 4:12, and gain the final 12TB of free space.
I'm afraid I just don't understand columns and SS enough yet, but I would like to get over this particular hurdle before I invest more $ into equipment. Please, if I am missing something, feel free to chime in. :)
 

cesmith9999

Yes it is.

Is this an in place upgrade or are you migrating from one server to another?

Storage Spaces does not rebalance like you are thinking it does. When you add more disks to the storage pool, the configuration of the vdisks (4 column : dual resiliency) does not change. 4 columns : 8 disks is only 12 TB of available space, not the 18 that you think it is. In the 4:12 configuration you will only have 18 TB of available space. You are going 6 TB => 12 TB => 18 TB.

If you have 4 free disks now and 2 unused disks, I would create a 6:6 dual parity configuration with 12 TB of available space. Move all of your data to it, then add the remaining 6 disks. That would give you 8 disks with data and 4 disks of parity, for 24 TB of available space instead of 18 TB. I am proposing that you go 12 TB => 24 TB.
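The capacities quoted here follow from the usable-space formula for a parity vdisk (a back-of-the-envelope model assuming 3 TB drives; real pools lose a little extra to metadata):

```python
# Usable space of a parity vdisk: raw capacity x (data columns / total columns).
def usable_tb(disks, columns, parity, disk_tb=3):
    return disks * disk_tb * (columns - parity) / columns

# 4-column dual parity (2 data : 2 parity):
assert usable_tb(4, 4, 2) == 6       # 4 disks  ->  6 TB
assert usable_tb(8, 4, 2) == 12      # 8 disks  -> 12 TB, not 18
assert usable_tb(12, 4, 2) == 18     # 12 disks -> 18 TB

# 6-column dual parity (4 data : 2 parity), the suggestion above:
assert usable_tb(6, 6, 2) == 12      # 6 disks  -> 12 TB
assert usable_tb(12, 6, 2) == 24     # 12 disks -> 24 TB
```

The column ratio is fixed at vdisk creation, which is why adding disks raises raw capacity but never improves the data : parity fraction of an existing vdisk.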

You will have more IO with this configuration and an easier migration (I hope). Since I do not know how FlexRAID works, I can only hope that it is an easy migration.

One thing to note: are you planning on using a fixed or thin vdisk? That affects a few things. I have 55 JBOD servers and 45 RAID-based servers running Spaces, in both fixed and thin configurations. There are considerations based on what you are planning on doing, both for performance and for disk balancing.

If you want to change the vdisk configuration, you will need enough space to create a new vdisk, then migrate the data from the old vdisk to the new one. To do this you will need to use thin vdisks. I can help you with the migration plan for this, but I will need a few more details.

Chris
 

ninja6o4

It would be an in place upgrade, and I would create a fixed vdisk. As a media server, there is no reason for me not to provide all available space for my media.

I think I understand now. When I create the space and columns, the vdisk I start with in 4:4 isn't actually expanded as I progress, I would be creating a second 4:4 vdisk, and expanding the drive pool across 2 vdisks, and end up with 3x 4:4 vdisks. This is definitely not what I am trying to do, although based on this, your 6:6 suggestion makes a lot more sense.

Is it possible to expand the vdisk so that I retain 2 physical disks of redundancy across all 12 disks? Maybe SS's expansion ability is not as flexible as I thought it was. Maybe I am better off with my Dell H310 controller, running 2x 6-disk RAID5 and splitting my media up.
 

cesmith9999

Parity has a column limit of 8. You cannot get a true 10+2 RAID 6 style configuration with 12 disks; the best you can get is a 6+2 dual parity configuration.

You can have a 12 disk storage pool and an 8 column parity vdisk. What happens is that the first set of slabs is written on 8 of the disks, then the next set of slabs goes across the remaining 4 disks plus 4 of the others in the pool, the next on another 8, etc...

The secret sauce is the provisioning. Fixed provisioning is just that: fixed. Once you set up the vdisk those slabs are fixed, and expansion usually means adding a duplicate number of disks to your existing disks. If you use thin provisioning, new slabs are allocated on an as-needed basis across all disks in the pool, based on your column count and the % remaining on the disks.

If this is for home, I would recommend that you use thin provisioning; it will give you the flexibility of doing some kind of migration from a vdisk with one column count to another vdisk with a different column count. If this is for work, well then... it depends.

Chris
 

ninja6o4

It is for home use.

I don't quite follow you regarding the example on the slabs. If I did a 12 disk pool and an 8 column parity vdisk, the last 4 would be left unused, wouldn't they? Reason being that SS can only "expand" in multiples of the number of columns (another 8). Or I could create a second 4 column vdisk and be left with a 6+2 and a 2+2, neither of which is ideal for me.
 

cesmith9999

That is incorrect. What happens is that when an allocation for a vdisk occurs (fixed or thin), multiple factors determine which disks are picked. This is not RAID as you expect, where a rigid set of disks belongs to a given vdev/RAID set. The column count is fixed when you create the vdisk and there is no way to change it.

here are a couple of examples:

3 x 1 TB disks in a pool, and you create a mirrored vdisk: you will get a max fixed vdisk size of 1.5 TB, because the first slab is on disks 1 & 2, the next on 3 & 1, the next on 2 & 3, etc...

2 x 1 TB and 2 x 2 TB disks in a pool: if you do a 1 column mirror you can have a 3 TB fixed vdisk, because the first slab will be on the 2 TB disks, the next will be on the 2 TB disks, the next on the 1 TB disks, etc., until the disks are full.

Same pool but a 2 column mirror: you can only provision a 2 TB fixed vdisk, because it can only use the first 1 TB on all 4 disks (it writes out to all 4 disks at the same time). After that is full, you could provision a second vdisk that is a 1 column mirror, 1 TB in size.
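All three examples can be reproduced with a toy greedy allocator (my simplification; the real Storage Spaces allocator weighs more factors when picking disks, and `max_mirror_tb` is a name I made up for illustration):

```python
def max_mirror_tb(disk_sizes_tb, columns=1, copies=2, slab_tb=0.25):
    """Greedy model: each stripe lands on the columns*copies disks
    with the most free space; stop when a full stripe no longer fits."""
    free = list(disk_sizes_tb)
    total = 0.0
    while True:
        picks = sorted(range(len(free)), key=lambda i: -free[i])[:columns * copies]
        if len(picks) < columns * copies or any(free[i] < slab_tb for i in picks):
            return total
        for i in picks:
            free[i] -= slab_tb
        total += slab_tb * columns      # user data carried per stripe

print(max_mirror_tb([1, 1, 1]))                 # 3 x 1 TB, 2-way mirror -> 1.5
print(max_mirror_tb([1, 1, 2, 2]))              # 1-column mirror -> 3.0
print(max_mirror_tb([1, 1, 2, 2], columns=2))   # 2-column mirror -> 2.0
```

The 2-column case stops early because every stripe must touch all 4 disks at once, so the 1 TB drives become the bottleneck.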

does this help you understand how columns work?
 

ninja6o4

Yes, clear as mud, lol :confused:

I think I follow you now. So, does Thin provisioning allow for expansion of a vdisk? If so, I actually would have to use Thin provisioning in order to accomplish what I am trying to do, which is ultimately end up with a single large vdisk with dual parity for my media storage, whether it be 4 column:12 disk or 6 column:12 disk.

The main thing I am trying to do is ensure I only keep 2 "disks" of redundancy across the entire pool.
 

cesmith9999

Thin provisioning allows for 2 things:
1) vdisks as large as the OS supports (NTFS == up to 256 TB; ReFS == really freaking huge) without having the physical disks for the space
2) space reclaim - as you migrate data from one vdisk to another, if you do a defrag /k (slab consolidation) then a defrag /l (retrim) on the old vdisk, it will shrink, and then you can add more to your new 8 column (6 data : 2 parity) vdisk in the 12 disk pool

With dual parity, the best you will get is 3/4 of your disk space as data; the rest will be redundancy.
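For reference, the 3/4 figure is the 8-column (6+2) case: with the 2 parity columns of dual parity, the data fraction is (columns - 2) / columns, so it improves as the column count grows.

```python
# Data fraction of a dual-parity vdisk by column count: (c - 2) / c
for c in (4, 6, 8):
    print(f"{c} columns -> {(c - 2) / c:.0%} data")
# 4 columns -> 50% data
# 6 columns -> 67% data
# 8 columns -> 75% data
```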

Chris
 

ninja6o4

I think I am finally starting to get it. So, if I wanted to achieve the 10+2 configuration, I essentially need to start out as a 12 column:12 disk dual parity configuration, correct? Then, if I wanted to expand, I have to do it with another 12 drives.

I suppose this really isn't going to be the solution for me, since there isn't any way for me to offload 18TB of data temporarily. :(
 

Terabitdan

New Member
Jun 21, 2013
15
0
1
Redford, MI
Since SS only supports 8 columns, in your configuration you would end up with a 12 disk : 6 column layout to maximize available space and performance.

However, I wonder if you could add drives in 2's to the StorageSpace, then create a new virtualdisk with a 14 disk : 7 column configuration and just copy the data over? It may take a while to migrate the data, but using links & junctions you could make the process fairly seamless to end users (move a file/directory, create a link, repeat until done), then change your share location once it's all complete.

I am doing something similar right now. I had 6 4TB drives in a SS parity setup as passthrough from Hyper-V to Server Essentials 2012. After doing some reading I realized that using thinly provisioned VHDX was much more flexible with no downside. So, I added 3 drives to the pool, created a new parity virtualdrive, added a VHDX to the Server Essentials and am copying everything between the two. With 12 TB of data it's taking about 24 hours, but I'm just manually moving shares rather than writing a PowerShell script. The next step will be to delete the old passthrough disk and then repeat the process with a 10 drive : 5 column configuration (with one additional drive).

Each 2 drive addition provides both higher performance and better space utilization until hitting the 8 column limit. I'll probably go to dual parity with the next 2 drive increase anyway.

Any reason this won't work?

Dan
 

ninja6o4

Since SS only supports 8 columns, in your configuration you would end up with a 12 disk : 6 column layout to maximize available space and performance.

However, I wonder if you could add drives in 2's to the StorageSpace, then create a new virtualdisk with a 14 disk : 7 column configuration and just copy the data over? It may take a while to migrate the data, but using links & junctions you could make the process fairly seamless to end users (move a file/directory, create a link, repeat until done), then change your share location once it's all complete.

I am doing something similar right now. I had 6 4TB drives in a SS parity setup as passthrough from Hyper-V to Server Essentials 2012. After doing some reading I realized that using thinly provisioned VHDX was much more flexible with no downside. So, I added 3 drives to the pool, created a new parity virtualdrive, added a VHDX to the Server Essentials and am copying everything between the two. With 12 TB of data it's taking about 24 hours, but I'm just manually moving shares rather than writing a PowerShell script. The next step will be to delete the old passthrough disk and then repeat the process with a 10 drive : 5 column configuration (with one additional drive).

Each 2 drive addition provides both higher performance and better space utilization until hitting the 8 column limit. I'll probably go to dual parity with the next 2 drive increase anyway.

Any reason this won't work?

Dan
What is your original SS column configuration? It sounds like doing this will result in more total redundant drives than 2, from my very limited understanding so far.
 

Terabitdan

I started with 6 columns, 6 drives and 2 journaling SSDs. When I'm all done it will be 5 columns, 10 data drives and 2 journal drives. So yes, I will have 1 more parity drive.

The point to me is that it is possible to add drives in sets of 2, so long as you're willing to copy the data when you do.

In the same storage space I also have 2 way mirrored virtual drives setup for documents which are written more often.

Dan
 

cesmith9999

I apologize... I just came across this FAQ, and you will be able to have a 10+2 configuration:

Storage Spaces Frequently Asked Questions (FAQ) - TechNet Articles - United States (English) - TechNet Wiki

Dual redundancy supports up to 17 columns.

To get there with your config you will need a two-stage migration: configure the 4+2 configuration, move your data there, then add the other 6 disks, create a new vdisk, and migrate the data again to the new vdisk. Now you will have a 10+2 configuration.
 

ninja6o4

I apologize... I just came across this FAQ, and you will be able to have a 10+2 configuration:

Storage Spaces Frequently Asked Questions (FAQ) - TechNet Articles - United States (English) - TechNet Wiki

Dual redundancy supports up to 17 columns.

To get there with your config you will need a two-stage migration: configure the 4+2 configuration, move your data there, then add the other 6 disks, create a new vdisk, and migrate the data again to the new vdisk. Now you will have a 10+2 configuration.
Hmm.. so this won't leave me with a 2 vdisk configuration of 4+2 each? I'm sorry I feel like this should make sense to me but it doesn't. And I've used HW RAID for quite a number of years and have a good understanding of it.
 

cesmith9999

Storage Spaces is NOT hardware RAID.

I do not know how much data you are migrating; that is the key. I have 1 PB of data a day moving through my systems here at work, and we migrate a lot of data.

The question is: how much data do you have that you need to migrate? Because you may want to do a 5+1 single redundancy thin provisioned vdisk first, then create a new 10+2 dual parity thin provisioned vdisk as your final destination.
 

ninja6o4

I understand that SS isn't like HW RAID, and I have been playing with other software alternatives (currently using FlexRAID's tRAID), but the whole column thing is throwing me for a loop.

I currently have 18TB of data - 6 completely full 3TB disks. So I could have another 6 empty disks to work with by ordering 2 more if necessary.

After reading some more, I see the advantage in thin provisioning now. So let me try again:
  1. Create a storage pool with 6 empty disks.
  2. Create a thin SS vdisk with 6 columns, single parity 5+1 configuration.
  3. Scrounge some temporary storage, and empty 1 of the 6 full drives.
  4. Migrate data from the remaining 5 full disks to the new SS vdisk.
  5. Add the newly emptied 6 disks into the storage pool for a total of 12 disks.
  6. Create a second thin SS vdisk with 6 columns, dual parity 10+2 configuration.
  7. Move data from first vdisk to second.
  8. Remove first vdisk, freeing up space for the second vdisk to grow to expected capacity (10x3TB=30TB), and maintain 2 disks of parity across all 10 disks.
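A quick sanity check on the numbers in the plan above (assuming 3 TB drives and that usable space = data disks x drive size once parity is set aside):

```python
DISK_TB = 3

temp = 5 * DISK_TB      # step 2 vdisk: 5 data + 1 parity on 6 disks
final = 10 * DISK_TB    # final vdisk: 10 data + 2 parity on 12 disks
assert (temp, final) == (15, 30)   # temp exactly holds the 5 remaining full drives

# Raw footprint if the temp vdisk were never shrunk while the final one fills:
peak_raw = temp * 6 / 5 + 18 * 12 / 10   # 18 + 21.6 = 39.6 TB
assert peak_raw > 12 * DISK_TB           # exceeds the 36 TB raw pool, so space
                                         # has to be reclaimed as data moves
```

The second assertion is the catch in steps 7-8: moving everything before deleting the first vdisk would need more raw space than the pool has, so the move must be chunked with space reclaim in between.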
If I want to allow expansion at a later date, I will have to install 6 more drives at once, correct?

Also - let's say I opt for 4TB drives when I buy 2 additional drives. Does this mean I have room to squeeze an extra SS out of the 1TB from each drive? Could I configure it as a fixed simple SS, for example?
 

cesmith9999

#6 needs to be: create a second thin SS vdisk with 12 columns (10+2) in a dual-redundant parity configuration.

#7a: after you migrate some of your data over (I suggest 1 TB at a time), run defrag /x <first vdisk> and defrag /k <first vdisk> to reclaim space from the first vdisk, allowing your migrations to continue.

The one thing that is the current Achilles heel for SS is the inflexibility (and understanding) of the column count once the vdisk is created. I hope that Microsoft will fix that in the future. That feature alone would help you in your quest.

It took me a while to imagine how this works, and now that I have more and more systems at work using Spaces, it is clear what it was intended for: a one-time setup of a SS SOFS cluster, where this inflexibility is not an issue. Like I said before, I hope they fix this in the future.

Chris
 

ninja6o4

Hm. Does it need to be 12 columns because the minimum required for dual parity is 7 and I was proposing 6? The column count will determine how many drives are needed to expand the vdisk later, correct?

Any thoughts on if I bought 2 larger drives and made a second small SS as per my previous post?