Server 2012 R2, Storage Spaces and Tiering


Luke Ward

New Member
This has been a very good read. Thank you to Piglover, JSchuricht, and others who have shed some light and spent time on the subject. I just recently switched from ESXi to Hyper-V, specifically for caching and storage tiers. Until I get flash I'm using 4 standard drives in a mirrored storage space (read/write is about 280 MB/s sequential; happy with it so far). After reading through everything and seeing the benchmarks on the parity configurations and SSD caching, what would happen if we did Storage Spaces on top of a hardware parity RAID? Sadly I am not in a position to test this, otherwise I would. Is it possible to create the RAID arrays with a RAID card (separate SSD and HDD arrays) and have Storage Spaces manage the flash cache or automated tiering? My thought is that if you could combine the performance of a hardware parity RAID with automated tiering or a flash cache, it would be a very good combination.

What do you think?
 

cesmith9999

Well-Known Member
Storage Spaces is not meant for hardware parity. That said, I have many configurations where I create hardware RAID, add those LUNs into a storage pool, and then create a RAID 0 (simple) vdisk in Storage Spaces. I get very good read and write performance there. I use both thin and thick volumes in this configuration.

This helps with normal mechanical faults, as the hardware helps you replace faulted disks. It is not a good strategy if you plan on expanding or upgrading your Storage Spaces pool, though, as the simple vdisk means you lose all data if a (hardware RAID) LUN goes offline.
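Roughly, that setup looks like this in PowerShell (a sketch only; the pool name is a placeholder and the LUNs are whatever the RAID card presents):

Code:
# LUNs from the RAID controller show up as poolable physical disks
$luns = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HWRaidPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $luns
# Simple (striped) vdisk across the LUNs; redundancy is left to the RAID controller
New-VirtualDisk -StoragePoolFriendlyName "HWRaidPool" -FriendlyName "Data" -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize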

Chris
 

Morgan Simmons

Active Member
Hello,
So I have just a couple of questions that I can't find answered anywhere online.
I currently have 4 x 128GB SSDs and 4 x 1TB HDDs.

I'm trying to get the fastest setup I possibly can out of this right now with a mirrored setup. The default wizard settings are very good, with CrystalDiskMark scores of:
1322 MB/s sequential read, 795 MB/s sequential write
23.85 MB/s 4K random read, 41.70 MB/s 4K random write

I've tried upping the write-back cache to 100GB, but it destroys my scores:
200 MB/s sequential read, 150 MB/s sequential write
0.1 MB/s 4K random read, 10 MB/s 4K random write

Is there a setting I could use to improve my storage speed beyond the default? If I understand Storage Spaces correctly, it is putting the SSDs in RAID 10 (256GB total space with a 1GB WBC) and the HDDs in RAID 10 (2TB total space). Is there a way to tell Storage Spaces to run the 4 SSDs as RAID 0 and the HDDs as RAID 10?
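For reference, this is roughly how I'm checking what the wizard built (just a sketch; the output names will vary):

Code:
# What did the wizard actually create?
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns, WriteCacheSize
Get-StorageTier | Select-Object FriendlyName, MediaType, Size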

Thanks!
Marshall
 

cesmith9999

Well-Known Member
You increase your performance by adding spindles and having a larger column count in the vdisk. The WBC is not effective past 10 GB in normal (non-benchmark) tests.
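If you do rebuild the vdisk, something along these lines lets you set the column count and keep the WBC modest (a sketch; pool and tier names are placeholders):

Code:
# Example for a 4 SSD + 4 HDD pool: a two-way mirror can only have 2 columns per tier
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Fast" -ResiliencySettingName Mirror `
    -StorageTiers $ssdTier,$hddTier -StorageTierSizes 200GB,1800GB -NumberOfColumns 2 -WriteCacheSize 10GB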

Chris
 

Morgan Simmons

Active Member
cesmith9999 said:
You increase your performance by adding spindles and having a larger column count in the vdisk. The WBC is not effective past 10 GB in normal (non-benchmark) tests.
So if I understand correctly, my best option as I replace the 1TB disks and add disks is to destroy the virtual disk and recreate it with a larger column count. So if I'm at 2 columns now, when I get to 8 disks I should destroy it and rebuild with 4 columns (2 disks per column in a mirror).
Will I also need to add another 4 SSDs when I get to 8 HDDs?
 

cesmith9999

Well-Known Member
Unfortunately, yes, that is how it goes.

If you have 2 SSDs and 8 disks, you are restricted to 2 columns. To gain more columns you will need to add 2 more SSDs...

This is one case where I want a modified 36-drive 4U Supermicro case: 24 x 3.5" in the front and 24 x 2.5" in the rear of the case, for a large column count...

Chris
 

Deci

Active Member
Luke Ward said:
Is it possible to create the RAID arrays with a RAID card (separate SSD and HDD arrays) and have Storage Spaces manage the flash cache or automated tiering?
I have also done this configuration. The drive arrays on their own do 1.3GB/s read and 1GB/s write for 10 x SAS disks in RAID 6, and 1GB/s read and 900MB/s write for the SSD RAID 10.

Combined into a simple stripe on Storage Spaces you see ~700/600MB/s read/write. You lose peak performance but gain the tiering and write cache. You can also add dedupe into the mix if most of the machines are the same (in this box's case it is the same 3 virtual machines running ~10 times each as worker nodes, with only a few small text config file differences), which saves a huge amount of space and gives reasonably solid performance. Even though the total peak speeds are down on what they were separately, the SSD space can be better utilised than before, as on its own it wasn't big enough to hold the virtual machine files (they are a few hundred GB each).
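(The dedupe part is just the standard Windows deduplication feature enabled on the volume, roughly like this; the drive letter is only an example:)

Code:
# Requires the Data Deduplication feature to be installed
Enable-DedupVolume -Volume "D:" -UsageType HyperV
Start-DedupJob -Volume "D:" -Type Optimization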

To make it work you do have to force Storage Spaces to reclassify the disks from "RAID" or "Unknown" to HDD and SSD before it will allow you to set up a tiered pool.
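The reclassification is done with Set-PhysicalDisk once the LUNs are in the pool; the friendly names below are only examples:

Code:
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, CanPool, Size
# Tag each LUN so tiering will accept it
Set-PhysicalDisk -FriendlyName "SSD-RAID10-LUN" -MediaType SSD
Set-PhysicalDisk -FriendlyName "HDD-RAID6-LUN" -MediaType HDD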
 

Deci

Active Member
cesmith9999 said:
This is one case where I want a modified 36-drive 4U Supermicro case: 24 x 3.5" in the front and 24 x 2.5" in the rear, for a large column count...
Chenbro makes a 4U case that can take 3 x 2U drive units in any combination you like: (12 x 3.5") or (24 x 2.5") per unit.

Chenbro - Products
 

Xlup

New Member
Hi, I'm Xlup, and I'm from Italy :)

I've read all of this thread and it is very interesting, so I just registered on the forum to get your opinion on a configuration I'm thinking about...

I have a (not too) old server, an IBM x3650 M3, loaded with 8 disks...
A couple of 192GB SAS drives (currently in RAID 1 for the system volume), and six 1TB NL-SAS drives (currently in RAID 5 for data)...

I've put a PCIe SSD (Kingston Predator) in the server and it works perfectly; performance is great.

So the question is: how do I increase speed on the data disks while keeping as much space as I can?

I'm thinking about using a tier in simple mode, combining the PCIe SSD and the 5TB VD (maybe 4TB if I keep 1 hot spare)...

This way the data on the HDDs is protected (RAID 5 is enough for my usage).

My question is: what happens if the SSD fails? Is the system smart enough to drop the SSD tier and keep running on the HDD tier?

Maybe I could add a second PCIe SSD to the tier...

The ideal would be to have the SSD tier mirrored and the HDD tier simple (RAID is done in hardware), but that's not possible... and leaving a PCIe SSD as a hot spare seems to me a real waste of resources...

Another idea would be to create a mirrored volume on the two SSDs, create a VHD on that volume, and use that VHD as the SSD tier while keeping the HDD tier on the RAID 5 VD... I don't know how much overhead that would bring into play...

What do you think about it? :-D

Ciao!
Xlup
 

Chuntzu

Active Member
You cannot use tiering with parity spaces. You can use SSDs as a write-back cache with parity and dual parity, but not tiering. I would say mirroring with tiering is a great choice.
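Roughly, the two supported shapes look like this (a sketch; pool/tier names and sizes are only examples, and the tier variables come from New-StorageTier):

Code:
# Parity space: SSDs in the pool can serve as a write-back cache, but no tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Archive" -ResiliencySettingName Parity -Size 8TB -WriteCacheSize 8GB
# Mirrored space: tiering is allowed
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "VMs" -ResiliencySettingName Mirror -StorageTiers $ssdTier,$hddTier -StorageTierSizes 200GB,2TB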
 

Xlup

New Member
Chuntzu said:
You cannot use tiering with parity spaces. You can use SSDs as a write-back cache with parity and dual parity, but not tiering. I would say mirroring with tiering is a great choice.
Your point is not entirely clear to me...
If I understand correctly, you suggest having the disks as a JBOD, without using the RAID features of the hardware card at all, and using Windows tiering in mirror mode.
That way I'd have less than 3TB of usable space...

If I configure the 6 HDDs in hardware RAID 5, I get 5TB of usable, protected space...

To accelerate this I would like to use tiering, but the problem is how to secure the data on the SSD tier...
If I had a couple of SAS SSDs I could put them in hardware RAID 1, then configure Windows tiering in simple mode (using 1 SSD VD and 1 HDD VD; redundancy is handled by the hardware).

The problem is that with PCIe SSDs I cannot put a pair of them in hardware RAID 1...

Another option could be to try something like HGST ServerCache...

Ciao!
 

Chuntzu

Active Member
You cannot use the tiering built into Storage Spaces on parity or dual-parity spaces, only on mirrored or simple spaces. So if you use hardware RAID 5 and then layer the PCIe SSD and the hardware RAID 5 VD as simple spaces with tiering... maybe that is technically achievable, but definitely not a supported config.
 

Xlup

New Member
Chuntzu said:
You cannot use the tiering built into Storage Spaces on parity or dual-parity spaces, only on mirrored or simple spaces... maybe that is technically achievable, but definitely not a supported config.
Yes, I've found many indications that tiering should not be used on top of hardware RAID, but it's not clear to me why... Are there MS whitepapers that state this, or reliability tests, etc.?
It could take advantage of everything a real hardware RAID offers, e.g. a write-back cache backed by flash/battery, or more generally of any storage system...

Example: at work we use a SAN for storage; it's somewhat old hardware and is based on normal SAS HDDs...
I have several servers that use this storage system (via iSCSI) and some of them are quite I/O intensive...

I could put a couple of SSDs in such a server and tier them with the iSCSI VD they are bound to...

That way I should gain performance on that server and also reduce the workload on the SAN...
And with a really modest investment (upgrading the SAN storage system is another order of magnitude...)

I'm certainly not saying this would be the best solution, but it could be a good tradeoff while waiting to migrate to an all-SSD solution...

The fact that the disk is not physically in the server but is an iSCSI VD is transparent to W2K12... Why should this be avoided? Are there known problems if the disks are not a plain JBOD?


The Hamletic doubt remains... use the SSDs for tiering, or use them for caching...


Ciao!
Andrea
 

talsit

Member
Thanks for this thread; it answered the questions I had and got me started down the right path.

I'm not getting anywhere near PigLover's speeds, but so far my array has been reliable. I went with 2 SSDs as journal disks and six 2TB disks. My drive capacity was 9.1TB; SS had set up a default of 7 columns over a total of eight disks (dual parity).

I'm using this to mirror and eventually replace my unRAID setup, so I'm going to manually specify 3 columns (single parity) and see how the speeds are. This will also make it easier as I rescale my unRAID server; it's a 2009 build with a mix of 1, 2 and 3TB drives (10+1 disks for 24TB, 13TB used). Rather than needing to add 7 drives to expand the array, I'll only need to add three, and I can double my current SS array with 3 x 4TB disks.
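For anyone curious, the manual single-parity, 3-column layout I'm planning is roughly this (names are placeholders):

Code:
# Two SSDs dedicated to the parity journal, then a single-parity vdisk with 3 columns
Set-PhysicalDisk -FriendlyName "SSD1" -Usage Journal
Set-PhysicalDisk -FriendlyName "SSD2" -Usage Journal
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Media" -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 -NumberOfColumns 3 -UseMaximumSize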
 

m00t

New Member
Thanks for all the info in this thread. It helped me, along with several other articles, get the performance I was looking for out of a storage server. I did end up upgrading to Windows Server 2016, as it has a bit more support and tooling (including rebalancing) for Storage Spaces with SSD tiering. Note these are 'Simple' tiers, not parity or mirrored; the SATA drives are in RAID 6 at the storage controller level for redundancy.

My config is a Supermicro dual-socket server (E5s) with 64GB of memory.
We installed 24 x 3TB WD Red Pro drives in the server, along with an 800GB Intel P3700 NVMe cache drive.

Performance to start with was kind of a joke (see the first screenshot, with inconsistent speeds).

I found an article explaining how to set the default logical sector size to 512 for the pool, which, along with the upgrade to 2016, seems to have done the trick; the speed differences are completely off the charts (see the second image, with consistent reads and writes). Thanks PigLover and others for all the gathered tips and for helping me get things healthy and happy.
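For anyone else chasing the sector-size issue: it's a pool-creation option, so it means rebuilding the pool. Roughly (pool name is a placeholder):

Code:
# LogicalSectorSizeDefault can only be set when the pool is created
New-StoragePool -FriendlyName "Pool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true) -LogicalSectorSizeDefault 512
Get-StoragePool -FriendlyName "Pool" | Select-Object FriendlyName, LogicalSectorSize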
 

Attachments: benchmark screenshots (before and after the change)