Storage Spaces design on a pure SSD setup


Eson

New Member
Oct 14, 2012
Hi

If I didn't know any better, I'd say HP is doing something deliberate to third-party SSDs on the new Gen9 controllers, the P840 and P440. I can't for the life of me get any more than 20 MB/s writes and 100 IOPS out of 8 Intel 530 SSDs. It doesn't matter what RAID level I try, and I have tried two other Intel SSD models as well.

So now I'm left with running the P840 in HBA mode and going with Storage Spaces. For now I have 8 Intel 530 480 GB SSDs and will probably expand to 16 soon.

I'm going to be running a bunch of RDS VMs on top of this, so nothing insanely I/O intensive. How would you guys recommend I configure the Storage Spaces? I want to maximize usable storage without sacrificing too much write performance, and I need to be able to handle two disk failures.

Just use dual parity from the GUI, or are there any reasons to go with PowerShell?
 

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
Everything I've ever seen from Storage Spaces suggests that performance when using parity is horrible. You mention two drive failures and parity, so I assume that's what you're doing?
 

Eson

New Member
Oct 14, 2012
The reason for not going with mirroring is that I don't want to run out of storage too quickly. Price is not a huge issue, but right now I only have 16 slots, and I haven't gotten a clear answer from HP on whether I can install a third drive cage in the Gen9 to get to 24 SFF. It seems like you have to factory-order one that has the additional power connectors.

If I'm going with parity and two-drive failure tolerance, are there any parameters to tune for best performance in a scenario like this?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
The reason for not going with mirroring is that I don't want to run out of storage too quickly. Price is not a huge issue, but right now I only have 16 slots, and I haven't gotten a clear answer from HP on whether I can install a third drive cage in the Gen9 to get to 24 SFF. It seems like you have to factory-order one that has the additional power connectors.

If I'm going with parity and two-drive failure tolerance, are there any parameters to tune for best performance in a scenario like this?
You are probably not going to be happy with the performance of Storage Spaces in parity mode. If you really can't do mirroring, I'd suggest a RAID card with cache and a capacitor.
Now while you mention Storage Spaces, you are also talking about an HP RAID card. Are you doing RAID on the card or in Storage Spaces?
 

Jeggs101

Well-Known Member
Dec 29, 2010
You are never going to be happy with the performance of Storage Spaces in parity mode. If you really can't do mirroring, I'd suggest a RAID card with cache and a capacitor.
Seconded. Now the next-gen maybe.
 

cesmith9999

Well-Known Member
Mar 26, 2013
Yes, HP is doing some very deliberate things with the smart array controllers in Gen9. I cannot discuss what I know...

Chris
 

cesmith9999

Well-Known Member
Mar 26, 2013
And with only 8 disks you will get 5 disks' worth of space: dual parity + global parity + 5 × data. That is almost what you would get by mirroring those SSDs.

Chris
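
To make that comparison concrete, here's a rough back-of-the-envelope sketch in PowerShell, assuming (per Chris's description) that 8-disk dual parity loses 3 disks' worth of capacity to the two parities plus the global parity, and using the 480 GB Intel 530s from the original post:

```powershell
# Rough usable-capacity comparison for 8 x 480 GB SSDs (illustrative only)
$disks   = 8
$driveGB = 480

$dualParityData = $disks - 3                  # 2 parity + 1 global parity
$mirrorData     = [math]::Floor($disks / 2)   # two-way mirror keeps half

"Dual parity: {0} GB usable" -f ($dualParityData * $driveGB)   # 2400 GB
"Two-way mirror: {0} GB usable" -f ($mirrorData * $driveGB)    # 1920 GB
```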
 

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
Parity Storage Spaces have horrible write performance; that's just how it is. I see that others said the same before I was able to hit post. :)

Why Windows Server 2012 Parity Storage Spaces Might Perform Slowly - Premier Field Engineering - Site Home - TechNet Blogs

“The caveat of a parity space is low write performance compared to that of a simple or mirrored storage space, since existing data and parity information must be read and processed before a new write can occur. Parity spaces are an excellent choice for workloads that are almost exclusively read-based, highly sequential, and require resiliency, or workloads that write data in large sequential append blocks (such as bulk backups).”

 

Jeggs101

Well-Known Member
Dec 29, 2010
I wish Microsoft had a $10-15 license for use in a low-power DFS brick device. I'm thinking: get a NUC, load Windows for hardware compatibility, then run some sort of distributed network storage. It'd change Windows storage forever.
 

cesmith9999

Well-Known Member
Mar 26, 2013
It is called Threshold: Scale-Out File Server, share-nothing. I can get it to work; still working out the kinks in the paint job. Not certain about the licensing part, though.

But as for the brick part... an Avoton with multiple SSDs and 4-8 spinning disks, plus dual 10/40 GbE networking, in a Silverstone DS380B... would be cool.

Or you can ask EMC how much ScaleIO costs per TB...

Chris
 

Eson

New Member
Oct 14, 2012
You are probably not going to be happy with the performance of Storage Spaces in parity mode. If you really can't do mirroring, I'd suggest a RAID card with cache and a capacitor.
Now while you mention Storage Spaces, you are also talking about an HP RAID card. Are you doing RAID on the card or in Storage Spaces?
Yeah, I bought the P840 intending to run all the SSDs in RAID 6, but since HP seems to be sabotaging third-party SSDs I had no choice but to put it in HBA mode and run Storage Spaces. A shame for such an expensive card with 4 GB of cache to just act as an HBA, but what can you do; buying a SAS expander would cost as much, so I might as well keep the card since it can handle 16 drives.

Reading your comments, I might just fill up all 16 and go with mirroring. When I need to go beyond 16, I'll ghetto-mod the chassis to fit 24.
 

JSchuricht

Active Member
Apr 4, 2011
Dual parity in Storage Spaces uses erasure coding, so you end up with 2 drives lost to parity and 1 to make rebuilds faster. Speed should be tolerable on an all-SSD array. Initial writes go to one drive, then the data is broken down and rewritten across all drives with parity. The issue is that you only see write speeds equivalent to one drive, but if that one drive is an SSD doing 500+ MB/s it may be all right depending on your needs.

Some other things to keep in mind: you can't expand the array. If you build it with 8 drives and add 8 more, you will have two separate dual parity arrays. Those can be presented as one big space, but you lose more drives to parity. IIRC 17 drives is the limit for dual parity, so you could pull the data off and recreate it as 16 drives later on.
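
If you do go dual parity, a minimal PowerShell sketch of creating the virtual disk with the column count pinned up front might look like this (pool and vdisk names are placeholders; dual parity is parity resiliency with `-PhysicalDiskRedundancy 2`, which needs at least 7 columns):

```powershell
# Sketch: dual parity vdisk with an explicit column count (names are examples)
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "DualParityVD" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 7 -UseMaximumSize
```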
 

PigLover

Moderator
Jan 26, 2011
Sell the P840 and get a better (for this use case) controller? Then you could do the RAID6 you want.
+1 for this.

In order to get acceptable write speeds with Storage Spaces parity configs you MUST have two SSDs (for single parity) or 3 SSDs (for dual parity) in the pool configured as Usage Type Journal. You also need to configure the write cache for the virtual disk in PowerShell. This means with 8 SSDs you get at most 5 drives' worth of useful storage with single parity: that's 2 drives for journal, 1 drive's worth for parity, and 5 drives' worth for data.

The good news in your use case is that if you use 16 SSDs it's still just 3 drives' worth of overhead for single parity, though you do need to do your own risk assessment about running that many drives in single parity.

My suggestion: if you have a NAS or similar and can do frequent backups, consider just running the SSDs in RAID 0 (a Storage Spaces simple pool). Just be sure your backup procedures are automated and reliable, and that you know how to manage the restore.
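
For reference, a hedged sketch of the PowerShell described above — marking pool SSDs as journal disks and setting the vdisk write cache. Disk, pool, vdisk names and the cache size are placeholders; neither setting is exposed in the GUI:

```powershell
# Dedicate two pool SSDs as journal disks (single parity case from the post)
Set-PhysicalDisk -FriendlyName "PhysicalDisk14" -Usage Journal
Set-PhysicalDisk -FriendlyName "PhysicalDisk15" -Usage Journal

# Create the parity vdisk with an explicit write-back cache
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "ParityVD" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 `
    -UseMaximumSize -WriteCacheSize 8GB
```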
 

JSchuricht

Active Member
Apr 4, 2011
I don't entirely agree with that. I originally bought 3 Intel 530 120 GB SSDs for journal on a 12-drive 6 TB Hitachi dual parity setup. From all the testing I have done, a 3-drive journal is significantly slower than a 2-drive journal. With two 120 GB Intel 530s on a Supermicro 5018A-Ar12L (an 8-core 2.4 GHz Avoton Atom setup with 32 GB RAM running 2012 R2 Storage Server), a 2-drive journal gives me 400-500 MB/s writes over 10 GbE; with 3 drives in the journal I see about the same speed as a raw 6 TB Hitachi spinner, which is around 120 MB/s. There may be some difference due to the higher-speed cores you use, but I haven't found a faster config than what you previously posted. Running 4 Intel 530 120 GB journal drives also made no difference in speed.
 

Eson

New Member
Oct 14, 2012
Dual parity in Storage Spaces uses erasure coding, so you end up with 2 drives lost to parity and 1 to make rebuilds faster. Speed should be tolerable on an all-SSD array. Initial writes go to one drive, then the data is broken down and rewritten across all drives with parity. The issue is that you only see write speeds equivalent to one drive, but if that one drive is an SSD doing 500+ MB/s it may be all right depending on your needs.

Some other things to keep in mind: you can't expand the array. If you build it with 8 drives and add 8 more, you will have two separate dual parity arrays. Those can be presented as one big space, but you lose more drives to parity. IIRC 17 drives is the limit for dual parity, so you could pull the data off and recreate it as 16 drives later on.
Are you sure I can't extend a virtual disk in Storage Spaces? I tried it in a lab just now and could create a 4-drive single parity array. As long as I followed the column count, which was four, and added four more disks, I could extend the virtual disk.

Or is what you're describing specific to the dual parity scenario?

Right now I'm leaning towards single parity with 8 drives. I can live with one-disk failure tolerance since we back up all the VMs with Veeam every night.

Question: if I create my first virtual disk with four drives and get 4 columns, can I then extend it to 8 right away, and to 12 the next time if I don't want to grow by 8 again?
 

JSchuricht

Active Member
Apr 4, 2011
You can extend the pool size, but data is not redistributed. In your example of starting with 4 columns in single parity (i.e. RAID 5-style), adding another 4 disks creates an additional 4-column single parity space spanned with the first one. If you start with 8 columns and add 4, then you have no redundancy on the last 4 you added; they are just a span on top of the original 8 columns. Any expansion needs to be done with the same number of columns to make everything happy.
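
A sketch of checking the column count before growing, then adding a matching number of disks and extending — names and the target size are placeholders:

```powershell
# Confirm the vdisk's column count; grow the pool by that many disks at a time
Get-VirtualDisk -FriendlyName "ParityVD" |
    Select-Object FriendlyName, NumberOfColumns, ResiliencySettingName

$new = Get-PhysicalDisk -CanPool $true         # should match the column count
Add-PhysicalDisk -StoragePoolFriendlyName "SSDPool" -PhysicalDisks $new
Resize-VirtualDisk -FriendlyName "ParityVD" -Size 3TB
# then extend the partition/volume on top of the vdisk as usual
```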
 

cesmith9999

Well-Known Member
Mar 26, 2013
Your description of spanning is not entirely correct; the behavior varies depending on whether your vdisks are thin- or fixed-provisioned.

If thin, then Storage Spaces will place the slabs based on available space and available IOs. This has the possibility of increasing performance.

If fixed, the slabs are pinned to their positions on the disks at creation. If you add disks to the pool (in your case, doubling it) and then extend your vdisk, you are essentially concatenating your storage, not increasing performance.

In Threshold server there is a new tool that allows you to restripe/reconfigure your vdisks. This was a requirement for adding/removing nodes in the share-nothing SOFS and rebalancing performance.

Chris
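
The thin-vs-fixed behavior described above is chosen when the vdisk is created. A minimal sketch, with placeholder names and sizes (note that thin provisioning requires an explicit -Size rather than -UseMaximumSize):

```powershell
# Thin: slabs allocated on demand, placed where space/IO is available
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "ThinVD" `
    -ResiliencySettingName Parity -ProvisioningType Thin -Size 2TB

# Fixed: slabs pinned at creation; later extensions concatenate
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "FixedVD" `
    -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize
```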