Storage Spaces with a RAID volume & SSD Cache


Dajinn

Active Member
Jun 2, 2015
I'm interested in trying to use an SSD cache drive for some of my RAID volumes. The only problem is the 3ware 9750-8i doesn't have SSD caching as a feature like some of the LSI controllers with CV, etc.

Has anyone ever taken volumes pre-configured by a RAID controller, created Storage Spaces pools out of them, and then given those pools an SSD cache? What were your experiences with this? Did layering Storage Spaces on top of a RAID controller create issues or performance problems? Did it increase performance as expected?

To reiterate, the plan is basically:

1. Create the RAID volumes in the controller (in my case I have a 10TB RAID10 and an 18TB RAID6).
2. Create two Storage Spaces pools out of those volumes, with the same capacities and no mirroring or parity.
3. Add an SSD cache to each of them (a rough PowerShell sketch of steps 2 and 3 is below).
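
Here's roughly what I mean in PowerShell terms, assuming the controller-presented volumes and the SSD all show up as poolable physical disks (which, as the replies below note, they may not); the friendly names and cache size are just placeholders:

    # List the disks Storage Spaces considers poolable; the controller volumes would have to appear here
    Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, MediaType, Size, CanPool

    # Pool one controller volume together with the cache SSD (friendly names are hypothetical)
    $disks = Get-PhysicalDisk -FriendlyName "3ware-RAID10-Volume", "Cache-SSD"
    New-StoragePool -FriendlyName "Raid10Pool" -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks

    # Simple (no mirroring, no parity) space on top, since the controller already handles redundancy;
    # the write-back cache size here is only an example
    New-VirtualDisk -StoragePoolFriendlyName "Raid10Pool" -FriendlyName "Raid10Space" `
        -ResiliencySettingName Simple -UseMaximumSize -WriteCacheSize 10GB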

Thanks!
 

cesmith9999

Well-Known Member
Mar 26, 2013
Try the new multi-tenant feature in Storage Spaces in Tech Preview 4... expose all of the disks as raw disks to the OS.

Mixing hardware RAID and a software-managed SSD in a storage pool is not supported.

Chris
 
  • Like
Reactions: Patrick

soLost

New Member
Dec 1, 2015
Long-time lurker and first post, so be gentle.

I did something very similar over the weekend using 8 x 3TB WD Reds in RAID5 (software) and a 256GB 850 Pro.

As far as I know, Storage Spaces doesn't care whether your drives are in a RAID array or not; it should just see the array as a single disk (but see my edit below).
  1. Create the storage pool, selecting one of the arrays and an SSD.
  2. Create the virtual disk using the PowerShell commands in this thread (scroll down to the post that best fits your goals, starting at post #18).
  3. Set tiering and the write-back cache using some of the PowerShell commands from the thread in step 2.
  4. Create the volume and enjoy (a rough outline of these steps is sketched below).
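
A rough PowerShell outline of steps 1 through 4, assuming the array shows up as a single poolable disk; the pool, tier, and disk names and all sizes here are examples, not the exact commands from the linked thread:

    # 1. Pool the array and the SSD
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "ArchivePool" -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks

    # 2./3. Tiered virtual disk with a write-back cache (tier and cache sizes are examples)
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "ArchivePool" -FriendlyName "SSDTier" -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "ArchivePool" -FriendlyName "HDDTier" -MediaType HDD
    New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" -FriendlyName "Archive" `
        -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 19TB `
        -ResiliencySettingName Simple -WriteCacheSize 10GB

    # 4. Initialize, partition, and format the new disk as a volume
    Get-VirtualDisk -FriendlyName "Archive" | Get-Disk |
        Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS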

In my case I didn't need/want tiering (a read cache of frequently used data?), only the write-back cache (used for all writes before they are saved to disk). I don't know of any size limitation for tiering, but the write-back cache has a limit of 100GB.

The write-back cache size has to be set at virtual disk creation and cannot be changed afterwards.
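
For a setup like mine with no tiers and only the write-back cache, that means passing the size at creation time, roughly like this (pool name and cache size are just examples):

    # -WriteCacheSize can only be specified when the virtual disk is created
    New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" -FriendlyName "Archive" `
        -ResiliencySettingName Simple -UseMaximumSize -WriteCacheSize 50GB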

Performance-wise, writes from a 4 x 600GB 10K RPM RAID0 array peaked at 500MB/s and tapered down to about 300-350MB/s while copying 1.2TB off the 10K RPM drives (sequential writes; I can't say anything about random writes). Sequential reads held steady at 400MB/s copying the data back to the 10K RPM array.

Feel free to correct me on any information that is wrong.

EDIT: Hardware RAID is indeed not supported in a storage pool. I just tried adding a RAID5 array to a new pool and it wasn't detected: "RAID adapters, if used, must have all RAID functionality disabled and must not obscure any attached devices."
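
If you want to check this before building a pool, Storage Spaces reports why it refuses a disk; something along these lines should show it (exact output varies by Windows version):

    # Disks behind a hardware RAID controller typically show CanPool = False with a CannotPoolReason
    Get-PhysicalDisk | Format-Table FriendlyName, BusType, MediaType, CanPool, CannotPoolReason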
 
  • Like
Reactions: Dajinn

Dajinn

Active Member
Jun 2, 2015
That's interesting. I'm surprised you didn't see a better gain in performance. I sometimes get very high burst transfer speeds between my RAID10 and RAID6 arrays, and between those two arrays and some of the RAID10s on my hypervisor nodes, sometimes 800MB/s-1.2GB/s (quickly dropping down to 300-500MB/s, of course, when the transfers are large).
 

soLost

New Member
Dec 1, 2015
As it turns out, while trying to do everything with PowerShell, I allocated all of the drives that were in my software RAID into a parity storage space. That explains the low write speeds: they will never be faster than the SSD, and a parity space has the horrible write limitations Storage Spaces is known for.
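
An easy way to catch that kind of mistake after the fact is to check what resiliency the virtual disk actually ended up with, for example:

    # Shows whether each space was created as Simple, Mirror, or Parity, and its write-back cache size
    Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, WriteCacheSize, Size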

For my usage as archive storage it works fine. Important stuff like Hyper-V VMs sits on SSDs or 10K/15K drives.

I apologize for providing inaccurate information regarding storage spaces.
 
  • Like
Reactions: Dajinn

Dajinn

Active Member
Jun 2, 2015
As it turns out, while trying to do everything with PowerShell, I allocated all of the drives that were in my software RAID into a parity storage space. That explains the low write speeds: they will never be faster than the SSD, and a parity space has the horrible write limitations Storage Spaces is known for.

For my usage as archive storage it works fine. Important stuff like Hyper-V VMs sits on SSDs or 10K/15K drives.

I apologize for providing inaccurate information regarding storage spaces.
You're good, man! I was just expecting maybe more of a benefit from using an SSD cache.