Windows 2012 R2 Storage Spaces - Tiering with hardware RAID?


suggestable
New Member | May 3, 2015
Admins: I'm not sure where best to ask this, but this sub-forum seems most appropriate. Please move this thread if you feel it would be better off elsewhere.

Current scenario: PERC H700 with 2x 500GB disks in RAID-1, 12x 3TB disks in RAID-6, 6x 2TB disks in RAID-6. Windows Server 2012 R2 Datacenter installed on RAID-1.

I'm currently looking to expand my storage and improve its performance. I am intrigued by Storage Spaces, and have heard it supports "tiering" to allow SSDs to cache frequently-accessed files to improve performance.

I'd like to know if the following is possible:

Keep my existing hardware RAID arrays intact, and add SSDs in a separate tier, thereby using Windows Storage Spaces to accelerate access to the already-existing hardware RAID arrays.

If this is not possible, I am looking at migrating to a new hardware platform (Asus P9A-I C2750 with 16GB DDR3) with new disks (12x 4TB WD Red Pro) and new SSDs (4x 250GB Samsung 850 EVO) and moving to FreeNAS with RAID-Z2. Would that board and RAM support that much storage? I've heard that FreeNAS needs a lot of RAM to work properly with large arrays. I'm also interested in long-term expandability: if I moved to FreeNAS, I'd want it to support adding a second disk shelf later (12x 6TB disks and 4x 250GB SSDs - I'm using Lenovo SA120 JBOD shelves).

Current server hardware specs:
Asus KGPE-D16 motherboard,
2x Opteron 6134 CPUs,
96GB (8x8GB and 8x4GB) DDR3 ECC Reg,
PERC H700 with BBU,
Chenbro CK23601 SAS expander,
X-Case RM-424 (similar to Norco RPC-4224),
2x 500GB HDD (RAID-1),
12x 3TB HDD (RAID-6),
6x 2TB HDD (RAID-6),
PC Power & Cooling Turbo Cool 860.

Proposed server hardware specs:
Asus P9A-i motherboard,
Intel Avoton C2750 CPU,
16GB (2x8GB) DDR3 non-ECC UDIMM,
2x on-board 8-port SAS controllers,
Chenbro CK23601 SAS expander (primarily to provide two external SFF-8088 connectors),
2x Lenovo SA120 JBOD shelves,
12x 6TB HDD (in SA120),
4x 250GB SSD (in system chassis),
SSD OS storage (capacity and type TBD),
PicoPSU 160-XT powering the main system chassis (the SA120 disk shelves have their own power supplies).

Please let me know what you guys think would be the optimum hardware for maximum performance, maximum capacity and minimum power usage.

Thank you!
 

suggestable
New Member | May 3, 2015
One thing to note: I don't mind using the H700 in the new Avoton board, but I would prefer to be able to (somehow) add SSD caching/tiering to that configuration.
 

tjk
Active Member | Mar 3, 2013
Storage Spaces wants JBOD HBAs and not HW RAID, much like ZFS.

Storage Spaces SSD tiering uses a 1GB write-back cache by default, and then you can assign whatever size you want for the read cache (the SSD tier). If you need a write cache larger than 1GB, you have to do all the configuration via the CLI; there are plenty of blogs showing how to do this.
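For reference, a minimal PowerShell sketch of that CLI configuration is below. The pool name, tier names, and sizes are placeholders, assuming a pool called "Pool1" that already contains both SSDs and HDDs; adjust them to the real hardware.

```powershell
# Sketch: tiered virtual disk with a write-back cache larger than the 1GB default.
# Assumes an existing storage pool "Pool1" containing both SSD and HDD physical disks.

# Define one tier per media type (tier names are arbitrary).
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create the tiered space with an 8GB write-back cache instead of the default 1GB.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 10TB `
    -ResiliencySettingName Mirror -WriteCacheSize 8GB
```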

Note that the read cache is not real-time: by default, a scheduled job runs once a day to move hot "blocks" up to the SSD tier. You can change this, since it is just a scheduled task, but I think that's a horrible approach to tiering. Yesterday's IO pattern will most likely not match today's, at least in the world I live in (multi-tenant, service provider).
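The job in question is just a Task Scheduler entry, so it can be inspected or kicked off on demand. A rough sketch (the task path and name below are what 2012 R2 typically uses; verify them on your own box):

```powershell
# Sketch: look at the daily tier-optimization task and run a pass manually.
# Verify the task path/name with Get-ScheduledTask if it differs on your build.

# Show the scheduled tier-optimization task and when it last ran.
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" |
    Get-ScheduledTaskInfo

# Kick off an optimization pass immediately instead of waiting for the schedule.
Start-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" `
    -TaskName "Storage Tiers Optimization"
```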

I don't think Storage Spaces or ZFS will rebalance data when you add another shelf of disks; you'll most likely have to clear off the pool, recreate it, and copy your data back.
 

suggestable
New Member | May 3, 2015
...Also, using JBOD HBAs is not an issue, but as I understand it, Storage Spaces does not allow parity pools to be tiered, so it would have to be ZFS/FreeNAS if I went down that route.
 

tjk
Active Member | Mar 3, 2013
So... CacheCade might be a better option for me?
If you want to stay on Windows, I would go with HW RAID with CacheCade and SSDs for the caching. I'd venture to guess this will be plenty faster than Storage Spaces anyhow.
 

Kristian
Active Member | Jun 1, 2013
Did I understand your first post right:
you are able to get a P9A-I with the C2750?
I tried to get that board for months and then settled for the C2550 version.

Two questions came to mind when reading your post:
1.) With all the SSDs and spinners in your setup:
are you aware that each port of the Marvell controller on the P9A-I is only capable of 5 Gb/s, rather than 6 Gb/s or 12 Gb/s?

2.) How are you planning to access your storage?
If you want to "bond" the 4x 1Gb NICs into a ~4Gb pipe, you need Windows 8.x clients for SMB 3.0 Multichannel.

Using a 10GbE NIC instead: the PCIe 2.0 x4 (in a physical x8) slot will cripple your bandwidth to somewhere between 350-500 MB/s in Windows and 600-700 MB/s using XPEnology or other lightweight Linux distros.
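If you do go the 4x 1GbE route, a quick way to confirm that SMB Multichannel is actually spreading traffic across the NICs is sketched below; these are the standard SMB cmdlets available on Windows 8.x / Server 2012 and later.

```powershell
# Sketch: verify SMB 3.0 Multichannel is using the 1GbE NICs.
# Run on the client while a large file copy is in progress.

# Local interfaces SMB considers usable (speed, RSS/RDMA capability).
Get-SmbClientNetworkInterface

# Active multichannel connections; multiple rows for the same server
# mean the transfer is being spread across more than one NIC.
Get-SmbMultichannelConnection
```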
 

Deci
Active Member | Feb 15, 2015
Storage Spaces can be used on top of hardware RAID, but it's data-destructive either way.

ZFS pools can have vdevs added, but there is no auto-balancing, so data will fill both vdevs as much as it can and then just continue to fill the remaining space on the new shelf.
 

suggestable
New Member | May 3, 2015
Thanks for the replies, guys.

Kristian: Yes. I bought one used and had it shipped from the US. Got a pretty sweet deal on it. It doesn't even look like it's ever actually been used. I still haven't tested it, though...
Dual 10GbE NICs were another option I was considering. It's a shame the x4 PCIe 2.0 bandwidth wouldn't be enough to support iSCSI booting (which is why I'm so interested in accelerated storage).

Deci: That's good to know. I don't mind if it's data-destructive, as I'd be looking to migrate away from the current arrays onto a complete set of new disks anyway (the warranties are almost up on the 3TB disks, and the 2TB disks have been out of warranty for over three years now... so it's overdue).

I'm now leaning more in favour of upgrading the RAID controller to one that supports CacheVault and CacheCade and using some cheap, fast SSDs for the cache. What are the best ones I should consider? I'm guessing an LSI SAS2208-based card would be a good way to go?

What about this idea:

Disable some of the cores of my Opteron 6134 CPUs to save power, and continue using that board with a new, faster, more efficient RAID controller?

Those CPUs idle around 30W each, which is a little high for my liking. The machine is running 24/7/365, so minimal power usage is important.

Thank you for the input. Much appreciated.
 

Myth
Member | Feb 27, 2018 | Los Angeles
I know this is old, but I want to use two HW RAID arrays, one HDD and the other SSD. I want to import them via Storage Spaces and then create one big tiered volume. I've done it before, so I know it works, but someone mentioned it was data-destructive - how? Also, I'm using Windows Server 2016.

Additionally, how does the defragmenter tool move the data around between the SSDs and the HDDs? Someone said it's on a schedule?
 

cesmith9999
Well-Known Member | Mar 26, 2013
It is on a schedule. There is a command to change it, but I don't remember what it is off the top of my head.
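For what it's worth, it is a normal Task Scheduler job, so something along these lines should reschedule it; the task path and name below are from memory, so double-check them on 2016 first. The same optimization can also be triggered on demand with defrag's /G switch.

```powershell
# Sketch: move the daily tier-optimization job to 01:00 (task path/name may vary by build).
$trigger = New-ScheduledTaskTrigger -Daily -At "1:00 AM"
Set-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" `
    -TaskName "Storage Tiers Optimization" -Trigger $trigger

# Or run the tier optimization immediately on a given tiered volume (D: here).
defrag.exe D: /G
```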

Chris
 

gregsachs
Active Member | Aug 14, 2018
Ok, I was looking at the link from Starwind, and wanted to see about this concept:
HW is a dual E5-2620 with an onboard LSI RAID controller with BBU (an Intel board, I forget which), plus I have another LSI card running external JBOD SAS enclosures.
Right now I boot Hyper-V 2016 from dynamic mirror drives, one SAS, one SATA, attached to the onboard LSI. I just got two 800GB DC S3500 SSDs, and I was intending to use them basically for VM hosting duties, but it occurred to me that I could also use them to create three virtual drives: one ~60GB mirror for boot, and two 740GB virtual disks to use for VM hosting in a tiered storage space. With only two drives I'm locked to a single column, but this is a home lab so I'm not really worried about that.
This would give the advantage of making use of the LSI cache and SSD protection features, which are not used in JBOD mode as far as I can tell.
Any downside to this approach?
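One practical wrinkle for anyone trying Storage Spaces on top of RAID-controller virtual drives: they usually report MediaType "Unspecified", which blocks tier creation, so the media type has to be set by hand after pooling. A minimal sketch with placeholder disk names:

```powershell
# Sketch: prepare two RAID-controller virtual drives for a tiered space.
# "SSD-VD" and "HDD-VD" are placeholder names; check Get-PhysicalDisk for the real ones.

# Pool the virtual drives first; MediaType can only be changed on pooled disks.
New-StoragePool -FriendlyName "TierPool" -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# RAID virtual drives often show MediaType "Unspecified", so tag them explicitly.
Set-PhysicalDisk -FriendlyName "SSD-VD" -MediaType SSD
Set-PhysicalDisk -FriendlyName "HDD-VD" -MediaType HDD

# Sanity check, then create tiers and the tiered virtual disk the same way as
# earlier in the thread (New-StorageTier / New-VirtualDisk), typically with
# -ResiliencySettingName Simple since the RAID controller already provides redundancy.
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, CanPool
```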