Windows Server 2012 R2 Storage Spaces / Storage Pools vs Hardware RAID


RamGuy

Member
Apr 7, 2015
35
2
8
34
I'm debating whether I should make the move to hardware RAID or keep using the built-in "Storage Pools" solution in Windows Server 2012 R2 on my home server.

I've been running Windows Server 2012 R2 with a parity (1-drive parity) solution consisting of 8x Western Digital RE4 2TB + 2x Western Digital Black Edition 2TB for quite some time and it has been working decently enough.

I decided to give ReFS a go and have been using it with a single virtual disk making use of all the space. It's mostly used for hosting my library of Blu-Ray rips (20-40GB MKV's), music library, audio-book library and various software and driver installers and Windows Backup from our various Windows systems.

The read speeds have been good; here we are mostly limited by the 1Gbps network interface. Even after configuring LACP between Windows Server 2012 R2 and my Cisco SG-300 switch we still seem to be more or less bottlenecked at 1Gbps, so the link aggregation does not appear to do wonders.


Sadly, the write speeds are rather poor. I have uTorrent configured to write directly to the ReFS virtual disk with full pre-allocation, and the whole application lags each and every time a new torrent starts. It's as if the write speeds are too slow for uTorrent to function as it should. After the allocation of a new torrent is done, things seem to work again, but I often run into various disk-overloaded messages, which I suspect is due to the actual write speeds dipping below the uTorrent throughput at times (11.5-12 MB/s), which quite frankly is rather silly.


I have also noticed that heavy write traffic seems to cripple the virtual disk / storage pool; it almost seems like Windows Server 2012 R2 has a hard time distributing the data between the disks, crippling the performance.


Now I have run into an issue where it seems I don't have enough free space on some of my disks for the storage pool to work. All the GUI tells me is "Warning: In Service", like that tells me anything. Performance is pretty much useless at this stage. All the data is still there and accessible, but copying from or to the virtual disk takes forever; getting even 1GB off the virtual disk will literally take me hours.

Doing a repair does not seem to work because of the lack of free space; it gets to 20%, then drops back down to 10%, and keeps doing that with no end.
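For anyone hitting the same wall: the GUI's "In Service" status corresponds to background storage jobs, and PowerShell shows far more detail than the GUI does. A rough sketch (the cmdlets are standard Storage module ones; no names needed, it queries everything):

```powershell
# Health of the pool, the virtual disk, and each physical disk
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-PhysicalDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus, Usage

# Watch the repair/regeneration job progress instead of guessing from the GUI
Get-StorageJob | Select-Object Name, JobState, PercentComplete, BytesProcessed, BytesTotal
```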



So I'm currently at a crossroads. I can add 1-2x Western Digital RE4 4TB drives, repair the pool, and get some additional storage added to the virtual disk. Or I could stop using the IBM M1015 with LSI 9211-8i IT firmware and move to hardware RAID using an IBM M5015 or some other RAID card plus expander, or just keep using Storage Spaces / Storage Pools.



What do you recommend? I guess I would get more consistent and stable performance from hardware RAID, especially in terms of write speeds. But would the additional cost be worth it?

How does a Storage Pool react if I add a new, larger hard drive (4TB compared to the existing 2TB ones)? Will it basically render 2TB of it useless? Should I be able to repair the virtual disk if I add an additional drive? The GUI doesn't really tell you anything...


How do RAID controllers like the M5015 (9260/61) compare to the M5110 (9265) and M5210 (9360) in terms of RAID-5 performance over 10-12 disks? Is the M5110 or M5210 worth the additional cost? How easy is it to flash the M5015 to 9260, the M5110 to 9265 and the M5210 to 9360? Just as easy as M1015 to 9211 IT? Any potential drawbacks?
 

RamGuy

Member
Apr 7, 2015
35
2
8
34
Hmm, I might give that a go. But do you think adding another 2 or 4TB to the pool will make the repair finish so I can take a backup before switching to NTFS?
 

markpower28

Active Member
Apr 9, 2013
413
104
43
Worth a try. One of the problems with Storage Spaces is that SMART info is not quite accurate, so there is a chance the repair will never finish.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
A couple of notes:

1 - LACP won't necessarily break the 1Gbps bottleneck for your NICs unless you are dealing with multiple readers/writers. In general, any single "flow" (e.g., a file transfer) is still limited to the speed of a single link even in LACP bundles. LACP really helps when a server has multiple client systems simultaneously requiring access, but for mostly point-to-point transfers it doesn't help much.

1a - if your clients are also Windows 7, 8, 10 or Server 2012/2012 R2, then all you really need to do is configure multiple links on the clients and servers. Windows SMB 3.0 does multipath networking (SMB Multichannel) quite nicely.

2 - if you want to speed up writes for a single-parity storage space, rebuild it with 2 SSDs. Configure them as Journal drives and then build your Storage Spaces drive with write-caching enabled. It's a PITA because you have to do it in PowerShell (the GUI wizards can't set it up), but it does help significantly with writes.

3 - unless the above makes sense for you, you may be better off with hardware RAID. It is unfortunately true that SS is fairly complex and needs some level of understanding to make it work reasonably well. And even then, "reasonably well" is probably the best you can hope to get out of it.
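For reference, point 2 looks roughly like this in PowerShell. This is a sketch: the pool name, SSD names and cache size are placeholders, not a tested recipe.

```powershell
# Mark the two SSDs as dedicated journal disks for the pool
Set-PhysicalDisk -FriendlyName "SSD1" -Usage Journal
Set-PhysicalDisk -FriendlyName "SSD2" -Usage Journal

# Create the parity space with an explicit write-back cache carved from the
# journal disks; the GUI wizard cannot set WriteCacheSize, hence PowerShell
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Data" `
    -ResiliencySettingName Parity -ProvisioningType Fixed `
    -WriteCacheSize 32GB -UseMaximumSize
```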
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
Microsoft is the master at making things more complicated than they have to be..

Chris
 

RamGuy

Member
Apr 7, 2015
35
2
8
34
I guess I might add 1-2x 4TB RE4 drives then. But how efficient is Windows Storage Spaces at distributing the data when running parity? Will it render the additional 2TB of both disks useless because all the other disks are only 2TB? Sadly I don't have any spare 2TB disks on hand besides Green Power ones, and I do not want to mix Green Power drives into my 8x RE4 2TB and 2x Black Edition 2TB.
PigLover said:
2 - if you want to speed up writes for a single-parity storage space, rebuild it with 2 SSDs. Configure them as Journal drives and then build your Storage Spaces drive with write-caching enabled. It's a PITA because you have to do it in PowerShell (the GUI wizards can't set it up), but it does help significantly with writes.

How large should the SSDs be? I've got 2x Intel 520-series 60GB SSDs and a few Intel X25-M 80GBs lying around. Why should one use two instead of just one? Is there some way to add the SSDs to an existing parity storage pool, or do I need to redo the whole pool from the beginning?

I have just added a 4TB Red drive to see if Windows Server 2012 R2 is capable of repairing the pool, so I can make a backup of the whole thing. In its current "reduced" state it would take me about 3-4 weeks to copy all the data..
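For anyone following along, adding a new drive and kicking off the repair from PowerShell looks roughly like this (a sketch; the pool and vDisk names are placeholders):

```powershell
# Add any pool-eligible new disk to the existing pool
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Trigger regeneration of the degraded space and watch its progress
Repair-VirtualDisk -FriendlyName "Data"
Get-StorageJob
```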
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
RamGuy said:
How large should the SSDs be? I've got 2x Intel 520-series 60GB SSDs and a few Intel X25-M 80GBs lying around. Why should one use two instead of just one? Is there some way to add the SSDs to an existing parity storage pool, or do I need to redo the whole pool from the beginning?
60-80GB should be good enough. You can configure a 40-50GB write cache on those, and writes will run at just under the single-drive write speed of your SSDs (which will easily saturate your 1GbE link even with those older SSDs).

The reason you need two is that MS requires the Journal to have the same level of redundancy as your pool. So if your pool is designed to survive the loss of a single drive (single parity), then your Journal needs to survive the loss of a single drive too (single mirror for the journal). If you were using double parity, you'd need to deploy Journal SSDs in sets of 3. It's overkill in your application, but then SS was really designed for datacenter deployments and not to support simple RAIDs.

Unfortunately there is no way to add the SSDs to an existing virtual disk. You can add them to the pool any time you want - but they are only allocated to the vDisk at the time it is created.
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
Adding 4TB disks will only add 2TB of their space to your virtual disk. Are the vDisks thin or fixed? That will determine whether they are used at all.

Chris
 

RamGuy

Member
Apr 7, 2015
35
2
8
34
I'm not going to add the 4TB drives to the pool permanently. I have only added the 4TB Red drive in the hope of being able to repair the virtual disk (it's using thin provisioning) so I can get a backup off the storage pool before I redo the whole configuration.

I'm going to add 2x SSDs for journalling in order to boost write performance. Would you pick 2x Intel X25-M 80GB or 2x Intel 520 60GB? The X25-Ms have slightly larger capacity, but the Intel 520s should provide higher throughput.

What are the recommended interleave and column counts for my configuration of 10x 2TB hard drives (8x WD RE4 2TB + 2x WD Black 2TB) plus 2x SSDs for journalling in parity mode? And where can I find the PowerShell commands for adding SSD journal drives?
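For reference, both settings are parameters of New-VirtualDisk. A sketch only; the values shown are illustrative assumptions, not recommendations, and the pool/vDisk names are placeholders:

```powershell
# NumberOfColumns: how many disks each stripe spans (parity spaces are
# capped at 8 columns, so a 10-disk pool still maxes out at 8).
# Interleave: the per-column stripe size; 256KB is the documented default.
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Data" `
    -ResiliencySettingName Parity -ProvisioningType Fixed `
    -NumberOfColumns 8 -Interleave 256KB `
    -WriteCacheSize 32GB -UseMaximumSize
```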
 

RamGuy

Member
Apr 7, 2015
35
2
8
34
What do you know.. It seems one of the WD Black 2TB drives had gone bad after all; one just has to connect it directly to the motherboard controller instead of the LSI 9211-8i (IT mode) in order to get any actual SMART data from it.. I simply removed the hard drive, the repair then took like 5 minutes, and the performance is right back where it was before.
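Worth noting for others: on 2012 R2 some drive-health counters are also exposed through PowerShell, which may save pulling drives over to the motherboard controller. A sketch; what it actually returns depends on what the HBA passes through:

```powershell
# Per-disk reliability counters as surfaced by the storage stack
Get-PhysicalDisk | Get-StorageReliabilityCounter |
    Select-Object DeviceId, Temperature, ReadErrorsTotal,
        ReadErrorsUncorrected, WriteErrorsTotal, PowerOnHours
```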

One thing I'm not able to figure out is how to get the virtual disk out of its "degraded" state. Windows Server won't let me remove the disk from the Storage Pool (it has already been removed from the system..) because the virtual disk is in a "degraded" state and removal is supposedly not "safe". Why it's not safe to remove a disk that isn't even present any more I have no clue. And since I can't remove this disk from the pool, the virtual disk does not seem to use the new disk I added in order to rebuild, so it's stuck in degraded state...

What is all this nonsense? It doesn't make any sense if you ask me. I tried to remove it using PowerShell, but I face the same issue: Windows Server tells me I cannot do that because the virtual disk is degraded..
 

RamGuy

Member
Apr 7, 2015
35
2
8
34
Okay.. Seems like it's rebuilding now. I had to set the faulty (removed..) drive to Retired using PowerShell so the virtual disk would ignore it and start rebuilding onto the new disk I had added to the storage pool.
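The sequence that worked was roughly this (disk and pool names here are placeholders):

```powershell
# Mark the missing/faulty disk as Retired so the space stops expecting it back
Set-PhysicalDisk -FriendlyName "MissingDisk" -Usage Retired

# Rebuild the parity space onto the replacement disk, then drop the retired one
Repair-VirtualDisk -FriendlyName "Data"
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "MissingDisk")
```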

It seems and feels really awkward that you are not allowed to remove a faulty drive through the GUI, especially when you have already added a new drive to the pool first. And why is there no option for retiring drives in the GUI? This whole Storage Pool thing feels like a work in progress.
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
Yes, that is the trick. I posted PowerShell code a while back that lets you upgrade all of the disks in a storage pool, and that is one of the fundamental needs.

Chris
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
I think MS views PowerShell as the preferred method to manage Storage Spaces. The GUI wizards only exist to demonstrate the product and to facilitate the simplest of tasks. Frankly, the GUI looks like an afterthought or work in progress precisely because it actually is an afterthought: they have exposed only a small subset of the actions and options of SS in the wizards.

Remember, SS is targeted at data centers managing drives in big JBODs. They aren't really interested in small systems using local disks, and for the most part those larger MS data centers use PowerShell for everything.