Win10 Storage Spaces - Is it that bad?


ViciousXUSMC

Active Member
Nov 27, 2016
Just curious; there are lots of experts here who have spent a lot more time and money than I have on NAS and data-serving setups.

I have been wanting to build a FreeNAS setup for a few years and am finally getting down to it, picking parts and such. The draw for me is ZFS's built-in error correction, plus mirrors or RAIDZ2 to protect against a drive failure.

The dilemma is this: I have always run a high-end gaming desktop with an overclocked CPU; it's on 24/7 eating up power and usually has 6 to 8 drives in it.

I have not updated my PC in the last couple of years and feel the pain when encoding video; my 2600K just is not up to snuff compared with newer CPUs or dual-CPU server boards.

As a result I have been looking at ways to use this FreeNAS box to do my video encoding as well, but then I would have two boxes on 24/7, using more power, and I may not end up with an optimal solution.

Researching over the last couple of weeks I ran into Windows Storage Spaces and the ReFS format it can use. Apparently ReFS has built-in error correction to prevent data erosion just like ZFS, it's built right into my Win 10 machine, and it can do parity or mirrors. On top of that I found a neat feature called tiered storage, where SSD and HDD work together: new data lands on the SSD first and is passed off to the HDD behind the scenes later.

So the benefits here are a normal Windows environment, just one machine instead of two, some neat features, etc.

Is Storage Spaces really not so bad, and could it make a good NAS?

I could just build out a new system, create this mass-storage/video-editing/gaming monster, probably save money in the long run, and use all the system's power for any given task rather than have an asymmetric setup with two systems.

The other thing I was looking at is a much more complicated hypervisor setup with FreeNAS in a VM, but that seems a bit over the top.
 

CyberSkulls

Active Member
Apr 14, 2016
A lot of people just dislike Microsoft, so no matter what they do, they get hated on. I played around with Storage Spaces and ended up returning to StableBit DrivePool. With the exception of parity, it does similar things, and I could just pull a drive out of machine "A" and it could be read on machine "B". Not so with Storage Spaces.

So for me personally it was too closed off and seemed like a lot of clumsy screens just to create a damn pool, so it just wasn't for me. And yes, I'm also one of those guys who doesn't trust Microsoft at all, so giving them control over my data was never going to happen.

Now I'm on unRAID, as it fit my needs at the time. I will be checking out FreeNAS 10 when it gets a stable release.


Sent from my iPhone using Tapatalk
 

cesmith9999

Well-Known Member
Mar 26, 2013
As long as you do mirroring, Storage Spaces is fine for all-around use. I configure it in multiple ways depending on what I need, and I use PowerShell to do that.

Parity is slow on writes, so be careful there.

Chris
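For anyone curious what the PowerShell route looks like, here is a minimal sketch of pooling the eligible disks and carving out a two-way mirror. The pool, vdisk, and label names are placeholders of mine, and the "Windows Storage*" subsystem name can vary slightly between machines.

```powershell
# Pool every disk that is eligible for pooling (names below are illustrative).
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Two-way mirror across the whole pool, then bring it online as an NTFS volume.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "MirrorVD" `
    -ResiliencySettingName Mirror -ProvisioningType Fixed -UseMaximumSize

Get-VirtualDisk -FriendlyName "MirrorVD" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Mirror"
```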
 

cesmith9999

Well-Known Member
Mar 26, 2013
I have not tried tiered parity recently. If you are doing NTFS and you have enough SSDs you can get decent performance. You have to be careful, as you will get double/triple writes to your SSDs this way: once to the write-back cache (WBC), again to the SSD tier (if you tier), and again to the parity journal.

What you have to do is make sure that you turn on a larger WBC (you have to create the vdisk with PowerShell to do this; there is no ability to change it in the UI), and if you choose to use ReFS you can now turn on a read cache on the SSDs (again, you need to create the vdisk in PowerShell).

The issue I have with tiered parity is that writing down to the spinners is still slow. You just may not see it, since it usually happens as a background process.

If you use dual redundancy you lose another disk as a global parity spare, which means that mirroring is still the more economical route in low drive-count servers.

I have been doing most of my testing with 2016, so I do not know what the difference is with Win10 Anniversary Edition. I try to keep all of my data off my workstation for central backup reasons.

Chris
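As a rough, hedged illustration of the WBC point above, the sketch below creates a tiered virtual disk with an explicitly sized write-back cache. The pool name, tier sizes, and cache size are placeholders; it uses mirror resiliency because parity tiers are only accepted on some OS versions, so treat it as a template rather than a recipe.

```powershell
# Define the two media tiers inside an existing pool ("Pool01" is an assumed name).
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# -WriteCacheSize can only be set when the vdisk is created; the GUI has no knob for it.
# Tier sizes and the 8GB cache are illustrative values, not recommendations.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredVD" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 4TB `
    -ResiliencySettingName Mirror -WriteCacheSize 8GB
```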
 

Deslok

Well-Known Member
Jul 15, 2015
Client Windows doesn't support tiering from what I've seen; Server does. Client does support parity spaces, though you can't pick the parity level (my 7x1TB array automatically set itself to dual parity, leaving only 4.23TB available). Tiering on 2012 required mirrored spaces instead of parity as well, so it wasn't as good as, say, CacheCade, but better than nothing for small deployments (I use CacheCade at work and Storage Spaces at home on my small server).
cesmith9999 isn't kidding about the write performance hit with parity either, unfortunately. Even with 7200rpm spinners I should see faster writes than I do. The reads are OK and it would be good for archival/bulk storage, but the initial fill for those use cases is going to take a while.
[attached screenshot: upload_2016-11-30_14-30-49.png]
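If you want to check what redundancy a space actually ended up with (as with the dual-parity surprise above), the standard storage cmdlets expose it; a quick, hedged example:

```powershell
# PhysicalDiskRedundancy of 1 means single parity / two-way mirror; 2 means dual parity / three-way mirror.
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, NumberOfColumns, Size
```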
 

ViciousXUSMC

Active Member
Nov 27, 2016
Hmm, the main reason I wanted to use this was the protection against bit rot while still having one machine to do it all.
Storage Spaces has some great features, but it looks like performance is terrible and a lot of the features are still hidden away in PowerShell or not even available on the client OS.

I am OK with PowerShell, I use it at work, but stripped-out features...

Now I am starting to look into DrivePool + SnapRAID + MD5 hashing, or perhaps unRAID with some of its checksum plugins.
 

Chuntzu

Active Member
Jun 30, 2013
Storage Spaces can be damn fast if configured properly. I have posted benchmarks numerous times showing 20+ gigabytes per second reads and writes with Storage Spaces. Server 2016 takes it up a notch with ReFS and read and write caches. There are other changes as well: hot and cold tiers, and mixing NVMe, SSD, and spinners with mixed resiliency across the tiers. I digress, but I have had setups with all sorts of performance; you can hit whatever performance you like, it just needs to be arranged properly.

I will say StableBit DrivePool is nice, but this is not going to be where you put your VMs. Same with a Storage Spaces parity drive with no SSDs. But for media storage it's a pretty great arrangement.

Sent from my SM-N920T using Tapatalk
 

modder man

Active Member
Jan 19, 2015
Chuntzu said: "Storage Spaces can be damn fast if configured properly. I have posted benchmarks numerous times showing 20+ gigabytes per second reads and writes with Storage Spaces..."

Are any of those posts on here? I would like to give them a read if so. I have been considering giving it a try myself, but good write-ups are few and far between.
 

Tom5051

Active Member
Jan 18, 2017
Microsoft has always lagged miles behind with its software RAID offerings in Windows: very poor performance even with the fastest drives.
Dynamic disks suck and always will.
ZFS is much more efficient and resilient; however, I personally prefer hardware RAID, as I like to expand my array when I need more space, and that is a major issue with ZFS.
 

DieHarke

New Member
Mar 16, 2017
cesmith9999 said: "I have not tried tiered parity recently... What you have to do is make sure that you turn on a larger WBC (you have to create the vdisk with PowerShell to do this)... The issue I have with tiered parity is that writing down to the spinners is still slow..."
Hey cesmith9999,

is it maybe possible to share the PowerShell commands you used to create such a tiered parity space? I am currently struggling with putting it together the right way.

Thanks in advance!
 

Fritz

Well-Known Member
Apr 6, 2015
I'm a hater. Like CyberSkulls, I don't trust MS and will never use Windows 10 for any reason. My next move, from Windows 7, will be to Linux. At this point, Windows is a necessary evil and nothing more.

Having said this, I have 2 mirrored FreeNAS servers that contain my valuable data. One has been up and running for about 2 years now and the other about a year. I've had a couple of drive failures during that time and FreeNAS handled both gracefully with no loss of data. I like the scalability and resilience of FreeNAS and will continue to use it as long as they don't screw it up.
 

DieHarke

New Member
Mar 16, 2017
@cesmith9999 Hey Chris,
I already own two 250GB SSDs and my idea was to start with three spinners in parity. Ultimately I want to expand the spinner count up to six.
 

cesmith9999

Well-Known Member
Mar 26, 2013
Single or dual redundancy? Single, I am assuming, because of the low disk count. You will need one more SSD. On a single system you cannot mix redundancy types (mirror, parity); you can just have different column counts.

And when you add disks later and run Optimize-StoragePool, it will not update column counts; it just spreads the data across more spindles.

So if your plan is to have tiered parity with a larger column count later, you need to avoid using all of your SSD space initially, so that you can create a new tiered virtual disk later and copy your files over to the new volume.

Chris
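To make the column-count point concrete, here is a small sketch of creating a single-parity space with an explicit column count; "Pool01", the vdisk name, and the size are placeholders.

```powershell
# Single-parity space striped over 3 columns. Adding more disks to the pool later
# will NOT raise this column count; a new vdisk is needed for that.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityVD" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 `
    -NumberOfColumns 3 -Size 4TB -ProvisioningType Fixed
```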
 

DieHarke

New Member
Mar 16, 2017
cesmith9999 said: "Single or dual redundancy? ... if your plan is to have tiered parity with a larger column count later, you need to avoid using all of your SSD space initially, so that you can create a new tiered virtual disk later and copy your files over to the new volume."
Yeah, single redundancy, and thank you for the clarification. So let's assume I initially build up a pool with 3 SSDs and 3 spinners; both tiers would have a column count of 3. Later, when I want to extend the spinner tier, I would have to create a new vdisk to update the column count (the SSD tier would stay at 3), and then I can copy my data to the newly created vdisk.

So far so good, but I still have two questions.
How exactly should I create the initial vdisk so that I am able to create another vdisk later on the same storage pool, given that I want to use the capacity of the spinners fully?
--> Can I use the Optimize-StoragePool command for that purpose?

After successfully upgrading to my final setup, how can I extend the vdisk to use the maximum capacity of both tiers?
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
When you create the volume, use under half of your SSD capacity (you will need the rest for the future vdisk), and leave room for the journal (32MB) plus any WBC and read cache (ReFS only).

After you add the next 3 spinners to the storage pool, use Optimize-StoragePool to level out the data across all 6 disks.

Create your new vdisk with the new column count.

After you have finished copying your data to the new vdisk:

delete the old vdisk,
re-run Optimize-StoragePool (this will move the slabs of data from the middle of the disks back toward the beginning),
expand your vdisk to its new size.

Remember that the maximum volume size is tied to your NTFS/ReFS cluster (allocation unit) size: 4K clusters mean you will have issues if the expanded volume is larger than that size allows, so please take care of that.

Chris
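The whole sequence Chris describes might look roughly like this in PowerShell. It is only a sketch with placeholder names and sizes; note that tiered virtual disks are grown per tier (Resize-StorageTier) rather than with Resize-VirtualDisk.

```powershell
# 1. Add the new spinners to the existing pool and re-level the existing data.
Add-PhysicalDisk -StoragePoolFriendlyName "Pool01" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Optimize-StoragePool -FriendlyName "Pool01"

# 2. Create the replacement vdisk with the larger column count, then copy files onto it.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityVD2" `
    -ResiliencySettingName Parity -NumberOfColumns 6 -Size 8TB -ProvisioningType Fixed

# 3. Once the copy is done: drop the old vdisk, re-level the slabs, then grow the new
#    vdisk and its partition into the reclaimed space.
Remove-VirtualDisk -FriendlyName "ParityVD"
Optimize-StoragePool -FriendlyName "Pool01"
Resize-VirtualDisk -FriendlyName "ParityVD2" -Size 16TB

$part = Get-VirtualDisk -FriendlyName "ParityVD2" | Get-Disk | Get-Partition | Where-Object Type -eq 'Basic'
$part | Resize-Partition -Size ($part | Get-PartitionSupportedSize).SizeMax
```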
 

DieHarke

New Member
Mar 16, 2017
Is there a recommendation for the cluster size? The vdisk size would be 16TB and later 40TB. I want to store mainly big multimedia files on the space.
 

cesmith9999

Well-Known Member
Mar 26, 2013
At 40TB you will need at least 16K clusters. If you are storing big files, just go 64K.

Approximate NTFS volume-size limits by cluster size:
4K - 0 to 16TB
8K - 16TB to 32TB
16K - 32TB to 64TB
32K - 64TB to 128TB
64K - 128TB to 256TB

And yes, I have had to present a 256TB volume...

Chris
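For what it's worth, the cluster (allocation unit) size is chosen when the volume is formatted, not when the vdisk is created; a hedged one-liner with a placeholder drive letter:

```powershell
# 64K clusters keep NTFS well under its cluster-count ceiling on very large volumes
# and suit big media files.
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "Media"
```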