RAID for Windows (That's Not Storage Spaces)?


macdaddy2012

Member
Oct 10, 2025
32
0
6
Hi all, I'm looking for recommendations for a Windows-based software* RAID solution.

*preferably

It must be Windows. Why? I'm deploying a multi-tiered storage setup: a NAS will back up to this software RAID on a separate machine, and the software RAID will back up to Backblaze, on the "one is none, two is one, and three is adequate" theory of data protection.

Yes, I know RAID by itself is not a backup; hence getting the data into multiple places. Backblaze compatibility is the driver here.

I'm currently using Storage Spaces and it's putting me through hell. I chose it because it works with Backblaze, but one drive went down and just trying to get the array to rebuild causes Windows Explorer (and plenty of other stuff) to hang. Storage Spaces is also dog slow and has no advanced management or monitoring/alerting features to speak of; you're practically in the dark about your storage unless you watch it like a hawk.

Solutions:
I'm not equipped to be an advanced sysadmin, so I'm looking for something easy to use. Bit-rot and other protections are welcome.

I've heard of SnapRAID, but it's CLI-only, right? It looks more esoteric and complex than I can stomach.

OWC SoftRAID? It looks full-featured and, importantly, easy to use, but it's also pricey. If it's the best option I might consider buying it.

Does anyone have suggestions?

Also, if you have suggestions for backup tools, those are welcome too. I'd be pulling data from a NAS and cloning it to this storage pool for backup to Backblaze. I've heard of EaseUS Todo; any opinions?

Thanks all!
 

gea

Well-Known Member
Dec 31, 2010
3,578
1,406
113
DE
Your options:

1.) A hardware RAID adapter, preferably with a BBU.
This is the classic RAID on Windows if you want security and performance, though modern software RAID built around copy-on-write and checksums (ReFS, ZFS) is faster, safer, and free of vendor lock-in.

2.) A Storage Spaces pool. This is not a disk-based RAID but the current Microsoft software RAID solution. You define a pool of disks of any type or size, then create Spaces with redundancy (mirror, parity, dual parity) or without, and format them with NTFS or the modern ReFS. You can pin a Space to a media type (HD, SSD, NVMe) and set up hot/cold auto-tiering between them. Redundancy is managed via file copies (not data blocks as in classic RAID), which limits write performance, especially with parity Spaces. Very flexible, but you need PowerShell or a web GUI for all the options.

3.) A software RAID 1/5 in Windows Disk Management. It works, but it is not very safe, fast, or sophisticated.

4.) A software RAID based on the mainboard chipset and its drivers. This is like 3.) but with vendor lock-in.

5.) OpenZFS on Windows.
ZFS is a superior RAID concept with best-in-class data security and features, and it is nearly ready on Windows. The current state of the Windows filesystem driver for OpenZFS is 2.3.1 release candidate 12, which means beta, but it is already quite usable. See the OpenZFS on Windows project page for the download and the issue tracker with remaining bugs.
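As an aside on option 2: the PowerShell needed to pool disks and carve out a mirrored, ReFS-formatted Space is only a few cmdlets. A hedged sketch follows — the pool and volume names, the subsystem wildcard, the drive letter, and the 1TB size are all illustrative, and it must run in an elevated prompt on a machine with spare, unpooled disks:

```powershell
# Gather disks that are eligible to join a pool (not already pooled)
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool from them (the subsystem name varies by machine,
# hence the wildcard)
New-StoragePool -FriendlyName "BackupPool" `
    -StorageSubsystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a mirrored Space out of the pool, format it ReFS,
# and mount it at an illustrative drive letter
New-Volume -StoragePoolFriendlyName "BackupPool" `
    -FriendlyName "BackupSpace" `
    -ResiliencySettingName Mirror `
    -Size 1TB `
    -FileSystem ReFS `
    -AccessPath "D:"
```

The same cmdlet family (Get-StoragePool, Get-VirtualDisk, Repair-VirtualDisk) covers the monitoring and rebuild operations that the GUI hides.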

For web-GUI-based management of Storage Spaces and ZFS you can use napp-it cs (just download and run; free for noncommercial use).

For backup (data sync) on Windows via SMB, use robocopy, a very fast and robust sync/copy tool built into Windows. For OS disaster recovery, I prefer AOMEI Backupper.
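A minimal robocopy job for the NAS-to-pool sync might look like the following. The paths are illustrative, and note that /MIR deletes destination files that no longer exist at the source, so do a trial run before scheduling it:

```powershell
# Mirror the NAS share into the local pool and keep a running log.
# /MIR   mirror the tree (copies new/changed files, deletes removed ones)
# /FFT   assume 2-second timestamp granularity (helps across SMB/NAS clocks)
# /R:2 /W:5   retry a failed copy twice, waiting 5 seconds between tries
# /LOG+: append to a log file so there is a history to review
robocopy \\nas\share D:\Backup\share /MIR /FFT /R:2 /W:5 /LOG+:C:\logs\nas-sync.log
```

Dropped into Task Scheduler, one line like this per share is usually the whole backup job.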
 

JSchuricht

Active Member
Apr 4, 2011
206
76
28
Is there any reason you can't use hardware RAID? Used LSI/Broadcom RAID adapters are cheap, have good performance, provide monitoring, and scan the disks for issues.

For the file transfer, I would probably do a simple robocopy batch file.
 

gea

Well-Known Member
Dec 31, 2010
3,578
1,406
113
DE
Modern software RAID (btrfs, ReFS, ZFS) can guarantee atomic writes, i.e. operations that must not be interrupted or the filesystem/RAID can become corrupted (writing data + updating metadata; writing a RAID stripe sequentially across the disks).

Modern software RAID (btrfs, ReFS, ZFS) has checksums on data and metadata to verify and auto-repair data during reads (bit-rot protection).

A hardware RAID needs a BBU to be even partly protected, but in the end even a CoW filesystem on top of a hardware RAID is not fully protected, and the RAID itself is at risk (e.g. a mirror/RAID 5/6 where not all disks were updated when a crash hit).

Other aspects: software RAID is nowadays faster and has no vendor lock-in.
Hardware RAID + BBU + NTFS, or the case where you need a boot mirror, is still where hardware RAID is best.
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
And “modern software RAID”, i.e. ZFS, on top of a hardware RAID (BBU-protected) block device? How would that not be power protected, if the stripe to be written persists even through a power loss?
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
softwareraid is nowadays faster without a vendor lockin
Sorry, not agreeing. ZFS is dog slow if your pool is north of 50% full. A hardware raid volume has the same performance throughout.
 

Dev_Mgr

Active Member
Sep 20, 2014
180
63
28
Texas
OP posted that he is stuck having to use Windows.

ReFS might be an option, but only on certain versions of Windows. A few months ago I had two disks formatted ReFS and wanted to set up a dual boot between my usual Windows 10 and Windows 11 (to see how much work Win11 would take to reach a workable state for my gaming needs).

When I booted back into Windows 10, I could no longer read those two ReFS disks/filesystems. It took me a while to figure out what had happened: Windows 11 had upgraded my ReFS disks to a newer ReFS version. I ended up booting back into Win11, moving the data to a spare external USB hard drive, reformatting the disks as NTFS, and copying the data back.

So if you're still on Windows 10 (maybe unpatched, the LTSC version, or with ESU), or the server equivalent, I'd be conservative about trying out a new (major) Windows version, to avoid running into what I ran into.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
Your options:

1.) A hardware RAID adapter, preferably with a BBU.
This is the classic RAID on Windows if you want security and performance, though modern software RAID built around copy-on-write and checksums (ReFS, ZFS) is faster, safer, and free of vendor lock-in.

2.) A Storage Spaces pool. This is not a disk-based RAID but the current Microsoft software RAID solution. You define a pool of disks of any type or size, then create Spaces with redundancy (mirror, parity, dual parity) or without, and format them with NTFS or the modern ReFS. You can pin a Space to a media type (HD, SSD, NVMe) and set up hot/cold auto-tiering between them. Redundancy is managed via file copies (not data blocks as in classic RAID), which limits write performance, especially with parity Spaces. Very flexible, but you need PowerShell or a web GUI for all the options.

3.) A software RAID 1/5 in Windows Disk Management. It works, but it is not very safe, fast, or sophisticated.

4.) A software RAID based on the mainboard chipset and its drivers. This is like 3.) but with vendor lock-in.

5.) OpenZFS on Windows.
ZFS is a superior RAID concept with best-in-class data security and features, and it is nearly ready on Windows. The current state of the Windows filesystem driver for OpenZFS is 2.3.1 release candidate 12, which means beta, but it is already quite usable. See the OpenZFS on Windows project page for the download and the issue tracker with remaining bugs.

For web-GUI-based management of Storage Spaces and ZFS you can use napp-it cs (just download and run; free for noncommercial use).

For backup (data sync) on Windows via SMB, use robocopy, a very fast and robust sync/copy tool built into Windows. For OS disaster recovery, I prefer AOMEI Backupper.

1) I presume you mean battery backup? My concern with a hardware RAID card is that if the card fails, the array is shot without an exact copy of the card to slot back in, correct?

2) That's what I'm currently suffering with.

3) I've heard that was deprecated and replaced by Storage Spaces?

4) Vendor lock-in is the problem.

5) I was unaware of ZFS for Windows; I'll look into it. Tell me more. I'm planning to deploy TrueNAS, so that would slot right in.

napp-it cs and robocopy: I'll check them out.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
Is there any reason you can't use hardware RAID? Used LSI/Broadcom RAID adapters are cheap, have good performance, provide monitoring, and scan the disks for issues.

For the file transfer, I would probably do a simple robocopy batch file.
I'm concerned about card failures borking the array, and I don't know whether hardware RAID is compatible with Backblaze. I'm looking into it.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
Sorry, not agreeing. ZFS is dog slow if your pool is north of 50% full. A hardware raid volume has the same performance throughout.
I haven't heard of this. Why would it slow down? HDD performance doesn't degrade as drives fill the way SSDs can, unless you're talking about SMR hard drives, which absolutely no one should be using.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
OP posted that he is stuck having to use Windows.

ReFS might be an option, but only on certain versions of Windows. A few months ago I had two disks formatted ReFS and wanted to set up a dual boot between my usual Windows 10 and Windows 11 (to see how much work Win11 would take to reach a workable state for my gaming needs).

When I booted back into Windows 10, I could no longer read those two ReFS disks/filesystems. It took me a while to figure out what had happened: Windows 11 had upgraded my ReFS disks to a newer ReFS version. I ended up booting back into Win11, moving the data to a spare external USB hard drive, reformatting the disks as NTFS, and copying the data back.

So if you're still on Windows 10 (maybe unpatched, the LTSC version, or with ESU), or the server equivalent, I'd be conservative about trying out a new (major) Windows version, to avoid running into what I ran into.
I've only heard of ReFS in passing, but I've similarly heard it has some jank, which I'm trying to avoid.
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
1) I presume you mean battery backup? My concern with a hardware RAID card is that if the card fails, the array is shot without an exact copy of the card to slot back in, correct?

2) That's what I'm currently suffering with.

3) I've heard that was deprecated and replaced by Storage Spaces?

4) Vendor lock-in is the problem.

5) I was unaware of ZFS for Windows; I'll look into it. Tell me more. I'm planning to deploy TrueNAS, so that would slot right in.

napp-it cs and robocopy: I'll check them out.
1. Grandma’s tale. This hasn’t been true for over 15 years.
4. Again, Grandma’s tale. A hardware RAID array from LSI or Adaptec can easily be read and reconstructed by mdadm on Linux. You’re not locked into the vendor.
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
I'm concerned about card failures borking the array, and I don't know whether hardware RAID is compatible with Backblaze. I'm looking into it.
Card failures are a fact of life. How’s that any different from your HBA or motherboard SATA ports failing? And hardware RAID has no relationship to Backblaze; the software stack on top of your RAID dictates that.
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
I have not heard of this, why would it slow? HDD performance doesn't degrade as drives fill like SSDs can, unless you're talking about SMR hard drives which absolutely no one should be using.
It has to do with CoW (copy-on-write) and how ZFS allocates space.
 

alaricljs

Active Member
Jun 16, 2023
271
119
43
Copy-on-write, and the inherently increasing fragmentation of free space as you use the disk. ZFS's allocation algorithm struggles to find a usable extent as available space shrinks and fragmentation rises.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
napp-it cs and robocopy: I'll check them out.
Man alive, napp-it cs is not made to make your life easy. I downloaded it and looked at the readme; it needs no fewer than 13 separate dependencies manually and individually installed. I just don't have the time or energy for that kind of complexity.

OpenZFS on Windows looks intriguing (man, their website is the worst), but I just can't deploy something that isn't production-ready.

Also: CLI installers. I'm looking for simplicity; I do wish developers would make life easier. I'm just not equipped for highly esoteric sysadmin tasks; I come from a world of one-click installers. All of this makes me lean toward OWC SoftRAID.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
Copy-on-write and the inherently increasing fragmentation of available space as you use the disk. The allocation algorithm in ZFS struggles to find a usable extent the less available space and higher the fragmentation.
I'm looking that up but not seeing what you're talking about. I've heard of deploying a SLOG to speed up writes, which I might do.
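One caveat worth knowing before buying hardware for this: a SLOG only accelerates synchronous writes (NFS, databases, VM storage); ordinary async file copies bypass it entirely. If it does fit the workload, adding one is a one-liner — the pool and device names below are illustrative:

```shell
# Attach a fast, power-loss-protected SSD as a separate intent log (SLOG)
zpool add tank log /dev/disk/by-id/nvme-example-ssd

# Confirm it appears under the "logs" section of the pool layout
zpool status tank
```

The device should be power-loss protected, since the whole point of the SLOG is surviving a crash between acknowledgement and commit.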
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
I'm looking that up but not seeing what you're talking about. I've heard of deploying a SLOG to speed up writes which I might do.
I'll say this as gently as I can: please feel free to do your own research, but nothing that has been said about ZFS here is untrue.
 

macdaddy2012

Member
Oct 10, 2025
32
0
6
I'll say this as gently as I can: please feel free to do your own research, but nothing that has been said about ZFS here is untrue.
Responses like this are not helpful.

I am on the forums to *do research*. I am not on the forums to be told to "google it"; we all know how to Google, and I wouldn't be asking if I had found the answers I was looking for.

I'm not even challenging what you're saying; I'm asking for an explanation. Why bother replying to a forum post with "google it"?