Software RAID with real-time protection


ari2asem

Active Member
Dec 26, 2018
I was going to mention FlexRAID but just read the dev seems to have done a runner. I wouldn't have recommended it for a number of reasons but it did hit a lot of the OP requirements as I recollect.

FWIW I've had bad experiences with ReFS every time I've used it. The last one was all the volumes going raw - on Windows Server 2016. That was the last time I used ReFS :)

Also FWIW, with SnapRAID and DrivePool (which I use), the magical thing to me is that at any point in time I can take a disk away and use it natively.
Does the combination of SnapRAID and StableBit Scanner give me some kind of real-time protection?

I don't need DrivePool, because I don't want pooling of drives.
 

gea

Well-Known Member
Dec 31, 2010
Only the newest filesystems (APFS, Btrfs, ReFS and ZFS) offer realtime checksums of metadata and/or data. Only copy-on-write filesystems offer crash resistance (no corrupt filesystem after a crash during a write), with snapshots as ransomware protection.

The only option would be some sort of (non-)RAID system on top of a modern filesystem. But this would only offer checksum protection, not (realtime) RAID protection against disk failures, and not the superior performance of realtime RAID.
 
  • Like
Reactions: Bert

Bert

Well-Known Member
Mar 31, 2018
When I look at the requirements, I feel like something is off with them. First of all, scrubbing is not real-time protection, and what are we protecting against? Disk failures? Software failures? Anyhow, since you already have a SnapRAID-based solution, you may want to create a batch file, schedule the workflow in Windows Task Scheduler, and be done with it. IIUC, SnapRAID requires an external task scheduler.
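A minimal sketch of that scheduling, assuming SnapRAID is installed at C:\snapraid\snapraid.exe (the path, task name and time are hypothetical): a scheduled task that runs a nightly sync followed by a partial scrub.

```powershell
# Sketch only -- adjust the path and schedule to your installation.
# First action runs "snapraid sync"; second scrubs ~8% of the array,
# so the whole array gets verified roughly every two weeks.
$sync    = New-ScheduledTaskAction -Execute 'C:\snapraid\snapraid.exe' `
               -Argument 'sync' -WorkingDirectory 'C:\snapraid'
$scrub   = New-ScheduledTaskAction -Execute 'C:\snapraid\snapraid.exe' `
               -Argument 'scrub -p 8' -WorkingDirectory 'C:\snapraid'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am

Register-ScheduledTask -TaskName 'SnapRAID nightly' `
    -Action $sync, $scrub -Trigger $trigger `
    -User 'SYSTEM' -RunLevel Highest
```

Passing both actions to one task makes them run sequentially, so the scrub never starts before the sync finishes; a plain .bat file calling the two commands would work the same way.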

Perhaps you need different tiered solutions for your data storage. Let me share my personal setup, which is based on Windows:
- File Server OS: Windows Server 2016
- One Storage Space on the file server itself. All virtual disks are simple parity, and all volumes are formatted with NTFS with Dedup enabled.
- A few Storage Spaces on external disk shelves for backups. All virtual disks are simple parity, and all volumes are formatted with ReFS.

- Workstation OS: Windows 10
- One LSI RAID 5 based disk for hosting critical data like pictures. One Intel RAID 0 based disk for high-IOPS usage like VMs, Lightroom catalogs, etc. All volumes are NTFS.

Protection:
I rely on backups for protection from hardware, software and user failures. My workflow involves frequent backups from the workstation to the file server, done over LAN. The file server has attached disk shelves, and the disk shelves hold the backup of the file server, synced as I collect enough data; this is done over an external SAS connection. I use FreeFileSync to trigger backups, and I preserve deleted files to guard against accidental deletes. This workflow has been working perfectly for me as long as I maintain the discipline of performing backups before making any file system changes. I have not lost data because of disk failures so far; 6-8 years ago I lost data due to a bad controller card. I do lose data when I make mistakes while managing it: 3 months ago I deleted the wrong volume and lost all the data I hadn't backed up.

I am not worried about bit rot, so I don't take any precautions against it. IIUC, it is quite an uncommon problem and disks have built-in protection against it.

Remarks

- If you care about bit rot, ReFS will require parity or mirroring for correction. Parity on Windows Server 2016 Storage Spaces is not usable due to very slow write speeds.
- IMHO, user errors are the biggest source of data loss, so having a strong backup story is more important.
- RAID 5 rebuilds are very time consuming and risky; again, having backups to restore from is the better strategy for recovering from disk failures.
- Storage Spaces is not very flexible. For example, the fixed column count for virtual disks makes expansion very hard.
- You can also hit weird issues with Storage Spaces, like not being able to use a disk because it was never removed from a previous storage pool. I have seen disks marked read-only for no obvious reason, and you may need to use diskpart to fix the disk. Basically, weird problems and fixes that show Storage Spaces is not streamlined.
- You need to use PowerShell commands to properly administer Storage Spaces.
- The performance of my system is questionable: I cannot get the speeds I would expect to see, and I am not sure where the bottleneck is.
- There are pretty good forum discussions that helped me get familiar with Storage Spaces.
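As an illustration of the PowerShell administration mentioned above (the pool and disk names here are hypothetical), this sketch shows how a pool is typically inspected, how a pool that came back read-only can be made writable again, and how a disk still claimed by an old pool can be reclaimed:

```powershell
# Inspect pools and virtual disks (read-only queries).
Get-StoragePool | Format-Table FriendlyName, HealthStatus, IsReadOnly
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, HealthStatus

# A pool that is stuck read-only (e.g. after a crash) can be cleared:
Get-StoragePool -FriendlyName 'MyPool' | Set-StoragePool -IsReadOnly $false

# WARNING: destroys all data on the disk. Reclaim a disk that is still
# marked as belonging to a previous (now absent) storage pool:
Reset-PhysicalDisk -FriendlyName 'PhysicalDisk3'
```

The GUI exposes only a subset of these operations, which is why pool repair and disk reclamation usually end up being done from PowerShell.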