Server 2019 tiered storage vs Unraid vs StableBit


gea

Well-Known Member
Dec 31, 2010
DE
Tiering is a mechanism where "hot" data is held on a faster part of the array while "cold" or less performance-sensitive data stays on the slower part. The main problem with tiering is that moving data between the tiers creates a lot of load and itself hurts performance while it runs. The plus is that data on the faster part gets higher performance on both read and write. Data caching, for example, helps only on reads.
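To make the caching contrast concrete, here is a minimal sketch with the OpenZFS CLI; the pool and device names (tank, nvme0n1) are placeholders, not from this thread:

  # Add an NVMe device as L2ARC read cache to an existing pool.
  # This accelerates reads only; writes still go to the normal vdevs.
  zpool add tank cache nvme0n1

  # Check that the cache device appears in the pool layout.
  zpool status tank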

The other aspect is realtime raid vs non-raid with an on-demand backup to a redundancy disk. Beside the realtime aspect, this has a performance impact: only with realtime raid does performance scale with the number of data disks; otherwise performance is always that of a single disk (the active one).

The main reason to use nextgen filesystems like btrfs, ReFS or the champion ZFS is Copy on Write. This gives crash protection (no corrupt filesystem after a crash during a write), snaps that hold the previous data state, and realtime checksums that detect any data corruption and repair it from redundancy on the fly. This is why they do not have or need a chkdsk/fsck command: there should never be a corrupted filesystem outside a real disaster scenario, where traditional chkdsk/fsck would also fail. This is why you need a backup even on the best filesystem. As protection against unwanted modifications, file deletes or ransomware you have read-only snaps (accessible via Windows > Previous Versions). When using raid, you are also not affected by the write-hole problem of traditional raid/filesystems (the "write hole" phenomenon in RAID5, RAID6, RAID1 and other arrays).
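As a small illustration of read-only snaps, a sketch with the OpenZFS CLI; the dataset name tank/data is a placeholder:

  # Take a read-only snapshot; with Copy on Write it costs no extra
  # space until data changes.
  zfs snapshot tank/data@before-update

  # List the snapshots that hold previous data states.
  zfs list -t snapshot

  # Roll the filesystem back after an unwanted delete or a ransomware hit.
  zfs rollback tank/data@before-update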

If you care about performance, raid + Windows Storage Spaces is an option, but it is neither the fastest nor the easiest to use. Faster and easier to handle is ZFS, where an alternative to tiering is available that is in many cases faster: the special vdev. This is a faster part of the pool, e.g. an NVMe/SSD mirror. Data is forced onto this part of the array based on its structure: small io, metadata, the dedup table from ZFS realtime dedup, or single filesystems that you can pin to the special vdev. This is very new in ZFS (developed by Intel) and has been available for a year on Linux and the Solaris forks like OmniOS; on FreeBSD it is an upcoming feature. See some tests in https://www.napp-it.org/doc/downloads/special-vdev.pdf
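A minimal sketch of the commands involved, assuming OpenZFS 0.8 or newer with an existing pool tank and two NVMe devices (all names are placeholders):

  # Add a mirrored NVMe special vdev to the pool. Note: a special vdev
  # cannot be removed again from a pool that contains raidz vdevs.
  zpool add tank special mirror nvme0n1 nvme1n1

  # Blocks up to 64K of this filesystem are allocated on the special vdev
  # (metadata goes there by default).
  zfs set special_small_blocks=64K tank/vms

  # Setting the threshold equal to the recordsize forces all data of a
  # filesystem onto the special vdev.
  zfs set recordsize=32K tank/hot
  zfs set special_small_blocks=32K tank/hot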
 

Net-Runner

Member
Feb 25, 2016
I would stay away from ReFS; no recovery program can handle it. In addition, there are multiple BSOD issues with ReFS. I would avoid using it in production; it could create more issues than it solves. ZFS with properly configured caching can be a great option.
 

edge

Active Member
Apr 22, 2013
Net-Runner said:
In addition, there are multiple BSOD issues with ReFS. I would avoid using it in production; it could create more issues than it solves. ZFS with properly configured caching can be a great option.
Would you enumerate the BSOD issues, their dates, and the KB articles or lack thereof? Active links will suffice. I need to argue management away from ReFS, so well-documented issues are necessary.
 

Net-Runner

Member
Feb 25, 2016
edge said:
Would you enumerate the BSOD issues, their dates, and the KB articles or lack thereof? Active links will suffice. I need to argue management away from ReFS, so well-documented issues are necessary.
A couple of links:
The last time I faced a BSOD with ReFS was in April or March; I have not used it since then.