unRaid 6


Churchill

Admiral
Jan 6, 2016
838
213
43
Yeah, that's what I thought. For this reason I keep a cold spare ready to go in case of a disk failure.

I honestly didn't even notice the data was missing, as the array kept on trucking without a problem. I was surprised that even though I had lost 2 disks, all my data seemed to be present and available; the content I was missing was older and unused. Sadly, I was mistaken about how unRAID functioned, but now I know better.

A second parity disk may help in cases like this, where 2 disks are lost before parity can be rebuilt, but a rigorous backup strategy is a far better use of time.

Limetech supports BTSync in a Docker container, which could be used to replicate data from one unRAID server to another.
BitTorrent Sync
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
For a system like unRAID (single parity with a dedicated parity drive), you can only lose one disk. At that point, you don't need another drive to fail for there to be data loss: if you get a single read error (bad sector or otherwise) during the rebuild/repair, you have lost data. I'm not sure how unRAID would handle that error, but it's entirely possible that the entire rebuild job will fail at that point; if the one bad sector was near the beginning of the drive, you might have lost most of the array. And especially when dealing with very large arrays that need to read tens or potentially hundreds of TBs to complete the rebuild, your chances of getting at least one read error somewhere in there are worryingly high.

The above also applies in the case of a corrupted sector (aka bitrot), except that you won't know there was an error. With one drive already failed, unRAID may be able to detect that there was corruption, but it won't be able to repair it.

I use dual parity at home, and it has saved me from data loss more than once when encountering an unrecoverable read error during a rebuild after a single-disk failure. Instead of thinking about the amount of raw capacity it will cost (e.g. 8TB), think of it as a percentage: if you're running a 20-drive array, then dual parity is only 10% of it, whether that be 2x 8TB drives, 2x 4TB drives (my case), or whatever. Personally I wouldn't run more than a 7-drive array with single parity (6 data + 1 parity), and after 16-20ish disks in a single RAID set I'd start thinking about triple parity.
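Those "worryingly high" odds are easy to sanity-check with a back-of-envelope calculation. A sketch, assuming the common 1-in-10^14-bits URE spec quoted for consumer drives (real-world rates are often considerably better, so treat this as a pessimistic bound):

```python
# Rough estimate of hitting at least one unrecoverable read error (URE)
# while reading an entire array during a rebuild. Assumes the drive
# spec's worst-case URE rate; actual drives usually beat their spec.

def rebuild_failure_probability(tb_to_read, ure_rate=1e14):
    """Probability of >=1 URE when reading tb_to_read terabytes,
    given a URE rate of one error per ure_rate bits."""
    bits = tb_to_read * 1e12 * 8          # terabytes -> bits
    p_ok_per_bit = 1 - 1 / ure_rate       # chance a single bit reads cleanly
    return 1 - p_ok_per_bit ** bits       # chance at least one bit fails

# Reading 40 TB during a rebuild with a 1-in-10^14 URE spec:
print(f"{rebuild_failure_probability(40):.0%}")   # roughly 96%
```

Even a much smaller rebuild carries real risk under this spec, which is why dual parity (tolerating a URE after one disk has already failed) matters so much at scale.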
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
And especially when dealing with very large arrays that need to read tens or potentially hundreds of TBs to complete the rebuild, your chances of getting at least one read error somewhere in there are worryingly high.
The key point is that the product is called "unRAID". It is not RAID.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
The key point is that the product is called "unRAID". It is not RAID.
It is not traditional RAID; neither is ZFS, btrfs, Windows Storage Spaces, SnapRAID, or any of the multitude of other alternative storage systems out there. But unRAID definitely is a Redundant Array of Independent Disks, and all of the same principles apply. Until they start supporting dual parity, unRAID is very similar to RAID-4 (single parity with a dedicated parity disk), except that it works on top of normal filesystems instead of at the block layer. But all of the standard RAID-4 concepts still apply (for the most part, RAID-5 concepts, except with a dedicated parity disk instead of distributed parity).

The only real advantage unRAID has over traditional RAID when it comes to recovery is that, in the event of an array failure, any still-working data drives can simply be accessed as stand-alone disks, and the files on each particular disk will still be intact. So there is very low risk of losing all data on the entire array, but the chances of losing some data are exactly the same as with RAID.
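The dedicated-parity scheme described above boils down to a simple XOR across the data disks. A toy sketch (byte-sized "disks", purely illustrative) of how any single missing disk can be rebuilt from the survivors plus parity:

```python
# Minimal sketch of RAID-4-style single parity: a dedicated parity
# "disk" holds the XOR of all data disks, so any one missing disk
# can be reconstructed from the remaining disks plus parity.

from functools import reduce

def xor_blocks(blocks):
    # XOR corresponding bytes across all blocks (all equal length)
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_disks = [b"AAAA", b"BBBB", b"CCCC"]   # toy 4-byte "disks"
parity = xor_blocks(data_disks)            # the dedicated parity disk

# Disk 1 fails; rebuild it from the surviving disks plus parity.
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == data_disks[1]
```

This is also why a URE during the rebuild is fatal to that stripe: with one disk already gone, every remaining byte in the stripe is needed to recompute the missing one.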
 

bash

Active Member
Dec 14, 2015
131
61
28
42
Scottsdale
Dual parity is coming. You can see it in the Linus videos, as he was given early access to the newest version.

I only use unRAID for home media and automation; anything that is worthwhile is backed up via CrashPlan.
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
RAID is not backup.
If you have a complete backup, just restore the one failed disk. The other data disks are not at risk.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
The only real advantage unRAID has over traditional RAID when it comes to recovery is that, in the event of an array failure, any still-working data drives can simply be accessed as stand-alone disks, and the files on each particular disk will still be intact. So there is very low risk of losing all data on the entire array, but the chances of losing some data are exactly the same as with RAID.
That difference is a big one... at least for me with regard to my home server. Is the data (TBs of media) mission critical? No, so some data loss is acceptable. But the ability to recover most of the data is pretty key, considering how much time and energy I've put into acquiring and organizing said media.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
That difference is a big one... at least for me with regard to my home server. Is the data (TBs of media) mission critical? No, so some data loss is acceptable. But the ability to recover most of the data is pretty key, considering how much time and energy I've put into acquiring and organizing said media.
Yup - it is an important difference. It is also what allows keeping all of the drives spun down except the one holding whichever file you are currently streaming to a TV somewhere (assuming that most of these setups are home media servers, and that most of the time they aren't simultaneously streaming a lot of different files). Most of the drives spun down most of the time means a lot less power usage than traditional RAID, which will spin up the entire array to read a single file. From a performance perspective, though, reading a single file from unRAID can only take advantage of a single disk, where a traditional RAID could get significantly more performance by utilizing all of the drives.

These are the reasons that eventually led me to running SnapRAID on my box at home. I can run as many parity drives as I want within reason (max 6, but that is a LOT of parity); if I do have multiple drive failures I can still access the data on the individual drives; I can keep most of my array spun down most of the time; and performance is often actually better with multiple users compared to traditional RAID. I've been very happy with my SnapRAID setup since I switched to it around a year ago, and I plan to stick with it at least until btrfs RAID-6 gets a few more features that are important to me and becomes a lot more stable. But the chances of ever finding an unRAID install at my house are slim to none; I don't trust the guy behind it and am reasonably certain there are some GPL violations going on in there somewhere.
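For anyone curious what a setup like that looks like, SnapRAID is driven by a single config file. A minimal sketch of a dual-parity layout (all paths and disk names here are illustrative, not from any actual box in this thread):

```
# /etc/snapraid.conf -- illustrative dual-parity layout
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity

# content files track the state of the array; keep copies on
# several disks so losing one doesn't lose the metadata
content  /var/snapraid/snapraid.content
content  /mnt/disk1/snapraid.content

data d1  /mnt/disk1/
data d2  /mnt/disk2/
data d3  /mnt/disk3/

exclude *.tmp
exclude /lost+found/
```

Adding a third parity level is just a `3-parity` line pointing at another drive; the data disks remain plain filesystems that can be read anywhere.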
 

canta

Well-Known Member
Nov 26, 2014
1,012
216
63
43
Y....

These are the reasons that eventually led me to running SnapRAID on my box at home. I can run as many parity drives as I want within reason (max 6, but that is a LOT of parity); if I do have multiple drive failures I can still access the data on the individual drives; I can keep most of my array spun down most of the time; and performance is often actually better with multiple users compared to traditional RAID. I've been very happy with my SnapRAID setup since I switched to it around a year ago, and I plan to stick with it at least until btrfs RAID-6 gets a few more features that are important to me and becomes a lot more stable. But the chances of ever finding an unRAID install at my house are slim to none; I don't trust the guy behind it and am reasonably certain there are some GPL violations going on in there somewhere.
Well said.
I am going to try SnapRAID, since my old drives are 1TB and 500GB, with dual or triple parity for sure :D
As I understand it, SnapRAID is very good for cold data storage.
Plus I will use mergerfs as a pooler.

I trust SnapRAID and compile it myself; knowing the source code is better :D and it's free...

I read ruby...'s website (blog), which convinced me to try SnapRAID, since my 9x 1TB and 10x 500GB HDs are unused.

Haha, I am waiting for btrfs RAID-5/6 to mature :D and would move to that too.
 

Continuum

Member
Jun 5, 2015
80
24
8
47
Virginia
Well said.
I am going to try SnapRAID, since my old drives are 1TB and 500GB, with dual or triple parity for sure :D
As I understand it, SnapRAID is very good for cold data storage.
Plus I will use mergerfs as a pooler.

I trust SnapRAID and compile it myself; knowing the source code is better :D and it's free...

I read ruby...'s website (blog), which convinced me to try SnapRAID, since my 9x 1TB and 10x 500GB HDs are unused.

Haha, I am waiting for btrfs RAID-5/6 to mature :D and would move to that too.
+1

For all the reasons stated by @canta and @TuxDude, I too am using SnapRAID, with 2 parity drives and mergerfs for union mounting, for my AIO file/home media server.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
+1

For all the reasons stated by @canta and @TuxDude, I too am using SnapRAID, with 2 parity drives and mergerfs for union mounting, for my AIO file/home media server.
Do you or any other posters know if those solutions would support/perform well with SMR drives? My bulk media server consists of only Seagate 8TB SMR drives (160TB in total between main and backup servers), which have been tested and work great in unRAID. However, I'm well aware of their limitations, and thus moving to another storage solution is out of the question if they don't perform well or aren't supported.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
The SMR drives should do as well with SnapRAID as they do with unRAID. The SnapRAID 'sync' jobs may take longer to run on an SMR drive, but that's why you schedule them to run at night. So long as it's finished before the next morning, does it really matter how long it takes?
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
The SMR drives should do as well with SnapRAID as they do with unRAID. The SnapRAID 'sync' jobs may take longer to run on an SMR drive, but that's why you schedule them to run at night. So long as it's finished before the next morning, does it really matter how long it takes?
Interesting. I'll have to do some digging to see if anyone else is using these drives with SnapRAID, because with all the other projects I have going on I just don't have the time to be the guinea pig haha.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
The way I see it, SnapRAID doesn't do writes to your data drives, so it doesn't matter if they are SMR drives. You put your files on the drives either by manually distributing them around or by using some kind of pooling software (which still shouldn't be impacted by SMR, as it mostly just picks a drive), so for the data drives it's how you are putting the data there that may have issues with SMR; SnapRAID itself only reads from them, except in the case of rebuilding after a failure.

The potential issue is the parity drive(s): SnapRAID will update the parity file on those drives on every sync. As I said above, this is a batch job and not performance sensitive, so it's probably fine on the SMR drives. If it does become a problem, the easy solution is to add 2 more (assuming dual parity) 8TB non-SMR drives to the array to use for parity, and use all the SMR drives only for data. It's the more expensive option, but it does come with 16TB of additional capacity too.
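For reference, the nightly batch job is typically just a cron entry. A sketch (the timing, user, and the scrub percentage are illustrative choices, not requirements):

```
# /etc/cron.d/snapraid -- sync parity at 3:00 AM, then scrub a
# small slice of the array so silent corruption gets caught over time
0 3 * * * root /usr/bin/snapraid sync && /usr/bin/snapraid scrub -p 5
```

The scrub step is optional but pairs well with SMR parity drives: it spreads the verification reads across many nights instead of hammering the array all at once.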
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
The way I see it, SnapRAID doesn't do writes to your data drives, so it doesn't matter if they are SMR drives. You put your files on the drives either by manually distributing them around or by using some kind of pooling software (which still shouldn't be impacted by SMR, as it mostly just picks a drive), so for the data drives it's how you are putting the data there that may have issues with SMR; SnapRAID itself only reads from them, except in the case of rebuilding after a failure.

The potential issue is the parity drive(s): SnapRAID will update the parity file on those drives on every sync. As I said above, this is a batch job and not performance sensitive, so it's probably fine on the SMR drives. If it does become a problem, the easy solution is to add 2 more (assuming dual parity) 8TB non-SMR drives to the array to use for parity, and use all the SMR drives only for data. It's the more expensive option, but it does come with 16TB of additional capacity too.
Are there any AIO solutions that pool your drives, protect them with SnapRAID (or the like), and feature virtualization options (Docker and KVM)?

The reason I ask is that, as much as I'd be interested in greater data protection, my unRAID server is primarily (95%) bulk media storage, and for that I prefer a set-it-and-forget-it solution over ultimate data protection. Something I can configure rarely and check on often through a WebGUI is all I'm looking for. I'm not interested in moving to a solution that requires a lot of command line work to get up and running. And since moving to Docker for all my media serving/retrieving apps, I can't imagine going back. In fact, I'm in the process of building an all-SSD storage server for fast centralized access to my Docker appdata files amongst multiple systems that will run Docker.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
The only solution I'm aware of that bundles SnapRAID and a web UI is OpenMediaVault; there may be others out there, but I haven't looked, as I'm not interested in any of those features. And for the same reason, I have no idea if OMV supports Docker or KVM.
 

RyC

Active Member
Oct 17, 2013
359
88
28
SnapRAID and other file-based parity protection schemes probably aren't the best for storing VMs. Since the data changes so frequently, the parity is effectively out of date immediately after syncing. It's why SnapRAID doesn't recommend adding your C: or OS drive to SnapRAID.

Depending on how static your Docker containers are, it might still work, but I wouldn't rely on SnapRAID to protect traditional VM storage.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
SnapRAID and other file-based parity protection schemes probably aren't the best for storing VMs. Since the data changes so much, you're effectively invalidating the parity immediately after syncing. It's why SnapRAID doesn't recommend adding your C: or OS drive to SnapRAID.

Depending on how static your Docker containers are, it might still work, but I wouldn't rely on SnapRAID to protect traditional VM storage.
I currently store my VMs, Docker containers, and appdata on a BTRFS cache pool, and will be moving them to a centralized all-SSD datastore, so I wouldn't be looking to use SnapRAID for those files. If I were to use SnapRAID at all, it would be to protect my bulk media files, not configs or vdisks.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I currently store my VMs, Docker containers, and appdata on a BTRFS cache pool, and will be moving them to a centralized all-SSD datastore, so I wouldn't be looking to use SnapRAID for those files. If I were to use SnapRAID at all, it would be to protect my bulk media files, not configs or vdisks.
FYI - BTRFS is not great for storing VM disk images (or other large, often-updated files like databases) either. Its performance with that kind of workload sucks; the fact that you have it on SSDs is why you haven't run into problems already. Docker, on the other hand, works great on BTRFS, at least so long as you use its BTRFS back-end.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
FYI - BTRFS is not great for storing VM disk images (or other large, often-updated files like databases) either. Its performance with that kind of workload sucks; the fact that you have it on SSDs is why you haven't run into problems already. Docker, on the other hand, works great on BTRFS, at least so long as you use its BTRFS back-end.
I was under the impression that if you disable copy-on-write (which I have for my vdisks share stored on the cache pool), any performance issues are mitigated for the most part. I'm not worried about the data integrity of my VMs, as they are not mission critical.
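For reference, disabling copy-on-write on btrfs is done per-directory with chattr. A sketch with illustrative paths (note that the +C flag only takes effect on files created after it is set, and that NOCOW files also lose btrfs checksumming):

```
# Mark the vdisk directory NOCOW *before* creating images in it;
# existing files keep their old CoW behavior.
chattr +C /mnt/cache/vdisks
lsattr -d /mnt/cache/vdisks      # the 'C' flag should now be listed

# New images created here will skip CoW (and btrfs checksums)
qemu-img create -f raw /mnt/cache/vdisks/vm1.img 40G
```

That trade-off matches the comment above: NOCOW avoids the fragmentation that hurts VM images on btrfs, at the cost of the integrity checking you said you can live without.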