WANg, my preference is to keep this respectful and civil. Not sure why you are so bothered by any opinion different from yours.
If you want to prove to me that having a single point of failure (i.e. a single volume subject to corruption vs. two volumes) is good, that is unlikely to happen. If you want to believe that, sure - you are entitled to your opinion and I respect it. You are choosing different tools to address the same need (data resiliency), great! That does not mean they are the only tools, or that they only work in the combination you use (snapshots work just fine on various RAID levels, or without RAID at all, etc.). The attitude that 'X is required' (be it RAID 1 for a 2-bay NAS or RAID 5 for a 3+ bay NAS) and that you can only run X in way Y is arrogant and, in my opinion, misguided.
I also find it funny that you are trying to somehow 'prove' to me that having a single volume is bad because incoming future data might potentially be lost. Compare that, please, with the GUARANTEED loss of ALL of the data if you deleted it without a backup from a RAID 1 volume ("but I had it redundant, per this guy on the forum"), or with losing access to ALL of the data on a single volume if that is all you had (forget the redundancy underneath).
You may see the difference between "assured loss of access to all of the data" vs. "potentially impacting future incoming data in flight", or you may not. Who cares about the second if you already suffered the first? Get real. Same for the disks from the same production batches: get real. That would be the last of your concerns if you suffered real data loss (vs. scrambling to find some place, somewhere, anywhere, this data exists so you can restore it).
2 is always greater than 1. Math does not care. It just works... peace.
First of all, since when am I "not civil or respectful" here? Do you understand the American practice of civil debate? People on opposing sides present facts and evidence and argue their points... which is normal. Just because someone disagrees with you doesn't make it uncivil or disrespectful. In fact, the tone of your language makes you sound uncivil and disrespectful (what the heck is a "per this guy on the forum" excuse?).
Second, my argument isn't about how bad a single volume or single drive is - what I am merely saying is that the basic premise of your method (two separate volumes, synchronized) does not guard against physical drive loss, data corruption, or deletion either.
Let's run a theoretical scenario. I have a pair of SD cards formatted to FAT32, one in slot A, one in slot B. I have a process that can write to A or B (not both at the same time). There is also another process that detects changes to A and updates B, and detects changes to B and updates A.
a) The card in slot A failing does not mean the card in slot B cannot also fail (pretty obvious)
b) Corruption on either card can result in several things:
- New data being written can fall on the good card, in which case the data captured would be good.
- New data being written can fall on the bad card, in which case the data captured would be bad.
- During the update stage, the good card tries to update the bad card and fails (bad volume), in which case the whole thing turns into something like a degraded 2-disk RAID 1 array
- During the update stage, the bad card tries to update the good card. Either the update is never triggered (the write to the bad card never succeeded in the first place), or the bad card flags its data as good and tries to write back nonsense, which may or may not be accepted by the good card. At best the original data on the good card stays put; at worst it is overwritten with gibberish.
c) If I overwrite the files on one card, I overwrite the files on the other, and if I do it in a way where the inodes are overwritten, the data cannot be recovered on the second card either.
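To make case (b) concrete, here is a minimal sketch of such a two-way watcher in Python. The names (`file_hash`, `sync_pass`) and the hash-based change detection are my own illustration, not a claim about how any real sync tool works; the point is only that the watcher mirrors whatever changed, with no way to tell a legitimate write from corruption:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def file_hash(p: Path) -> str:
    # Content hash used to decide whether a side changed since the last pass.
    return hashlib.sha256(p.read_bytes()).hexdigest()

def sync_pass(a: Path, b: Path, last_seen: dict) -> None:
    # One pass of the watcher: whichever side changed since the previous
    # pass overwrites the other. It has no idea whether the change is a
    # legitimate write or corruption -- it mirrors either one faithfully.
    for src, dst in ((a, b), (b, a)):
        h = file_hash(src)
        if h != last_seen.get(src):
            shutil.copyfile(src, dst)
            last_seen[src] = last_seen[dst] = h
            return

tmp = Path(tempfile.mkdtemp())
card_a, card_b = tmp / "cardA.bin", tmp / "cardB.bin"
card_a.write_bytes(b"good data")
card_b.write_bytes(b"good data")
seen = {card_a: file_hash(card_a), card_b: file_hash(card_b)}

# Corruption lands on card A; the next watcher pass dutifully copies it to B.
card_a.write_bytes(b"\x00garbage\x00")
sync_pass(card_a, card_b, seen)
print(card_b.read_bytes())  # b'\x00garbage\x00' -- both copies are now bad
```

Which is exactly the "bad card updates the good card" branch above: the redundancy layer propagated the damage instead of containing it.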
Drive resiliency is just a matter of buying good drives (pay more for more resilient drives) and, if you are being pedantic, buying them from different batches so they won't share manufacturing defects. Guarding against volume corruption is the job of the filesystem sitting on top of the bit bucket, and that has nothing to do with whether it is a standalone volume on a single disk or RAID. For overwrite/deletion protection you still need a filesystem feature like snapshots or shadow copies, which can be used on standalone filesystems or in RAID setups alike (like zpools with snapshots). In both cases there is no deletion protection unless one is explicitly put in.
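A toy illustration of why the snapshot layer, not the redundancy layer, is what protects against deletion. This sketch uses full directory copies where ZFS or Btrfs would use copy-on-write metadata, and `SnapshotDir` is an invented helper for the example, not a real API - but the protection model is the same: deleting from the live tree does not touch earlier snapshots:

```python
import shutil
import tempfile
from pathlib import Path

class SnapshotDir:
    # Naive snapshot store: each snapshot is a full copy of the live
    # directory. Real filesystems do this far more cheaply, but the
    # guarantee is identical: snapshots are immune to live-tree deletes.
    def __init__(self, live: Path, store: Path):
        self.live, self.store = live, store

    def snapshot(self, name: str) -> None:
        shutil.copytree(self.live, self.store / name)

    def restore(self, name: str) -> None:
        shutil.rmtree(self.live)
        shutil.copytree(self.store / name, self.live)

root = Path(tempfile.mkdtemp())
live, snaps = root / "live", root / "snaps"
live.mkdir(); snaps.mkdir()
(live / "report.txt").write_text("important")

s = SnapshotDir(live, snaps)
s.snapshot("daily")

(live / "report.txt").unlink()            # oops: deleted on the live volume
s.restore("daily")                        # the snapshot brings it back
print((live / "report.txt").read_text())  # important
```

Note that nothing here depends on how many disks sit underneath - the same logic works over a single disk, RAID 1, or a zpool.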
At the very least, the setup you described functions similarly to RAID 1, except that instead of being bitwise redundant it is filewise redundant. In terms of efficiency, though, instead of just repeating a write on a different SATA port the way RAID 1 does, it has to run an alert process and a watcher process and perform the second write later.
If your argument is that RAID is somehow not protected against deletion - yeah, sure, deleting a RAID array will instantly put you up shit creek, but even then there are ways to recover if you act fast (Christophe Grenier's testdisk saved me at least once in my career - so no, deleting a RAID 1 array is not guaranteed data loss). Once again, the same line of reasoning also applies to single-volume setups.
And no, you don't need to use RAID on a multi-spindle setup - that's why Unraid also exists. However, it is just as presumptuous for you to assume that RAID is overrated and has no useful purpose. Like I've said before, RAID is there to increase media resilience and to change disk I/O characteristics (better throughput on writes, etc.). And in the case of a Synology DS2, it's just a cheap, easy, and well-understood way to add media resilience to a small, not-that-critical NAS.