What was it that made you switch?

These are the reasons that I personally switched. ZFS is a great solution and it may fit your needs better, but this is what motivated me to move on. Incoming wall of text, so skip it if you are just not interested in my personal reasoning.
It is much easier to add drives to an Unraid array. If I wanted to remain fault tolerant on my ZFS setup, I had to create an entirely new vdev and then add it to the zpool. That means adding at least two drives in a mirror, but it usually meant adding three drives in my case because I was using RAID-Z1. With Unraid, I just throw another drive in, preclear it, and add it to the array whenever I want. It is a bit more complicated if the new drive is bigger than the parity drive(s), but still easy to do.
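For anyone who hasn't done this, the difference in workflow looks roughly like the sketch below. The pool name "tank" and the device names are placeholders, not my actual setup.

    # ZFS: growing the pool means adding a whole new vdev
    zpool add tank mirror /dev/sdb /dev/sdc            # two more drives as a mirror
    zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf   # or three more as another RAID-Z1

    # Unraid: preclear the new disk, stop the array, assign the disk
    # to a free slot in the web GUI, and start the array again.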
My hardware build for Unraid was much cheaper than my build for ZFS. My ZFS build had a server motherboard, ECC memory, the "recommended" 1GB of RAM for every 1TB of storage, and a pricey dedicated ZIL drive, since that device takes a beating from writes. I ended up going somewhat over the top with my Unraid build, but that kind of hardware isn't necessary the way people claim it is with ZFS.
Unforeseen circumstances led to much more churn on my vdevs than I would have liked. I started my ZFS build with three RAID-Z1 vdevs of three Seagate 3TB drives each. Those of you who are familiar with the Seagate 3TB debacle will understand that I was constantly rebuilding those RAID-Zs with replacement or new drives.
I even lost a couple of these RAID-Zs over time when a second drive failed while the array was rebuilding. When that happens with ZFS, recovery of the remaining data becomes complicated at best. If you were to lose the two parity drives plus another drive in Unraid, all of the data remaining on the other drives is still easily accessible. I eventually started doing RAID-Z2 with five drives, which helps reliability but makes expanding the pool that much more expensive.
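To give a sense of what those rebuilds looked like, replacing a failed disk goes roughly like this (again, pool and device names are made up):

    zpool status -x tank                   # find the faulted disk
    zpool replace tank /dev/sde /dev/sdj   # swap in the replacement drive
    zpool status tank                      # watch the resilver; with RAID-Z1, a second
                                           # failure during this window takes out the vdev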
Unraid has evolved significantly since I first started using it. It is a fairly good turnkey "homelab" at this point. It has solid community support for Docker containers and virtual machines. I am running a dozen or so containers, a few VMs, and a dedicated HTPC VM with GPU passthrough for my theater. This is of course all possible with other options like FreeNAS, OpenSolaris, OpenMediaVault, or any Linux distro with ZFS on Linux. This will make me sound like a filthy casual, but the ease of use of these features on Unraid is paramount. I deal with enough complication at work, and sometimes it is nice to just have things that work without a significant amount of tinkering.
Last but not least, I find that the Unraid community is far more approachable. The ZFS community can be quite toxic at times. I have never personally been the victim of it, but I have read hundreds of threads where ZFS zealots have made me want to do anything but use ZFS.
I am not sure I follow what you are getting at with the parity checks. With my ZFS setup, I was doing weekly scrubs of each vdev, and on Unraid I am doing a monthly parity check for the entire array. You can customize how often these run with both products.
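For what it's worth, a weekly scrub on ZFS on Linux can be driven by a cron entry like the one below (pool name is a placeholder; some distros ship a similar cron or systemd job out of the box), while Unraid's parity check schedule is just a setting in the web GUI:

    # root crontab: scrub the pool every Sunday at 3:00 AM
    0 3 * * 0  /sbin/zpool scrub tank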
Same here, I'm on ZFS now. I was on Unraid for years, but it got messy; it seemed like it was ALWAYS doing parity checks, and I had a 20-disk array.
I swapped to Server 2012 Storage Spaces and lost the entire array to a silent double disk failure: no indication of failing drives, then one day the array wouldn't start and reported that it was two drives short (this happened after a major Server 2012 update).
Now I'm on a poorly constructed ZFS on Linux array (eight 3 TB drives; I should have gone with two arrays of four, I think). I really like the ability to see what is happening with ZFS. I like that I can get disk and data information without looking in a lot of different places.
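A handful of commands cover most of the "what is going on" questions for me (pool name is a placeholder):

    zpool status -v tank                       # per-disk health, errors, scrub/resilver progress
    zpool list tank                            # pool size, allocation, fragmentation
    zfs list -o name,used,avail,mountpoint     # space usage per dataset
    zpool iostat -v tank 5                     # live per-disk I/O, refreshed every 5 seconds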