Looking to move off UnRAID - Alternative Bulk Storage OS?


IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I'm looking to migrate my bulk storage arrays (mainly media) from UnRAID to an alternative OS. I won't get into the reasons in this post as I'd like to focus to remain on my objectives/needs for a new storage OS:

  • Non-striped Data Array: My drives are all 8TB SMR models, which don't play nice in a typical striped array. I also like the advantages of a non-striped array: only active disks are spun up, and losing drives beyond parity doesn't take out the entire array.
  • Supports SSD caching: I have a 10Gb network, and given the write limitations of SMR drives I need all writes to land on an SSD cache drive/pool before the data is moved to the protected array. I'd like this to be a built-in feature of the OS, not something that requires a lot of grunt work and scripting on my part.
  • Supports both NFS/SMB shares. My media servers are Linux/dockers but I also have a home Windows domain where I need to map drives to the array.
  • Can be run in a VM under VMware. This is the ideal case, because all my servers are currently part of a 4-node VMware vSAN cluster. However, I'm considering moving to a shared storage solution for my VMs instead of vSAN, so this is not a hard requirement.
I'd love to hear some suggestions/recommendations from those who either have a similar home storage setup or at least have some expertise.
 

Peanuthead

Active Member
Jun 12, 2015
839
177
43
44
Sounds like ZFS is right up your alley, or possibly Storage Spaces Direct. I'm not sure whether Storage Spaces Direct supports Unix clients, so that part is a guess.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
Sounds like ZFS is right up your alley, or possibly Storage Spaces Direct. I'm not sure whether Storage Spaces Direct supports Unix clients, so that part is a guess.
I'm no ZFS pro, but from what I've read it doesn't seem to fit very well with my #1 priority (a non-striped data array where inactive disks are spun down).
 

Peanuthead

Active Member
Jun 12, 2015
839
177
43
44
I apologize, I totally misread that part. Subscribing as I'm interested in what you find out.
 

fractal

Active Member
Jun 7, 2016
309
69
28
33
ZFS works if you are willing to go with the ZFS equivalent of raid10, aka striped mirrors. You lose half the capacity but can expand the array two drives at a time. Each expansion pair needs to be identically sized drives, but not necessarily the same size as the previous pairs.
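For illustration, the layout and two-at-a-time expansion look roughly like this (pool and device names are placeholders for your own):

```shell
# Create a pool of striped mirrors (ZFS's raid10 equivalent):
# two mirror vdevs striped together.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Expand later by adding another mirror vdev, two drives at a time.
# The new pair must match each other in size, not the existing pairs.
zpool add tank mirror /dev/sde /dev/sdf
```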
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
ZFS works if you are willing to go with the ZFS equivalent of raid10, aka striped mirrors. You lose half the capacity but can expand the array two drives at a time. Each expansion pair needs to be identically sized drives, but not necessarily the same size as the previous pairs.
I don't mean to be rude, but please read the OP (notably my first bullet point). ZFS will not work given my requirements and current drives.
 

Geran

Active Member
Oct 25, 2016
332
91
28
39
I'd recommend StableBit DrivePool and SnapRAID together. You can spin down the disks when they aren't needed, and you can also set up DrivePool with an SSD cache and have it sync to the array during off hours.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I'd recommend StableBit DrivePool and SnapRAID together. You can spin down the disks when they aren't needed, and you can also set up DrivePool with an SSD cache and have it sync to the array during off hours.
StableBit DrivePool is only supported on Windows, correct? Is there NFS support?
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
I think that SnapRAID + MergerFS is going to be your best solution based on your criteria. As an FYI, trapexit has been investigating caching via a few different methods. You can read more about it here.
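For anyone wanting a concrete picture, a minimal sketch of that combo might look like this; all mount points and disk counts here are placeholders, not a recommendation:

```shell
# Example snapraid.conf: one parity disk protecting two data disks.
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
EOF

# Pool the data disks into a single mount with mergerfs.
# category.create=epmfs writes to an existing path on the branch
# with the most free space, keeping related files together.
mergerfs -o defaults,allow_other,category.create=epmfs \
    /mnt/disk1:/mnt/disk2 /mnt/storage
```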
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
I think that SnapRAID + MergerFS is going to be your best solution based on your criteria. As an FYI, trapexit has been investigating caching via a few different methods. You can read more about it here.
I've been keeping an eye on it @rubylaser, but since it's not even fully developed yet I'm wary of jumping in at this point. I will certainly consider it down the line though.

Yes, it is Windows-only, and yes, there is NFS support in DrivePool
Nice, the NFS support is key for me. With regard to SSD caching in DrivePool, can you create cache pools, i.e. pool multiple SSDs together in order to saturate 10Gb links?
 

Geran

Active Member
Oct 25, 2016
332
91
28
39
Nice, the NFS support is key for me. With regard to SSD caching in DrivePool, can you create cache pools, i.e. pool multiple SSDs together in order to saturate 10Gb links?
Yes, here is the link for the plugin in DrivePool: StableBit - The home of StableBit CloudDrive, StableBit DrivePool and the StableBit Scanner

Here is a summary of it:
  • With this plug-in you designate one or more disks as SSD disks.
  • SSD disks will receive all new files created on the pool.
  • It will be the balancer's job to move all the files from the SSD disks to the Archive disks in the background.
  • You can use this plug-in to create a kind of write cache in order to improve write performance on the pool.
  • Optionally, you can also set up a file placement order for your archive disks, so that they will be filled up one at a time.
What it is doing is creating a write cache: you write to the cache, and a background job moves the files to the archive disks based on the rules you set (which can be size-driven or time-based).
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Nice, the NFS support is key for me. With regard to SSD caching in DrivePool, can you create cache pools, i.e. pool multiple SSDs together in order to saturate 10Gb links?
Yes, you can use multiple disks with the SSD Optimizer in DrivePool. I think one issue you may have with the SSD caching in DrivePool is that you can't use it with Ordered File Placement, so you end up with files sprinkled across whichever disk(s) have the most space, rather than grouped together on the underlying disks. That said, the SSD Optimizer does have a placement option that fills each disk one at a time (this may be good enough for what you want).

http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt

My other concern, which the documentation doesn't mention, is whether the transfer from the cache pool to the archive pool is copy-on-write. It would be nice to know that your files made it correctly from the cache to the archive disks.
 

Geran

Active Member
Oct 25, 2016
332
91
28
39
The SSD Optimizer has built-in placement which works similarly to Ordered File Placement; that's why it says not to use the Ordered File Placement plugin at the same time.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
The SSD Optimizer has built-in placement which works similarly to Ordered File Placement; that's why it says not to use the Ordered File Placement plugin at the same time.
You are correct. @IamSpartacus has used MergerFS though, which has much more fine-grained file placement policies than just writing to the disk with the most free space or filling each disk in order. With my large storage pool, I want to ensure that files are all grouped together to minimize the chance of disks spinning up. The SSD Optimizer's placement options may be good enough; I just wanted to make sure he knew the difference. Here are all of the policies that mergerfs supports, as a comparison.

GitHub - trapexit/mergerfs: a FUSE based union filesystem
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
You are correct. @IamSpartacus has used MergerFS though, which has much more fine-grained file placement policies than just writing to the disk with the most free space or filling each disk in order. With my large storage pool, I want to ensure that files are all grouped together to minimize the chance of disks spinning up. The SSD Optimizer's placement options may be good enough; I just wanted to make sure he knew the difference. Here are all of the policies that mergerfs supports, as a comparison.

GitHub - trapexit/mergerfs: a FUSE based union filesystem
Thanks for the comparison @rubylaser. However, since my current implementation of MergerFS is just pooling two sets of NFS mounts, I'm not taking advantage of any of the file placement policies. My mounts are pooled using the ff (first found) policy, so all writes go to my UnRAID01 cache pool and are then moved to my protected array from there.

So in reality, it is UnRAID that handles my file placement, via split level. The split level is what determines which files are kept together.
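For comparison, pooling two NFS-mounted servers with the ff policy looks something like this (the mount points are placeholders for the NFS mounts):

```shell
# ff (first found) sends every create to the first branch that accepts it,
# so with the cache-backed server listed first, all new writes land there.
mergerfs -o defaults,allow_other,category.create=ff \
    /mnt/unraid01:/mnt/unraid02 /mnt/pool
```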
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I cringe every time I read about NFS support alongside an M$ config; it's almost always asking for performance woes.
I was about to propose an OMV build with SnapRAID and MergerFS, but the Windows support requirement stopped me.
I also want to add several key observations that might have been overlooked here.

I haven't found anything yet that works even remotely like unRAID. I have my own reasons not to run an unRAID setup (I ran a small unRAID setup for 3+ years).
I have been searching for the last 3 years for a better replacement for unRAID and have not found a 100% drop-in setup.
BUT

if your needs are file serving only, and you can work with Linux well enough, an OpenMediaVault + SnapRAID + MergerFS setup with a few scheduled scripts will do everything you need.
Yes, you can use ZFS with it as well, and these days you do not have to use ZFS as RAID if you don't want to.
I understand that ZFS has built-in support for SSD caching; I'm not sure how to replicate that with other systems.
The other solution is to use the OMV setup as described with BTRFS on the disks (BTW, unRAID starting with version 6.5!? also supports BTRFS on data disks, so you are not truly going a completely novel route here).
One caution though: only use the raid 1 or 10 option, as raid 5/6 is still unstable, even today. That is, if you want a raid-like setup. If you do use the raid function of BTRFS, you do not need SnapRAID and MergerFS, and you gain real-time data protection; you lose individual-disk flexibility though.
Using SnapRAID + MergerFS replicates the unRAID setup, but unlike unRAID it is not real-time protection.
Data is vulnerable until the next SnapRAID scan/sync is run, which can be set to any interval depending on how busy or powerful your system is.
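To make that concrete, the scheduled protection is usually just a cron entry or two; the times and scrub percentage below are arbitrary examples:

```shell
# /etc/cron.d/snapraid -- example schedule
0 3 * * * root snapraid sync          # nightly: bring parity up to date
0 5 * * 0 root snapraid scrub -p 10   # weekly: verify 10% of the array
```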
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I would have said SnapRAID + something to tie it together so it looks like one big volume; maybe btrfs can work with the SSD cache, but I have no real experience there.

OK, so for media, how about just using md5deep comparisons and backups? Sure, you have to restore something if you have an issue, but at least you know when you need to do something.
 

vl1969

Active Member
Feb 5, 2014
634
76
28
I would have said SnapRAID + something to tie it together so it looks like one big volume; maybe btrfs can work with the SSD cache, but I have no real experience there.

OK, so for media, how about just using md5deep comparisons and backups? Sure, you have to restore something if you have an issue, but at least you know when you need to do something.
Well, SnapRAID + MergerFS was suggested, and most of my research points to this duo.
It seems that MergerFS is the most stable of the FUSE-based union filesystems at the moment, and it works nicely with SnapRAID.
The only negative for this setup is that SnapRAID is not real-time protection: you need to run a sync/scrub task on a schedule, and if you want to run it often, your hardware needs to be powerful and underutilized enough to have capacity to spare.