Looking to move off UnRAID - Alternative Bulk Storage OS?


IamSpartacus

Well-Known Member
Mar 14, 2016
Well, SnapRAID + MergerFS was suggested; most of my research points to this duo.
It seems that MergerFS is the most stable of the FUSE-based filesystems at the moment, and it works nicely with SnapRAID.
The only negative of this setup is that SnapRAID is not real-time protection. You need to run a sync/scrub task on a schedule, and if you want to run it often, your hardware needs to be powerful and underutilized enough to have capacity to spare for it.
I'm honestly fine with SnapRAID not being real-time protection since 95% of the files residing on these arrays are media. The only reason I haven't gone SnapRAID + MergerFS yet is the lack of a built-in caching mechanism. All my UnRAID shares are cache-enabled, so writes go directly to a RAID0 SSD cache pool and I can saturate my 10Gb link. I also have some shares (Downloads, for example) that reside only on cache. I don't want to lose this type of configuration.
 

vl1969

Active Member
Feb 5, 2014
I'm honestly fine with SnapRAID not being real-time protection since 95% of the files residing on these arrays are media. The only reason I haven't gone SnapRAID + MergerFS yet is the lack of a built-in caching mechanism. All my UnRAID shares are cache-enabled, so writes go directly to a RAID0 SSD cache pool. I also have some shares (Downloads, for example) that reside only on cache. I don't want to lose this type of configuration.
I don't think you will have to.
The only thing is, unlike UnRAID, which has these options built in and configurable via a more or less intuitive GUI, you will have to figure out how to get there on your own.

A possible example of a setup:
an OMV install with SnapRAID and MergerFS.
OMV has plugins you can install to set up both tools (a SnapRAID plugin, and a UnionFilesystems plugin that provides a UI for MergerFS pool setup).
OMV also has plugin and UI options for cron jobs, so you can set up a data-moving script.
FYI, I use OMV as an example because I have been playing with it for a while and am currently planning out a similar setup myself. I am also planning to run this as a VM under Proxmox VE.
OMV is Debian-based, if you care.

So at a quick glance, an OMV setup with the SnapRAID + MergerFS plugins:

1 - OS drive
2 - a data drive pool with SnapRAID and MergerFS
3 - a cache drive or drive pool using SSDs

By default OMV does not share the OS drive, so if you plan on using an SSD as the OS drive, either plan for a small SSD or partition the drive beforehand as needed; the primary OS partition(s) are not shareable or usable in any easy way from the OMV UI.
So: set up OMV, finish the configuration, and install all the plugins.
Set up the data pool with SnapRAID and MergerFS.
You can use any filesystem you want on the drives themselves. Unfortunately, if you used the default filesystem in UnRAID it is probably ReiserFS, which is not supported, so you will have to copy the data off those drives and reformat them.
Not sure what your preferences are; I use BTRFS for my data drives when I can.
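
Under the hood, the SnapRAID plugin just maintains /etc/snapraid.conf, so it is worth knowing roughly what it generates. A minimal sketch (the mount points and parity location below are placeholders, not OMV defaults):

Code:
# /etc/snapraid.conf -- minimal example, normally written by the plugin.
parity /srv/parity1/snapraid.parity

# Content files hold SnapRAID's metadata; keep copies on several disks.
content /var/snapraid.content
content /srv/disk1/snapraid.content
content /srv/disk2/snapraid.content

# The protected data disks (these same paths become mergerfs branches).
data d1 /srv/disk1
data d2 /srv/disk2

exclude *.tmp
exclude /lost+found/

Remember that snapraid sync only protects what is on the data disks at the moment it runs, which is the "not real-time" trade-off discussed above.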

Now set up your cache drive/pool.
Set up the cache share; this will be used by all the apps you want the cache for.
Set up the data share.

Build the scripts that recreate your current UnRAID functions as needed:
i.e. a script that monitors a folder or folders on the cache share for files and copies/moves them to your data pool (a sketch of one is below), and
a script that runs snapraid sync/scrub (this is actually built into the OMV SnapRAID plugin; you can just configure and activate it). FYI, SMART monitoring is also built into OMV by default and will watch your disks live.
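
For illustration, a rough sketch of such a mover script, assuming hypothetical mount points of /srv/cache/data for the cache share and /srv/pool/data for the MergerFS pool (the 30-minute age threshold is likewise an assumption; tune it to taste):

Code:
#!/bin/bash
# Hypothetical mover: cache share -> MergerFS pool.
CACHE=/srv/cache/data
POOL=/srv/pool/data

# Move files not modified in the last 30 minutes, recreating the
# relative directory structure on the pool as we go.
find "$CACHE" -type f -mmin +30 -print0 |
while IFS= read -r -d '' f; do
    rel="${f#$CACHE/}"
    mkdir -p "$POOL/$(dirname "$rel")"
    mv -n "$f" "$POOL/$rel"
done

# Clean up directories left empty on the cache.
find "$CACHE" -mindepth 1 -type d -empty -delete

Scheduled hourly from OMV's cron UI, this behaves much like UnRAID's mover.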

I guess you only need the script that monitors folders on the cache drive and copies/moves files appropriately.
You might actually be able to take the mover script from UnRAID and modify it to work for you.
The only true functionality you will lose is automatic cache-to-data folder linkage: if you create a new folder to use on the data pool, you will have to add it to the cache drive manually and add it to the monitoring script manually, unless you manage to write a script that looks at the folder names on the cache drive and searches for the same on the data drive. I am not really well versed in Linux scripting, so I cannot say if that is possible.
 

trapexit

New Member
Feb 18, 2016
New York, NY
The only reason I haven't gone SnapRAID + MergerFS yet is the lack of a built-in caching mechanism. All my UnRAID shares are cache-enabled, so writes go directly to a RAID0 SSD cache pool and I can saturate my 10Gb link. I also have some shares (Downloads, for example) that reside only on cache. I don't want to lose this type of configuration.
What kind of caching behavior are we talking about? To increase write speeds of small'ish bursts of writes? I'm trying to put together some solutions to solve the common situations in which people request caching. Often the usage pattern wouldn't actually be helped by caches (for instance putting a traditional cache in front of a network filesystem... you aren't watching the same video over and over normally so it wouldn't cache it for reads) but with writes it can help in some situations.

Two different solutions: 1) put an SSD cache (using dm-cache) in front of the spinning disks and mount those cached drives through mergerfs; or 2) place some SSDs in the mergerfs pool with the spinning disks, have mergerfs prioritize those drives via existing policies, and use an out-of-band application to shuffle files from the cache to the hard drives as it fills. This could also be used to 'cache' files whose ultimate destination is a network drive / cloud service, so consumption over the network is limited. Prioritizing which files to keep & copy isn't clear, but for the write cache (SSD -> slow drives) situation it's not bad.
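
A rough sketch of what option 2 can look like with existing mergerfs policies (the mount points and the free-space floor are assumptions):

Code:
# /etc/fstab -- pool with the SSD listed as the first branch.
# category.create=ff ("first found") sends new files to the first
# branch with at least minfreespace available: the SSD until it
# fills up, then the spinning disks.
/mnt/ssd:/mnt/disk1:/mnt/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other,minfreespace=20G,category.create=ff  0  0

An out-of-band mover then drains /mnt/ssd into the disk branches; since every branch sits under the same pool mount, files keep the same client-visible path when they migrate.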
 

IamSpartacus

Well-Known Member
Mar 14, 2016
What kind of caching behavior are we talking about? To increase write speeds of small'ish bursts of writes? I'm trying to put together some solutions to solve the common situations in which people request caching. Often the usage pattern wouldn't actually be helped by caches (for instance putting a traditional cache in front of a network filesystem... you aren't watching the same video over and over normally so it wouldn't cache it for reads) but with writes it can help in some situations.

Two different solutions: 1) put an SSD cache (using dm-cache) in front of the spinning disks and mount those cached drives through mergerfs; or 2) place some SSDs in the mergerfs pool with the spinning disks, have mergerfs prioritize those drives via existing policies, and use an out-of-band application to shuffle files from the cache to the hard drives as it fills. This could also be used to 'cache' files whose ultimate destination is a network drive / cloud service, so consumption over the network is limited. Prioritizing which files to keep & copy isn't clear, but for the write cache (SSD -> slow drives) situation it's not bad.
Ideally, the caching behavior would function similarly to how I have my UnRAID cache pool set up. Certain shares can be designated to write to the cache first and then be moved to the protected array at scheduled intervals. I would also like to be able to use cache-only shares, so that certain shares not only write to the cache pool but stay there to take advantage of its read speeds.

For example, I currently have DVR and Downloads shares that sit only on cache. I'm not that concerned about data protection for those shares, but I do want high read/write performance out of them.
 

trapexit

New Member
Feb 18, 2016
New York, NY
Sounds more like 1 than 2 (if you want per-drive caching rather than caching across the whole pool). I'm looking to create 3 different solutions.

1) dm-caches on top of existing block devices, with the benefits and complications of such.
2) Cache drives being priority write targets through mergerfs, with an out-of-band tool to move files from cache -> slower filesystems.
3) Cache drives being priority read targets through mergerfs, with an out-of-band tool to copy files from slow filesystems to the faster ones (mainly for use with network / cloud filesystems). If I can scrape data from Plex, Kodi, etc., then it may be pretty easy to determine what to copy and what to remove.

I *think* that covers all the main use cases where a cache or the like is really useful.

Put a watch on GitHub - trapexit/backup-and-recovery-howtos (guides to setting up a media storage system, backing it up, and recovering from failures) to keep track. I'll be posting a howto there once I get these worked out. I'll probably do the dm-cache one first since it shouldn't require too much code.
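
For anyone who wants a head start on option 1: dm-cache is normally driven through LVM these days (lvmcache). A minimal per-disk sketch, assuming /dev/sdb is the spinning disk and /dev/nvme0n1 provides the cache (device names and sizes are placeholders):

Code:
# One volume group holding both the slow disk and the SSD.
pvcreate /dev/sdb /dev/nvme0n1
vgcreate vg0 /dev/sdb /dev/nvme0n1

# Data LV pinned to the slow disk, cache pool pinned to the SSD.
lvcreate -n data -l 100%PVS vg0 /dev/sdb
lvcreate --type cache-pool -n cpool -L 90G vg0 /dev/nvme0n1

# Attach the cache (writethrough by default; writeback is faster
# for writes but loses in-flight data if the SSD dies).
lvconvert --type cache --cachepool vg0/cpool vg0/data

# Filesystem on top; the result is one branch for mergerfs.
mkfs.xfs /dev/vg0/data

Repeat per data disk, then pool the cached LVs through mergerfs as usual.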
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Sounds more like 1 than 2 (if you want per-drive caching rather than caching across the whole pool). I'm looking to create 3 different solutions.

1) dm-caches on top of existing block devices, with the benefits and complications of such.
2) Cache drives being priority write targets through mergerfs, with an out-of-band tool to move files from cache -> slower filesystems.
3) Cache drives being priority read targets through mergerfs, with an out-of-band tool to copy files from slow filesystems to the faster ones (mainly for use with network / cloud filesystems). If I can scrape data from Plex, Kodi, etc., then it may be pretty easy to determine what to copy and what to remove.

I *think* that covers all the main use cases where a cache or the like is really useful.

Put a watch on GitHub - trapexit/backup-and-recovery-howtos (guides to setting up a media storage system, backing it up, and recovering from failures) to keep track. I'll be posting a howto there once I get these worked out. I'll probably do the dm-cache one first since it shouldn't require too much code.
Awesome, thanks @trapexit, I will be watching closely. I appreciate all the work you've done on MergerFS; it's a great tool and I look forward to these new developments. I think you will find that adding these caching features will make MergerFS + SnapRAID a solid (if not better) alternative for UnRAID users and the like.
 

vl1969

Active Member
Feb 5, 2014
But you do understand that the type of caching UnRAID does is basically the setup I described in my last post.
It is not part of any filesystem or anything; it's just a set of built-in scripts that you configure and that UnRAID then runs on a predefined schedule, taking data from folders (shares) on designated cache drives and moving it to folders on the protected array. And cache-only shares simply reside on drives that have been designated as cache drives, and are thus not part of the protected array and not processed by the scripts.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
But you do understand that the type of caching UnRAID does is basically the setup I described in my last post.
It is not part of any filesystem or anything; it's just a set of built-in scripts that you configure and that UnRAID then runs on a predefined schedule, taking data from folders (shares) on designated cache drives and moving it to folders on the protected array. And cache-only shares simply reside on drives that have been designated as cache drives, and are thus not part of the protected array and not processed by the scripts.
I am aware of that. But UnRAID's built-in pooling is what allows the shares you set up for the array to be created automatically on both the cache and the bulk array and linked together. That part is key, so that one doesn't have to manually create directories for those shares in both locations each time and then edit said scripts.

I want as little manual labor as possible for my home network as my time is more and more limited with each passing day. Less regular maintenance is the goal for me.
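
For reference, that linkage piece does look scriptable. A minimal sketch (reusing the hypothetical /srv/cache and /srv/pool mounts from the earlier sketches) that recreates every top-level pool directory on the cache, so new shares would not need manual creation:

Code:
#!/bin/bash
# Hypothetical share-linkage helper: mirror the pool's top-level
# directories onto the cache (mkdir -p is a no-op where they exist).
POOL=/srv/pool/data
CACHE=/srv/cache/data

find "$POOL" -mindepth 1 -maxdepth 1 -type d -printf '%f\0' |
while IFS= read -r -d '' share; do
    mkdir -p "$CACHE/$share"
done

Paired with a mover that walks the whole cache rather than a fixed folder list, no per-share editing should be needed.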
 

Peanuthead

Active Member
Jun 12, 2015
For me personally, that's why I run a Synology. Time I spend fiddling with something is better spent with my wife or daughter.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
For me personally, that's why I run a Synology. Time I spend fiddling with something is better spent with my wife or daughter.
I hear you. Synology isn't an option for me though, with my SMR drives. Furthermore, with the size of my bulk arrays (pushing 80TB each now), the peace of mind of being able to lose disks beyond parity without losing the entire array is a comfy pillow at night.
 

nk215

Active Member
Oct 6, 2015
From the sound of it, UnRAID fits the requirements quite well. Let me be the first to ask why you decided to move away from UnRAID when it seems to fit your needs best.

I don't like the way UnRAID handles its license, so I won't be using it. However, it's hard to deny that it does a great job for media storage.
 

vl1969

Active Member
Feb 5, 2014
From the sound of it, UnRAID fits the requirements quite well. Let me be the first to ask why you decided to move away from UnRAID when it seems to fit your needs best.

I don't like the way UnRAID handles its license, so I won't be using it. However, it's hard to deny that it does a great job for media storage.
You are right about one thing, nk215: UnRAID is well suited for media storage. But today the only true advantages it still holds are its real-time, parity-based, RAID-like protection (it's hard to put together a setup like UnRAID's using most OSes; not impossible, but hard) and the way it handles share caches; again, not impossible to recreate, but hard.
Everything else can be done, and a lot cheaper if not entirely free. Also, as you pointed out, the licensing scheme is tedious to say the least; this is one of the reasons that pushed me to try to move away from Windows wherever I can.
Tying a software license to a piece of hardware is insane and not very customer-friendly.

But UnRAID has other issues that make people leave it behind.
Yes, it is easy to set up, but the process is a bit convoluted compared to a plain, simple install.
And adding drives to the array still takes a lot of time and preparation, involving several steps that are a must rather than optional, unless that has changed in the latest revision.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
What about Flexraid?
I looked into FlexRAID before choosing UnRAID years ago and decided UnRAID was the better option. At this point, if I'm going to move off of UnRAID after paying for two licenses, it won't be for another piece of paid, licensed software.


From the sound of it, UnRaid fits the requirement quite well. Let me be the first to ask why you decided to move away from UnRaid when it seems to fit your need best.

I don't like the way UnRaid handle its license so I won't be using it. However it's hard to deny that it does a great job for media storage.
As I mentioned in the OP, I don't want to muddy the discussion with a bitch session about UnRAID, nor would I, as it's served me very well for many years. If you have specific questions, feel free to send me a PM.
 

maze

Active Member
Apr 27, 2013
I'm considering the move to UnRAID, so if you have your reasons written up already @IamSpartacus, please do share in a PM - wouldn't wanna make a mistake I'll regret.
 

vl1969

Active Member
Feb 5, 2014
I'm considering the move to UnRAID, so if you have your reasons written up already @IamSpartacus, please do share in a PM - wouldn't wanna make a mistake I'll regret.
I can list several of mine here :)

#1 - License tied to the USB stick. I know they are supposedly good about reissuing a new one if the flash stick goes bad, but if you dread downtime, you need to buy 2 licenses to have a spare.
#2 - A bad choice of default filesystem (this may not apply anymore; I moved off UnRAID over 3 years ago). I believe it supports BTRFS now, but back then it was ReiserFS (are you kidding me?).
#3 - Not a very intuitive setup; not difficult, but not overly user-friendly.
#4 - Not a very user-friendly disk addition process (again, this may have changed since the last time I used it).
#5 - Over-dependence on 3rd-party plugins/scripts that LimeTech does not support, yet without them it lacks the functionality they provide.
#6 - Very long development/release cycle. In my day I ran an RC for a year and a half before they even released the final code.

Need more???

But make no mistake: even with all of those things, it is still a pretty good NAS server that you can build and deploy in about 5 hours (you need time to prepare the disks to build the array on; the preclear process can run for several hours on a 2TB drive).
 

TType85

Active Member
Dec 22, 2014
Garden Grove, CA
I can list several of mine here :)

#1 - License tied to the USB stick. I know they are supposedly good about reissuing a new one if the flash stick goes bad, but if you dread downtime, you need to buy 2 licenses to have a spare.
#2 - A bad choice of default filesystem (this may not apply anymore; I moved off UnRAID over 3 years ago). I believe it supports BTRFS now, but back then it was ReiserFS (are you kidding me?).
#3 - Not a very intuitive setup; not difficult, but not overly user-friendly.
#4 - Not a very user-friendly disk addition process (again, this may have changed since the last time I used it).
#5 - Over-dependence on 3rd-party plugins/scripts that LimeTech does not support, yet without them it lacks the functionality they provide.
#6 - Very long development/release cycle. In my day I ran an RC for a year and a half before they even released the final code.

Need more???

But make no mistake: even with all of those things, it is still a pretty good NAS server that you can build and deploy in about 5 hours (you need time to prepare the disks to build the array on; the preclear process can run for several hours on a 2TB drive).
I run Unraid. It is far from perfect but I like it. Some of these things have changed since the 6.x releases.

I agree on #1. I had to replace my USB stick and had no issue getting a replacement key, but if it happens again I may have an issue. The trial now does 30 days with no limits on drives, so you could be back up and running easily until you get it worked out.
#2 I think XFS is the default now, but you can change to BTRFS.
#3 Agreed.
#4 Stop the array, add the drive.
#5 This has gotten a bit better but is still an issue.
#6 This has gotten MUCH better; we are already on 6.3.x.

The pre-clear process takes a long time but is a good way to weed out bad drives.

I like the ease of adding extensions and containers (Community Applications plugin). They also use KVM so you can run VMs, but their GUI for setting them up sorta sucks; best to make a Linux VM and install virt-manager. You can do almost all of what UnRAID does for free using most Linux distributions. The OP's requirements are the harder ones to deal with (cache drives).
 

vl1969

Active Member
Feb 5, 2014
I run Unraid. It is far from perfect but I like it. Some of these things have changed since the 6.x releases.

I agree on #1. I had to replace my USB stick and had no issue getting a replacement key, but if it happens again I may have an issue. The trial now does 30 days with no limits on drives, so you could be back up and running easily until you get it worked out.
#2 I think XFS is the default now, but you can change to BTRFS.
#3 Agreed.
#4 Stop the array, add the drive.
#5 This has gotten a bit better but is still an issue.
#6 This has gotten MUCH better; we are already on 6.3.x.

The pre-clear process takes a long time but is a good way to weed out bad drives.

I like the ease of adding extensions and containers (Community Applications plugin). They also use KVM so you can run VMs, but their GUI for setting them up sorta sucks; best to make a Linux VM and install virt-manager. You can do almost all of what UnRAID does for free using most Linux distributions. The OP's requirements are the harder ones to deal with (cache drives).
Like I said before, I have been off UnRAID for the last 3+, almost 4, years, and before that I ran the free version (a trial limited to 3 drives) for 2+ years with no issues.
I was planning to buy a license, so I built out a small white-box server and loaded it with data,
and it just sat in the basement corner for 2+ years, chugging along, as I postponed the bigger build due to work, family, etc.

So the new license rules are nice if you have hardware to test it out on.
#4 is true, but you have to pre-clear the drive first; if not, the array will be offline until the drive is prepared.

And the GUI does suck for many of the things you want to do.
Maybe it's better now, but I cannot say.

And except for the cache functions, you can build out a maybe even better setup with a mainstream distro.
Even if you do want to use a USB stick for the OS, just load ESXi on it; no hardware dependency whatsoever.
Set up ESXi on the USB stick, configure everything, shut down the server, and image the stick onto a second one.
You have a perfect emergency copy that you can switch to in 5 minutes, or 10 if you need to recover config changes from backups.
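
The imaging step itself is a one-liner; a sketch with placeholder device names (double-check them with lsblk before writing anything):

Code:
# Clone the ESXi USB stick onto a spare stick of equal or larger size.
# WARNING: dd overwrites the target wholesale -- verify devices first.
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync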
 

CyberSkulls

Active Member
Apr 14, 2016
My biggest issue with UnRaid is the drive limit.
This is my main issue as well. LimeTech provides a sandbox and you have to play in it. Question it and the pitchforks come out. A 28-drive limit for a Pro license is just silly.

To my knowledge, the array no longer has to come offline to pre-clear a drive.

I will say the development cycle may have appeared to speed up, but if you read up on it, it's basically been maintenance or catch-up releases since 6.2. So the development cycle may very well still be crawling.

And I would fully agree that they rely way, way too much on forum members to develop third-party plugins to add functionality.

Something as simple as removing a drive by moving data off said drive and onto another drive in the array is completely missing. IMO that should be a standard feature.

I use UnRAID for a media server, not a hypervisor. If I want a hypervisor, I will use a purpose-built one. I just personally don't like where UnRAID is headed.

