Storage Server and JBOD

pettinz

Member
May 1, 2018
I love MergerFS along with SnapRAID. I have been using SnapRAID plus a pooling solution for years. It makes a great media storage solution.

If you are interested in seeing how I set things up, I have a write-up on my site.

Setting up SnapRAID on Ubuntu to Create a Flexible Home Media Fileserver - Zack Reed - Design and Coding


Sent from my iPhone using Tapatalk
To set up a MergerFS volume, do the hard drives have to be empty? Or can I create a volume from non-empty drives? Will I lose data?
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
To set up a MergerFS volume, do the hard drives have to be empty? Or can I create a volume from non-empty drives? Will I lose data?
No, the disks do not need to be empty. MergerFS will just pool all the disks and present them as one unified mount point (you will not lose data doing this). I would suggest using SnapRAID along with it so that losing a disk does not cause you to lose all the data that was on it.
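To illustrate the pooling step, here is a minimal sketch of a mergerfs mount in /etc/fstab. The mount points (/mnt/disk1..3, /mnt/pool) and the option set are assumptions for illustration, not details from this thread:

```shell
# Hypothetical /etc/fstab entry: pool three existing (non-empty) data disks.
# Existing files on each disk appear under /mnt/pool immediately; nothing is moved or erased.
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0  0
```

`category.create=mfs` writes new files to the branch with the most free space; other policies are available if you prefer to fill disks in order.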


Sent from my iPhone using Tapatalk
 

pettinz

Member
May 1, 2018
No, the disks do not need to be empty. MergerFS will just pool all the disks and present them as one unified mount point (you will not lose data doing this). I would suggest using SnapRAID along with it so that losing a disk does not cause you to lose all the data that was on it.


Sent from my iPhone using Tapatalk
Perfect! And if I unmount the pool, can I still use each drive as a single drive?
 

vl1969

Active Member
Feb 5, 2014
Check out OpenMediaVault.
It has all the features you want. It supports MergerFS and SnapRAID, with everything set up via the WebUI using plugins.
It should even have support for ZFS now. If your hardware has enough power, you can even add virtualization with the VirtualBox plugin.

That said, if you do want some redundancy and are OK with mirrored RAID1, I say look into ZFS.

With a ZFS mirrored pool you can expand by adding 2 drives at a time. Granted, the drives must be of the same size and speed, but it is easily expandable. I just built out a pool using 2TB drives.
I started with 2 disks and, as I moved the data off the others, added the existing disks in pairs.
I now have a 6TB ZFS pool on 6 2TB drives in 3 vdevs.
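The grow-by-mirror-pairs approach above can be sketched with zpool commands. The pool name "tank" and the device names are placeholders, not details from this post:

```shell
# Hypothetical sketch: grow a pool of mirrored vdevs two disks at a time.
zpool create tank mirror /dev/sdb /dev/sdc   # start with a single 2-disk mirror
zpool add tank mirror /dev/sdd /dev/sde      # later, add a second mirror vdev
zpool add tank mirror /dev/sdf /dev/sdg      # and a third; usable space grows each time
zpool status tank                            # shows 3 mirror vdevs
```

Note that `zpool add` is permanent on older ZFS releases: once a vdev is added, it cannot be removed from the pool, so double-check device names before running it.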

Sent from my LG-TP450 using Tapatalk
 

pettinz

Member
May 1, 2018
Thank you all for your answers. After reading about MergerFS, ZFS, SnapRAID, etc., I decided to use mdadm: I plan to set up a RAID-5 initially and grow it to a RAID-6 in the future. Now I have some questions about how to manage a RAID system.
(1) When I find a failed disk, do I remove it and then add a new disk to mdadm as a spare, at which point it starts to reconstruct the array? Is that right? Or do I have to add the new disk first and then remove the failed drive after the rebuild?
(2) How can I move the array to another machine that already has another RAID setup? Is it possible to merge them into a single RAID?
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Thank you all for your answers. After reading about MergerFS, ZFS, SnapRAID, etc., I decided to use mdadm: I plan to set up a RAID-5 initially and grow it to a RAID-6 in the future. Now I have some questions about how to manage a RAID system.
(1) When I find a failed disk, do I remove it and then add a new disk to mdadm as a spare, at which point it starts to reconstruct the array? Is that right? Or do I have to add the new disk first and then remove the failed drive after the rebuild?
(2) How can I move the array to another machine that already has another RAID setup? Is it possible to merge them into a single RAID?
You want to fail the disk, and then replace it. Besides SnapRAID tutorials, I have a bunch of tutorials about mdadm on my site as well (I used it for years before I switched to using either ZFS or SnapRAID + MergerFS).

This shows how to replace all the disks, but the idea is the same.
mdadm replace smaller disks with larger ones - Zack Reed - Design and Coding
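The fail-then-replace sequence looks roughly like this. The array name /dev/md0 and the partition names are placeholders:

```shell
# Hypothetical sketch: replace a failed member of an md array.
mdadm --manage /dev/md0 --fail /dev/sdX1     # mark the dying disk as failed
mdadm --manage /dev/md0 --remove /dev/sdX1   # remove it from the array
mdadm --manage /dev/md0 --add /dev/sdY1      # add the replacement; rebuild starts
cat /proc/mdstat                             # watch rebuild progress
```

If the disk has already dropped out of the array on its own, the `--fail` step may be unnecessary and you can go straight to `--remove` and `--add`.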

To move the array to a new system, you need to scan the disks and write out an /etc/mdadm/mdadm.conf file, or copy the existing file from your current system to the new host. You will also need to add a line to /etc/fstab for it. Finally, you will want to update the initramfs. Here is a tutorial about the initial setup.
Software RAID 5 in Ubuntu/Debian with mdadm - Zack Reed - Design and Coding
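The migration steps described above can be sketched as follows on the new host. The mount point /mnt/storage and the ext4 filesystem are assumptions for illustration:

```shell
# Hypothetical sketch: bring an existing md array up on a new machine.
mdadm --assemble --scan                           # detect and assemble the moved array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array definition
echo '/dev/md0 /mnt/storage ext4 defaults 0 2' >> /etc/fstab   # mount it at boot
update-initramfs -u                               # so the array assembles early at boot
```

On distros without update-initramfs (e.g. Red Hat family), the equivalent step is rebuilding the initramfs with dracut.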

Merging the arrays is possible by doing something more complex like RAID50, but I would avoid that, and either migrate your data to the new array or add the disks from your old array to it. mdadm is flexible, but it's still not quite as flexible as SnapRAID + MergerFS for just adding/removing disks or changing parity levels. That's why I use SnapRAID + MergerFS for bulk media.

If you were using SnapRAID, you would just move the disks from the old machine to the new machine, update your /etc/snapraid.conf file to include the new disks, and run a sync. Once it's done, you have everything protected, with no need to do anything fancy to add the disks. Plus, each disk can be mounted alone and does not require the RAID array to be intact to read the data.
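For reference, the snapraid.conf change amounts to adding one `data` line per moved disk. The paths and disk labels below are placeholders, not taken from the thread:

```shell
# Hypothetical /etc/snapraid.conf fragment after moving disks to the new machine:
#   parity  /mnt/parity1/snapraid.parity
#   content /var/snapraid/snapraid.content
#   data d1 /mnt/disk1
#   data d2 /mnt/disk2      # newly moved disk
#   data d3 /mnt/disk3      # newly moved disk
# Then recompute parity to cover the new disks:
snapraid sync
```

After the sync completes, `snapraid status` will confirm that all data disks are covered by parity.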
 