Storage Server and JBOD

Discussion in 'DIY Server and Workstation Builds' started by pettinz, May 1, 2018.

  1. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    42
    Likes Received:
    0
To set up a MergerFS volume, must the hard drives be empty? Or can I create a volume from drives that are not empty? Will I lose data?
     
    #21
  2. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    842
    Likes Received:
    229
No, the disks do not need to be empty. MergerFS will just pool all the disks and present them as one unified mount point (you will not lose data doing this). I would suggest using SnapRAID alongside it so that losing a disk does not cause you to lose all the data that was on it.
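A minimal sketch of such a pool as an /etc/fstab entry, assuming two data disks already mounted at /mnt/disk1 and /mnt/disk2 (paths and options are examples, not from the thread):

```shell
# /etc/fstab - pool existing, non-empty disks into one mount point.
# Existing files on each branch simply show up in the pooled view.
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=20G  0 0
```

The `category.create=mfs` policy writes new files to the branch with the most free space; each underlying disk keeps its own ordinary filesystem.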


    Sent from my iPhone using Tapatalk
     
    #22
    pettinz likes this.
  3. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    42
    Likes Received:
    0
Perfect! And if I unmount the pool, can I use each drive as a single drive?
     
    #23
  4. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    842
    Likes Received:
    229
Yes, you can [emoji3] Each disk will have its own file system and can be mounted with or without being part of the pool.


    Sent from my iPhone using Tapatalk
     
    #24
  5. vl1969

    vl1969 Active Member

    Joined:
    Feb 5, 2014
    Messages:
    572
    Likes Received:
    63
Check out OpenMediaVault.
It has all the features you want: it supports MergerFS and SnapRAID, all set up via the WebUI using plugins. It should even have ZFS support now. If your hardware has enough power, you can even add virtualization with the VirtualBox plugin.

That said, if you do want some redundancy and are OK with mirrored RAID1, I'd say look into ZFS.

With a ZFS mirrored pool you can expand by adding two drives at a time. Granted, the drives must be the same size and speed, but it's easily expandable. I just built out a pool using 2TB drives.
I started with 2 disks and, as I moved the data off the others, added the existing disks in pairs.
I now have a 6TB ZFS pool on six 2TB drives in 3 vdevs.
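The pair-at-a-time expansion described above can be sketched roughly as follows (pool and device names are examples, not from the thread):

```shell
# Start with a single mirrored pair
zpool create tank mirror /dev/sda /dev/sdb
# After moving data off two more disks, add them as a second mirrored vdev
zpool add tank mirror /dev/sdc /dev/sdd
# Repeat for the third pair; usable capacity grows by one disk per pair
zpool add tank mirror /dev/sde /dev/sdf
zpool status tank
```

Each `zpool add` stripes the pool across one more mirror, which is why six 2TB drives in three mirrored vdevs yield 6TB usable.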

    Sent from my LG-TP450 using Tapatalk
     
    #25
  6. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    42
    Likes Received:
    0
Thank you all for your answers. After reading about MergerFS, ZFS, SnapRAID, etc., I decided to use mdadm: my plan is to set up a RAID-5 initially and grow it into a RAID-6 later. Now I have some questions about how to manage a RAID array.
(1) When I find a failed disk, do I remove it and then add a new disk to mdadm as a spare, at which point it starts reconstructing the array? Is that right? Or do I have to add the new disk first and remove the failed drive only after the rebuild?
(2) How can I move the array to another machine that already has its own RAID setup? Is it possible to merge them into a single RAID?
     
    #26
  7. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    842
    Likes Received:
    229
You want to fail the disk, and then replace it. Besides the SnapRAID tutorials, I have a bunch of tutorials about mdadm on my site as well (I used it for years before I switched to either ZFS or SnapRAID + MergerFS).

    This shows how to replace all the disks, but the idea is the same.
    mdadm replace smaller disks with larger ones - Zack Reed - Design and Coding
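The fail-then-replace sequence amounts to something like this (array and partition names are examples, not from the linked tutorial):

```shell
# Mark the bad disk as failed and remove it from the array
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
# Add the replacement; the rebuild onto it starts automatically
mdadm /dev/md0 --add /dev/sdd1
# Watch the resync progress
cat /proc/mdstat
```

With RAID-5 the array runs degraded until the rebuild finishes, so a second failure during the resync loses the array.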

To move the array to a new system, you need to scan the disks and write out an /etc/mdadm/mdadm.conf file, or copy the existing file from your current system to the new host. You will also need to add a line to /etc/fstab for it. Finally, you will want to update the initramfs. Here is a tutorial about the initial setup.
    Software RAID 5 in Ubuntu/Debian with mdadm - Zack Reed - Design and Coding
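On the new host, the steps above look roughly like this (assuming, as an example, the array assembles as /dev/md0 and carries ext4):

```shell
# Assemble the moved disks and record the array in mdadm.conf
mdadm --assemble --scan
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Mount it at boot (mount point and options are examples)
echo '/dev/md0  /mnt/raid  ext4  defaults  0  2' >> /etc/fstab
# Rebuild the initramfs so the array assembles early during boot
update-initramfs -u
```

The metadata on the member disks identifies the array, which is why a plain `--assemble --scan` usually finds it even on a machine that has never seen those disks.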

Merging the arrays is possible by doing something more complex like RAID50, but I would avoid that, and either migrate your data to the new array or add the disks from your old array to it. mdadm is flexible, but it's still not quite as flexible as SnapRAID + MergerFS for just adding/removing disks or changing parity levels. That's why I use the latter for bulk media.

If you were using SnapRAID, you would just move the disks from the old machine to the new machine, update your /etc/snapraid.conf file to include the new disks, and run a sync. Once it's done, you have everything protected, with no need to do anything fancy to add the disks. Plus, each disk can be mounted on its own and does not require the RAID array to be intact to read the data.
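For comparison, a hypothetical /etc/snapraid.conf fragment after moving two disks over (names and paths are examples, not from the thread):

```shell
# /etc/snapraid.conf additions for the moved disks
data d3 /mnt/disk3
data d4 /mnt/disk4
```

After editing the config, `snapraid sync` recomputes parity to cover the new disks, and `snapraid status` confirms everything is protected.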
     
    #27
    Last edited: May 14, 2018
    pettinz and dawsonkm like this.