Storage Server and JBOD

Discussion in 'DIY Server and Workstation Builds' started by pettinz, May 1, 2018.

  1. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    Hi,
    I'm building a custom rack mount storage server (it's my first time) and I choose a Chenbro RM23612 chassis (6GB/s backplane with 3 mini-sas ports) for this. For this build, I need my hard drives to show as one to the OS and I don't need redundancy: on internet I found that JBOD should be fine for this, and I read that if a disk fails, it does not compromise others (unlike RAID0). Now I searched on internet for some controllers and I found that the LSI 9211-8i it's a good card but it is not clear how to set up a JBOD (or even if it can) with this card.
    So I'm asking advice on how to make a JBOD, which controllers are right for me.
    Thanks in advance.
     
    #1
  2. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,800
    Likes Received:
    604
    Are you sure you do not want any redundancy with 12 disks?
    Depending on brand, age and vibration sensitivity, you should expect anywhere from a bad disk every 6 months to, at best, about one every two years on average with enterprise disks.

    Even if you do not care about the increased performance of a realtime raid or the superior data security of, for example, raid on btrfs, ReFS or ZFS, you should care about bad disks, at least with a snapshot raid/backup mechanism where you create redundancy on demand with one disk reserved for it.

    Pooling comes along with these as an add-on; realtime raid always includes pooling.
     
    #2
  3. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    Thank you for your answer! I started reading about ZFS this morning and my ideas are still quite confused. Using a ZFS filesystem and creating a pool with 4 hard drives in raidz2, how can I add a 5th drive to the pool?
    I read that it could be done using snapshots, but for a newbie like me it is not well explained.
     
    #3
    Last edited: May 1, 2018
  4. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,357
    Likes Received:
    368
    If you want pure JBOD mode (IT mode) from a 9211-8i or a similar card in order to use it for software-based raid, you need to flash the card with the appropriate firmware. There are lots of guides on here describing the process. As for ZFS in RAIDZ, the vdev layout is fixed at build time; you cannot simply keep adding disks to it. What you can do is add more vdevs to the pool. The other way to expand the pool size would be to swap one disk at a time for a larger one and let the array resilver; after the last disk has been resilvered, the extra space will become available.
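    For reference, the two growth paths described above look roughly like this (pool name and device paths are made up for illustration; check your actual device names first):

    ```shell
    # Option 1: grow the pool by adding a whole new vdev
    # (e.g. a second 4-disk raidz2 alongside the first):
    zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # Option 2: replace each disk with a larger one and let the pool
    # resilver after every swap; once the last disk is done, the extra
    # capacity appears (autoexpand must be enabled):
    zpool set autoexpand=on tank
    zpool replace tank /dev/sda /dev/sdi   # repeat for each member disk
    zpool status tank                      # watch resilver progress
    ```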

    Have a read here :)
     
    #4
    pettinz likes this.
  5. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,800
    Likes Received:
    604
    There is work in Open-ZFS to expand a raid-Z vdev by a single disk (e.g. 4-disk raid-Z -> 5-disk raid-Z, expected end of this year). Removing a vdev to shrink a pool is already in Solaris with support for all vdev types, and in OpenZFS (available in current OI and OmniOS) but limited to basic and mirror vdevs.
     
    #5
  6. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    So, how should I set up my 2x 8TB WD Red hard drives for now, with near-future expansion in mind? Buying 8 hard drives together is a bit expensive; it would be easier to add drives over time...
     
    #6
  7. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    So, if I buy an LSI 9211-8i and flash the firmware to IT mode, does it automatically manage the drives in a JBOD mode? Or are there further settings to configure (for example in the controller BIOS)? All the drives should appear as a single one to the OS.
     
    #7
  8. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,357
    Likes Received:
    368
    When in IT mode, all disks are presented as individuals; there is no RAID/JBOD at the hardware level. Each disk has its own port, just as if you had 8 SAS/SATA ports populated with disks on your mainboard. The JBOD/RAID layer then becomes software based, and it is this layer that is responsible for your redundancy etc. The nice thing about doing things that way is that the storage pool no longer relies on any particular hardware, or to some extent the OS, and can be moved easily in the event of a failure :)
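    To illustrate: with the HBA in IT mode each drive just shows up as its own block device, and any pooling happens purely in software on top of them (device names here are hypothetical):

    ```shell
    # Each drive behind the IT-mode HBA appears as a plain block device:
    lsblk -d -o NAME,SIZE,MODEL

    # Any RAID/JBOD layer is then software, e.g. a ZFS pool built
    # directly on those devices:
    zpool create -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    ```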

    It's easy to get sent down a rabbit hole when learning something new, so at this point, it might be better to let us know what you are trying to achieve? What are you planning to use this for?
     
    #8
    Last edited: May 1, 2018
  9. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,800
    Likes Received:
    604
    #9
  10. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,330
    Likes Received:
    277
    What are you going to be using the server for? Do you have an OS in mind? You keep asking for JBOD mode, but you also state that you want the hard drives to show as one to the OS.

    @pricklypunter has explained how a 9211-8i when flashed to IT mode will work.

    If you are not set on an OS and are looking for a basic file server for home use (media, backups etc.), check out unRAID. You have the option of adding one disk at a time, so expanding the array will not be as expensive. Similar to unRAID is StableBit DrivePool, but that requires a Windows install.
     
    #10
  11. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    Perfect. And how can I create a JBOD in spanning mode? Would using LVM be worthwhile? Will I lose all the data if just one disk fails?
     
    #11
  12. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    The idea is to build a media server, and for the OS I was thinking of Ubuntu Server.
     
    #12
  13. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,357
    Likes Received:
    368
    Hehe, it rarely stops at just serving up media; soon after you'll want to store your family photos, back up important files from your PC, etc. :D

    My advice here would be to take a modular approach. Separate your storage requirements from your media server, or anything else for that matter that you want to store in the pool. Think of the storage pool like a large pie: if you cut up slices of that pie and share it out, everything else that needs it gets its own slice :)
     
    #13
    pettinz likes this.
  14. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    Ahah, of course, that's why I'm thinking ahead to when I will need to add more hard drives.

    How should I do that? I can't use ZFS because I can't extend an existing pool in the near future. I need something that I can extend over the years, adding drive by drive. So I thought of a JBOD in spanning mode. If I need more space, I can add a drive without any problem, and if something fails, it will not affect the others.
     
    #14
  15. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,357
    Likes Received:
    368
    There are other filesystems/raid schemes besides ZFS that will allow a traditional growth path. The downside is you also lose the features that make ZFS a first choice :)
     
    #15
  16. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,695
    Likes Received:
    756
    mdadm and be done. No redundancy is not something you want. Lots of data, even if you've got optical disk backups, is a pain to restore.
     
    #16
    K D likes this.
  17. Blinky 42

    Blinky 42 Active Member

    Joined:
    Aug 6, 2015
    Messages:
    442
    Likes Received:
    143
    If you value the data that you put on there in any way, shape or form, or don't want to collect it all again, then I strongly recommend you add some level of redundancy or protection to your setup. Even a raid5 gives you a chance to save the data vs going JBOD with no protection whatsoever. Assuming you want a single filesystem larger than any one of the drives, you have a few options:

    * You could use the RAID features of the controller. It is easy if you have sets of identically sized disks, and the work of array maintenance is done in the controller. There are ups and downs as to whether that is good or bad depending on your use case and philosophy, but if you want to just get up and running it is an option. Additional memory on the card helps performance, and you need battery/supercap backup of that memory for safety, so it isn't the cheapest option to do well.
    * Use a filesystem with the block management built in, like ZFS. It has some fun features that have been mentioned above, but you currently need to add drives in sets that become vdevs, and you can't change their size as easily.
    * Use mdadm / LVM to manage the array. mdadm takes care of the protection side across multiple drives, and LVM collects those raid devices together into a logical block device that you can then carve up into sections and put normal filesystems on top of.
    * Use one of the filesystem merging layers to present a large logical filesystem on top of smaller component filesystems. This only works as long as your maximum file size still fits on a single drive. It has some use cases, but I would avoid it in general until you have a use case that is better served by this type of setup than by something like ZFS.

    To get up and running quickly, I would stick with mdadm, which is integrated pretty well into the major distributions' installers. With some minimal planning, if you have enough free drive slots you can add drives to your arrays online.
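    A minimal mdadm sketch of that growth path (hypothetical device names; a real setup would also persist the config in mdadm.conf):

    ```shell
    # Create a 4-disk RAID5 array:
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Later, add a 5th disk and reshape the array online:
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --raid-devices=5

    # Once the reshape finishes, grow the filesystem on top (ext4 shown):
    resize2fs /dev/md0
    ```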

    Once you start down this road, just get in the habit of buying 3 or more drives at a time when you add storage. Drives WILL die. You have to plan around it so you don't lose everything on the server.
     
    #17
  18. rune-san

    rune-san Member

    Joined:
    Feb 7, 2014
    Messages:
    64
    Likes Received:
    14
    Others have already made good suggestions, but I just wanted to clarify this. A JBOD in and of itself does not compromise other disks, because each disk is presented to the OS individually as a bare drive. If one fails, it's simply an individual disk lost.

    However, if you create a *spanned volume* across the multiple drives presented to the OS, which is what you're wanting to do ("hard drives to show as one to the OS"), then for all intents and purposes you'll lose the entire volume if any one disk fails. You'd need to resort to recovery tools, or your backups, to get back the files that were on that volume.
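    For what it's worth, a spanned volume of the kind described above is what you'd get from an LVM linear layout (device and volume names are examples only):

    ```shell
    # Concatenate two disks into one linear (spanned) volume --
    # capacities add up, but losing either disk loses the whole volume:
    pvcreate /dev/sda /dev/sdb
    vgcreate media_vg /dev/sda /dev/sdb
    lvcreate -l 100%FREE -n media_lv media_vg
    mkfs.ext4 /dev/media_vg/media_lv

    # Growing later: pvcreate the new disk, vgextend media_vg with it,
    # lvextend -l +100%FREE, then resize2fs the filesystem.
    ```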
     
    #18
  19. pettinz

    pettinz Member

    Joined:
    May 1, 2018
    Messages:
    33
    Likes Received:
    0
    What do you think about MergerFS ?
     
    #19
  20. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    838
    Likes Received:
    224
    #20
    pettinz likes this.