Storage Server and JBOD

pettinz

Member
May 1, 2018
42
0
6
22
Hi,
I'm building a custom rackmount storage server (it's my first time) and I chose a Chenbro RM23612 chassis (6Gb/s backplane with 3 mini-SAS ports) for it. For this build, I need my hard drives to appear as one to the OS and I don't need redundancy: searching the internet, I found that JBOD should be fine for this, and I read that if a disk fails, it does not compromise the others (unlike RAID0). I then searched for controllers and found that the LSI 9211-8i is a good card, but it is not clear how to set up a JBOD with it (or even whether it can).
So I'm asking for advice on how to build a JBOD, and which controllers are right for me.
Thanks in advance.
 

gea

Well-Known Member
Dec 31, 2010
2,468
832
113
DE
Are you sure you do not want any redundancy with 12 disks?
Depending on brand, age and vibration sensitivity, you should expect anywhere from a bad disk every 6 months to, at best, one every two years on average with enterprise disks.

Even if you do not care about the increased performance of a realtime raid or the superior data security of e.g. raid on btrfs, ReFS or ZFS, you should care about bad disks, at least with a snapshot-raid/backup mechanism where you create redundancy on demand using one disk for redundancy.

Pooling is an add-on to these. Realtime raid always does pooling.
 

pettinz

Member
May 1, 2018
42
0
6
22
Are you sure you do not want any redundancy with 12 disks?
Depending on brand, age and vibration sensitivity, you should expect anywhere from a bad disk every 6 months to, at best, one every two years on average with enterprise disks.

Even if you do not care about the increased performance of a realtime raid or the superior data security of e.g. raid on btrfs, ReFS or ZFS, you should care about bad disks, at least with a snapshot-raid/backup mechanism where you create redundancy on demand using one disk for redundancy.

Pooling is an add-on to these. Realtime raid always does pooling.
Thank you for your answer! I started reading about ZFS this morning and my ideas are still quite confused. Using a ZFS file system and creating a pool with 4 hard drives in raidz2, how can I add a 5th drive to the pool?
I read that it could be done using snapshots, but for a newbie like me it is not well explained.
 
Last edited:

pricklypunter

Well-Known Member
Nov 10, 2015
1,605
469
83
Canada
If you want pure JBOD mode (IT mode) from a 9211-8i or similar card in order to use it for software-based raid, you need to flash the card with the appropriate firmware. There are lots of guides on here describing the process. As for ZFS in RAIDZ, the vdev layout is fixed at build time; you cannot simply keep adding disks to it. What you can do is add more vdevs to the pool. The other way to expand the pool would be to swap one disk at a time for a larger one and let the array re-silver; after the last disk has been re-silvered, the extra space becomes available.
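To make those two expansion routes concrete, here's a rough sketch with `zpool` (pool name and device names are made up for illustration):

```shell
# Assume an existing pool "tank" built from one 4-disk raidz2 vdev.

# Option 1: add a second raidz2 vdev; the pool then stripes across both.
zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Option 2: grow in place by replacing each disk with a larger one,
# waiting for each resilver to finish before swapping the next disk.
zpool set autoexpand=on tank
zpool replace tank /dev/sda /dev/sdi
zpool status tank   # watch resilver progress
```

Note that option 1 keeps the same redundancy level per vdev, but losing more than two disks in either vdev still takes out the whole pool.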

Have a read here :)
 
  • Like
Reactions: pettinz

gea

Well-Known Member
Dec 31, 2010
2,468
832
113
DE
There is work in OpenZFS to expand a raid-Z vdev by a single disk (e.g. 4-disk raid-Z -> 5-disk raid-Z; expected by the end of this year). Removing a vdev to shrink a pool is already in Solaris with support for all vdev types, and in OpenZFS (available in current OI and OmniOS) but limited to basic and mirror vdevs.
 

pettinz

Member
May 1, 2018
42
0
6
22
So, how should I set up my 2x 8TB WD Red hard drives for now, with near-future expansion in mind? Buying 8 hard drives at once is a bit expensive; it would be easier to add drives over time...
 

pettinz

Member
May 1, 2018
42
0
6
22
If you want pure JBOD mode (IT mode) from a 9211-8i or similar card in order to use it for software-based raid, you need to flash the card with the appropriate firmware. There are lots of guides on here describing the process.
So, if I buy an LSI 9211-8i and flash the firmware to IT mode, will it automatically manage the drives in JBOD mode? Or are there further settings to configure (for example in the controller BIOS)? All the drives should appear as a single one to the OS.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,605
469
83
Canada
When in IT mode, all disks are presented individually; there is no RAID/JBOD at the hardware level. Each disk has its own port, just as if you had 8 SAS/SATA ports populated with disks on your mainboard. The JBOD/RAID layer then becomes software-based, and it is this layer that is responsible for your redundancy etc. The nice thing about doing it that way is that the storage pool no longer relies on any particular hardware, or to some extent the OS, and can be moved easily in the event of a failure :)
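As a quick sanity check after flashing, something like this should show each drive as its own block device, with nothing aggregated by the controller:

```shell
# List physical disks only (-d skips partitions); each drive attached to
# the HBA in IT mode shows up here individually, just like onboard SATA
# disks would (actual names/sizes depend on your hardware).
lsblk -d -o NAME,SIZE,MODEL,TRAN
```

Whatever software layer you choose (ZFS, mdadm, etc.) is then built on top of these individual devices.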

It's easy to get sent down a rabbit hole when learning something new, so at this point, it might be better to let us know what you are trying to achieve? What are you planning to use this for?
 
Last edited:

K D

Well-Known Member
Dec 24, 2016
1,425
305
83
30041
What are you going to be using the server for? Do you have an OS in mind? You keep asking for JBOD mode, but you also state that you want the hard drives to show as one to the OS.

@pricklypunter has explained how a 9211-8i when flashed to IT mode will work.

If you are not set on an OS and are looking for a basic file server for home use (media, backups, etc.), check out unRAID. You have the option of adding one disk at a time, so expanding the array will not be as expensive. Similar to unRAID is StableBit DrivePool, but that requires a Windows install.
 

pettinz

Member
May 1, 2018
42
0
6
22
Perfect. And how can I create a JBOD in spanning mode? Would using LVM be worthwhile? Will I lose all the data if just one disk fails?
 

pettinz

Member
May 1, 2018
42
0
6
22
What are you going to be using the server for? Do you have an OS in mind? You keep asking for JBOD mode, but you also state that you want the hard drives to show as one to the OS.

@pricklypunter has explained how a 9211-8i when flashed to IT mode will work.

If you are not set on an OS and are looking for a basic file server for home use (media, backups, etc.), check out unRAID. You have the option of adding one disk at a time, so expanding the array will not be as expensive. Similar to unRAID is StableBit DrivePool, but that requires a Windows install.
The idea is to build a media server, and for the OS I was thinking of Ubuntu Server.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,605
469
83
Canada
Hehe, it rarely stops at just serving up media; soon after, you'll want to store your family photos, back up your important files from your PC, and so on :D

My advice here would be to take a modular approach. Separate your storage requirements from your media server, or anything else for that matter that you want to store in the pool. Think of the storage pool like a large pie: if you cut that pie into slices and share them out, everything that needs storage has its own slice :)
 
  • Like
Reactions: pettinz

pettinz

Member
May 1, 2018
42
0
6
22
Hehe, it rarely stops at just serving up media; soon after, you'll want to store your family photos, back up your important files from your PC, and so on :D
Haha, of course, that's why I'm thinking about what will happen when I need to add more hard drives.

My advice here would be to take a modular approach. Separate your storage requirements from your media server, or anything else for that matter that you want to store in the pool. Think of the storage pool like a large pie: if you cut that pie into slices and share them out, everything that needs storage has its own slice :)
How should I go about that? I can't use ZFS because I can't extend an existing pool in the near future. I need something I can extend over the years, adding one drive at a time. That's why I was thinking of a JBOD in spanning mode: if I need more space, I can add a drive without any problem, and if one drive fails, it will not affect the others.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,605
469
83
Canada
There are other filesystems/raid schemes besides ZFS that will allow a traditional growth path. The downside is that you also lose the features that make ZFS a first choice :)
 

MiniKnight

Well-Known Member
Mar 30, 2012
2,984
888
113
NYC
Use mdadm and be done with it. No redundancy is not something you want: lots of data is a pain to restore, even if you've got optical disk backups.
 
  • Like
Reactions: K D

Blinky 42

Active Member
Aug 6, 2015
559
200
43
44
PA, USA
If you value the data you put on there in any way, shape or form, or don't want to collect it all again, then I strongly recommend you add some level of redundancy or protection to your setup. Even raid5 gives you a chance to save the data versus going JBOD with no protection whatsoever. Assuming you want a single filesystem larger than any one of the drives, you have a few options:

* You could use the RAID features of the controller. It is easy if you have sets of identically sized disks, and the work of array maintenance is done in the controller. There are ups and downs to that depending on your use case and philosophy, but if you want to just get up and running, it is an option. Additional memory on the card is helpful for performance, and you need battery/supercap backup of that memory for safety, so it isn't the cheapest option to do well.
* Use a filesystem with the block management built in, like ZFS. It has some fun features that have been mentioned above, but you currently need to add drives in sets that become vdevs, and you can't change their size as easily.
* Use mdadm/LVM to manage the array. mdadm takes care of the protection side across multiple drives, and LVM collects those raid devices into a logical block device that you can then carve up into sections and put normal filesystems on top of.
* Use one of the filesystem-merging layers to present a large logical filesystem on top of smaller component filesystems. This only works as long as your maximum file size still fits on a single drive. It has its use cases, but I would avoid it in general unless you have a use case that is better served by this type of setup than by something like ZFS.

To get up and running quickly, I would stick with mdadm, which is integrated pretty well into the major distributions' installers. With some minimal planning, if you have enough free drive slots, you can add drives to your arrays online.
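A minimal sketch of that mdadm + LVM route, assuming four data disks with illustrative names (/dev/sdb through /dev/sde) and a fifth added later:

```shell
# Create a 4-disk RAID5 array and put LVM on top of it.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -l 100%FREE -n media storage
mkfs.ext4 /dev/storage/media

# Later: grow the array online with a fifth disk, then tell LVM and
# the filesystem about the new space.
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5   # reshape runs in the background
pvresize /dev/md0
lvextend -l +100%FREE /dev/storage/media
resize2fs /dev/storage/media             # ext4 can grow while mounted
```

The volume group and logical volume names ("storage"/"media") are just placeholders; carve up the VG however suits your shares.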

Once you start down this road, get in the habit of buying 3 or more drives at a time when you add storage. Drives WILL die. You have to plan around it so you don't lose everything on the server.
 

rune-san

Member
Feb 7, 2014
78
15
8
Hi,
I'm building a custom rackmount storage server (it's my first time) and I chose a Chenbro RM23612 chassis (6Gb/s backplane with 3 mini-SAS ports) for it. For this build, I need my hard drives to appear as one to the OS and I don't need redundancy: searching the internet, I found that JBOD should be fine for this, and I read that if a disk fails, it does not compromise the others (unlike RAID0). I then searched for controllers and found that the LSI 9211-8i is a good card, but it is not clear how to set up a JBOD with it (or even whether it can).
So I'm asking for advice on how to build a JBOD, and which controllers are right for me.
Thanks in advance.
Others have already made good suggestions, but I just wanted to clarify this. A JBOD in and of itself does not compromise other disks, because each disk is presented to the OS individually as a bare drive. If one fails, it's simply an individual disk lost.

However, if you create a *spanned volume* across the multiple drives presented to the OS, which is what you're wanting to do ("hard drives to show as one to the OS"), then for all intents and purposes you'll lose the entire volume if any one disk fails. You'd need to resort to recovery tools, or your backups, to get back the files that were on that volume.
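The failure mode is easy to see in how such a span would be built, e.g. with LVM (device and volume names are illustrative):

```shell
# A plain LVM linear span: the capacity of both disks concatenated,
# with no redundancy at all.
pvcreate /dev/sdb /dev/sdc
vgcreate span /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n bigvol span
mkfs.ext4 /dev/span/bigvol
# Files land on whichever disk their extents happen to live on; if either
# disk dies, the filesystem as a whole is no longer mountable, and anything
# on the surviving disk is recovery-tool territory.
```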
 

rubylaser

Active Member
Jan 4, 2013
842
229
43
Michigan, USA
  • Like
Reactions: pettinz