Would like some more opinions


TuxDude

Well-Known Member
Sep 17, 2011
After recently suffering some minor filesystem corruption, I've moved all of my data temporarily and now have the somewhat rare opportunity to re-lay out all of my RAID arrays. I should note that the OS (Gentoo Linux) boots from a mirrored pair of 160GB drives that are not included in any of this; changing the OS, or changing to a technology other than Linux md-raid, isn't really a consideration. The case is a Norco 4020 - 20 bays, if you aren't familiar with it (the OS drives are mounted internally, not in the hot-swap bays).

As to data drives, this is what we have to work with:
6x 2TB 5900 RPM
3x 1TB 7200 RPM
9x 750GB 7200 RPM

I've currently got it built into 4 RAID-5 sets (still initializing, so changes can still be made without affecting data): one each for the 1TB and 2TB drives, and two 4x 750GB arrays, with an extra 750 ready for my next failure (I've had bad luck with 750's lately - I used to have 12 of them). I take all of the arrays, use them as LVM physical volumes, and tie everything together into a single filesystem.
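
Roughly, that layout looks something like this from the command line - just a sketch, with device names as placeholders and ext4 assumed purely for the sake of the example:

    # one RAID-5 per drive size class (device names are hypothetical)
    mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[b-g]    # 6x 2TB
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sd[h-j]    # 3x 1TB
    mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/sd[k-n]    # 4x 750GB
    mdadm --create /dev/md4 --level=5 --raid-devices=4 /dev/sd[o-r]    # 4x 750GB

    # tie the arrays together with LVM and put one big filesystem on top
    pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4
    vgcreate vg_data /dev/md1 /dev/md2 /dev/md3 /dev/md4
    lvcreate -l 100%FREE -n lv_home vg_data
    mkfs.ext4 /dev/vg_data/lv_home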

I decided to split the 750's into two arrays to make it easier to slowly remove them to make room for newer/larger drives. As long as I have some free space, I can shrink the filesystem, then shrink the LVM logical volume, and then remove one of those arrays from the volume group.
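
In practice the removal would go something like this - again just a sketch, assuming ext4 on vg_data/lv_home, that /dev/md4 is the array being retired, and that 8T is a placeholder size:

    umount /home                          # ext4 can only be shrunk offline
    e2fsck -f /dev/vg_data/lv_home
    resize2fs /dev/vg_data/lv_home 8T     # shrink the filesystem first
    lvreduce -L 8T vg_data/lv_home        # then shrink the LV to match
    pvmove /dev/md4                       # migrate any remaining extents off that PV
    vgreduce vg_data /dev/md4             # drop the array from the volume group
    pvremove /dev/md4
    mdadm --stop /dev/md4                 # those disks are now free for bigger drives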

Because of the pain involved in moving all the data around and rebuilding all the arrays, I don't plan on doing it again soon, so I want to make sure I get this right. Any other ideas on how to go about it?
 

sotech

Member
Jul 13, 2011
Australia
I know you said that changing the OS isn't an option, but definitely make sure of that - it's surprisingly easy to install OpenIndiana + napp-it and have a web-browser-managed ZFS RAID setup running quite quickly. Goodbye, worrying about bit rot and silent data corruption... It took me less than an hour to install, fiddle a little bit, run one command and have napp-it set everything up.
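
For a rough idea of how little is involved once it's installed, getting a pool and a dataset going from the command line is basically this (pool, dataset, and disk names here are just examples - OpenIndiana will have its own c#t#d# device names):

    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    zfs create tank/media
    zpool scrub tank    # periodic scrubs are what catch the bit rot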

I haven't got much input re: the RAID setups - I haven't ever had much to do with LVM.
 

TuxDude

Well-Known Member
Sep 17, 2011
mobilenvidia - that's too much parity and not enough data for me. I don't think RAID-6 is needed until an array has at least 8 spindles in it, and the 2TB array is still a few disks short of that (though long-term I might grow that array by a few more disks and then migrate it to RAID-6). And RAID-60 on the 750's would mean 4x 750GB disks' worth of parity - two 4-disk RAID-6 sets striped together leave only half the disks for data, which is exactly the capacity RAID-10 would give me, with much better performance. Since I need capacity much more than speed, I don't see any reason to use any of the X0 nested RAID levels - I would rather have LVM span the arrays together, making it easy to grow/shrink/add/remove arrays from the pool, instead of striping across them and locking all the disks in both arrays into that configuration until I can replace the entire RAID set. But thanks for the ideas anyway - a RAID-6 of 8 or 9 of the 750's instead of a pair of RAID-5's is one of the things I'm pondering.
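
If I do eventually go that way with the 2TB array, the reshape would be roughly along these lines (assuming it's /dev/md1 and that sdx/sdy stand in for the new disks - untested, just the general shape of it):

    mdadm --add /dev/md1 /dev/sdx /dev/sdy        # add the new 2TB disks as spares
    mdadm --grow /dev/md1 --raid-devices=8        # reshape the RAID-5 from 6 to 8 disks
    # and later, with one more spare added, migrate it to RAID-6:
    mdadm --grow /dev/md1 --level=6 --raid-devices=9 --backup-file=/root/md1-reshape.bak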

sotech - I also run other things on this box. Besides being my personal file server (mostly a media server by capacity, but also a dumping ground for everything else), this box is my test server for various server apps, and up until recently was also hosting some VMs. And as I mentioned, the OS is still up and running on its mirrored disks. The data disks were previously in a very different layout (a 15-drive RAID-5 made from all of the 750's, plus the first 750GB of the 1TB's and one of the 2TB's, grown one disk at a time since it was a 5-drive array many years ago, with a few other small things done with the leftover space on each drive and again all tossed into an LVM group), and all of that is mounted at /home on that box. Since the problems started I've disabled /home from mounting at boot and only log in as root, but the system continues to run with no issues and I don't want to format/rebuild. Also, ease of use is not a concern for me - I run Gentoo, for example (if you aren't familiar with it, it's not a user-friendly distro), there is no X server on that box, and it is designed to run headless and be accessed over SSH (or various file-sharing protocols) - I have no interest in fancy web-based or GUI management apps.

I do worry about bit rot a little and would like a better solution to deal with it. I've read up on ZFS and have considered using it on Linux (either with the FUSE module or the native kernel module - both seem reasonably well supported under Gentoo), however not being able to add disks to an existing raidz (RAID-5-style) pool is a problem for me. I'm looking forward to btrfs picking up all the functionality I would need, and am considering using it as the filesystem on the LVM volume and eventually, slowly, migrating the disks out of md-raid onto individual spindles once btrfs gains parity-based spanning of disks (hopefully with support for growing, as it will be borrowing md-raid5's code, which can grow).

I also came across SnapRAID while googling around last night - it might be an interesting stopgap solution until btrfs is capable of doing everything I would want. It's not a viable solution for the entire /home of my file server (multiple users with many small files changing rapidly), but I could use a few 750's or the three 1TB's to make a small RAID to be /home, and then within my home directory mount just the media directory as a SnapRAID set using the remainder of the drives. In that scenario I would set up dual parity and let a single "array" span across every available drive. Anyone have any experience with SnapRAID?
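
From what I've read so far, the SnapRAID side of that would look roughly like this - the paths and disk names are completely made up, and I haven't actually tried any of it yet:

    # /etc/snapraid.conf
    parity   /mnt/parity1/snapraid.parity
    2-parity /mnt/parity2/snapraid.2-parity
    content  /var/snapraid/snapraid.content
    content  /mnt/disk1/snapraid.content
    disk d1  /mnt/disk1/
    disk d2  /mnt/disk2/
    disk d3  /mnt/disk3/

    # then, after the media files change:
    snapraid sync     # compute/refresh the parity
    snapraid scrub    # periodically re-read data and check it against parity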