Help on home storage pool


ivan98

New Member
Jul 22, 2013
In my home Linux server, I have a bunch of disks, which are pooled to provide storage for my media (and backup and .... stuff). Now, these disks weren't born in a day, but accumulated over time (a couple of years). Today, I have a total of 5 disks, from 1TB to 2TB, hanging off an IBM M1015 (good advice from STH) in IT mode.

When I'm running low on space, I'll pop down to the IT mall, pick up a good price/capacity disk (currently a 2TB green at US$95), put it into the pool, and poof, my storage space increases accordingly. I'm using ext4 over LVM2 in simple concat mode (linear append, so effectively RAID0: no redundancy).
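
For the record, the grow step is just the usual LVM dance. Something like this, assuming the new disk shows up as /dev/sdf and the volume group / logical volume are called vg_media / lv_media (all names illustrative):

pvcreate /dev/sdf                              # label the new disk as an LVM physical volume
vgextend vg_media /dev/sdf                     # add it to the volume group
lvextend -l +100%FREE /dev/vg_media/lv_media   # grow the logical volume into the new space
resize2fs /dev/vg_media/lv_media               # grow the ext4 filesystem (works online)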

If my once-a-week SMART scan finds a failing disk, I'll take the disk out and RMA it if it's within warranty. If not, I'll just throw it away.
If this operation requires a new disk to swap in before taking out the old one, I'll pop down to the IT mall, pick up a good price/capacity disk ...
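
(For the curious, the weekly scan is nothing fancy; roughly this out of cron, assuming the disks are /dev/sda through /dev/sde:)

for d in /dev/sd[a-e]; do smartctl -t long "$d"; done          # kick off extended self-tests
# ...then some hours later, check overall health and the self-test log:
for d in /dev/sd[a-e]; do smartctl -H -l selftest "$d"; done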

If a disk dies suddenly, well, I’ve never experienced that in all my years.... <touch wood>

Now, I'm contemplating whether I should continue on LVM2, or go with ZFS, Btrfs or h/ware RAID. (Please correct me if I am wrong)

Notice from the above that a fairly big chunk of my requirements is that the pool must move with the times while growing with me. For example, 4TB disks are expected on the market in 2014, so maybe 3TB will be the new price/capacity king. If so, I will buy 3TB disks next year.

With h/ware RAID and ZFS, this is not easily done with my existing pool of 1TB and 2TB disks. In other words, h/ware RAID and ZFS cannot evolve; they stick to the initial RAID setup. If I need to change, I have to tear down the entire RAID and set it up again. (Which I can't do, because I don't have alternative living arrangements for my data.)

LVM2 in RAID0 is quite scary, but in real life, moving the underlying blocks ("pvmove") off a physical disk so that it can be removed works great and is very flexible.
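
The evacuation itself is basically three commands (assuming the failing disk is /dev/sdc and the volume group is vg_media; names illustrative, and you need enough free extents on the remaining disks):

pvmove /dev/sdc             # migrate all allocated extents off the failing disk
vgreduce vg_media /dev/sdc  # drop the now-empty disk from the volume group
pvremove /dev/sdc           # wipe the LVM label so the disk can be pulled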

H/ware RAID is expensive (the controller itself) and inflexible (drives in an array must be of the same size).
ZFS is inflexible (drives in an array must be of the same size).

RAID10 (s/ware or h/ware) is nice, but costly - I have to pay for two disks to get the space of one.

Btrfs is not production ready (as of Jul 2013), even though it does the equivalent of pvmove automatically, saving me the hassle.

If my understanding is wrong, please correct me - I'd love to get my bum off LVM2/RAID0.


 

xnoodle

Active Member
Jan 4, 2011
If you're comfortable in Linux, why not mdraid then? You can grow using mdraid, which is something you can't do with ZFS.
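
Roughly like this, from memory (assuming an existing RAID5 at /dev/md0 going from 4 to 5 disks, with the new disk at /dev/sdf; adjust names to your setup):

mdadm --add /dev/md0 /dev/sdf            # add the new disk (starts out as a spare)
mdadm --grow /dev/md0 --raid-devices=5   # reshape the array to stripe across 5 disks
# after the reshape completes, grow the filesystem on top (e.g. resize2fs for ext4)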
 

ivan98

New Member
Jul 22, 2013
OK, I get it. This is like what I said about RAID10.

When I run low on space, I pop down to get x same-size disks, RAID1 or RAID5 them, then add the array to the pool. I pay for x disks and get x-1 disks of usable space, where x >= 2: if x = 2, it is RAID1; if x >= 3, it is RAID5.
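
So each top-up would look roughly like this (say three new 3TB disks at /dev/sdf, /dev/sdg, /dev/sdh, feeding my existing vg_media; names illustrative):

mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdf /dev/sdg /dev/sdh
pvcreate /dev/md1            # treat the new RAID5 array as one big physical volume
vgextend vg_media /dev/md1   # pool grows by (3-1) x 3TB = 6TB usable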
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
I would say that you have been pretty lucky to have run this setup for years and never had a single disk take out your RAID0 array :) An option, if you are using this for just media storage (not storing VMs or things that are constantly changing), is to use SnapRAID. This is basically a JBOD setup that uses snapshot RAID to add a layer of protection to your array (backups are still needed for critical data).

Each disk contains its own independent filesystem, so even if all your parity disks go poof, the info is still intact on the other disks. This provides great flexibility and will easily allow you to add larger disks in the future or replace older/inefficient/broken disks. Also, it only requires the disk containing the data to spin up for a read, as opposed to the whole array with a typical RAID solution.

I've encouraged others on the UbuntuForums and used myself, mdadm or ZFS (first on OpenSolaris now OmniOS) for years, but for most home users, SnapRAID plus a pooling solution like AUFS or mhddfs is pretty tough to beat. I wrote up tutorials for both if you are interested in either mdadm or SnapRAID. SnapRAID also includes a check & fix command to protect against bit rot like ZFS.

mdadm tutorial
SnapRAID tutorial
 

xnoodle

Active Member
Jan 4, 2011
Quote: "This is not true; ZFS pools can be expanded, although not by a single disk (usually). See Adding Devices to a Storage Pool (Solaris ZFS Administration Guide)."
True! I was thinking more along the lines of growing a RAIDZ* by one or two disks, which is typically what most home users would use.
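
For reference, expanding a ZFS pool means adding a whole new vdev, e.g. (pool name and devices illustrative):

zpool add tank mirror /dev/sdf /dev/sdg   # adds a second mirror vdev; pool grows by one disk's worth

What you can't do (as of 2013) is add a single disk to an existing RAIDZ vdev to widen it.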

My first home file server used mhddfs with mismatched PATA and SATA drives; I didn't like that losing one drive in an LVM group would cause complete data loss.

Do you need the performance from RAIDing drives? If not, SnapRAID is a decent idea.