On my home Linux server, I have a bunch of disks pooled together to provide storage for my media (and backups and .... stuff). Now these disks weren’t bought in a day, but accumulated over time (a couple of years). Today, I have a total of 5 disks, from 1TB to 2TB, hanging off an IBM M1015 (good advice from STH) flashed to IT mode.
When I’m running low on space, I’ll pop down to the IT mall, pick up a good price/capacity disk (currently a 2TB green at US$95), put it into the pool, and poof, my storage space increases accordingly. I’m using ext4 over LVM2 in simple concat (linear) mode, which has no redundancy, so it’s effectively RAID0.
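For the curious, growing the pool is only a few commands. A minimal sketch, assuming the new disk shows up as /dev/sdf and the volume group and logical volume are named pool and media (all placeholder names):

    pvcreate /dev/sdf                       # initialise the new disk as an LVM physical volume
    vgextend pool /dev/sdf                  # add it to the volume group
    lvextend -l +100%FREE /dev/pool/media   # grow the logical volume into the new space
    resize2fs /dev/pool/media               # grow ext4 online to fill the enlarged LV

resize2fs can do this with the filesystem mounted, which is why the pool never has to go offline.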
If my once-a-week SMART scan finds a failing disk, I’ll take the disk out and RMA it if it’s still under warranty. If not, I’ll just throw it away.
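(The weekly scan itself is nothing fancy. A minimal sketch, assuming smartmontools is installed and the five disks enumerate as /dev/sda through /dev/sde, which are placeholder names:

    for d in /dev/sd[a-e]; do
        smartctl -H -A "$d"    # overall health verdict plus the raw SMART attribute table
    done

smartd can also run scheduled tests and email on failure, but a cron job over smartctl does much the same.)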
If this operation requires a new disk to swap in before taking out the old disk, I’ll pop down to the IT mall, pick up a good price/capacity disk ...
If a disk dies suddenly, well, I’ve never experienced that in all my years.... <touch wood>
Now, I’m contemplating whether I should continue on LVM2, or go with ZFS, Btrfs or h/ware RAID. (Please correct me below if I am wrong.)
Notice from the above that a fairly big chunk of my requirements is that the pool must move with the times while growing with me. For example, 4TB disks are expected on the market in 2014, which would likely make 3TB the new price/capacity king; if so, I will buy 3TB disks next year.
With h/ware RAID and ZFS, this is not easily done with my existing pool of 1TB and 2TB disks. In other words, h/ware RAID and ZFS cannot evolve; they are stuck with the initial RAID setup. If I need to change it, I have to tear down the entire array and set it up again (which I can’t do, because I don’t have alternative living arrangements for my data).
LVM2 in RAID0 is quite scary, but in real life, moving the underlying blocks off a physical disk (“pvmove”) so that it can be removed works great and is very flexible.
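To illustrate, evacuating and pulling a failing disk looks something like this. A minimal sketch, assuming the volume group is named pool and the failing disk is /dev/sdc (placeholders), and that the remaining PVs have enough free extents to absorb its data:

    pvmove /dev/sdc           # migrate all extents off the failing disk, online
    vgreduce pool /dev/sdc    # drop the now-empty disk from the volume group
    pvremove /dev/sdc         # wipe the LVM label so the disk can be pulled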
H/ware RAID is expensive (the controller itself) and inflexible (drives in the array must be the same size, or the extra capacity of larger ones is wasted).
ZFS is inflexible: drives within a vdev are limited to the size of the smallest member, and a vdev cannot be reshaped or removed once added.
RAID10 (s/ware or h/ware) is nice, but costly: I have to pay for two disks to get the space of one.
Btrfs is not production ready (as of Jul 2013), even though it does the equivalent of pvmove automatically when a device is removed, which would save me the hassle (see the sketch below).
If my understanding is wrong, please correct me - I'd love to get my bum off LVM2/RAID0.
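For comparison, here is what the same add/remove dance would look like under Btrfs, where the data migration happens as part of the device commands themselves. A minimal sketch, assuming the filesystem is mounted at /mnt/pool and the device names are placeholders:

    btrfs device add /dev/sdf /mnt/pool      # grow the pool with a new disk
    btrfs filesystem balance /mnt/pool       # optionally re-spread existing data across all devices
    btrfs device delete /dev/sdc /mnt/pool   # migrates data off sdc automatically, then drops it

That built-in migration is exactly the pvmove equivalent mentioned above.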