I would consider this a kludge at this point. I have done very little (assume "no") testing to see what goes wrong.
I have been looking into the pooling options for Linux and am dissatisfied with the current solutions. The main limitation for me is files that are larger than any single underlying disk, e.g. disk images.
I start with a basic SnapRAID setup: two data disks with a single parity disk. This is an Ubuntu 12.04.1 LTS Server VM with SnapRAID from the PPA, LVM, and a few small vmdks.
Code:
$ mount
...
/dev/sdc1 on /SnapRAID/Data1 type ext4 (rw,errors=remount-ro)
/dev/sdd1 on /SnapRAID/Data0 type ext4 (rw,errors=remount-ro)
/dev/sdb1 on /SnapRAID/Parity type ext4 (rw,errors=remount-ro)
Code:
$ cat /etc/snapraid.conf
...
parity /SnapRAID/Parity/snapraid_parity_file
content /SnapRAID/Parity/content
content /home/max/content
disk disk0 /SnapRAID/Data0/
disk disk1 /SnapRAID/Data1/
exclude /lost+found/
block_size 256
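For reference, parity is maintained the same way as in any other SnapRAID setup; I am assuming you already run something like this against the data disks:
Code:
$ sudo snapraid sync      # compute/refresh parity for the data disks
$ sudo snapraid status    # summary of the array state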
Next, set up a loopback device over a sparse file. Each data disk holds one or more sparse files, which are spanned using LVM. Do not over-provision, or I/O errors will ensue when an underlying disk runs out of space. You can also add dm-crypt at this layer to get an encrypted FS.
Code:
$ dd if=/dev/zero of=/SnapRAID/Data0/lvm_data0 bs=1 count=0 seek=2G
$ dd if=/dev/zero of=/SnapRAID/Data1/lvm_data1 bs=1 count=0 seek=2G
$ sudo losetup /dev/loop0 /SnapRAID/Data0/lvm_data0
$ sudo losetup /dev/loop1 /SnapRAID/Data1/lvm_data1
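As an aside, the encrypted variant would slot in here, between losetup and pvcreate. These commands are untested in this setup and crypt0/crypt1 are names I just made up; the pvcreate/vgcreate steps below would then target /dev/mapper/crypt* instead of /dev/loop*:
Code:
$ sudo cryptsetup luksFormat /dev/loop0
$ sudo cryptsetup luksOpen /dev/loop0 crypt0
$ sudo cryptsetup luksFormat /dev/loop1
$ sudo cryptsetup luksOpen /dev/loop1 crypt1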
Now, set up LVM with all the sparse files. I pretty much followed the Arch Wiki.
Code:
$ sudo pvcreate /dev/loop0
$ sudo pvcreate /dev/loop1
$ sudo vgcreate Bundle0 /dev/loop0
$ sudo vgextend Bundle0 /dev/loop1
$ sudo lvcreate -l 100%FREE -n Cavern Bundle0
$ sudo vgscan
Reading all physical volumes. This may take a while...
Found volume group "Bundle0" using metadata type lvm2
$ sudo vgchange -ay
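Before making a file system, it doesn't hurt to sanity-check the stack:
Code:
$ sudo pvs    # both loop devices should appear as PVs in Bundle0
$ sudo vgs    # one VG spanning them
$ sudo lvs    # one LV, Cavern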
Now make a file system on the new Logical Volume (LV) and mount it.
Code:
$ sudo mkfs.ext4 /dev/Bundle0/Cavern
*Happy output*
$ sudo mount /dev/Bundle0/Cavern /SnapRAID/Pool
$ df -h
...
/dev/sdc1 2.0G 929M 1011M 48% /SnapRAID/Data1
/dev/sdd1 2.0G 729M 1.2G 38% /SnapRAID/Data0
/dev/sdb1 2.0G 1.1G 854M 57% /SnapRAID/Parity
/dev/mapper/Bundle0-Cavern 2.0G 813M 1.1G 43% /SnapRAID/Pool
$ mount
...
/dev/sdc1 on /SnapRAID/Data1 type ext4 (rw,errors=remount-ro)
/dev/sdd1 on /SnapRAID/Data0 type ext4 (rw,errors=remount-ro)
/dev/sdb1 on /SnapRAID/Parity type ext4 (rw,errors=remount-ro)
/dev/mapper/Bundle0-Cavern on /SnapRAID/Pool type ext4 (rw)
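One nice property worth verifying: the backing files only consume real space as the pool fills, which you can see by comparing apparent size to allocated size:
Code:
$ ls -lh /SnapRAID/Data0/lvm_data0    # apparent size: the full 2G
$ du -h /SnapRAID/Data0/lvm_data0     # allocated size: only what has been written
$ du -h /SnapRAID/Data1/lvm_data1
Also note that snapraid sync should be run with the pool unmounted, so the sparse files aren't changing mid-sync.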
Pros: a contiguous pool across all the disks; checksums and parity; this can be done on top of a current SnapRAID setup; backups can be as simple as copying the sparse files.
Cons: lots of layers, a journaling file system inside a journaling file system (inside a vmdk on a journaling file system); the LV can't see files on the individual data disks; and if you delete a file in the LV, you won't see the space return to the data disks unless you employ shenanigans like zeroing the file before deleting it and re-sparsifying the backing files (sketched below), though the already-allocated sparse-file space will be reused for subsequent files.
Edit: Actually, FALLOC_FL_PUNCH_HOLE has been supported by ext4 since kernel 3.0. This should allow deleted files to *give back* their space, but I'm not seeing it happen with just rm(1) on 3.2. I may be oversimplifying what is going on...
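If anyone wants to experiment with reclaiming that space, this is roughly what I had in mind. Whether discards actually propagate from ext4 down through dm-linear and the loop driver into hole punches in the sparse files depends on your kernel, so treat this as a sketch, not a recipe:
Code:
# Option 1: ask the kernel to pass discards down the stack (kernel-dependent):
$ sudo mount -o remount,discard /SnapRAID/Pool
$ sudo fstrim -v /SnapRAID/Pool
# Option 2, brute force: zero the pool's free space while still mounted...
$ dd if=/dev/zero of=/SnapRAID/Pool/zero bs=1M; rm /SnapRAID/Pool/zero
# ...then umount, vgchange -an, losetup -d, and re-sparsify each backing file:
$ cp --sparse=always /SnapRAID/Data0/lvm_data0 /SnapRAID/Data0/tmp
$ mv /SnapRAID/Data0/tmp /SnapRAID/Data0/lvm_data0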
At the end of the day, I think I am just going to use ZFS, but this was a fun exercise.