ZFS Raid + SnapRaid media server in a napp-it box


gea
Well-Known Member · Dec 31, 2010 · DE
I am thinking about including SnapRaid by default with napp-it.

Reason:
- ZFS is superior due to realtime checksums and unlimited snapshots
- ZFS software RAID is superior in data security and performance, and works in realtime

Limitations:
For a pure media server, where data mostly stays the same, is not too valuable, and the performance of a single disk is enough, any striped RAID (ZFS or other) has some limitations.

- Striped RAID means all disks are active during reads and writes; none can sleep
- If you want redundancy, you can only expand a pool with additional RAID sets such as mirrors or RAID-Z
- You cannot use different-sized disks, or the smallest disk determines the RAID size

This is where SnapRAID can fill the gap and combine the best of both:
- Use ZFS RAID where you need the performance and the realtime RAID.
- Use SnapRAID on ZFS for media data with different-sized disks, where unused disks can sleep. You can expand with a single disk of any size.

How it can work:
- Use as many data disks as you like; size does not matter
- Build a ZFS pool from each disk (1 disk = 1 vdev = 1 pool)
- Use one or two disks (each at least as large as the biggest data disk) for redundancy

Use your data pools as usual: create ZFS folders and share them. If one pool fails (it has no ZFS redundancy), the data of that disk/pool is lost. This is where SnapRAID is used.
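
As an illustration, creating such a layout might look like this (pool and device names are examples, not napp-it defaults):
Code:
# one single-disk pool per data disk
zpool create disk1 c2t1d0
zpool create disk2 c2t2d0
zpool create disk3 c2t3d0

# dedicated disk (at least as large as the biggest data disk)
# that will hold the SnapRAID parity
zpool create parity c2t4d0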

With SnapRAID, you use one or two extra disks to store RAID-like redundancy information (similar to RAID 5/6), but not in realtime; it is computed on demand. The consequence is that you can only restore the state of the last SnapRAID sync run.

SnapRAID is quite easy to use; it is only a small app.
You can install it like this (similar to http://zackreed.me/articles/72-snapraid-on-ubuntu-12-04):

Code:
cd $HOME
wget http://sourceforge.net/projects/snapraid/files/snapraid-2.1.tar.gz
tar xzvf snapraid-2.1.tar.gz
cd snapraid-2.1
./configure
make
make install
The app is then installed in /usr/local/bin. I may include this in napp-it.
You need to create a conf file with your settings in /etc.
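
A minimal snapraid.conf might look like this (paths and disk names are assumptions for this example, matching the layout sketched above):
Code:
# /etc/snapraid.conf - minimal example, all paths hypothetical

# parity file on the dedicated redundancy disk
parity /parity/snapraid.parity

# keep the content file (SnapRAID's metadata) in several places
content /var/snapraid/content
content /disk1/content
content /disk2/content

# one entry per data disk (single-disk ZFS pools mounted at /diskN)
disk d1 /disk1
disk d2 /disk2
disk d3 /disk3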

To keep it nearly maintenance-free, I am thinking about using pool names like snapraid_p1, snapraid_p2, snapraid_d1..snapraid_dn, so the setup works without extra configuration together with a napp-it control menu and timer-based autojobs to sync.
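
A timer-based sync could be as simple as a cron entry (time and path are examples; napp-it would use its own autojob mechanism):
Code:
# run a SnapRAID sync every night at 03:00
0 3 * * * /usr/local/bin/snapraid sync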

What I would like to know now:
- Are there any known problems with OmniOS or OpenIndiana?
 

rubylaser
Active Member · Jan 4, 2013 · Michigan, USA
I really like this idea. I'd just been using SnapRAID on Ubuntu for my media and ZFS on OmniOS for my critical stuff and VM hosting. This is the best of both worlds. Also, thanks for the link to my site, and for all of your hard work with Napp-it, it's fantastic.
 

gea
Well-Known Member · Dec 31, 2010 · DE
Napp-it 0.9a6 (newest):

- pre-installs SnapRaid (with a howto in the menu Pools > Snapraid)
- monitor extension: setup of VLAN, vnic and link aggregation
 

Patrick
Administrator · Staff member · Dec 21, 2010
Very interesting, gea! That is a pretty killer feature! We need someone to try this out. Sadly, I'm in the middle of the colocation build-out, so the lab is 100% dedicated to that at the moment.
 

rubylaser
Active Member · Jan 4, 2013 · Michigan, USA
This works fine. I was already running Napp-it 0.9a6 on OmniOS, so I downloaded it again. I had the SnapRAID menu item, but SnapRAID itself was not installed, so I just ran wget and compiled it quickly. I created 4 vdevs and then created the SnapRAID config file. Seems to be working well :)

Code:
root@zfs-test:~# snapraid sync
Self test...
Loading state from /var/snapraid/content...
Not found, trying with another copy...
Loading state from /disk1/content...
Not found, trying with another copy...
Loading state from /disk2/content...
No content file found. Assuming empty.
Scanning disk d1...
Scanning disk d2...
Scanning disk d3...
Using 2 MiB of memory.
Saving state to /var/snapraid/content...
Saving state to /disk1/content...
Saving state to /disk2/content...
Initializing...
Syncing...
Nothing to do
Code:
root@zfs-test:~# cat /var/snapraid/content 
blksize 262144
checksum murmur3
map d1 0
map d2 1
map d3 2
file d2 0 1360011418.1365092 8 testfile.out
sign 497463838

Does anyone know of a disk pooling solution for Solaris (other than the obvious, ZFS)? I like SnapRAID, but without disk pooling it loses a lot of its benefit for me as a media server. I use AUFS in Ubuntu for this purpose.
 

gea
Well-Known Member · Dec 31, 2010 · DE
Without a ZFS media server with SnapRaid, no one would need a pooling solution on Solaris,
so I suppose there is none.

I'm not sure if solutions like Gluster can help, but that seems like absolute overkill.


PS:
Menu Pools > Snapraid:
SnapRaid is preinstalled with napp-it in /var/web-gui/data/tools/snapraid/snapraid
 

rubylaser
Active Member · Jan 4, 2013 · Michigan, USA
I missed snapraid in napp-it's path; I was looking in /usr/local/bin. Thanks for the info. I'm sure this will be helpful to many people. I just use SnapRAID for my unimportant files and ZFS on OmniOS for everything else.
 

rubylaser
Active Member · Jan 4, 2013 · Michigan, USA
Just as an update to this, SnapRAID now includes a pool option in its config file, so you can accomplish a nice SnapRAID media server with pooling and ZFS-backed disks.
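
For reference, the pooling feature is a single directive in the config file plus a separate command run; the directory path here is an example:
Code:
# in /etc/snapraid.conf: where SnapRAID should build the symlink tree
pool /pool
Running "snapraid pool" afterwards (re)builds the symlink tree from the current state of the array.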
 

gea
Well-Known Member · Dec 31, 2010 · DE
rubylaser said:
Just as an update to this, SnapRAID now includes a pool option in its config file, so you can accomplish a nice SnapRAID media server with pooling and ZFS-backed disks.

SnapRaid 3.0 is included in napp-it 0.9b2.
But I expect that the pooling option will only work with Samba, NFS and AFP, not the Solaris CIFS server, where sharing is a ZFS property.
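
For context, on the Solaris CIFS server a share is a property of the ZFS filesystem itself, which is why a symlink tree outside a single filesystem does not fit that model. A hypothetical example (dataset and share name are placeholders):
Code:
# Solaris CIFS: sharing is enabled per ZFS filesystem
zfs set sharesmb=name=media disk1/media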
 

john4200
New Member · Jan 1, 2011
It is just done with symlinks; it is not full pooling. For example, if you rename or delete a file in the pool, you only rename or delete the symlink, and the original file stays the same. Also, if you add content, the new file(s) will just sit there alongside the symlinks. Andrea said he might add a feature in a future version to propagate changes made in the symlink directory over to the actual files / drives.

In the meantime, the symlinks work fairly well as long as all you want to do is read / play the files from the pool (or write to existing files, I guess).
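
To illustrate the point (paths are hypothetical), the pool directory only holds symlinks into the data disks:
Code:
$ ls -l /pool/movies/
lrwxrwxrwx 1 root root 22 ... film.mkv -> /disk2/movies/film.mkv

# removing the entry only removes the symlink;
# /disk2/movies/film.mkv itself is untouched
$ rm /pool/movies/film.mkv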
 

rubylaser
Active Member · Jan 4, 2013 · Michigan, USA
This is true, but you still have access to the mounted disks, so adding/removing content isn't that hard, and it's better than nothing at all :) I agree, though, this is not "real" pooling.
 

that0n3guy
New Member · Jul 23, 2013
Hey, I'm new to the whole ZFS and Solaris thing (not new to Linux or Linux server admin), so please bear with me: why do you need to create ZFS pools for each drive? Why not just use the drive without ZFS, or pass it through to a VM to use?

I ask this because I'm looking to build a home server with a partial ZFS setup like so:
* 2 drives in a ZFS mirror for VMs.
* 4 other drives as media/backup storage. I don't want to use ZFS on these, but I want to give a VM direct access to these 4 drives.

Those 4 drives will be managed by Linux (pooled with AUFS) and raided with SnapRAID. I do this because:
* I will have different-sized drives
* I want to expand my drive space one drive at a time
* these will contain mostly media
* I don't want all the drives to spin up every time I watch a single movie

I want to do this with OmniOS or SmartOS, but I don't know if I can do KVM passthrough to the SATA drives.
 

gea
Well-Known Member · Dec 31, 2010 · DE
that0n3guy said:
Hey, I'm new to the whole ZFS and Solaris thing (not new to Linux or Linux server admin), so please bear with me: why do you need to create ZFS pools for each drive? Why not just use the drive without ZFS, or pass it through to a VM to use?

I ask this because I'm looking to build a home server with a partial ZFS setup like so:
* 2 drives in a ZFS mirror for VMs.
* 4 other drives as media/backup storage. I don't want to use ZFS on these, but I want to give a VM direct access to these 4 drives.

Those 4 drives will be managed by Linux (pooled with AUFS) and raided with SnapRAID. I do this because:
* I will have different-sized drives
* I want to expand my drive space one drive at a time
* these will contain mostly media
* I don't want all the drives to spin up every time I watch a single movie

I want to do this with OmniOS or SmartOS, but I don't know if I can do KVM passthrough to the SATA drives.
As far as I know, Illumos does not currently support pass-through with KVM.
Since ZFS is the only option on Solaris, you can:

- create single-disk pools on Solaris and build a SnapRAID for backup over them, or
- use ESXi below, RDM single disks to a VM, and pass a storage controller with its disks through to Solaris
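
On the ESXi route, a physical-mode RDM pointer for a single disk can be created with vmkfstools; the device identifier and datastore path below are placeholders:
Code:
# create an RDM mapping file (physical compatibility mode) for one disk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ExampleDisk_____0001 \
  /vmfs/volumes/datastore1/storagevm/disk1-rdm.vmdk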
 

that0n3guy
New Member · Jul 23, 2013
gea said:
As far as I know, Illumos does not currently support pass-through with KVM.
Since ZFS is the only option on Solaris, you can:

- create single-disk pools on Solaris and build a SnapRAID for backup over them, or
- use ESXi below, RDM single disks to a VM, and pass a storage controller with its disks through to Solaris

I didn't know ZFS was the only option in Solaris, interesting.

OK, so without PCI passthrough, I could still pass the drives (which are individual zpools) to a Linux VM (virtio driver), right? That VM could then pool the drives and share the pool over the network via NFS, Samba, etc. Would that have some crazy performance issues? If I were to do this, I would probably want SnapRAID on Solaris and not in the Linux VM, I'm guessing.
 

that0n3guy
New Member · Jul 23, 2013
that0n3guy said:
OK, so without PCI passthrough, I could still pass the drives (which are individual zpools) to a Linux VM (virtio driver), right? That VM could then pool the drives and share the pool over the network via NFS, Samba, etc. Would that have some crazy performance issues? If I were to do this, I would probably want SnapRAID on Solaris and not in the Linux VM, I'm guessing.

Ignore that part for now. On Proxmox, I think you can pass the drive through to the VM (without using VT-d passthrough)... I think this is similar to ESXi's raw device mapping (RDM)... Can SmartOS do that? An example of a Proxmox command is step 17 on https://wiki.amahi.org/index.php/Amahi_in_Proxmox_with_Greyhole
 

that0n3guy
New Member · Jul 23, 2013
OK, so after talking to some folks in the #smartos IRC, it looks like the only thing you can pass to a VM is a ZVOL. So the OP's approach seems like the way to go... passing single drives to the VM.