Internal ZFS pool backup onto external HDD


knubbze

Member
Jun 13, 2013
35
0
6
I now have my HP Microserver running Solaris 11.1 + napp-it, with a 4 x 2TB RAIDZ2 zpool (~3.6TB usable space). I also have an external eSATA 4TB HDD, onto which I want to 'mirror'/'replicate' (I'm not sure if these are the right terms) the data from the internal zpool. Am I able to do this from within the napp-it webGUI, and if so, how?
 

knubbze

Member
Jun 13, 2013
35
0
6
Wow! That is impressive. A 4-disk RAID-Z2 is usually very secure.
Yes it is, but I also want to protect myself against electrical storms/power surges, software/hardware errors, fire, theft and flooding by keeping an offline/off-site copy :p
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
No matter how "secure" the RAID is - RAID is not backup. If the data has any value at all then keeping a copy off-site is wise.

I don't know if you can manage this directly within napp-it, but from the Solaris command line you can always just do a "zfs send" to copy a snapshot onto the external drive. See the man pages. They have examples. Or google "zfs send".
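From the shell the basic shape is something like this (only a sketch - the device, pool and snap names are made up, so adjust them to your setup):

# one-time: create a pool on the external disk (c9t0d0 is a placeholder device)
zpool create backup c9t0d0

# snapshot the source recursively, then send the whole tree to the external pool
zfs snapshot -r tank@backup-1
zfs send -R tank@backup-1 | zfs receive backup/tank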
 

Biren78

Active Member
Jan 16, 2013
550
94
28
Wow! That is impressive. A 4-disk RAID-Z2 is usually very secure.
Keeping a backup is good.

Can I ask a stupid question: do you just want an offsite backup? If so, why not just rsync to Amazon Glacier or the like? The first sync is huge, but after that it's not that bad.
 

brutalizer

Member
Jun 16, 2013
54
11
8
Keeping a backup is good.

Can I ask a stupid question: do you just want an offsite backup? If so, why not just rsync to Amazon Glacier or the like? The first sync is huge, but after that it's not that bad.
Does Amazon checksum every file all the time? If not, I would not trust Amazon. Maybe they are just using ordinary hardware RAID, just like everybody else? If you are relying on Amazon for your backups, then you might as well abandon ZFS too.
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
I now have my HP Microserver running Solaris 11.1 + napp-it, with a 4 x 2TB RAIDZ2 zpool (~3.6TB usable space). I also have an external eSATA 4TB HDD, onto which I want to 'mirror'/'replicate' (I'm not sure if these are the right terms) the data from the internal zpool. Am I able to do this from within the napp-it webGUI, and if so, how?
Your options:
- create a pool with a basic vdev on your 4TB disk
- create a replication job and start it manually (uses zfs send, works incrementally)

or
- create an "other job" that runs rsync

or
- sync from your PC with a sync tool like robocopy (via an SMB share)

Afterwards:
- export and unplug the disk
- import the pool prior to the next replication/sync

I would prefer to have two backup disks.
You should also set a snap history on your backup disks for previous versions.
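For reference, the manual command-line equivalent of the first option, including the export/import steps, would be roughly this (device, pool and snap names are examples only):

# one-time: basic pool on the 4TB disk
zpool create backup c0t5d0

# per backup run: snapshot, then incremental send relative to the previous snap
zfs snapshot -r tank@repli-2
zfs send -R -i tank@repli-1 tank@repli-2 | zfs receive -F backup/tank

# export so the disk can be unplugged safely; import again before the next run
zpool export backup
zpool import backup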
 

knubbze

Member
Jun 13, 2013
35
0
6
Your options:
- create a pool with a basic vdev on your 4TB disk
- create a replication job and start it manually (uses zfs send, works incrementally)
This one is my preferred option; is it possible to do this via the webGUI?

You should also set a snap history on your backup disks for previous versions.
Again, is this possible to do via the napp-it webGUI, and how do I do it?

Thanks again.
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
This one is my preferred option; is it possible to do this via the webGUI?
menu jobs >> replicate >> create replication job (manual start)

Again, is this possible to do via the napp-it webGUI, and how do I do it?
menu jobs >> snap >> create autosnap job
 

knubbze

Member
Jun 13, 2013
35
0
6
menu jobs >> replicate >> create replication job (manual start)
Thanks, I'm running this now; hopefully I chose the correct options:

Source pool: 'tank' (my RAIDZ2 zpool)
Enable recursive: Enabled (because my source zpool contains two datasets/filesystems; the page advises against this, though - why? I just want to replicate the whole pool with all of its child filesystems)
zfs send incremental snapshot option: -i
Dedup: Disabled
Duplicate to ZFS folder(s) below: 'backup' (the name of the basic zpool that I created on my 4TB HDD)

Does this all look fine?

menu jobs >> snap >> create autosnap job
Could you please explain the purpose of this job, and do I run it on the destination drive or the source drive?

Cheers
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
Replication settings are fine.
-i means: use only the newest source snap; -I means: transfer all intermediate snaps as well.
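As an example, with snaps @a, @b and @c already on the source ('tank/data' is a placeholder filesystem):

# -i transfers only the delta between @a and @c
zfs send -i tank/data@a tank/data@c | zfs receive -F backup/data

# -I additionally recreates the intermediate snap @b on the target
zfs send -I tank/data@a tank/data@c | zfs receive -F backup/data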

Autosnap
You can set up any desired snap history on any filesystem, e.g.

tank:
- one snap every 15 minutes, keep 4
- one snap every day, keep 7
- one snap every week, keep 4

backup:
- one snap every month, keep 12
- one snap every year, keep 10

You can go back to these snaps in case you need a previous version.
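Getting at a previous version is then a matter of browsing the hidden .zfs directory or rolling back (the snap and file names here are examples):

# every snap is visible as a read-only directory
ls /tank/.zfs/snapshot/
# copy a single file back out of a snap
cp /tank/.zfs/snapshot/daily-3/important.doc /tank/
# or reset the whole filesystem to a snap (destructive; only works on the newest snap unless you add -r)
zfs rollback tank@daily-3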
 

knubbze

Member
Jun 13, 2013
35
0
6
Thanks, yeah that makes sense.

I ran the replication job, but unfortunately there was a problem: my pool contains two filesystems, one of which is encrypted. I forgot to unlock the encrypted one before I ran the replication job the first time, so only the unencrypted filesystem was replicated onto the basic zpool. I then destroyed and recreated the destination zpool, unlocked the encrypted filesystem on the source pool and ran the replication job again. But again, it didn't copy over the encrypted filesystem. Here are the pools after the replication job completed:

[screenshot of both pools after replication]
Could it be that a snapshot was created by the first replication job before I unlocked the encrypted filesystem, and both jobs used this snapshot as a reference? I'm going to delete all the snapshots and run the replication job again to see if it copies both datasets over this time.
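For reference, I believe this is how to list and delete them from the shell (the filesystem and snap names are just examples):

# show every snapshot on the source pool
zfs list -t snapshot -r tank
# remove a stale replication snap
zfs destroy tank/secure@repli-1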
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
I do not use Solaris myself, but I expect that you must unlock a filesystem prior to a transfer.
To restart a whole transfer, you must delete the target filesystems.

I would recommend setting up a replication job for each filesystem. You can then manage them independently.
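From the Solaris 11 docs, the unlock/check steps should look roughly like this (untested, and 'tank/secure' is only a placeholder for your encrypted filesystem):

# 'available' means the key is loaded and the filesystem is usable
zfs get keystatus tank/secure
# load the wrapping key (unlock) before the transfer
zfs key -l tank/secure
# to restart a whole transfer, wipe the target filesystems first
zfs destroy -r backup/tank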
 

knubbze

Member
Jun 13, 2013
35
0
6
UPDATE:

I destroyed the destination pool and recreated it, unlocked the encrypted filesystem and tried to do a recursive replication of the whole source pool. Once again, only the unencrypted filesystem was copied to the destination. So I set up a separate replication job just for the encrypted dataset, but when it runs, it finishes immediately and no data is copied over. Here is the replication job page:

[screenshot of the replication job page]
As you can see, the replication jobs for the encrypted filesystem took 0s and 1s. So something is preventing the encrypted filesystem from being replicated :/

The weird thing is that after running the replication job, the 'backup' pool shows an unexpected 'used' value:

[screenshot of pool usage]
The values on the 'tank' pool and its filesystems correspond with the data stored in each one, but I don't know why 'backup' shows 2.03TB used.
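zfs list can break the usage down, which should show whether snapshots or child filesystems are eating the space:

# USED gets split into snapshot, dataset and child columns
zfs list -o space -r backup
zfs list -t snapshot -r backup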

UPDATE #2: I set up the same job without the 'recursive' option checked; I ran it, and this time it looks like it is doing something (% progress is slowly updating). I will leave it running overnight and update this thread tomorrow with the results.
 