Napp-it Offsite Drives, Keep-Hold, Source-Target

ARNiTECT

I have 4x 2.5” backup HDDs, which I split into 2 pairs. I rotate the pair at the server with the pair kept offsite.

I would like some help with the Keep/Hold settings, to ensure there is always a common snap pair available for incremental replication.

I would like to keep just 2 replication snaps on the target pools (the default, in case of a problem with the current snap), but keep 20 snaps on the source, to allow for up to 20 replications on the second set of backup drives without losing the snap pair used by the first set.

My process:
- Using Napp-it, I set up a number of replication jobs for half my filesystems to pool B1 on one drive; and the other half to pool B2 on the other drive.
- Export the pools and remove the drives.
- Plug in the other pair of drives and Import with the same pool names B1 and B2.
- Run the replication jobs again on the second pair, with the same pool names B1 and B2 (a command-level sketch of this rotation is below).
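
For reference, this is roughly what the rotation looks like at the ZFS command level (pool names from my setup; napp-it's replication jobs do the actual send/receive, so these raw commands are only a sketch of the cycle):

# run the napp-it replication jobs against the currently connected pair (pools B1 and B2)
zpool export B1
zpool export B2
# physically swap in the other pair of drives, which carry pools with the same names
zpool import B1
zpool import B2
# run the same replication jobs again against the newly imported pools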

The filesystem replication jobs use either [-i] or [-I], depending on whether I need to keep all intermediate snaps. For some filesystems where I frequently delete old data, I don't have the backup space to keep all intermediate snaps.
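
To illustrate the difference with plain zfs send syntax (hypothetical snap names; napp-it assembles the actual commands in its jobs):

# -i sends only the delta between the two named snaps; intermediate snaps are not transferred
zfs send -i Tank1/FS1@snap_1 Tank1/FS1@snap_5 | zfs receive -F TankB1/FS1
# -I also transfers every intermediate snap (snap_2..snap_4), so the target keeps the
# full snap history, including blocks that were deleted on the source between those snaps
zfs send -I Tank1/FS1@snap_1 Tank1/FS1@snap_5 | zfs receive -F TankB1/FS1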

This seemed to work fine for a while, until I apparently exceeded the keep/hold limits on the snap pairs. Now only the first pair of drives can do an incremental replication, and the other pair needs to redo a full replication because its snap pair no longer exists on the source.

I need to adjust the Keep/Hold settings so I don't lose the source snaps if I leave it too long, or replicate too many times on the same pair of drives before rotating to the offsite drives. When I create a replication job in Napp-it, there are options to Keep and Hold 'Target' snaps, which would be my backup drives. What happens to the 'Source' snaps? Are they automatically given the same settings as the target snaps?

So if I set, e.g.:
- Keep: [hours:24,days:32,months:12,years:1] - the target pool would keep its replication snaps for up to a year, but I don't know whether it knows which snap to keep, as I need to retain a snap pair for both sets of drives.
- Hold: [20s] - the target pool would hold 20 snaps. Does the source pool also retain 20 snaps? If so, I understand I could do up to 20 replications on one set of drives before the last snap used by the offsite drives is deleted. For filesystems replicated with [-i] and Hold: 20s, does this mean the backup target is storing previously deleted data referenced by the 20 older replication snaps? If so, I might not have the space if I delete a huge amount of data. (A sketch for checking which snaps remain on each side follows below.)
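
As a sanity check before and after a drive swap, the snaps remaining on each side can be listed directly; this is just a sketch with example names (napp-it names its replication snaps along the lines of @jobnumber_repli_zfs_..., as shown further down):

# replication snaps on the source filesystem, oldest first
zfs list -t snapshot -o name,creation -s creation -r Tank1/FS1
# replication snaps on the currently imported backup pool
zfs list -t snapshot -o name,creation -s creation -r TankB1/FS1
# an incremental run is only possible if the newest target snap still exists on the source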


Edit: I can't get this to work at all now; perhaps this was never supposed to work.
- On my first set of drives, I added option Hold:20s to the existing replication jobs, and ran them all successfully.
- On my second set of drives, I destroyed all filesystems, since the jobs could no longer find a snap pair (as noted above), and then ran the same replication jobs successfully.
- I exported the second set and imported the first set to test another replication. The jobs are all failing, as they can no longer find a snap pair (error: my_log end 1551: job-repli, source-dest snap-pair 3 not found).
 

gea

some basics

Initial ZFS replication: Duplicates a whole filesystem or pool. On success you have an identical snap n on source and target that is the base of incremental replications.

Incremental ZFS replication:
You need an identical snap pair on source and target. The target filesystem first does a rollback to the common snap n. Then a new source snap (n+1) is created and sent. On success you also get a new destination snap n+1, which becomes the base for the following replication.

Without a common snap, you must redo an initial replication.
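
In raw ZFS terms, one incremental run corresponds roughly to the following (a simplified sketch with example snap names; napp-it performs these steps itself and also manages the job snaps and keep/hold cleanup):

# 1. roll the target filesystem back to the common snap n, discarding any later changes
zfs rollback -r TankB1/FS1@snap_n
# 2. create the new source snap n+1 (example names)
zfs snapshot Tank1/FS1@snap_n1
# 3. send the increment from n to n+1 and receive it on the target
zfs send -i Tank1/FS1@snap_n Tank1/FS1@snap_n1 | zfs receive -F TankB1/FS1
# on success, snap_n1 exists on both sides and becomes the base of the next run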

What I would do with 2 backup pools:
Name the first b1 and the second b2. Then create different replication jobs for both and start them manually depending on the currently mounted backup pool. This will allow different settings without dependencies.

A few snap pairs consume about the same space as many, since every snap holds only the datablocks modified compared to the previous snap.
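
If in doubt about the space cost, the per-snap usage can be checked directly (example filesystem name only):

# USED shows the blocks unique to that snap (what destroying it would free);
# REFERENCED shows the full amount of data the snap points to
zfs list -t snapshot -o name,used,referenced -r TankB1/FS1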
 

ARNiTECT

Thanks for the suggestion Gea,

I had hoped it would be possible using the same replication jobs for simplicity. I'm sure I had this working for a while before.

Unfortunately I have 22 filesystems to back up and I don't want to back up entire pools. If I do have to set this up as 2 separate sets (2 drives per set, 1 pool per drive), then I'll probably name the first set A1 & A2 and the second set B1 & B2.
 

ARNiTECT

This is how I had hoped it would work, using pools with the same name in the replication jobs:

First Pool, Initial ZFS replication:
Source:Tank1/FS1
Target:TankB1/FS1
Snaps on success:
Tank1/FS1@jobnumber_repli_zfs_server_nr_1
TankB1/FS1@jobnumber_repli_zfs_server_nr_1

First Pool, Incremental ZFS replication:
Target:TankB1/FS1 roll back to TankB1/FS1@jobnumber_repli_zfs_server_nr_1
Source:Tank1/FS1 create and send new snap Tank1/FS1@jobnumber_repli_zfs_server_nr_2
Snaps on success:
Tank1/FS1@jobnumber_repli_zfs_server_nr_1
Tank1/FS1@jobnumber_repli_zfs_server_nr_2
TankB1/FS1@jobnumber_repli_zfs_server_nr_1
TankB1/FS1@jobnumber_repli_zfs_server_nr_2

----- export 1st Pool: TankB1
----- import 2nd Pool: TankB1

Second Pool, Initial ZFS replication:
Source:Tank1/FS1
Target:TankB1/FS1 (same name as first pool)
Snaps on success:
Tank1/FS1@jobnumber_repli_zfs_server_nr_1
Tank1/FS1@jobnumber_repli_zfs_server_nr_2
Tank1/FS1@jobnumber_repli_zfs_server_nr_3
TankB1/FS1@jobnumber_repli_zfs_server_nr_3

Second Pool, Incremental ZFS replication:
Target:TankB1/FS1 roll back to TankB1/FS1@jobnumber_repli_zfs_server_nr_3
Source:Tank1/FS1 create and send new snap Tank1/FS1@jobnumber_repli_zfs_server_nr_4
Snaps on success:
Tank1/FS1@jobnumber_repli_zfs_server_nr_1
Tank1/FS1@jobnumber_repli_zfs_server_nr_2
Tank1/FS1@jobnumber_repli_zfs_server_nr_3
Tank1/FS1@jobnumber_repli_zfs_server_nr_4
TankB1/FS1@jobnumber_repli_zfs_server_nr_3
TankB1/FS1@jobnumber_repli_zfs_server_nr_4

----- export 2nd Pool: TankB1
----- import 1st Pool: TankB1

First Pool, Incremental ZFS replication:
Target:TankB1/FS1 roll back to TankB1/FS1@jobnumber_repli_zfs_server_nr_2
Source:Tank1/FS1 create and send new snap Tank1/FS1@jobnumber_repli_zfs_server_nr_5
Snaps on success:
Tank1/FS1@jobnumber_repli_zfs_server_nr_1
Tank1/FS1@jobnumber_repli_zfs_server_nr_2
Tank1/FS1@jobnumber_repli_zfs_server_nr_3
Tank1/FS1@jobnumber_repli_zfs_server_nr_4
Tank1/FS1@jobnumber_repli_zfs_server_nr_5
TankB1/FS1@jobnumber_repli_zfs_server_nr_1
TankB1/FS1@jobnumber_repli_zfs_server_nr_2
TankB1/FS1@jobnumber_repli_zfs_server_nr_5
 

gea

The only problem that hinders using the same jobs and snaps for two destination pools is that you must ensure you have identical snap pairs. A replication usually deletes older snaps (apart from the last two, or per the keep/hold settings). If the highest destination snap number is missing on the source after a pool switch, you cannot continue with incremental replications.
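
A quick way to verify this after a pool switch, before starting the jobs (example names only): check whether the newest replication snap on the just-imported backup pool still has its counterpart on the source; if not, only a new initial replication will work.

# newest snap on the imported backup filesystem
zfs list -t snapshot -o name -S creation -r TankB1/FS1 | head -2
# check that a source snap with the same name (the part after the @) still exists
zfs list -t snapshot -o name -r Tank1/FS1 | grep repli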
 

ARNiTECT

This is what I thought was happening.
I set all jobs with Hold: 20s to retain 20 snaps.
The snaps appear to be there, but the replication still fails.
 

gea

Then you should create separate replication jobs per backup pool.
 

ARNiTECT

Looks like that is the safe option.

I currently filter removable backup replication jobs by 'idle_manual'; it would be really useful to be able to filter by keyword, similar to the snapshot menu, or perhaps order by replication list headings, such as 'Opt2/ to'. Do you have any plans for such a feature?
 

gea

I have added a filter option for selected jobs, e.g. idle_active, in 22.dev.
Select the menu Replicate first to display only replication jobs.

After filtering you can start all the listed jobs together.
 

ARNiTECT

Thanks for the update Gea.
We’ve just gone away on holiday and my servers are off, so I’ll have to wait a couple of weeks to check this out. In 22.03 I remember filter options for idle_active, idle_manual etc.; have you added an additional keyword filter, so that I could type in TankB1 and it would show all jobs to that pool?
Is 22.dev just for testing, or safe for typical use?
 

gea

It's a keyword filter over the whole line of a job list.
Enter a search string, then select e.g. idle.

The current 22.dev is stable v22.06.
 


ARNiTECT

This works great!
I returned from holiday, booted up my server, and the next day noticed 3x HDDs were failed/degraded in an 8x HDD Z2 pool. It was a disaster recovery pool, so I just replaced the dead drives, recreated the pool with the same name and, using the new keyword filter feature, quickly started all my existing replication jobs to that pool. Next, I'll set up my second set of backup drives with a full set of replication jobs.
...
(Very odd and a bit scary to lose 3 drives at once. I checked with Data Lifeguard: 2 of the dead drives failed the quick test and the 3rd only got to 1.5% of the extended test. They are 3TB WD Red HDDs from 2012-2016; I have ordered new 8TB drives to replace the DR pool.)