napp-it replication extension questions


mixer

Member
Nov 26, 2011
92
0
6
Hello Gea:

I'm testing your replication feature and it seems to be working well. I have a couple of questions; sorry if I missed the info elsewhere:

I created the replication job before the backup destination ZFS folder was created (it was auto-created on the first run of the job). I assumed that the compression setting of the source would be inherited for the copy, but when I look at the replicated ZFS folder in napp-it it shows compression is off. How can I make sure all the data is compressed when it starts the copy?

I would like finer control over the scheduling: sometimes a choice of hour 15, 18, or 21 is not fine-grained enough. Can I modify some underlying file (is it a cron job?) to change it after creating the replication job? As a feature request, I'd like either to have all choices available (hours 0-23, minutes 0-59, etc.) in the drop-down menu, or maybe you could just change it to a type-in field instead of a menu. (I'd like this for all job types.)

Finally, if I have already created a job but then realize I want to modify the schedule, how can I do that? Do I have to delete the job and create it again, and if so, will it know that it was previously synced (i.e., it won't start the replication over from scratch)?

Thank you.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
napp-it is a pre-1.0 release, so not all planned features are in the GUI yet.
Regarding the timetable, you currently need to manually edit the job file name: /var/web-gui/data/napp-it/_log/jobs/*.job

The schedule is coded in the filename itself.

(The upcoming 0.9 line allows editing schedules within the GUI, among other new features.)
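Since the schedule lives in the tilde-delimited file name, it can be picked apart with ordinary shell string handling. A minimal sketch, using the job file name quoted later in this thread as sample input (the field positions are inferred from that one example and may differ in other napp-it versions):

```shell
# Split a napp-it 0.8 job file name on '~' and read out the schedule.
# The name below is the example quoted later in this thread.
job='1352823127~replicate~create_replication_job~10.0.1.4-53641~OI-NAS%reds%media~Ncat~RE750-1%media~every~sun~21~0~1352823127~active~23.dec_21_00.job'

IFS='~' read -r -a f <<< "$job"

# Fields 8-11 (1-based) appear to hold the schedule: repeat/day/hour/minute.
echo "repeat=${f[7]} day=${f[8]} hour=${f[9]} minute=${f[10]}"
```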
 

mixer

So is it correct that there is currently no way to have compression on the replicated copy?
 

mixer

Hello Gea: Unfortunately, that is not what I'm seeing here. The compression setting is not being carried through to the replicated ZFS filesystem:

source system
Code:
me@OI-NAS:/reds$ zfs get compression reds/file-archives
NAME                PROPERTY     VALUE     SOURCE
reds/file-archives  compression  gzip-4    local

me@OI-NAS:/reds$ zfs get compressratio reds/file-archives
NAME                PROPERTY       VALUE  SOURCE
reds/file-archives  compressratio  1.04x  -
destination system
Code:
me@OI-BU:/RE750-1$ zfs get compression RE750-1/file-archives
NAME                   PROPERTY     VALUE     SOURCE
RE750-1/file-archives  compression  off       default

me@OI-BU:/RE750-1$ zfs get compressratio RE750-1/file-archives
NAME                   PROPERTY       VALUE  SOURCE
RE750-1/file-archives  compressratio  1.00x  -
The destination ZFS filesystem was created upon first run of the napp-it replication job.
 

gea

You may modify the zfs receive command in
/var/web-gui/data/napp-it/zfsos/_lib/grouplib.pl, line 591.

Change zfs receive to:
zfs receive -o compression=on

(I have not yet tried this, but it may work.)
 

mixer

You may modify the zfs receive command in
/var/web-gui/data/napp-it/zfsos/_lib/grouplib.pl, line 591.

Change zfs receive to:
zfs receive -o compression=on

(I have not yet tried this, but it may work.)
I tried it ... well, I tried it with compression=gzip-4, and it didn't work. The job did not start, and the destination ZFS folder was not created. I returned the line to normal and was then able to run the replication job.

Command output seen in the monitor:
Code:
/usr/bin/nc -d -l -p 53026 | zfs receive -o compression=gzip-4 -F RE750-1/esx-nfs >>/tmp/monitor.log 2>&1 && perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/post_replication.pl 1352992505 1352993748 || perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/post_replication.pl 1352992505 error
and back to normal:
Code:
/usr/bin/nc -d -l -p 53026 | zfs receive -F RE750-1/esx-nfs >>/tmp/monitor.log 2>&1 && perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/post_replication.pl 1352992505 1352994182 || perl /var/web-gui/data/napp-it/zfsos/_lib/scripts/post_replication.pl 1352992505 error
 

mixer

Actually, when I was playing with compression types I found that, though I could not do a zfs recv -o compression=x, I could do a send/receive where the destination ended up with the same compression as the source (this was on the same box, though; not sure if that makes a difference):
Code:
zfs send -v -R pool1/test@sendnow | zfs recv pool2/test
Was it the -R flag that did it? It's on the send side... where would I tweak napp-it to give that a try?
 

gea

Actually, when I was playing with compression types I found that, though I could not do a zfs recv -o compression=x, I could do a send/receive where the destination ended up with the same compression as the source (this was on the same box, though; not sure if that makes a difference):
Code:
zfs send -v -R pool1/test@sendnow | zfs recv pool2/test
Was it the -R flag that did it? It's on the send side... where would I tweak napp-it to give that a try?
Remote send is in grouplib.pl, line 1302,
but the -R flag is for sending filesystems recursively.
 

mixer

I'm sure you know this much better than I do, so it is perhaps silly for me to try to contribute here, but I was reading up on this page http://docs.oracle.com/cd/E19963-01/html/821-1448/gbchx.html, which says regarding -R:
You can use the zfs send -R command to replicate a ZFS file system and all descendent file systems, up to the named snapshot. When this stream is received, all properties, snapshots, descendent file systems, and clones are preserved.
I see from a previous post of yours, and noted in napp-it itself, that you recommend against recursive send, but it seems to be required to preserve the compression (and perhaps other) properties.

If I just switch the checkbox to enable recursive, does that turn on the -R flag then? What danger is there in doing this within napp-it? Should I just buy a bigger disk? :p

Thanks, Gea.
 

gea

You may try a recursive setting, but I would not expect success without the zfs receive switch offered in Solaris 11.
And if your disks are so small that you would get into trouble without compression, you should buy bigger disks in any case.
 

mixer

altering replication job schedule

napp-it is a pre-1.0 release, so not all planned features are in the GUI yet.
Regarding the timetable, you currently need to manually edit the job file name: /var/web-gui/data/napp-it/_log/jobs/*.job

The schedule is coded in the filename itself.

(The upcoming 0.9 line allows editing schedules within the GUI, among other new features.)
Hi Gea: I am about to try this and just thought it best to double-check with you. The job file now looks like this:

Code:
1352823127~replicate~create_replication_job~10.0.1.4-53641~OI-NAS%reds%media~Ncat~RE750-1%media~every~sun~21~0~1352823127~active~23.dec_21_00.job
So to change it from running at 21:00 every Sunday to 21:05 every Sunday, I would change it in two places: ~sun~21~0 becomes ~sun~21~5, and ~active~23.dec_21_00.job becomes ~active~23.dec_21_05.job?

so it would then read:

Code:
1352823127~replicate~create_replication_job~10.0.1.4-53641~OI-NAS%reds%media~Ncat~RE750-1%media~every~sun~21~5~1352823127~active~23.dec_21_05.job
I'm a little concerned about the number just before the .dec (23.dec). Should I update that number at all? It looks like the last date on which the job ran... so leave it then? Thanks!
 

gea

You should not edit anything besides the timer info (every~sun~21~0) in napp-it 0.8.

And think about updating to 0.9, where you can edit jobs (time and parameters) within the GUI.
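Keeping an edit restricted to just those timer fields can be done with a targeted substitution. A hedged sketch (the file name is the example from the post above; verify the pattern matches only once before renaming anything):

```shell
# Shift the run time of a napp-it 0.8 job from Sunday 21:00 to 21:05 by
# rewriting only the timer fields (every~sun~21~0) and nothing else.
old='1352823127~replicate~create_replication_job~10.0.1.4-53641~OI-NAS%reds%media~Ncat~RE750-1%media~every~sun~21~0~1352823127~active~23.dec_21_00.job'

new=$(printf '%s' "$old" | sed 's/~every~sun~21~0~/~every~sun~21~5~/')
echo "$new"

# On the server the actual change would be a rename inside
# /var/web-gui/data/napp-it/_log/jobs/:   mv "$old" "$new"
```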
 

mixer

OK... I don't see 0.9 offered in the web GUI updater. How do I update and ensure I get that version? Will it be a smooth update?

Also, very good news! As you suggested, on my new hard drive I created the 'root' ZFS, then made a ZFS folder under it called replication and set that to gzip-9 compression. Then I created my replication jobs so they land under that level, and the compression setting WAS INHERITED! Yippee!
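For reference, that parent-dataset trick can be written out as plain zfs commands. This is a sketch only, with example dataset names from this thread; the commands are assembled as strings here rather than executed, since they need a live pool:

```shell
# Set compression once on a parent filesystem so replication targets created
# beneath it inherit the property (dataset names are examples).
parent='RE750-1/replication'

create_parent="zfs create -o compression=gzip-9 $parent"
verify="zfs get -r compression $parent"

# Print the commands one would run on the destination box.
printf '%s\n%s\n' "$create_parent" "$verify"
```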
 

mixer

So far so good: I deleted my jobs, destroyed my replicated sets, did the update, and my first re-created job is now running. I still have to check whether I can set the job run time to any minute now; it still seemed to offer only 0, 15, 30, 45 on the main job setup page.
 

mixer

Ah... I see. When you go in to edit the minute field of an existing job, you can change it in 5-minute increments, which is fine for me. Perhaps you could add the same increments to the number picker in the initial replication job creation. But kudos! Thanks! Great stuff.
 

ghandalf

New Member
Dec 30, 2012
9
0
1
Hi,

you can use zfs send -p to send the properties. It copies all properties, including NFS share, compression, etc., and it works without receive -o! zfs send -R should also work.
The zfs receive command with the -o and -x flags is only available in Solaris 11 (and maybe Solaris 10), but not in illumos ZFS.
It is already on the feature list: https://www.illumos.org/issues/2745

Ghandalf
 

mixer

Ghandalf- I just tried to send a filesystem which had the compression property gzip-9 to another machine using:

Code:
zfs send -v -p sourcepool/sourcedataset@snapshot | ssh othermachine zfs recv destpool/destdataset
and the resulting dataset was created with compression=off. The compression property was not sent with the stream. However, it did take the 'read only' property that was set on the source.
----
Then I tried again on a regular dataset (not a replication target) and it worked with -p, so I'm not sure what's happening.
----
Then I tried again on a replicated dataset, and it did not use the compression settings.
----
Then I tried turning off 'read only' on the replicated dataset, and this time it worked. A bug, I guess?
 