Gea,
Here are the log entries for the first replication job with 21.dev
This job completed successfully.
Thank you for investigating this issue.
Regards
Bob
zfs remote log for 1451663337 (newest entry on top)
log job 1451663337...
Hello Gea, I have updated both systems to 21.dev (04.mar.2021).
I also noticed the remote log did not contain an entry for starting the sender.
Regards
Bob
Gea, are there any system tests I could run to help you understand how my systems respond? It may be possible to create some sort of replication benchmark that could be used to fine-tune the timers within NappIT.
Regards
Bob
Hello Gea, 20.06a3 is still working OK.
You often mention "Slow systems with low ram, many snaps/filesystems".
My ServerB system (the replication destination) is a Xeon E3-1230 with 32 GB RAM and a 9300-8i HBA with three SATA III disks.
The system has 224 snapshots and 11 filesystems.
Would you...
Hello Gea,
Switching to NappIT 20.06a3 seems to be OK. 10 replications have completed successfully. I'll report back if they start to fail again.
Many thanks
Bob
...both systems are now running NappIT 20.06a3 Pro; I used the "activate last" option.
I have not restarted the sending machine.
I have confirmed the Appliance Groups are working OK on NappIT 20.06a3 Pro.
Regards
Bob
Hello Gea, thank you for the reply, I hope you're well.
Check first if remote control works properly. Go to menu Extensions > Appliance group (receiver side) and click on zfs in the line of the sender machine. This should give a zfs list.
Done. The appliance groups on both machines show up...
@gea, please help.
I'm getting replication errors again between my Omnios servers.
Both servers are running OmniOS CE r151030cm and NappIT 21.01b4.
The machines are called ServerA and ServerB.
I had been having problems with replication jobs from ServerA to ServerB. Please see this forum post...
Hello @gea,
I'm currently running NappIT v20.06a3 on an OmniOS r151030bz machine.
I have a replication job set up as follows:
That job should only keep the replication snaps for 1 month, but so far it's been holding on to them for 6 months.
I think I set the job up with the default keep settings...
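For what it's worth, the keep window can be sanity-checked by hand: a replication snap is due for deletion once its age exceeds the keep period. A minimal sketch of that comparison (the epoch values below are made-up examples, not readings from my systems):

```shell
# Hypothetical retention check: is a snapshot older than a 1-month keep window?
now=1625000000                # example "current" time, epoch seconds
created=1609459200            # example snapshot creation time, ~6 months earlier
keep=$((30 * 86400))          # 1-month keep window in seconds

if [ $((now - created)) -gt "$keep" ]; then
    echo "expired"            # the job should have deleted this snap by now
else
    echo "kept"
fi
```

On a real system the creation time can be read in epoch form with `zfs get -Hp creation <snapshot>`.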
Gea, Please forgive me, I'm a little confused.
Here is the memory usage on the machine receiving the replication.
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1250041              4882   15%
Boot...
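That Page Summary looks like Solaris/illumos `::memstat` output; assuming the usual 4 KB page size, the MB column can be re-derived from the page count as a cross-check:

```shell
# Kernel row from the memstat output above: 1250041 pages.
# With 4 KB pages, MB = pages * 4 / 1024 (integer division, rounding down).
echo $((1250041 * 4 / 1024))   # prints 4882, matching the MB column
```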
Gea, I think the machine is having data pushed into it faster than it can write it to disk. My pool's write performance seems very slow compared to my network performance, so it takes a long time to write the data to disk.
I've slowed down the transfer speed of the replication jobs and...
Hello Gea, thanks for your help.
Sync is set to standard on the destination pool.
The destination pool has 2.16G available and 754G used.
I think the pool is just very slow; it's using slow (5700 rpm) disks on a SATA II interface.
I've discovered that if I limit the netcat rate to 30 MB/s the...
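For illustration only, a rate-limited pipe could look like the sketch below. I'm using `pv` here as a stand-in for however napp-it actually throttles its netcat transfer, and the host, port, and snapshot names are placeholders, not my real configuration:

```shell
# NOT the exact napp-it command line -- an illustrative equivalent only:
#   zfs send -i tank/data@prev tank/data@now | pv -L 30m | nc -w 30 ServerB 81

# At 30 MB/s, a 5.5 GB (~5632 MB) incremental should take roughly:
echo $((5632 / 30))   # seconds (~187, just over 3 minutes)
```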
Hello Gea, thanks for the detailed explanation.
The next replication run executes OK, but the data has not changed so only a few bytes are transferred.
Other, smaller replication jobs execute OK, it seems to be very large replication jobs that generate the error message.
The system has 32 GB...
Hello Gea,
I copied a file on the source machine to create an additional 5.5GB of data and ran the replication job again.
Here are the logs (sorry, they're quite long)
ServerA is the source, ServerB is the destination
ServerB last log
Log: Last run of Job 1577473221 (newest first)...
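As an aside, an alternative to copying an existing file when you want fresh data for an incremental test is `dd` (scaled down to 4 MB here so it runs quickly; a `count` of 5632 would give the full 5.5 GB):

```shell
# Create a throwaway test file. /dev/zero mostly compresses away on lz4
# pools, so use /dev/urandom instead if the dataset has compression on.
dd if=/dev/zero of=/tmp/repl_testfile bs=1M count=4 2>/dev/null
wc -c < /tmp/repl_testfile    # 4194304 bytes
rm -f /tmp/repl_testfile
```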
This error has happened again.
The replication was moving 5.5GB of data, the data has moved across but there is an error in the NappIT job log.
error, monitor info
time: 2020.07.09.09.54.14 line: 324
replication terminated: local receive=1, remote send=0 - check zpool status...
Hello Patrick, thanks for confirming the WD Golds are CMR.
How did you find out that piece of information?
As far as I can see on the WD website the WD Gold Product Brief doesn't state the drive technology.
The WD Red Pro product brief does state CMR.
I just want to point out that I'm not sure...
Cliff, all the articles on STH have come along at a great time (for me at least). I needed to replace a 3TB WD Re disk in one of my ZFS arrays, so knowing which drive models were SMR has been very helpful. In the end I chose a WD 4TB Gold drive, which I found was CMR technology by using my...
Hey @Patrick
I'm over in England, but have visited the US quite a bit. US BBQ is the best!!
Would you mind sharing the ingredients in your BBQ rub... I'm planning my own virtualization BBQ project and thought I'd ask for some tips.
Bob