Recent content by Bob T Penguin

  1.

    More NappIT replication errors

    Gea, Here are the log entries for the first replication job with 21.dev. This job completed successfully. Thank you for investigating this issue. Regards, Bob. zfs remote log for 1451663337 (newest entry on top): log job 1451663337...
  2.

    More NappIT replication errors

    Hello Gea, I have updated both systems to 21.dev 04.mar.2021. I also noticed the remote log did not contain an entry for starting the sender. Regards, Bob
  3.

    More NappIT replication errors

    Gea, Are there any system tests I could run to help you understand how my systems respond? It may be possible to create some sort of replication benchmark that could be used to fine-tune the timers within NappIT. Regards, Bob
  4.

    More NappIT replication errors

    Hello Gea, 20.06a3 is still working OK. You often mention "slow systems with low RAM, many snaps/filesystems". My ServerB system (the replication destination) is a Xeon E3-1230 with 32 GB of RAM and a 9300-8i HBA with three SATA III disks. The system has 224 snapshots and 11 filesystems. Would you...
  5.

    More NappIT replication errors

    Hello Gea, Switching to NappIT 20.06a3 seems to be OK. 10 replications have completed successfully. I'll report back if they start to fail again. Many thanks Bob
  6.

    More NappIT replication errors

    ...both systems are now running NappIT 20.06a3 Pro; I used the "activate last" option. I have not restarted the sending machine. I have confirmed the Appliance Groups are working OK on NappIT 20.06a3 Pro. Regards, Bob
  7.

    More NappIT replication errors

    Hello Gea, thank you for the reply, I hope you're well. "Check first if remote control works properly. Go to menu Extensions > Appliance group (receiver side) and click on zfs in the line of the sender machine. This should give a zfs list." Done. The appliance groups on both machines show up...
  8.

    More NappIT replication errors

    @gea, please help. I'm getting replication errors again between my OmniOS servers. Both servers are running OmniOS CE r151030cm and NappIT 21.01b4. The machines are called ServerA and ServerB. I had been having problems with replication jobs from ServerA to ServerB. Please see this forum post...
  9.

    NappIT not deleting old replication destination snaps

    @gea thank you very much! That's great service, to have identified and fixed a bug so quickly - many thanks. Regards, Bob
  10.

    NappIT not deleting old replication destination snaps

    Hello @gea, I'm currently running NappIT v20.06a3 on an OmniOS r151030bz machine. I have a replication job set up as follows: That job should only keep the replication snaps for 1 month, but so far it's holding on to them for 6 months. I think I set the job up with the default keep settings...
  11.

    Napp-It replication error

    Gea, Please forgive me, I'm a little confused. Here is the memory usage on the machine receiving the replication.

    Page Summary            Pages       MB   %Tot
    ------------     ------------  -------  -----
    Kernel                1250041     4882    15%
    Boot...
  12.

    Napp-It replication error

    Gea, I think the machine is having data pushed into it faster than it's able to write it to disk. My pool's write performance seems very slow compared to my network performance, so it takes a long time to write the data to disk. I've slowed down the transfer speed of the replication jobs and...
  13.

    Napp-It replication error

    Sorry, that was a typing error. The pool has 2.15T available and 759G used. Sorry about that! Bob
  14.

    Napp-It replication error

    Hello Gea, thanks for your help. Sync is set to standard on the destination pool. The destination pool has 2.16G available and 754G used. I think the pool is just very slow; it's using slow (5700 rpm) disks on a SATA-2 interface. I've discovered that if I limit the netcat rate to 30 MB/s the...
  15.

    Napp-It replication error

    Hello Gea, thanks for the detailed explanation. The next replication run executes OK, but the data has not changed so only a few bytes are transferred. Other, smaller replication jobs execute OK, it seems to be very large replication jobs that generate the error message. The system has 32gig...