Search results

  1. More NappIT replication errors

    Gea, Here are the log entries for the first replication job with 21.dev. This job completed successfully. Thank you for investigating this issue. Regards Bob. zfs remote log for 1451663337 (newest entry on top), log job 1451663337...
  2. More NappIT replication errors

    Hello Gea, I have updated both systems to 21.dev (04.mar.2021). I also noticed the remote log did not contain an entry for starting the sender. Regards Bob
  3. More NappIT replication errors

    Gea, Are there any system tests I could run to help you understand how my systems respond? It may be possible to create some sort of replication benchmark that could be used to fine-tune the timers within NappIT [see the throughput sketch after these results]. Regards Bob
  4. More NappIT replication errors

    Hello Gea, 20.06a3 is still working OK. You often mention "slow systems with low RAM, many snaps/filesystems". My ServerB system (the replication destination) is a Xeon E3 1230 with 32GB RAM and a 9300-8i HBA with 3x SATA III disks. The system has 224 snapshots and 11 filesystems [see the count check after these results]. Would you...
  5. More NappIT replication errors

    Hello Gea, Switching to NappIT 20.06a3 seems to be OK. 10 replications have completed successfully. I'll report back if they start to fail again. Many thanks Bob
  6. More NappIT replication errors

    ...both systems are now running NappIT 20.06a3 Pro; I used the "activate last" option. I have not restarted the sending machine. I have confirmed the Appliance Groups are working OK on NappIT 20.06a3 Pro. Regards Bob
  7. More NappIT replication errors

    Hello Gea, thank you for the reply, I hope you're well. "Check first if remote control works properly. Go to menu Extensions > Appliance group (receiver side) and click on zfs in the line of the sender machine. This should give a zfs list." Done [see the manual remote-control check after these results]. The appliance groups on both machines show up...
  8. More NappIT replication errors

    @gea, please help. I'm getting replication errors again between my OmniOS servers. Both servers are running OmniOS CE r151030cm and NappIT 21.01b4. The machines are called ServerA and ServerB. I had been having problems with replication jobs from ServerA to ServerB. Please see this forum post...
  9. NappIT not deleting old replication destination snaps

    @gea thank you very much! That's great service, to have identified and fixed a bug so quickly. Many thanks. Regards Bob
  10. NappIT not deleting old replication destination snaps

    Hello @gea, I'm currently running NappIT v20.06a3 on an OmniOS r151030bz machine. I have a replication job set up as follows: That job should only keep the replication snaps for 1 month, but so far it's holding on to them for 6 months [see the snapshot-age listing after these results]. I think I set the job up with the default keep settings...
  11. Napp-It replication error

    Gea, Please forgive me, I'm a little confused. Here is the memory usage on the machine receiving the replication [see the ::memstat note after these results]:

        Page Summary                Pages                MB  %Tot
        ------------     ----------------  ----------------  ----
        Kernel                    1250041              4882   15%
        Boot...
  12. Napp-It replication error

    Gea, I think the machine is having data pushed into it faster than it's able to write it to disk. My pool's write performance seems very slow compared to my network performance, and so it takes a long time to write the data to disk. I've slowed down the transfer speed of the replication jobs [see the throttled-transfer sketch after these results] and...
  13. Napp-It replication error

    Sorry, that was a typing error. The pool has 2.15T available and 759G used. Sorry about that!! Bob
  14. Napp-It replication error

    Hello Gea, thanks for your help. Sync is set to standard on the destination pool. The destination pool has 2.16G available and 754G used. I think the pool is just very slow; it's using slow (5700rpm) disks on a SATA-2 interface. I've discovered that if I limit the netcat rate to 30meg/s the...
  15. Napp-It replication error

    Hello Gea, thanks for the detailed explanation. The next replication run executes OK, but the data has not changed, so only a few bytes are transferred. Other, smaller replication jobs execute OK; it seems to be very large replication jobs that generate the error message. The system has 32GB...
  16. Napp-It replication error

    Hello Gea, I copied a file on the source machine to create an additional 5.5GB of data and ran the replication job again. Here are the logs (sorry, they're quite long). ServerA is the source, ServerB is the destination. ServerB's last log: Last run of Job 1577473221 (newest first)...
  17. Napp-It replication error

    This error has happened again. The replication was moving 5.5GB of data; the data has moved across, but there is an error in the NappIT job log:

        error, monitor info
        time: 2020.07.09.09.54.14
        line: 324
        replication terminated: local receive=1, remote send=0 - check zpool status...
  18. WD Gold 16TB 18TB and 20TB Models Inbound Plus EAMR

    Hello Patrick, thanks for confirming the WD Golds are CMR. How did you find out that piece of information? As far as I can see on the WD website, the WD Gold Product Brief doesn't state the drive technology; the WD Red Pro product brief does state CMR. I just want to point out that I'm not sure...
  19. WD Gold 16TB 18TB and 20TB Models Inbound Plus EAMR

    Cliff, all the articles on STH have come along at a great time (for me at least). I needed to replace a 3TB WD Re disk in one of my ZFS arrays, so knowing which drive models were SMR has been very helpful. In the end I chose a WD 4TB Gold drive, which I found was CMR technology by using my...
  20. Of BBQ and Virtualization Why Large Nodes Reign

    Hey @Patrick, I'm over in England, but have visited the US quite a bit. US BBQ is the best!! Would you mind sharing the ingredients in your BBQ rub...I'm planning my own virtualization BBQ project and thought I'd ask for some tips. Bob
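
Throughput sketch (result 3): one way to turn the benchmark idea into numbers is to measure pool send speed and raw network speed separately. A minimal sketch, assuming pv is installed (it is not part of base OmniOS) and that tank/data@snap1 and port 9000 are stand-ins for real names:

    # Raw zfs send throughput on the sender, no network involved; pv prints
    # the transfer rate to stderr while discarding the stream:
    zfs send tank/data@snap1 | pv > /dev/null

    # Raw network throughput between the boxes, netcat only.
    # On ServerB (receiver); some netcat builds want "nc -l -p 9000":
    nc -l 9000 > /dev/null
    # On ServerA (sender); ~2GB of zeros:
    dd if=/dev/zero bs=1048576 count=2048 | nc ServerB 9000

If the first number is far below the second, the pool rather than the network (or any timer) is the bottleneck.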
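Count check (result 4): the snapshot and filesystem counts quoted there can be confirmed with standard zfs flags; -H drops the header line so wc counts only datasets:

    zfs list -H -t snapshot | wc -l      # total snapshots
    zfs list -H -t filesystem | wc -l    # total filesystems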
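Manual remote-control check (result 7): napp-it's appliance groups use their own remote-control channel between the appliances, so the menu test Gea describes is the authoritative one. As a rough manual cross-check, assuming SSH key access between the boxes, you can ask the sender for the same listing directly:

    # Run on ServerB (receiver); should print the sender's datasets, much
    # like clicking "zfs" in the Appliance group line does:
    ssh root@ServerA zfs list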
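Snapshot-age listing (result 10): a quick way to see how far back the replication snaps actually reach, assuming 1234567890 stands in for the real job id and tank/backup for the destination filesystem:

    # Oldest first, with creation dates:
    zfs list -t snapshot -o name,creation -s creation -r tank/backup | grep 1234567890

Anything older than the keep window should not appear in that list if the job's keep settings are being applied.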
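::memstat note (result 11): the memory table in that snippet has the layout of the illumos ::memstat dcmd, so it can presumably be reproduced on OmniOS with:

    # Kernel/ZFS/anon/free page breakdown (run as root):
    echo ::memstat | mdb -k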
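Throttled-transfer sketch (results 12 and 14): napp-it has its own rate-limit setting for replication jobs, but the underlying idea Bob describes can be sketched by capping the send pipe with pv -L. Hostnames, the port, and the 30 MB/s cap are illustrative, and zfs receive -F will discard any local changes on the destination:

    # On ServerB (receiver):
    nc -l 9000 | zfs receive -F tank/backup/data
    # On ServerA (sender), capped at ~30 MB/s so the slow pool can keep up:
    zfs send tank/data@snap1 | pv -L 30m | nc ServerB 9000

If the errors stop once the cap roughly matches the pool's measured write speed, that supports the diagnosis in result 12 that the destination cannot absorb the stream at full network rate.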