Napp-It replication error


Bob T Penguin

Member
@gea
I've been going through my replication job log and have just noticed this:


ok time: 2020.06.20.01.01.20 line: 2126 my_log end 1551: ok incremental remote replication nr 177 finished (time: 72 s)
new snap tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_177, refer: 224G
end receive time: 2020.06.20.01.01.15 line: 941 zfs receive 1577473221 running 72 s
receiving incremental stream of tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_177 into
tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_177
error, monitor info time: 2020.06.20.01.01.03 line: 309 replication terminated: local receive=1, remote send=0 - check zpool status
incremental send time: 2020.06.20.01.00.04 line: 1061 ServerA: zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_176 tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_177 | /var/web-gui/data/tools/nc/nc -b 131072 -w 40 192.168.1.9 53740
start receiver time: 2020.06.20.01.00.03 line: 890 /var/web-gui/data/tools/nc/nc -b 131072 -d -l -p 53740 | /usr/sbin/zfs receive -Fv tank/backup/musicjukebox 2>&1
next replication started time: 2020.06.20.01.00.01 line: 119

Replication jobs after this appear to have worked OK.
I've checked zpool status on both machines and it doesn't report any errors.
Both machines are running OmniOS 151030bf with napp-it 19.06h Pro.

Both machines use Supermicro motherboards and Intel NICs, and they're connected to the same Netgear switch.

The machine with the error message has an alert email job configured, but it doesn't appear to have sent an alert message.
The same machine also has a status email job which sends a status update every day, and I am getting messages from that job.

What could have happened, and what should I do next?

Many thanks
bob

Edited: I pasted the log table incorrectly.
 

gea

Well-Known Member
A ZFS replication is like a radio transmission that you record with a tape recorder. Unlike a tape recorder, ZFS detects any transmission fault or dropout via its checksums. Additionally, napp-it monitors each replication to prevent zfs receive from waiting endlessly on netcat.

If the replication finishes without error on the next run, I would simply suspect a short network outage/failure. No action needed.
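In plain shell terms the transfer is just a zfs send piped through netcat. A simplified sketch of what the job runs, taken from the commands in your own log (napp-it's bundled nc path and buffer options omitted):

Code:
# destination (ServerB): listen on the job port, pipe the stream into zfs receive
nc -d -l -p 53740 | zfs receive -Fv tank/backup/musicjukebox

# source (ServerA): send the incremental stream to the receiver
zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_176 \
            tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_177 \
  | nc -w 40 192.168.1.9 53740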
 

Bob T Penguin

Member
This error has happened again.
The replication was moving 5.5 GB of data; the data has moved across, but there is an error in the napp-it job log.

error, monitor info time: 2020.07.09.09.54.14 line: 324 replication terminated: local receive=1, remote send=0 - check zpool status

I've had a look at zpool history and noticed that the zfs receive -Fv isn't logged when the napp-it error is logged.

2020-07-09.09:52:49 zfs rename tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_194 tank/backup/musicjukebox@1577473221_repli_zfs-kd0707_ServerB_nr_194
2020-07-09.09:54:19 zfs set readonly=off tank/backup/musicjukebox
2020-07-09.09:54:20 zfs set sharesmb=off tank/backup/musicjukebox
2020-07-09.09:54:20 zfs set sharenfs=off tank/backup/musicjukebox
2020-07-09.09:54:25 zfs set readonly=on tank/backup/musicjukebox

2020-07-09.10:21:31 zfs rename tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_195 tank/backup/musicjukebox@1577473221_repli_zfs-kd0708_ServerB_nr_195
2020-07-09.10:21:33 zfs receive -Fv tank/backup/musicjukebox
2020-07-09.10:21:34 zfs set readonly=off tank/backup/musicjukebox
2020-07-09.10:21:34 zfs set sharesmb=off tank/backup/musicjukebox
2020-07-09.10:21:34 zfs set sharenfs=off tank/backup/musicjukebox
2020-07-09.10:21:39 zfs set readonly=on tank/backup/musicjukebox
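For reference, zpool history can also include internally logged events and long-format details via the -il flags, which might show more of what happened around the failed window. A quick check (standard ZFS tooling, nothing napp-it specific):

Code:
# -i: include internally logged events, -l: long format (user, host)
zpool history -il tank | grep -i receive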


The pool tank is a three-way mirror of three Hitachi Deskstar 5K3000 disks on a SATA II interface.

Any help is gratefully received.

Edit: added pool benchmark results.

Code:
Benchmark: Write: filebench_sequential, Read: filebench, date: 07.09.2020

pool: tank

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c3t2d0  ONLINE       0     0     0
        c3t3d0  ONLINE       0     0     0
        c3t4d0  ONLINE       0     0     0

host                            ServerB
pool                            tank (recsize=128K, ssb=-, compr=off, readcache=all)
slog                            -
remark                           


Fb3                             sync=always                     sync=disabled                   

Fb4 singlestreamwrite.f         sync=always                     sync=disabled                 
                                60 ops                          2925 ops
                                12.000 ops/s                    407.078 ops/s
                                2225153us cpu/op                104189us cpu/op
                                82.9ms latency                  2.1ms latency
                                11.8 MB/s                       406.9 MB/s
________________________________________________________________________________________
                                randomread.f     randomrw.f     singlestreamr
pri/sec cache=all               32.6 MB/s        31.0 MB/s      1.0 GB/s                     
________________________________________________________________________________________





Thanks
bob
 

gea

Well-Known Member
There are three logs in napp-it to check.

Menu Jobs (destination system)
- Last Log
Click on the date of the last run.
This shows details of the last run.

- Job-Log
Click on "replicate" in the line of the job.
This shows an overview of the last executions.

Menu Jobs (source system)
- Remote Log
Click on Jobs > Remote Log.
Enter the job id in the filter field. This shows the remote/source actions.
 

Bob T Penguin

Member
Hello Gea,
I copied a file on the source machine to create an additional 5.5 GB of data and ran the replication job again.

Here are the logs (sorry, they're quite long)

ServerA is the source, ServerB is the destination

ServerB last log
Log: Last run of Job 1577473221 (newest first)
2020.07.09.18.20.51&my_log_last line 2179
<- &my_log line 1937
<- &my_end_all line 1906
<- &my_fast_end line 1015
<- &my_remote_delta_replication line 433
ok
2020.07.09.18.20.50&my_log_last line 1013
<- &my_remote_delta_replication line 433
<- &my_run line 163
ok incremental remote replication nr 199 finished (time: 97 s)
new snap tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_199, refer: 234G
2020.07.09.18.20.48&my_log_last line 2363
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
end my postreplication
2020.07.09.18.20.48&my_log_last line 2352
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
my postreplication, zfs set readonly=on tank/backup/musicjukebox
2020.07.09.18.20.48&my_log_last line 2310
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
zfs set nfs/smb shares=off tank/backup/musicjukebox
2020.07.09.18.20.47&my_log_last line 2293
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
zfs set readonly=off tank/backup/musicjukebox
2020.07.09.18.20.47&my_log_last line 2265
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
1522 update id.par: last=09.jul_18_20
2020.07.09.18.20.47&my_log_last line 2245
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
next_src_snap from host ServerA
tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 0 - 234G
2020.07.09.18.20.46&my_log_last line 2233
<- &my_postreplication line 972
<- &my_remote_delta_replication line 433
<- &my_run line 163
begin my_postreplication
2020.07.09.18.20.45&my_log_last line 965
<- &my_remote_delta_replication line 433
<- &my_run line 163
end receive 97 s
2020.07.09.18.20.45&my_log_last line 933
<- &my_remote_delta_replication line 433
<- &my_run line 163
receiver message
receiving incremental stream of tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 into
tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_199
2020.07.09.18.20.36&my_log_last line 2010
<- &my_end_proc line 326
<- &my_monitor line 1133
<- &my_remote_delta_replication line 433
<- &my_run line 163
kill proc 16761 ? S 00:07 /usr/sbin/zfs receive -Fv tank/backup/musicjukebox (16761)
2020.07.09.18.20.35&my_log_last line 1984
<- &my_end_proc line 326
<- &my_monitor line 1133
<- &my_remote_delta_replication line 433
<- &my_run line 163
kill proc 16759 ? S 00:00 sh -c /var/web-gui/data/tools/nc/nc -b 131072 -d -l -p 53740 | /usr/sbin/zfs re (16759)
2020.07.09.18.20.35&my_log_last line 325
<- &my_monitor line 1133
<- &my_remote_delta_replication line 433
<- &my_run line 163
error;, monitor info, replication terminated: local receive=1, remote send=0 - check zpool status
2020.07.09.18.19.13&my_log_last line 304
<- &my_monitor line 1133
<- &my_remote_delta_replication line 433
<- &my_run line 163
Monitor: Remote proc:
16915 sh -c /usr/sbin/zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_Serv
16916 /usr/sbin/zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr
2020.07.09.18.19.10&my_log_last line 1104
<- &my_remote_delta_replication line 433
<- &my_run line 163
source snap1,2: size=0,0
2020.07.09.18.19.09&my_log_last line 1088
<- &my_remote_delta_replication line 433
<- &my_run line 163
start sender
id=1577473221
src_interface=/usr/bin/nc -w 40 192.168.1.9 53740
src_interface2=/var/web-gui/data/tools/nc/nc -b 131072 -w 40 192.168.1.9 53740
snap_name1=tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_198
snap_name2=tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199
send_inc=-i
snap_rec=
send_rec=
send_dedup=
send_i=-i
transfer_zero=
send_option=
pv_limit=
2020.07.09.18.19.08&my_log_last line 916
<- &my_remote_delta_replication line 433
<- &my_run line 163
start receiver
/var/web-gui/data/tools/nc/nc -b 131072 -d -l -p 53740 | /usr/sbin/zfs receive -Fv tank/backup/musicjukebox 2>&1
2020.07.09.18.19.05&my_log_last line 733
<- &my_remote_delta_replication line 433
<- &my_run line 163
start remote incremental replication
2020.07.09.18.19.05main line 76start job-replicate with parameter:
id=1577473221, action=run, par='run_1577473221



ServerB Job Log
ok time: 2020.07.09.18.20.51 line: 2178 my_log end 1551: ok incremental remote replication nr 199 finished (time: 97 s)
new snap tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_199, refer: 234G
end receive time: 2020.07.09.18.20.45 line: 964 zfs receive 1577473221 running 97 s
receiving incremental stream of tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 into
tank/backup/musicjukebox@1577473221_repli_zfs_ServerB_nr_199
error, monitor info time: 2020.07.09.18.20.35 line: 324 replication terminated: local receive=1, remote send=0 - check zpool status
incremental send time: 2020.07.09.18.19.09 line: 1085 ServerA: zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_198 tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 | /var/web-gui/data/tools/nc/nc -b 131072 -w 40 192.168.1.9 53740
start receiver time: 2020.07.09.18.19.08 line: 913 /var/web-gui/data/tools/nc/nc -b 131072 -d -l -p 53740 | /usr/sbin/zfs receive -Fv tank/backup/musicjukebox 2>&1
request remote snap destroy time: 2020.07.09.18.19.06 line: 812 tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_196
next replication started time: 2020.07.09.18.19.05 line: 127


ServerA Remote Log


## 2020.07.09.18.19.58 repli-send.pl id 1577473221
remote send finished after 48 s
zfs send (tank/storage/musicjukebox): ok

5.03GiB [ 108MiB/s]


## 2020.07.09.18.19.10 repli-send.pl id 1577473221
NAME USED AVAIL REFER MOUNTPOINT
snap1: tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_198 0 - 229G -
snap2: tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 0 - 234G -


## 2020.07.09.18.19.10 repli-send.pl id 1577473221
/usr/sbin/zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_198 tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 | /usr/bin/pv -frb -i 10000 2> /tmp/53740.log | /var/web-gui/data/tools/nc/nc -w 90 -b 131072 192.168.1.9 53740 2>&1

par 1577473221: id=1577473221, send_dedup=, send_option=, pv_limit=, send_rec=, send_inc=-i, snap_name1=tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_198, snap_name2=tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199, src_interface=/usr/bin/nc -w 40 192.168.1.9 53740, send_ip=, port=, transport=, buffer=, send_i=-i,


## 18:19 10 grouplib 1871: call job-repli-send.pl for id 1577473221 on interface /usr/bin/nc -w 40 192.168.1.9 53740


## 18:19 09 grouplib 1805: zfs snapshot tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199


## 18:19 09 grouplib 1758: remote call zfs send 1577473221 initiated
_________________________________________________________


## 18:19 07 grouplib 2012: remote call destroy snap : zfs destroy tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_196

Edit: I pasted an incomplete ServerA remote log.

Many thanks
bob
 

gea

Well-Known Member
A napp-it replication job starts two processes. One is the replication itself via netcat; the other is a monitoring process that prevents netcat from running or waiting endlessly.

In your case the replication 198 -> 199 ends OK. The sending process reports that the send lasted 48 s, while the receive side reports 97 s for the part that includes receiving into a clone and then creating the updated filesystem from it. This includes a listing of snaps.

This is where the second process matters. The monitor assumes that 15 s after the send has finished, the replication should be fully completed, which is normally ample time. In your case the monitor detects that the send is finished but the receive is not; this is the reason for the monitor error.

So the question is: what can delay a receive so that it needs another 49 s to finish after the sender has sent the last blocks? There are mainly four possibilities:

- too little RAM
- dedup with not enough RAM
- many snaps or filesystems on a slow pool
- a pool performance problem, e.g. due to a bad disk; run a Pool > Benchmark to verify (a few quick command-line cross-checks are sketched below)
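A rough shell-level cross-check of these points, assuming the pool is named tank (plain illumos tools, not napp-it functions):

Code:
zfs get -r dedup tank                 # look for any value other than "off"
zfs list -t snapshot -r tank | wc -l  # count the snapshots on the pool
zpool iostat -v tank 5                # per-disk throughput while a replication runs
iostat -xn 5                          # one disk with high %b/asvc_t suggests a bad disk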

Aside from the error message from the monitoring process, does the next replication run execute 199 -> 200? It should on a current napp-it Free or Pro.
 

Bob T Penguin

Member
Hello Gea, thanks for the detailed explanation.

The next replication run executes OK, but the data has not changed, so only a few bytes are transferred.
Other, smaller replication jobs execute OK; it seems to be only the very large replication jobs that generate the error message.

The system has 32 GB of RAM.
It hosts two bhyve VMs, which consume 12 GB of RAM.
I'm not using dedup.
There are 256 snaps on the receiving machine.
I strongly suspect the pool is very slow. I posted a benchmark result in post 4 which doesn't look very good!

The disks are Hitachi Deskstar 5K3000, bought brand new about 8 years ago.
The SMART stats don't show any reallocated sectors, but do show a non-zero Raw_Read_Error_Rate.
The disks have been running 24/7 for the last 8 years, so they're probably worn out!
Here's the SMART output for one of the disks:

Code:
smartctl 7.1 2019-12-30 r5022 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Deskstar 5K3000
Device Model:     Hitachi HDS5C3030ALA630
Serial Number:    MJ1311YNG3XV0A
LU WWN Device Id: 5 000cca 228c1c95f
Firmware Version: MEAOA5C0
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    5700 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Fri Jul 10 14:35:09 2020 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84)    Offline data collection activity
                    was suspended by an interrupting command from host.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:         (37866) seconds.
Offline data collection
capabilities:              (0x5b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    No Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:      (   1) minutes.
Extended self-test routine
recommended polling time:      ( 631) minutes.
SCT capabilities:            (0x003d)    SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   099   099   016    Pre-fail  Always       -       131073
  2 Throughput_Performance  0x0005   134   134   054    Pre-fail  Offline      -       112
  3 Spin_Up_Time            0x0007   126   126   024    Pre-fail  Always       -       551 (Average 553)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       95
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   135   135   020    Pre-fail  Offline      -       31
  9 Power_On_Hours          0x0012   090   090   000    Old_age   Always       -       73169
10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       95
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1025
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       1025
194 Temperature_Celsius     0x0002   176   176   000    Old_age   Always       -       34 (Min/Max 16/46)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     34217         -

SMART Selective self-test log data structure revision number 1
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Thanks
Bob
 

gea

Well-Known Member
Random and sync values are low.
Have you forced sync on the destination pool?

Are the pools quite full?
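Both can be checked quickly from the shell; for example:

Code:
zfs get -r sync tank/backup   # "always" would mean sync writes are forced
zpool list tank               # the CAP column shows how full the pool is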
 

Bob T Penguin

Member
Hello Gea, thanks for your help.

Sync is set to standard on the destination pool.
The destination pool has 2.16T available and 754G used.

I think the pool is just very slow; it's using slow (5700 rpm) disks on a SATA II interface.

I've discovered that if I limit the netcat rate to 30 MB/s, the replication job completes OK.
I noticed that when replicating 5 GB of data, the send bandwidth maxes out the network connection at 108 MiB/s.
Limiting the bandwidth of the zfs send seems to let the slow receiving pool catch up.
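The remote log earlier shows napp-it already pipes the send through pv, so the job's rate limit presumably maps onto pv's -L flag. A hand-rolled equivalent would look roughly like this sketch:

Code:
# cap the incremental send at ~30 MB/s so the receiving pool can keep up
zfs send -i tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_198 \
            tank/storage/musicjukebox@1577473221_repli_zfs_ServerB_nr_199 \
  | pv -L 30m \
  | nc -w 40 192.168.1.9 53740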

The receiving machine is also hosting some bhyve VMs.
Should I try to make the storage pool faster?
I was thinking I could add an LSI/Broadcom 9300-8i PCIe HBA and attach new WD Gold SATA3 disks to it; I could also configure an SSD mirror as an SLOG, which should give me close to 600 MB/s write speed.

Many thanks for your help with this.
Regards
Bob
 

gea

Well-Known Member
ok

I expect the low random performance is then what triggers the monitor error.
If so, treat the monitor error as a warning or info only, not a real error, and just ignore it.

I hesitate to increase the default timeout of the monitor process, but I can give info on where to increase it.
 

Bob T Penguin

Member
Gea, I think the machine is having data pushed into it faster than it can write it to disk. My pool's write performance seems very slow compared to my network performance, so it takes a long time to write the data to disk.
I've slowed down the transfer speed of the replication jobs, and that seems to have solved the problem.
Thank you for your help, I very much appreciate it.
I'll look at ways to speed up the pool's write performance, but I'll need to buy an HBA and some SSDs to do that.
Many thanks
Bob
 

gea

Well-Known Member
Pool performance may be a relevant factor, and a rate reduction may be helpful. With sync=default on the parent destination filesystem, your replication is done async, and your async sequential pool performance is > 400 MB/s, which is 4x your network.

That aside, the send itself completes correctly. The problem occurs after the send, when the process needs to update the replication clone, snaps, and filesystem, which takes quite a long time.
 

Bob T Penguin

Member
Gea, please forgive me; I'm a little confused.

Here is the memory usage on the machine receiving the replication.
Code:
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1250041              4882   15%
Boot pages                    125                 0    0%
ZFS File Data             2665024             10410   32%
VMM Memory                3154944             12324   38%
Anon                       301362              1177    4%
Exec and libs                 541                 2    0%
Page cache                  21406                83    0%
Free (cachelist)            12012                46    0%
Free (freelist)            974061              3804   12%

Total                     8379516             32732
Physical                  8379515             32732
The server has a Xeon E3-1230 at 3.2 GHz.

If I push 5 GB of data into the machine and store it on a pool that has sync=default, that data is written to memory first and then flushed to the storage pool. However, the storage pool is taking a long time to complete all those actions.

Presumably it's the combination of a large amount of data being pushed into limited memory and a slow storage system that makes the whole replication receive process take so long.
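One way to watch this in real time (standard illumos tools; the NIC name igb0 is a guess for the Intel adapter) is to compare the network receive rate against the disk flush rate while a replication runs:

Code:
# disk side: per-vdev write throughput in 5 s intervals
zpool iostat -v tank 5
# network side: receive rate on the NIC in 5 s intervals
dladm show-link -s -i 5 igb0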

regards
Bob