How to achieve SMOKIN' ZFS send/recv transfers


whitey

Moderator
Jun 30, 2014
Evening all, thought I'd decompress here and show some work I've been doing this evening to migrate my VM pool of data (roughly 600GB) from FreeNAS 9.10 to FreeNAS Corral 'the old fashioned way'. I am not quite ready to upgrade my FreeNAS 9.10 box to Corral, but I have a Corral AIO that I wanted to replicate my VM pool data over to for protection. Thought I'd set up a FreeNAS peer, but that only seems to work Corral to Corral... even tried an SSH host peer, no such luck. Didn't want to dig into it since this is a 'one time backup/protection scheme', so this is what I resorted to.

To maintain peak performance, make sure your pool of storage is up to the task (SAS3 HGST SSDs in this case), your network is 10G, your FreeNAS hosts are connected at 10G (vmxnet3 in my case between my AIOs), and you have jumbo frames end-to-end. A quick way to check the jumbo part is below.
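A sanity check for the jumbo-frame piece before you kick off a big transfer (the interface name and address here are just examples, adjust for your boxes):

Code:
# Confirm the MTU on the interface carrying the replication traffic
ifconfig vmx0 | grep mtu

# Send a full-size frame with the don't-fragment bit set; if jumbo frames
# aren't clean end-to-end, this ping will fail (8972 + 20 IP + 8 ICMP = 9000)
ping -D -s 8972 192.168.x.x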

Could have used this 'one-liner' type of syntax, but SSH definitely adds overhead; you're never gonna reach maximum throughput that way unless maybe you go the HPN-SSH route.

zfs send husmm-r0/nfs@auto-20170327.1913-2w | ssh 192.168.x.x zfs recv sas3/nfs-dr
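If you'd rather keep the transfer encrypted, one partial workaround (a sketch, not something I benchmarked here) is to pick a cheaper cipher and make sure compression is off; whether the cipher below is available depends on the OpenSSH build on both ends:

Code:
zfs send husmm-r0/nfs@auto-20170327.1913-2w | \
  ssh -c aes128-gcm@openssh.com -o Compression=no 192.168.x.x \
  "zfs recv sas3/nfs-dr"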

Ended up settling on good ole netcat (DO NOT use this on an unsecured network, ONLY use it on a trusted network):

SRC - (run first) [root@freenas-esxi6a] ~# zfs send husmm-r0/nfs@auto-20170327.1913-2w | nc -l 3333

DEST - (run second) [root@freenas] ~# nc 192.168.x.x 3333 | zfs recv sas3/nfs-dr
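If you want to keep an eye on progress while it runs, you can do it on the sending side; pv isn't on stock FreeNAS by default, but zfs send -v should print per-second stats to stderr:

Code:
# Option 1: let zfs send report its own progress
zfs send -v husmm-r0/nfs@auto-20170327.1913-2w | nc -l 3333

# Option 2: if pv is installed, it shows live throughput and totals
zfs send husmm-r0/nfs@auto-20170327.1913-2w | pv | nc -l 3333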

The results:

SRC FreeNAS 9.10 system
[screenshot: transfer throughput on the source]

DEST FreeNAS Corral system
[screenshot: transfer throughput on the destination]

Hope this helps someone out there who may want to move data from FreeNAS 9.x to Corral. It's a tried-and-true method between any ZFS distros, really.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
Evening all, thought I'd decompress here and show some work I've been doing this evening to migrate my VM pool of data (roughly 600GB) from FreeNAS 9.10 to FreeNAS Corral 'the old fashioned way'.
[...]
Ended up settling on good ole netcat (DO NOT use this on an unsecured network, ONLY use it on a trusted network):

SRC - (run first) [root@freenas-esxi6a] ~# zfs send husmm-r0/nfs@auto-20170327.1913-2w | nc -l 3333

DEST - (run second) [root@freenas] ~# nc 192.168.x.x 3333 | zfs recv sas3/nfs-dr
I use zrep + bbcp (watch out for the newer versions of the bbcp port - some patches to add IPv6 support broke things badly).

17TB in around 7.5 hours:



Lots more info here, or start at the top. Note that this is bare FreeBSD CLI, not FreeNAS.
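For anyone who hasn't driven bbcp for this, the general shape (from memory, not my exact command, so check the bbcp docs) is to let bbcp run the send and recv as programs on each end so the bulk data never goes through ssh:

Code:
# "-N io" treats source and target as programs; stream count and block
# size here are arbitrary examples, tune them for your network
bbcp -P 2 -s 8 -N io \
  "zfs send husmm-r0/nfs@auto-20170327.1913-2w" \
  "192.168.x.x:zfs recv sas3/nfs-dr"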
 

K D

Well-Known Member
Dec 24, 2016
I was getting only ~550GB per hour transfers using rsync and zfs send/recv.

Interesting - I've used mbuffer in the past with very good performance. I got the idea from this site:

Using mbuffer to speed up slow zfs send | zfs receive - EveryCity
Used this method to copy a 4.2TB dataset between 2 systems connected via 10GbE. Took around 6 hours. Both are stock FreeNAS AIOs. I ended up with a dataset on the destination system that was unusable: in the storage view of FreeNAS it threw an error message when trying to edit the dataset, and when I opened the parent dataset's share from Windows, it showed a 0 KB file with the dataset name.

Not sure where I messed up.
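For comparison, the mbuffer recipe from that article generally looks like the below; the port, block size, and buffer size are arbitrary examples:

Code:
# DEST - start the listener first
mbuffer -s 128k -m 1G -I 3333 | zfs recv sas3/nfs-dr

# SRC - then aim the send at it
zfs send husmm-r0/nfs@auto-20170327.1913-2w | mbuffer -s 128k -m 1G -O 192.168.x.x:3333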

SRC - (run first) [root@freenas-esxi6a] ~# zfs send husmm-r0/nfs@auto-20170327.1913-2w | nc -l 3333

DEST - (run second) [root@freenas] ~# nc 192.168.x.x 3333 | zfs recv sas3/nfs-dr
Running this right now and I'm seeing ~1TB per hour copied.
 

K D

Well-Known Member
Dec 24, 2016
[screenshot: transfer speed graph]

This is my transfer speed.

Source: 2 vCPU, 16 GB RAM, 2x (4x 4TB RAIDZ1) vdevs, Intel S3700 400GB SLOG
Dest: 2 vCPU, 8 GB RAM, 8x 8TB RAIDZ2

Both are FreeNAS AIOs running on ESXi; the hosts are connected via Mellanox ConnectX-3 NICs through a Ubiquiti US-16-XG switch.

I'll eventually double the allotted RAM on both the source and dest VMs. What other parameters should I be looking at to increase the transfer speed?
 

whitey

Moderator
Jun 30, 2014
What type of disks are you using for capacity? I see you have an S3700 for SLOG/ZIL; if you are using spinners, this 'may' be all she's got, Scotty :-D

Assuming that, using my method, you did end up w/ a usable ZFS dataset once the data was sent/received, whereas mbuffer gave you issues.
 

K D

Well-Known Member
Dec 24, 2016
Yup. You are right on both counts. 7200 RPM HGSTs in the source and 8TB Reds in the dest systems. Your method did work for me while mbuffer gave issues. The first transfer of 4.2TB finished successfully in about 3:50 hrs, and a second 15.4TB transfer just finished, taking about 17 hrs using your method. Everything is working as expected.

Have another ~200TB total of transfers to do as I shuffle data around, and I was wondering if there was some way to speed things up, even if it means changing something temporarily.
 

whitey

Moderator
Jun 30, 2014
Not that I know of; that's a whole lotta data to be playing the shuffle game w/ :-D

40GbE and all-flash pools maybe hah $$$

EDIT: 200TB, good hell, let's hear about that/those system(s).
 

K D

Well-Known Member
Dec 24, 2016
LOL. The total data set is around 45TB (media, files, backups, etc.). I'm in the process of moving away from hardware RAID to ZFS for storage, so I'm moving data to another box, upgrading hardware and drives, and moving it back.

It doesn't help that the moment I think one box is done, I think of something else to change and have to repeat the whole darn thing again.
 

whitey

Moderator
Jun 30, 2014
No incrementals or a staging/changed-data directory/folder/dataset? That's how I've managed the insanity in the past... either that or incremental snapshot send/recv (rough sketch below), but coming from HW RAID (EWWWW... no, eff that, double EWWWWWWWWWW) YMMV :-D
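For anyone following along who hasn't done incremental send/recv: after the first full copy, only the delta between two snapshots has to cross the wire. A minimal sketch using the same netcat transport (snapshot names are made up):

Code:
# SRC / DEST - initial full copy
zfs send husmm-r0/nfs@snap1 | nc -l 3333
nc 192.168.x.x 3333 | zfs recv sas3/nfs-dr

# SRC / DEST - later runs only ship the changes between snap1 and snap2
zfs send -i husmm-r0/nfs@snap1 husmm-r0/nfs@snap2 | nc -l 3333
nc 192.168.x.x 3333 | zfs recv sas3/nfs-dr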

GL w/ data move, now repeat after me..."All your data are belong to ZFS" HAH!
 

K D

Well-Known Member
Dec 24, 2016
Unfortunately, in all my wisdom (or lack of it), I decided to upgrade all my servers and disks at the same time (both home and lab) as well as upgrade to 10GbE. So right now everything is a disorganized mess.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
What type of disks are you using for capacity? I see you have an S3700 for SLOG/ZIL; if you are using spinners, this 'may' be all she's got, Scotty :-D
If the eventual target is traditional hard drives which have no data (other than the initialized filesystem) on them and you are being limited by disk throughput, you should see a gradual slowdown in transfer performance as the disks fill up, due to the inner sectors being slower than the outer ones. For example (copied from my earlier post up-thread):



At the size of transfers we're talking about here, your ZIL / SLOG device is going to fill up and then start flushing to the hard drives. At best, it will compensate for bursty traffic and feed the disks at their maximum sustained I/O rate; at worst, it will hurt performance as it runs out of erased sectors and has to start erasing in order to store data. You may benefit from temporarily removing the log device and possibly setting sync=disabled on the pool, assuming the pool is empty when you start. Don't forget to re-add the log device and reset sync to default afterwards (normally with "zfs inherit", since explicitly setting it to the default value still counts as a local property change).
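The commands for that dance are roughly as follows; the pool and device names are placeholders, and double-check which vdev is actually your log device before removing anything:

Code:
# Before the bulk transfer (on the destination pool):
zpool remove sas3 gpt/slog0     # detach the SLOG (example device name)
zfs set sync=disabled sas3      # accept async writes for the duration

# After the transfer completes:
zfs inherit sync sas3           # back to inherited/default behaviour
zpool add sas3 log gpt/slog0    # re-add the log device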

The first thing you should do is see how fast your systems can actually talk to each other. In my case, since I use bbcp:
Code:
(0:1) srchost:~terry# bbcp -P 2 -s 8 /dev/zero desthost:/dev/null
bbcp: Creating /dev/null/zero
bbcp: 160620 06:06:45  0% done; 1.2 GB/s
bbcp: 160620 06:06:47  0% done; 1.2 GB/s
bbcp: 160620 06:06:49  0% done; 1.2 GB/s
bbcp: 160620 06:06:51  0% done; 1.2 GB/s
bbcp: 160620 06:06:53  0% done; 1.2 GB/s
^C
If you aren't getting wire-speed there, you certainly won't get it once you involve disk I/O on both ends. Look at my earlier post for links where I go into this in a lot more detail.
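If bbcp isn't installed on both ends, iperf (which I believe ships with FreeNAS) answers the same raw-network question:

Code:
# On one box
iperf -s

# On the other (10-second test, report in Gbit/s)
iperf -c 192.168.x.x -t 10 -f g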
 

K D

Well-Known Member
Dec 24, 2016
Thanks @whitey and @Terry Kennedy.

As a quick and dirty solution, I just connected the new disks to the same server, created the pool, and used ZFS send/receive locally. Copied 15TB in about 8 hrs. I think I'll just do this, then transplant the disks to the target machines and import the pools there.
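In case it helps anyone doing the same, that local-copy-then-transplant flow is roughly the following; pool and dataset names here are made up:

Code:
# Local copy on the same box - no network in the path at all
zfs snapshot -r oldpool/data@migrate
zfs send -R oldpool/data@migrate | zfs recv -F newpool/data

# Then move the disks and bring the pool up on the target machine
zpool export newpool      # on this box, before pulling the disks
zpool import newpool      # on the target box, after installing them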
 

whitey

Moderator
Jun 30, 2014
Umm yes...lol unless living under a rock :-D

Edit: Point is this method works on all ZFS systems I have ever tested.
 

CJRoss

Member
May 31, 2017
Umm yes...lol unless living under a rock :-D

Edit: Point is this method works on all ZFS systems I have ever tested.
Surprisingly, a lot of people missed the announcement. I was just making sure since you mentioned it in your post.
 

K D

Well-Known Member
Dec 24, 2016
Resurrecting an old thread.

Trying to migrate some data and encountering very slow speeds with zfs send/recv.
[screenshot: zfs send/recv transfer speed]

iperf shows almost wire speed. Using an intermediate Windows box to just "copy and paste" gives around 260 MB/s. Any thoughts?


[screenshot: iperf results]
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
resurrecting an old thread

Trying to migrate some data and encountering very slow speeds in zfs send recv.
[screenshot: zfs send/recv transfer speed]
FreeBSD / FreeNAS or some other ZFS implementation?
How are you getting the data from A to B? SSH, something else? What does CPU usage look like on both boxes? Any chance you have synchronous writes forced (sync=always) on the destination pool?

Traffic from ZFS send/recv tends to be bursty, so a transfer method with lots of buffering is good. I use bbcp w/ zrep.

Remember, performance is limited by the sustained random write IOPS of your physical drives - even if you have a fast ZIL, it is going to fill up and you'll be limited by the speed it can flush to disk.
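A couple of quick checks along those lines (dataset names are taken from earlier in the thread, adjust for your pools): see whether sync is being forced on the destination, and time a send into nowhere on the source to see if the read side is the bottleneck.

Code:
# Is the destination dataset forcing synchronous writes?
zfs get sync sas3/nfs-dr

# How fast can the source generate the stream with no network and no recv?
# FreeBSD dd prints bytes/sec when the pipe closes
zfs send husmm-r0/nfs@auto-20170327.1913-2w | dd of=/dev/null bs=1m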