Alternative to rsync for increased NFS/CIFS throughput?


Craash

Active Member
Apr 7, 2017
This is going to be convoluted.

The short story is that I have a physical FreeNAS server with about 13 TB of data on it (5GB to 15GB chunks). I want to back that data up nightly to an Ubuntu server running as a VM on ESXi 6.5. When I do this with rsync, I see about 200 Mb/s (yes, Mb, NOT MB). When I use cp over a mounted NFS or CIFS share on the same directory and data, I'll see 3-4 Gb/s. HUGE difference. I have a Windows box set up that I can duplicate this issue on too. I AM using both the -z and the -W flags in rsync; -W sped it up a little, but still nowhere close to NFS/CIFS.

Now, the details:
  • Network
    • All the affected machines use ConnectX-3 cards tied to a TP-Link T1700G-28TQ with DACs under 3 meters.
    • iperf3 shows ~9.8 Gb/s between all machines, both ways, with a single thread.
    • Jumbo Frames, 9000 MTU on all machines, the ESXi host, and the switch.
  • Machines
    • FreeNAS is an Intel i7-3770 @ 3.4GHz, 32GB DDR3, an HP 9207 HBA, and 8x Western Digital WD4000FDYZ 4TB 64MB 7200RPM SATA 6.0Gb/s drives in a single ZFS pool.
    • The ESXi host is two Intel Xeon E5-2670s, 256 GB DDR3, and 3x 8TB WD Reds on an Adaptec 71605Q SAS controller in RAID 5 (I know, I know). The VM has 8 vCPUs, 64GB of RAM, a 64GB primary drive, and a dedicated storage array for the backup data (EXT4).
    • Windows 10 is an i7-7800X with 64GB DDR4, a pair of EVO 960 512GB drives (RAID 0), and a Lenovo 46C9110 IBM ServeRAID M5210.
  • What I need: a method to copy updated and new data from the FreeNAS to the backup pool on the Ubuntu VM that takes advantage of the network speed and hardware. I'd like to think I can maintain 2-3 Gb/s. I need it to remove files on the VM that have been removed on the FreeNAS - this is the hangup that keeps cp from being a valid solution. I just need to embed this in my existing bash script to run on a nightly basis. If the method returned an exit code that I could log, that would be a plus. (A sketch of the nightly job is below.)
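For reference, the nightly job currently looks roughly like this. The rsync line is the actual one from my script; the wrapper around it is just a sketch of the exit-code handling I want to keep, and $strLogFile is set earlier in the real script.

#!/bin/bash
# Nightly backup sketch. $strLogFile is defined earlier in the real script.
/usr/bin/rsync -rltDzvW --delete --stats \
    /mnt/freenas/backup/freenas/ /mnt/x299prime/Backups/FreeNAS/ \
    --log-file="$strLogFile"
rc=$?

# Whatever replaces rsync needs to hand back something like this to log.
if [ "$rc" -ne 0 ]; then
    echo "FreeNAS sync failed with exit code $rc" >> "$strLogFile"
fi
exit "$rc"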

Help?
 

Craash

Active Member
Apr 7, 2017
Not easily. I use that same backup array for other machines' backups, so it would require some major modifications. I had originally considered going that way, but the 24-port Adaptec 5-series card I started with didn't offer true JBOD. My current one does, but that ship has kinda already sailed.
 

dswartz

Active Member
Jul 14, 2011
You didn't mention how rsync is connecting to the other host. ssh? If so, it's likely the encryption that's killing you.
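If it is ssh, one quick way to test that theory is to force a cheaper cipher and compare, e.g. something like the line below (the cipher name, paths, and host are just examples; what's available depends on the OpenSSH builds on both ends):

rsync -rltDvW -e "ssh -c aes128-gcm@openssh.com" /source/path/ user@backuphost:/dest/path/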
 

Craash

Active Member
Apr 7, 2017
I'm treating it like a local sync.

Actual line in my script: /usr/bin/rsync -rltDzvW --delete --stats /mnt/freenas/backup/freenas/ /mnt/x299prime/Backups/FreeNAS/ --log-file=$strLogFile


/mnt/x299prime/Backups/FreeNAS/ is mounted with NFS in fstab.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
+1 on @nitrobass24's suggestion to remove compression. Remove any encryption too, e.g. if you're doing it over ssh (which I know you're not) ... all of those things slow the transfer down a lot.

If that alone doesn't speed things up for you, then instead of writing over NFS, go straight rsync: set up the rsync daemon on the destination and do the transfer via the rsync protocol (e.g., rsync://server/path/to/destination).

That should be a lot faster.
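Rough sketch of what that could look like; the module name, path, and hostname below are just placeholders for your layout:

# /etc/rsyncd.conf on the destination (the Ubuntu VM)
uid = root
gid = root
use chroot = yes

[freenas_backup]
    path = /mnt/storage/Backups/FreeNAS
    read only = no
    comment = nightly FreeNAS backup target

Start the daemon with "rsync --daemon" (or via the distro's rsync service), then run the copy from the FreeNAS side so neither end goes through an NFS mount:

rsync -rltDvW --delete --stats /mnt/tank/backup/freenas/ rsync://backup-vm/freenas_backup/

(/mnt/tank/backup/freenas/ and backup-vm are placeholders for your pool path and the VM's hostname.)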
 

Craash

Active Member
Apr 7, 2017
@nitrobass24 and @BLinux, you guys nailed it. Removing -z (compression) did the trick. I tested a few different scenarios and the results are below; rsync -rltDvW --delete seems to give the most consistent throughput. The updated script line is at the end of the notes. Thanks guys, now I have a ton of scripts to modify.


Test 1: cp -vur /mnt/freenas/media/ /mnt/storage/media/ (7:22 to 7:26 and 7:34 to 7:38)
Notes: This is the only test that made use of the SSD caching on the array. That accounts for the spike up to 4 Gb/s.

Test 2: rsync -rltDzvW --delete --stats /mnt/freenas/media/ /mnt/storage/media/ (7:40 to 7:45)
Notes: This is what I had been using. Pathetic speeds.

Test 3: rsync -vrltDW --delete /mnt/freenas/media/ /mnt/storage/media/ (7:40 to 7:45)
Notes: Compression (-z) removed. Yeoooow!

Test 4: rsync -rltDv --delete /mnt/freenas/media/ /mnt/storage/media/ (8:00 to 8:08)
Notes: Whole-file (-W) removed. From the man page: "With this option the incremental rsync algorithm is not used and the whole file is sent as-is instead. The transfer can be faster if this option is used when the bandwidth between the source and target machines is higher than the bandwidth to disk."

Test 5: iperf3 -c freenas -t 60 (8:11 to 8:12)
Notes: For reference. The -t 60 flag tells iperf3 to transmit for 60 seconds; I did this to ensure a good reading on the graph.

Overall Testing Notes:
1. The machine running the rsync is an Ubuntu 16.04.x LTS server VM on ESXi 6.5.
2. The other end of the rsync (the source, mounted via NFS) is a physical FreeNAS 11.x machine.
3. The graph was generated from ESXi's performance monitor with ONLY the VM that was performing the rsync selected.
4. rsync flags:
# -r recursive
# -l symlinks
# -t times
# -D Devices
# -v Verbose
# -W Whole File
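The updated line in my script should just be the old one minus -z, i.e. something like:

/usr/bin/rsync -rltDvW --delete --stats /mnt/freenas/backup/freenas/ /mnt/x299prime/Backups/FreeNAS/ --log-file=$strLogFile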

 

Craash

Active Member
Apr 7, 2017
Job just finished. 9.4TB in ~10 hours. Thanks again.

Backup Process Took: 9 hours 54 minutes 50 seconds.

6 jobs finished successfully and 0 failed.
Scripts Backup was successful.
pfSense Backup was successful. (Filesize:4206 KB.)
Plex Backup was successful.
FreeNAS Backup was successful.
Flat Backup was successful.
Media Backup was successful.


/ 19G used, 1.6G available (93% in use).
/home 48M used, 38G available (1% in use).
/mnt/freenas/media 9.4T used, 14T available (42% in use).
/mnt/x299prime 381G used, 3.3T available (11% in use).
/mnt/storage 9.4T used, 37G available (100% in use).