Long-distance, high-bandwidth file transfer


danwood82

Member
Hi all,

Not sure of the best place to ask about this, but in the past I've found the STH community to be extremely knowledgeable and helpful.

I'm residing in Mexico for a while and have a server and rendering rack set up back in the UK. Back there I have a 200-down/20-up fibre connection (Virgin Media), and here in Mexico I've just had a 100/100 symmetric fibre package installed (Axtel, for half the damned price of my UK package!).

The Mexico line easily clocks 115+ Mbit upload in speed tests to local servers... but when I test uploading files to a basic FTP server I've set up back in the UK (FileZilla Server), I seem to cap out at more like 12-13 Mbit. (Running speed tests to certain London-based servers clocks in at maybe 25-30 Mbit, with roughly a 120-150 ms ping.)

I've tried fiddling with "Internal transfer buffer size" and "Socket buffer size" in the FTP config, which seems to make it burst to a slightly higher initial speed if I raise them... but no matter what I set, it always settles back down to 12-13 Mbit within a few seconds anyway.

I figure there may just be a hard limit on how much I can push across the Atlantic on ordinary residential broadband packages, but I've really no idea how internet routing voodoo works, so perhaps I'm missing some easy approach that would work better.

Should I be thinking about this in an entirely different way? Could it work better to set up a VPN and transfer files that way? Some kind of home-cloud service?

It seems conspicuous that I can move a file faster by uploading it to something like WeTransfer (which will easily max my bandwidth) and then immediately downloading it at the other end than I can by doing a direct transfer. Surely there's some way of wringing more out of this?
 

cesmith9999

Well-Known Member
You need to look at other file transfer protocols, ones that use UDP instead of TCP.

If local transfers are fine, then your issue is ack latency.

Chris
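To make the ack-latency point concrete, here's a toy Python sketch of the kind of thing UDP-based movers do: just stream datagrams without waiting a round trip for acknowledgements. It's deliberately unreliable, the hostname and pacing rate are placeholders, and real tools (UDT, Tsunami, Aspera's FASP) layer their own retransmission and rate control on top, so treat it purely as an illustration.

Code:
import socket
import sys
import time

DEST = ("uk-server.example.net", 9000)   # placeholder receiver address
CHUNK = 1400                             # stay under a typical MTU
RATE_BYTES_PER_S = 10 * 1024 * 1024      # crude pacing, roughly 80 Mbit/s

def blast(path):
    """Stream a file as UDP datagrams without waiting for acknowledgements."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            sock.sendto(chunk, DEST)
            sent += len(chunk)
            # pace the sender so it doesn't simply flood the uplink
            expected = sent / RATE_BYTES_PER_S
            elapsed = time.time() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
    print(f"pushed {sent} bytes in {time.time() - start:.1f}s (no delivery guarantee)")

if __name__ == "__main__":
    blast(sys.argv[1])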
 

j_h_o

Active Member
California, US
I'd fire up VPS boxes at various places in the US and try SSH port forwarding (or, sure, a VPN) through those servers to see if that improves performance. Individual TCP connections are probably QoSed and shaped as you transit the various networks.

Or, sure, try different transfer mechanisms: BitTorrent, for example.
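If anyone wants to try the relay idea, here's a minimal Python sketch that just drives OpenSSH to set up a local port forward through a VPS; the hostnames and ports are placeholders. Note that plain FTP tunnels awkwardly because of its separate data connections, so pointing an SFTP or rsync-over-ssh client at the forwarded port is usually simpler.

Code:
import subprocess

VPS = "user@us-vps.example.com"      # placeholder relay box somewhere in the US
UK_SERVER = "uk-home.example.net"    # placeholder for the server back in the UK
LOCAL_PORT, REMOTE_PORT = 2222, 22   # forward a local port to SSH/SFTP on the UK box

# ssh -N -L <local>:<uk-host>:<uk-port> <vps> holds the tunnel open without running a shell
tunnel = subprocess.Popen([
    "ssh", "-N",
    "-L", f"{LOCAL_PORT}:{UK_SERVER}:{REMOTE_PORT}",
    VPS,
])

print(f"Tunnel up: point your SFTP/rsync client at 127.0.0.1:{LOCAL_PORT}")
try:
    tunnel.wait()
except KeyboardInterrupt:
    tunnel.terminate()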
 

Blinky 42

Active Member
PA, USA
I would trace out your path from Mexico to the UK and then run some tests to see where the bandwidth starts dropping.
Pricing for large chunks of bandwidth can be stupidly expensive there for a variety of reasons when you're looking for colo. While you have great bandwidth within Axtel's network, who they peer with and how much effective bandwidth you can get upstream from them is probably your limiting factor.

If you do find destinations that support high bandwidth, you could always set up a VPN there and hop through it to get back to the UK.
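A rough way to act on that from Python, assuming traceroute and ping are installed; the candidate hosts are placeholders you'd swap for hops or VPS locations you want to compare:

Code:
import subprocess

UK_HOST = "uk-home.example.net"            # placeholder UK endpoint
CANDIDATES = [                             # placeholder midpoints to compare
    "us-east-vps.example.com",
    "us-west-vps.example.com",
    UK_HOST,
]

# First, see what the path actually looks like.
print(subprocess.run(["traceroute", UK_HOST],
                     capture_output=True, text=True).stdout)

# Then compare loss and RTT to each candidate; the summary lines are what matter.
for host in CANDIDATES:
    out = subprocess.run(["ping", "-c", "20", host],
                         capture_output=True, text=True).stdout
    print(host)
    print("\n".join(out.strip().splitlines()[-2:]))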
 

Evan

Well-Known Member
Off topic as a home solution for sure, but the absolute best file transfer tool I have ever used is Aspera!

https://asperasoft.com/

The technology that Azure uses for bulk upload is very cool as well and performs very, very well.
 

mstone

Active Member
danwood82 said:
"The Mexico line easily clocks 115+ Mbit upload in speed tests to local servers... but when I test uploading files to a basic FTP server I've set up back in the UK (FileZilla Server), I seem to cap out at more like 12-13 Mbit. (Running speed tests to certain London-based servers clocks in at maybe 25-30 Mbit, with roughly a 120-150 ms ping.)"
Classic "long fat network". Bandwidth-delay product - Wikipedia

danwood82 said:
"I've tried fiddling with "Internal transfer buffer size" and "Socket buffer size" in the FTP config, which seems to make it burst to a slightly higher initial speed if I raise them... but no matter what I set, it always settles back down to 12-13 Mbit within a few seconds anyway."
TCP windows need to be configured on *both* ends of the connection. Since bufferbloat has become a buzzword, some people have just shrunk all the buffers as a simple "fix", which makes it impossible to actually fill a long fat connection. (On the other hand, a symptom of bufferbloat is a connection that runs fast until it fills a buffer, experiences packet loss, and then stalls as it works through the buffered traffic before it can resend the lost packet(s).)

Assuming that basic tuning is done properly, the next thing to look at is the quality of the network. If there's packet loss, that's going to affect the transfer rate, perhaps significantly. If you do have packet loss, the TCP congestion control algorithm plays a large part in determining how quickly a stream slows down in response and how quickly it ramps back up.

Assuming a reasonably good/modern congestion control algorithm is in use, the next thing to look at is what else is in play between the network stacks on both ends. TCP offload can screw things up by imposing another buffer that the congestion control algorithm can't control. Firewalls can screw things up in all sorts of ways.

Actually fixing this isn't always easy and may require looking at packet traces, for example. You can get a start by looking at tuning guides specifically for long fat networks. Try to find examples that are relatively recent, as things have changed and a 20-year-old guide might not point you in the right direction. They're also extremely OS-specific. For example, here's a Linux guide: About Long Fat Networks and TCP tuning
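As a small illustration of the "both ends" point, here's how to request bigger socket buffers from Python and see what the kernel actually grants; it's only a sketch, and the sysctl names in the comments are the usual Linux ones.

Code:
import socket

WANT = 4 * 1024 * 1024   # ~4 MiB, comfortably above the ~2 MiB BDP worked out above

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, WANT)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WANT)

# Linux reports roughly double the requested value and clamps at
# net.core.wmem_max / net.core.rmem_max; if these come back small, raise those
# sysctls (and net.ipv4.tcp_wmem / tcp_rmem, which govern autotuning), then
# repeat the same check on the UK end.
print("send buffer granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))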

cesmith9999 said:
"You need to look at other file transfer protocols, ones that use UDP instead of TCP."
That's ridiculously bad advice.