DOCKER Swarm Advice


tssrshot

Member
Mar 18, 2015
Omaha, NE
Good Morning, Afternoon, or Evening...

First, Disclaimer: This is not a question for anyone to help me with my homework, work project for profit, or anything like that...

I've got some equipment in several geographically separated locations around the world. A Supermicro SuperServer 1018D-FRN8T is physically located in each rack, at each locale. My users travel between these sites with mobile devices that are set up to sync (download only) from a WebDAV server holding a bunch of documents they reference while working in that locale.

Our bosses decided to create a master directory of files at the main location in the US, and we now push them out to the forward locations via IPsec tunnels and rsync. So far that part works well, but I want to know whether I can use Docker and/or Swarm to do this better. The WebDAV server is currently HTTP only and we want to move to HTTPS, and it's only running a single instance of the Docker WebDAV server.
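
One way to start would be declaring that single WebDAV container as a Swarm service, so it can be scaled and auto-restarted instead of running as a lone container. This is only a sketch: the image name, volume paths, and port numbers below are assumptions to substitute with your actual setup:

```yaml
# docker-stack.yml -- deploy with: docker stack deploy -c docker-stack.yml docs
version: "3.8"
services:
  webdav:
    image: bytemark/webdav          # assumed image; swap in whatever WebDAV image you use today
    volumes:
      - /srv/docs:/var/lib/dav/data:ro   # the rsync target directory, read-only to clients
    deploy:
      replicas: 2                   # start small; scale later with: docker service scale docs_webdav=4
    ports:
      - "8080:80"                   # plain HTTP inside the site; TLS terminated by NGINX in front
```

Because Swarm publishes the port on the ingress mesh, any node in the site's swarm will answer on 8080 and route to a healthy replica.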

I also want to see whether I can build in some redundancy, perhaps via a DNS name, so that if the Docker host in a user's region is unavailable, clients can fail over to the next server through the tunnel (albeit at a much slower rate).

Where do I start with building HTTPS -> NGINX -> Swarm to provide more throughput and processing to users, closest to where they are? Some of the locations now have large wireless networks and plenty of bandwidth (and the Supermicro has a 10G link to the primary switch), so there should be performance to squeeze out of it.

My gut says I need an NGINX load balancer to terminate the HTTPS portion and direct traffic accordingly.
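
For reference, the TLS-termination piece of that gut feeling could look something like the server block below. This is a sketch, not a working config: the hostname and certificate paths are placeholders, and it assumes the WebDAV service is published on the Swarm ingress port 8080 on the same host:

```nginx
# NGINX front end: terminate TLS, proxy plain HTTP to the Swarm-published WebDAV port.
server {
    listen 443 ssl;
    server_name docs.example.internal;        # placeholder name

    ssl_certificate     /etc/nginx/certs/docs.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/docs.key;

    location / {
        proxy_pass http://127.0.0.1:8080;     # Swarm ingress port for the WebDAV service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```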

How do I size the correct number of replicas? The 1018D-FRN8T has 16 cores and 32 hardware threads, and they are literally sitting there doing nothing.
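
There is no magic number for replicas; for an I/O-bound service like WebDAV, a common starting point is to budget a few hardware threads per replica and then adjust by load-testing. A rough sketch of that heuristic (the threads-per-replica figure is an assumption to tune, not a rule):

```shell
#!/bin/sh
# Rough replica-count starting point: divide hardware threads by the
# number of threads you want to budget per WebDAV replica.
THREADS=32              # 1018D-FRN8T: 16 cores / 32 threads
THREADS_PER_REPLICA=4   # assumed budget; tune by load-testing
REPLICAS=$(( THREADS / THREADS_PER_REPLICA ))
echo "start with $REPLICAS replicas"
# Then apply it, e.g.: docker service scale <stack>_webdav=$REPLICAS
```

Watch per-replica CPU and connection counts under real sync traffic and scale up or down from there.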

Thank You,

Bryan
 

Discountsocks

New Member
Aug 16, 2018
I also want to see whether I can build in some redundancy, perhaps via a DNS name, so that if the Docker host in a user's region is unavailable, clients can fail over to the next server through the tunnel (albeit at a much slower rate).

Where do I start with building HTTPS -> NGINX -> Swarm to provide more throughput and processing to users, closest to where they are? Some of the locations now have large wireless networks and plenty of bandwidth (and the Supermicro has a 10G link to the primary switch), so there should be performance to squeeze out of it.

My gut says I need an NGINX load balancer to terminate the HTTPS portion and direct traffic accordingly.

Are you sure you are barking up the right tree on this one? I would look at using Anycast instead of DNS. Have NGINX at each site on the same IP, advertised into the network with a dynamic routing protocol. Users will reach the closest NGINX depending on where they are connected, and that NGINX can then spray requests across many local IPs if that is your design.
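
The "spray across many local IPs" part maps naturally onto an NGINX upstream block. The addresses below are placeholders for whatever backends the local Swarm nodes publish; the `backup` server illustrates the slower fail-over path through the tunnel:

```nginx
# Each site's NGINX listens on the shared anycast address and
# balances across local backends (placeholder IPs).
upstream webdav_local {
    least_conn;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.2.11:8080 backup;   # remote site over the IPsec tunnel, used only if locals are down
}
```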
 