Docker multi-host setup


zecas

Member
Dec 6, 2019
Hi,

I'm planning on setting up some Docker containers on my home and small business network, and since I'm not an expert, I'm looking for some opinions on the way I'm thinking about doing it.

Basically, I would prefer not to have one monster VM running all the Docker containers. I'm interested in segregating roles and having a separate host for each role.

These hosts will be set up as Proxmox VMs; for example, I'll have the following hosts/containers:

linux01 (frontends):
- nginx-proxy-manager
- portainer

linux02 (services):
- owncloud
- dyndns-update
- uptime-kuma

linux03 (dev-tools):
- git-server
- jenkins

linux04 (self-development):
- web-app1 (network-a)
- mysql (network-a)
- web-app2 (network-b)
- mysql (network-b)


So for example the nginx-proxy-manager will be the reverse proxy, providing SSL certificates, and will proxy the requests for:
- owncloud @linux02:80
- web-app1 @linux04:8081
- web-app2 @linux04:8082
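
As a rough illustration only (image, ports and volume names below are the upstream defaults/placeholders, nothing specific to this setup), nginx-proxy-manager on linux01 could be started along these lines, with the actual proxy hosts for owncloud, web-app1 and web-app2 then defined in its admin UI:

# admin UI on 81, proxied HTTP/HTTPS traffic on 80/443
docker run -d --name nginx-proxy-manager \
  -p 80:80 -p 81:81 -p 443:443 \
  -v npm-data:/data \
  -v npm-letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest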

Since I'm no expert on Docker containers, and to keep some level of security, I would configure the firewalls on hosts linux02 and linux04 to only accept incoming connections from the linux01 host, and only on the required ports. That way, web-app1, for example, could only be reached from linux01, the expectation being that the traffic comes from the nginx-proxy-manager service.
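
A minimal sketch of that host-level restriction, assuming ufw on linux02/linux04 and a made-up address of 192.168.1.11 for linux01:

# on linux02: only linux01 may reach owncloud's port
ufw default deny incoming
# (plus whatever you need for SSH/management access)
ufw allow from 192.168.1.11 to any port 80 proto tcp

# on linux04: same idea for the two web apps
ufw default deny incoming
ufw allow from 192.168.1.11 to any port 8081 proto tcp
ufw allow from 192.168.1.11 to any port 8082 proto tcp

One caveat: ports published by Docker are inserted directly into iptables and can bypass ufw, so the equivalent rules may need to live in the DOCKER-USER chain instead.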

There are surely better ways to accomplish this. I've been reading about overlay networks and Kubernetes, but I'm not sure whether those would be the right tool for the job, whether that's the way to go, or whether they would just make all of this more complex.

I would very much appreciate some opinions on what would be better; I'm looking forward to learning more about this subject.


Thank you for your attention and opinions.
 

Zack Hehmann

Member
Feb 6, 2016
Hello @zecas,

I'm definitely not an expert with containers. I have used Docker in my home lab on a few different hosts/VMs. I'm personally working towards moving to Kubernetes, as I would like to have it running on multiple VMs/hosts at home and not worry about a service interruption. I'm not running full K8s but am going the K3s route. I watched a video from Techno Tim on how he used Ansible to automate the build of a K3s cluster with etcd.

I was able to get a 5-node cluster up and running. The guide from his video and blog didn't cover everything that was needed, and I had to solve some issues myself. A friend and I are working on this together and decided to rewrite the whole thing using our own playbooks.

The design he chose to go with in the video is an HA cluster with an embedded local database. Overall that seems like the better option for me, as I don't want to deal with an external DB for Kubernetes.

Here is a link to the official K3s documentation showing the architecture of an HA setup with embedded DB: Architecture | K3s

The GitHub repo he has for this is a fork of the official K3s Ansible repo, with work from Jeff Geerling and a few others.
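
For reference, the HA-with-embedded-etcd bootstrap that those playbooks automate boils down to roughly this with the upstream install script (hostnames and the token are placeholders):

# first server node: initialise the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --cluster-init

# additional server nodes: join the existing cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --server https://<first-server>:6443

# worker/agent nodes join with the same token
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - agent --server https://<first-server>:6443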

Good luck and let me know how it goes.
 

Sealside

Active Member
May 10, 2019
Stockholm/Sweden
Run containers with the macvlan driver; then you can assign a unique IP to each container, and it is visible on the network as if it were a standalone machine. You also get the option of moving a container to another host seamlessly.
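
A sketch of what that looks like, assuming the LAN is 192.168.1.0/24 on eth0 (subnet, gateway, parent interface, IP and image are all placeholders to adapt):

# one macvlan network bridged onto the physical LAN
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan

# the container gets its own LAN address and shows up like a separate machine
docker run -d --name uptime-kuma --network lan --ip 192.168.1.50 louislam/uptime-kuma:1

One gotcha: by default the Docker host itself cannot talk to its own macvlan containers; that usually needs an extra macvlan shim interface on the host.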

 

PigLover

Moderator
Jan 26, 2011
Been running multi-host docker-based services for many years.

With a single Docker host handling all of your services, things are really sweet. You can use tools like Traefik to provide a single entry point that automatically discovers your services when they start and also handles certificates via Let's Encrypt, etc. You have a single storage model, which is great since most of your services are probably stateful and need a filestore. Etc.
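
On a single host that pattern really is just a couple of commands; a hedged sketch (domain, email, image tag and volume name are placeholders, and the ACME store should persist across restarts, hence the volume):

# Traefik watches the Docker socket and issues certificates via Let's Encrypt
docker run -d --name traefik \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v traefik-acme:/acme \
  traefik:v3.1 \
  --providers.docker=true \
  --entrypoints.websecure.address=:443 \
  --certificatesresolvers.le.acme.tlschallenge=true \
  --certificatesresolvers.le.acme.email=you@example.com \
  --certificatesresolvers.le.acme.storage=/acme/acme.json

# any container started with the right labels is discovered automatically
docker run -d --name whoami \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  --label 'traefik.http.routers.whoami.entrypoints=websecure' \
  --label 'traefik.http.routers.whoami.tls.certresolver=le' \
  traefik/whoami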

When you spread it over multiple hosts you lose both of these things and you get into a lot of manual management of the network. You have to pre-determine which services run on which hosts. You lose the "discovery" features so you have to do a bit of manual traffic routing. Etc.

There are ways to deal with this. There are not a lot of really good ones for a small lab.

Docker Swarm comes oh so close to ideal but lacks any sort of native shared storage model you would need to run "stateful" services. Note that almost everything you are going to run in a homelab is "stateful" as it is likely just an existing service dropped into a container. You can drop shared storage (Ceph or Longhorn, etc.) on top of Swarm or run it alongside. But after many attempts I find this more challenging than just manually managing things.

Kubernetes ticks all the boxes - but it comes at a complexity level that just makes no sense to learn or run for the small handful of services you may want to run.

Transparent networking approaches (as suggested by @Sealside) are great and let you move things around (by hand...), but they miss out on most of the dynamic service registration and discovery features that make the container world wonderful. Note that there are some services available that try to sort of recreate parts of this - like the wonderful open-source project traefik-kop - but it's still a bit of a management headache.
 

MrGuvernment

Member
Nov 16, 2020
Kubernetes is the way to go for orchestration once you start growing....
 

Kabi

New Member
Dec 12, 2023
I run a 3-node Docker setup managed with Portainer. Each node has an IPVLAN L3 network defined, which allows each container to get its own IP, and I can control access via firewall rules.
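
For anyone wanting to replicate that, an IPVLAN L3 network is created roughly like this (subnet and parent interface are placeholders; in L3 mode there is no gateway on the Docker side, so the upstream router needs a static route for the container subnet pointing at the Docker host, which is also what makes firewalling it straightforward):

docker network create -d ipvlan \
  --subnet=10.50.0.0/24 \
  -o parent=eth0 -o ipvlan_mode=l3 \
  ipvlan_l3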
 

Captain Lukey

Member
Jun 16, 2024
Surely simple, simple and simple... Traefik Proxy.

Then use macvlans for each Docker container; static IP or DHCP is your choice.

Put a firewall ahead of Traefik. Add a container, set a static IP, Traefik picks it up from its tags and proxies the service, and the firewall in front manages which ports are exposed. Simple, scalable, dynamic, and if the server ever fails the reboot time is super quick.
 

louie1961

Active Member
May 15, 2023
I do a variation on what you are proposing. I run two VMs, each on its own VLAN. I define my VLANs and firewall rules in pfSense and do all my routing/firewall rules there; I don't really use the Proxmox SDN/firewall for anything. One VM is dedicated to containers that are exposed to the internet, mostly my Cloudflare tunnel connectors and an instance of Grocy. The other VM is for services that only run internally and are not exposed to the internet in any way: things like Heimdall, Photoprism, Uptime Kuma, Librespeed, and some WordPress instances for dev work.

I mostly use macvlan networking in Docker and let pfSense assign IP addresses and DNS for internal services. As a result I don't use a proxy like Nginx or Traefik. Not that there's anything wrong with those.

I expose all of my external/internet-facing applications, such as WordPress and Nextcloud (both running in VMs because I prefer that for security and ease of backups), Grocy, and even my Synology web interface, via Cloudflare tunnels. I don't have to worry about DDNS that way, and I like not having my external IP address published to the world. My domain names all use Cloudflare IP addresses and Cloudflare DNS.
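
For context, a Cloudflare tunnel connector in Docker is roughly a one-liner once the tunnel has been created in the Zero Trust dashboard (the token below is a placeholder):

docker run -d --name cloudflared --restart unless-stopped \
  cloudflare/cloudflared:latest tunnel run --token <TUNNEL_TOKEN>

The hostname-to-service mappings then live in the Cloudflare dashboard rather than on the host.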
 

Captain Lukey

Member
Jun 16, 2024
Very nice. What is the performance like tunnelling everything back? I have tried a similar method, and as I added about 30 Docker containers it became quite hard to manage each tunnel. Very cool that it works for you.
 

louie1961

Active Member
May 15, 2023
I only run 6 tunnels, and it isn't terrible to manage at this number. Also, I mostly use tunnels for outbound stuff. I also have Tailscale set up on my pfSense box, my laptop, and my phone. I advertise routes from the pfSense box on Tailscale, so my phone and laptop can access anything on my network when away from home.

I also run Watchtower to keep my Cloudflare connectors updated automatically. The performance seems decent. One of the WordPress instances is my wife's food blog, and I have all the Cloudflare caching and acceleration features turned on (all the features in their free tier, anyway). Considering my home internet is 300 Mbps down and 50 Mbps up, it really works pretty well.
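
The Watchtower part, for reference, is just a container watching the Docker socket; a minimal sketch (the one-day interval is only an example):

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400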
 