Docker multi-host setup


zecas

Member
Dec 6, 2019
Hi,

I'm planning on setting up some docker containers on my home and small business network, and since I'm not an expert, I'm looking for some opinions on the way I'm thinking about doing it.

Basically, let's say that I would prefer not to have one monster VM running all the docker containers. I'm interested in segregating roles and having multiple hosts, one per role.

These hosts will be set up as Proxmox VMs, and let's say for example I'll have the following hosts/containers:

linux01 (frontends):
- nginx-proxy-manager
- portainer

linux02 (services):
- owncloud
- dyndns-update
- uptime-kuma

linux03 (dev-tools):
- git-server
- jenkins

linux04 (self-development):
- web-app1 (network-a)
- mysql (network-a)
- web-app2 (network-b)
- mysql (network-b)


So for example the nginx-proxy-manager will be the reverse proxy, providing SSL certificates, and will proxy the requests for the following (a rough sketch follows the list):
- owncloud @linux02:80
- web-app1 @linux04:8081
- web-app2 @linux04:8082
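
To make that concrete, here's a rough sketch of how the two stacks on linux04 could be started so the proxy on linux01 can reach them. The image names and passwords are placeholders, not a definitive setup:

# on linux04: one isolated bridge network per stack
docker network create network-a
docker network create network-b

# stack A: database only reachable inside network-a, web app published on 8081
docker run -d --name mysql-a --network network-a \
  -e MYSQL_ROOT_PASSWORD=changeme-a mysql:8
docker run -d --name web-app1 --network network-a \
  -p 8081:80 my-web-app1-image

# stack B: same pattern, web app published on 8082
docker run -d --name mysql-b --network network-b \
  -e MYSQL_ROOT_PASSWORD=changeme-b mysql:8
docker run -d --name web-app2 --network network-b \
  -p 8082:80 my-web-app2-image

nginx-proxy-manager on linux01 would then point its proxy hosts at linux04:8081 and linux04:8082.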

Since I'm no expert on this docker container subject, and to keep some level of security, I would configure the firewalls on hosts linux02 & linux04 to only accept incoming connections from the linux01 host, and only on the required ports. That way, web-app1 for example could only be reached from linux01, which should be the nginx-proxy-manager service.
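
One detail I've read about (so take it as an assumption to verify): Docker publishes ports by writing its own iptables rules, so a plain ufw rule on the host may never see that traffic, and the documented place for this kind of restriction is the DOCKER-USER chain. Something roughly like this, where 192.168.1.11 stands in for linux01's address and eth0 for the LAN interface:

# on linux04: only linux01 may open connections to the published web-app ports
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.11 -p tcp \
  -m conntrack --ctorigdstport 8081:8082 --ctdir ORIGINAL -j DROP

# on linux02: same idea for owncloud on port 80
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.11 -p tcp \
  -m conntrack --ctorigdstport 80 --ctdir ORIGINAL -j DROP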

There are surely better ways to accomplish this. I've been reading about overlay networks and Kubernetes, but I'm not sure whether they would be the right tool for the job, whether that's the way to go, or whether they would just make all of this more complex.

I would very much appreciate some opinions on what would work better; I'm looking forward to learning more about this subject.


Thank you for your attention and opinions.
 

Zack Hehmann

Member
Feb 6, 2016
Hello @zecas,

I'm definitely not an expert with containers. I have used Docker in my home lab on a few different hosts/VMs. I'm personally working towards moving to Kubernetes, as I would like to have it running on multiple VMs/hosts at home and not worry about a service interruption. I'm not running full K8s but am going the K3s route instead. I watched a video from Techno Tim on how he used Ansible to automate the build of a K3s cluster with etcd.

I was able to get a 5-node cluster up and running. The guide from his video and blog didn't cover everything that was needed, and I had to solve some issues myself. A friend and I are working on this together and decided to rewrite the whole thing using our own playbooks.

The design he chose to go with in the video is an HA cluster with a local (embedded) database. Overall that seems like the better option for me, as I don't want to deal with an external DB for Kubernetes.

Here is a link to the official K3s documentation showing the architecture of an HA setup with an embedded local DB. Architecture | K3s
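
For reference, the documented way to bring up that HA-with-embedded-etcd layout by hand is roughly the following (the token and addresses are placeholders); the Ansible playbooks essentially automate these steps across the nodes:

# first server node: initialise the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server --cluster-init

# remaining server nodes: join the existing cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server \
  --server https://192.168.1.21:6443

# optional agent (worker) nodes
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - agent \
  --server https://192.168.1.21:6443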

The GitHub repo he has for this is a fork of the official K3s Ansible repo, with contributions from Jeff Geerling and a few others.

Good luck and let me know how it goes.
 

Sealside

Active Member
May 10, 2019
Stockholm/Sweden
Run containers with the macvlan driver; then you can assign a unique IP to each container, and it is visible on the network as if it were a standalone machine. You then also have the option of moving the container to another host seamlessly.
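
A minimal sketch, assuming a 192.168.1.0/24 LAN on eth0 (adjust subnet, interface and addresses; the image is just an example):

# create a macvlan network bound to the host's physical NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan-macvlan

# the container gets its own address on the LAN
docker run -d --name uptime-kuma --network lan-macvlan \
  --ip=192.168.1.60 louislam/uptime-kuma

One quirk to be aware of: the Docker host itself can't reach its own macvlan containers unless you add a macvlan sub-interface on the host.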

 

PigLover

Moderator
Jan 26, 2011
Been running multi-host docker-based services for many years.

With a single Docker host handling all of your services, things are really sweet. You can use tools like Traefik to provide a single entry point that automatically discovers your services when they start and also handles certificates via Let's Encrypt, etc. You have a single storage model, which is great since most of your services are probably stateful and need a filestore. Etc.
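
Just to illustrate that discovery model (a minimal sketch, not exactly my setup; the hostnames and demo container are placeholders):

# Traefik watches the Docker socket and builds routes from container labels
docker run -d --name traefik -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.11 --providers.docker=true --entrypoints.web.address=:80

# any container started with a router label gets picked up automatically
docker run -d --name whoami \
  --label "traefik.http.routers.whoami.rule=Host(\`whoami.home.lab\`)" \
  traefik/whoami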

When you spread it over multiple hosts you lose both of these things and you get into a lot of manual management of the network. You have to pre-determine which services run on which hosts. You lose the "discovery" features so you have to do a bit of manual traffic routing. Etc.

There are ways to deal with this. There are not a lot of really good ones for a small lab.

Docker Swarm comes oh so close to ideal but lacks any sort of native shared storage model you would need to run "stateful" services. Note that almost everything you are going to run in a homelab is "stateful" as it is likely just an existing service dropped into a container. You can drop shared storage (Ceph or Longhorn, etc.) on top of Swarm or run it alongside. But after many attempts I find this more challenging than just manually managing things.
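
For what it's worth, the Swarm mechanics themselves are tiny - the pain is exactly the storage part, so in practice a stateful service ends up pinned to the node that holds its data with a local bind mount, roughly like this (image, path and hostname are only illustrative):

# one-time: turn the hosts into a swarm
docker swarm init --advertise-addr 192.168.1.11   # on the first node
# run the printed 'docker swarm join' command on the other nodes

# a stateful service pinned to the node that holds its data
docker service create --name owncloud \
  --constraint node.hostname==linux02 \
  --mount type=bind,src=/srv/owncloud,dst=/var/www/html \
  -p 80:80 owncloud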

Kubernetes ticks all the boxes - but it comes at a complexity level that just makes no sense to learn or run for the small handful of services you may want to run.

Transparent networking approaches (as suggested by @Sealside) are great and let you move things around (by hand...) but miss out on most of the dynamic service registration and discovery features that make the container world wonderful. Note that there are some services available that try to sorta recreate parts of this - like the wonderful open source project traefik-kop - but it's still a bit of a management headache.
 

MrGuvernment

Member
Nov 16, 2020
Hello @zecas,

.... I'm personally working towards moving to Kubernetes, as I would like to have it running on multiple VMs/hosts at home and not worry about a service interruption. I'm not running full K8s but am going the K3s route instead. I watched a video from Techno Tim on how he used Ansible to automate the build of a K3s cluster with etcd.
........
Kubernetes is the way to go for orchestration once you start growing....
 

Kabi

New Member
Dec 12, 2023
I run a 3-node docker setup managed with Portainer. Each node has an IPVLAN L3 network defined, which allows each container to get its own IP, and I can control access via firewall rules.
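
In case it helps anyone trying the same thing, the per-node network is created roughly like this (the subnet, parent interface and the static route on the router are assumptions to adapt):

# on each node: an L3 ipvlan network with its own per-node subnet
docker network create -d ipvlan \
  -o parent=eth0 -o ipvlan_mode=l3 \
  --subnet=10.0.11.0/24 node1-ipvlan

docker run -d --name uptime-kuma --network node1-ipvlan \
  --ip=10.0.11.10 louislam/uptime-kuma

# the LAN router then needs a static route for 10.0.11.0/24 via this node's address,
# and firewall rules decide who may reach those container IPs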