Docker Compose organization

billybobkent

New Member
Oct 24, 2022
4
0
1
Trying to figure the best way to organize my setup.

I have Proxmox running a TrueNAS VM.
I am currently providing an NFS share to the Proxmox server, which runs an Ubuntu VM over NFS.
This VM is dedicated to Docker and runs a few containers plus Portainer.

I am trying to run all containers via Docker Compose rather than setting them up via Portainer; I only use Portainer to visualize things.

I have a folder called stacks, and under that I have a folder for each container I want to run with a docker-compose.yml.

Is there a better way to do this? I currently only have Portainer, Heimdall, and Cloudflare DDNS containers set up.
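Concretely, the layout described above might look like this (the stack names are just the three containers mentioned in the post):

```shell
# One directory per stack under "stacks", each holding its own compose file.
mkdir -p stacks/portainer stacks/heimdall stacks/cloudflare-ddns

# Each stack is then started/stopped independently, e.g.:
#   cd stacks/heimdall && docker compose up -d
ls stacks
```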
 

casperghst42

Member
Sep 14, 2015
75
13
8
54
I do not know if it is better, but... the way I do it is to combine containers into what I guess they call stacks. For example, I have one directory called monitoring, and its docker-compose.yaml contains everything needed to run InfluxDB 2, Grafana, Telegraf (x2), and a couple of other things. That is a stack, and that is what makes docker-compose so very nice.
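A minimal sketch of such a grouped compose file (service definitions, images, and ports here are assumptions for illustration, not the poster's actual file):

```shell
# Create a "monitoring" stack directory with one compose file for all services.
mkdir -p monitoring
cat > monitoring/docker-compose.yaml <<'EOF'
services:
  influxdb:
    image: influxdb:2
    volumes:
      - ./influxdb:/var/lib/influxdb2
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
  telegraf:
    image: telegraf
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
EOF

# The whole stack comes up (or down) as one unit:
#   docker compose -f monitoring/docker-compose.yaml up -d
```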

I have been playing around with Podman (due to RHEL), and it does not have anything quite like docker-compose, which is frustrating.
 

PigLover

Moderator
Jan 26, 2011
3,106
1,434
113
The only thing I do differently is building it all on ZFS. The "stacks" directory is a ZFS filesystem rather than a normal directory (create it with "zfs create .../stacks" rather than mkdir). Each "stack" subdirectory is also a ZFS filesystem under stacks ("zfs create .../stacks/stack-xxx"). Any volumes passed through to the container are kept in the same directory as the docker-compose.yaml file, and the compose file references them with relative paths (e.g., a "volumes:" entry of "- ./config:/etc/config").
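As a sketch of that layout (the pool name "tank" is an assumption; substitute your own pool/dataset path, and note these commands require an existing ZFS pool and root privileges):

```shell
# Top-level "stacks" filesystem instead of a plain directory
zfs create tank/stacks

# One child filesystem per stack
zfs create tank/stacks/monitoring

# Bind-mounted volumes live next to the compose file and are
# referenced relatively inside docker-compose.yaml, e.g.:
#   volumes:
#     - ./config:/etc/config
```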

The advantages of doing this are: (a) snapshots, (b) ease of backing up the stacks using "zfs-autobackup", and (c) if I need to move a stack to a different machine, I can just pick up the whole stack with a snapshot and move it with zfs send/receive.
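Those three advantages look roughly like this on the command line (dataset, snapshot, and host names are assumptions; this presumes the "tank/stacks" layout above):

```shell
# (a) instant, point-in-time snapshot of one stack
zfs snapshot tank/stacks/monitoring@nightly

# (b) zfs-autobackup can replicate such snapshots on a schedule

# (c) move the whole stack, data and compose file together,
#     to another machine with send/receive
zfs send tank/stacks/monitoring@nightly | ssh newhost zfs receive tank/stacks/monitoring
```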

I build it all at the root level of Proxmox instead of putting it into a VM or LXC (I know - not best practice).
 

casperghst42

Member
Sep 14, 2015
75
13
8
54
Redhat seems to disagree...
They do, but I do not find that podman-compose is as "easy" to use as docker-compose. As they say, Podman does not have a daemon to talk to, which makes it difficult to offer the same experience. I would say if you need Podman and want orchestration, you are better off looking elsewhere. I do understand why Red Hat is trying something else, but in this case it is not replacing something with something better.

Some people use Ansible, which is a bit... more complicated.
 

Craig Curtin

Member
Jun 18, 2017
94
19
8
58
Trying to figure the best way to organize my setup.

I have Proxmox running a TrueNAS VM.
I am currently providing an NFS share to the Proxmox server, which runs an Ubuntu VM over NFS.
This VM is dedicated to Docker and runs a few containers plus Portainer.

I am trying to run all containers via Docker Compose rather than setting them up via Portainer; I only use Portainer to visualize things.

I have a folder called stacks, and under that I have a folder for each container I want to run with a docker-compose.yml.

Is there a better way to do this? I currently only have Portainer, Heimdall, and Cloudflare DDNS containers set up.
So are you providing the NFS share from the TrueNAS VM?

You are setting yourself up for a world of pain doing it this way.

Think about what you would have to go through to recover if the Proxmox server fails - chart out the steps you would need to get your whole environment running again:

1) If it is a hardware failure, you are going to have to source new hardware, install Proxmox, and recover your TrueNAS VM (which will probably be large and take a while).
2) Re-establish your NFS environment.
3) Recover your Ubuntu VM with the Docker containers on it.

Now imagine you have to do this in two years time when you have not documented anything or thought about the recovery process.

I do something similar to you, running a Debian VM for all my Docker containers on my ESXi hosts - but I have three of them. The VM is stored on a separate physical NFS mount from a physical machine. I snapshot this VM every day and store the snapshots on different media from the VM, and I also clone the VM once a week so I have an instant rollback in the event of failure of the NFS datastore. (I do this for a living as well, so it makes keeping my brain up to date all the easier.)

If I were you I would simplify it - create a second volume inside the Ubuntu VM from the Proxmox server and make it LVM/Btrfs so you can snapshot your volatile Docker data on a regular basis - then create a job on Proxmox to snapshot the Ubuntu VM and store it to an external NAS or disk somewhere.
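As a rough sketch of that suggestion (the VM ID, volume group, logical volume, and storage names here are made up for illustration - adjust them to your setup; both commands need the relevant environment and root privileges):

```shell
# Inside the Ubuntu VM: snapshot the volatile Docker data,
# assuming it sits on an LVM logical volume
lvcreate --snapshot --size 5G --name docker-snap /dev/ubuntu-vg/docker-data

# On the Proxmox host: snapshot-mode backup of the Ubuntu VM (ID 100)
# to an external NAS-backed storage named "backup-nas"
vzdump 100 --mode snapshot --storage backup-nas --compress zstd
```

The same vzdump job can be scheduled from the Proxmox UI under Datacenter > Backup.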

Craig