For the last two years, I've been treating compose files as individual runners for individual programs.
Then I brainstormed the concept of one singular docker-compose file that defines every running container on my system (every one that can use compose, anyway): each install starts at the same root directory, and volumes branch out from there.
Then I find out this is how most people use compose: one compose file, with volumes and directories branching out from wherever ./ is called.
THEN I FIND OUT... that most people who discover this move their installations to Podman, because different apps target different compose file versions, mixing those versions breaks the concept of one singular docker-compose.yml file, and Podman doesn't need a version field in its compose files.
Is there some meta for the best way to handle these apps collectively?
I think compose is best used somewhere in between.
I like to have separate compose files for all my service "stacks". Sometimes that's a frontend, backend, and database. Other times it's just a single container.
Multiple compose files, each in its own directory for a stack of services. Running Lemmy? It goes in ~/compose_home/lemmy, with binds for the image resizer and database as folders inside that directory. Running a website? It goes in ~/compose_home/example.com, with its static files, API, and database binds all as folders inside that. Etc. etc. Use a gateway reverse proxy (I prefer Traefik, but to each their own) and have each stack join its network to expose only what you need; see the sketch below.
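For illustration, a minimal sketch of what one stack's compose file can look like under this layout; the service name, labels, and the traefik_proxy network name are all placeholders:

# ~/compose_home/example.com/docker-compose.yml
services:
  web:
    image: nginx:alpine
    volumes:
      - ./static:/usr/share/nginx/html:ro   # bind mount lives inside the stack directory
    networks:
      - traefik_proxy                       # join the gateway network; no ports published directly
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.example.rule=Host(`example.com`)"

networks:
  traefik_proxy:
    external: true   # created once by the Traefik stack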
Backup is easy: snapshot the bind mounts (stopping any service individually as needed). Moving a specific stack to another server is easy: just move the directory over to the new system (updating gateway info if required). Upgrading is easy: just upgrade the individual stack and you're off to the races.
Pulling all stacks into a single compose for the system as a whole is nuts. You lose all the flexibility and gain… nothing?
This. And I recently found out you can also use include in Compose v2.20+, so if your stack's complexity demands it, you can have a small top-level docker-compose.yml with includes pointing to smaller compose files, split per service or by any other criteria you want.
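Roughly, the top-level file then looks like this (paths are hypothetical):

# docker-compose.yml (Compose v2.20+)
include:
  - path: ./traefik/docker-compose.yml
  - path: ./lemmy/docker-compose.yml
  - path: ./example.com/docker-compose.yml

A single docker compose up -d at the top level brings everything up, while each stack keeps its own self-contained file.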
I prefer compose merge, because my "downstream" services can propagate their depends_on and networks settings to the things upstream that depend on them.
There's an env variable (COMPOSE_FILE) you can set in .env, so it ends up similar to include.
The one thing I prefer about include is that each included directory can have its own .env file, which merges with the top-level .env. With merge, you seem to be stuck with one .env file for all in-file substitutions.
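For reference, the merge setup the parent comments describe can be wired up through COMPOSE_FILE in that single top-level .env (file names are hypothetical):

# .env
COMPOSE_FILE=base.yml:traefik.yml:lemmy.yml
COMPOSE_PATH_SEPARATOR=:

Compose merges the listed files in order, so later files can extend services and networks from earlier ones, but all ${VAR} substitutions do come from that one .env, which is the limitation described above.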
That's what I do. I always thought I was doing it "wrong", but it just made sense to me. I can also up/down individual compose files to pull new images, test things, disable a service, or apply config updates without affecting another container at all.
I even keep my Docker config files in a separate directory, so I can back up the docker-compose files in a second over the network.
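With everything under one tree like that, the network backup can be a one-liner (host and paths are placeholders):

rsync -a ~/compose_home/ backup-host:/backups/compose_home/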
I started with a single MariaDB instance holding multiple databases, but now I see the same benefits from moving to one database container per compose file that needs one. It's even more flexible, and I don't need to start MariaDB and Redis before all of my other containers.
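A sketch of that pattern, with the stack carrying its own database (image tags and credentials are placeholders):

# one stack's docker-compose.yml: the app plus its own database
services:
  app:
    image: ghcr.io/example/app:latest     # hypothetical application image
    depends_on:
      db:
        condition: service_healthy        # wait until MariaDB is actually ready
  db:
    image: mariadb:11
    environment:
      MARIADB_DATABASE: app
      MARIADB_USER: app
      MARIADB_PASSWORD: changeme
      MARIADB_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - ./db:/var/lib/mysql               # data stays inside the stack directory
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      retries: 5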
File permission problems? Down the stack that needs fixing, fix it, and re-up it, without losing any uptime for other services and without kludging raw docker commands together.
I've always heard the opposite advice - don't put all your containers in one compose file. If you have to update an image for one app, wouldn't you have to restart all of your apps?
If by app you mean container, no. You pull the latest image and rerun docker compose. It will make only the necessary changes, in this case restarting the container to update.
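Concretely, updating just one service in a stack looks like this (the service name app is a placeholder):

docker compose pull app   # fetch the newer image for that one service
docker compose up -d app  # recreates only containers whose image or config changed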
As others have said, I have a root docker directory and then directories inside it for all my stacks, like Plex. Then I run this script, which loops through them all to update everything in one command.
#!/bin/bash
for n in plex-system bitwarden freshrss changedetection.io heimdall invidious paperless pihole transmission dashdot
do
    cd "/docker/$n" || continue   # skip a stack whose directory is missing
    docker-compose pull
    docker-compose up -d
done
echo "Removing old docker images..."
docker image prune -f
I don't like the auto-update function. I also use a script similar to the one OP posted (with a .ignore file added). I like to be in control of when (or if) updates happen. I use Watchtower as a notification service only.
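Running Watchtower as a pure notifier looks something like this; the monitor-only flag is what stops it from updating anything, and the notification URL is a placeholder for whatever shoutrrr service you use:

# watchtower in notification-only mode
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      WATCHTOWER_MONITOR_ONLY: "true"   # notify about new images, never update
      WATCHTOWER_NOTIFICATION_URL: "telegram://token@telegram?chats=@mychannel"   # placeholder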
This is what I use whenever I build my own services, or for a simple service with only one container. But I have yet to figure out how to convert a more complicated service like Lemmy that already ships a docker-compose file, so I just use podman-docker and emulate docker-compose with Podman. But that doesn't get me any of the benefits of systemd, and now my Podman has a daemon, which defeats one of the main purposes of Podman.
Just forget about podman-compose and use simple Quadlet container files with systemd. That way it's not all in one file, but systemd handles all the inter-relations between the containers just fine.
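A minimal Quadlet unit, for anyone who hasn't seen one (the file name and image are just examples): drop it in ~/.config/containers/systemd/, run systemctl --user daemon-reload, and it shows up as a normal systemd service (whoami.service here).

# ~/.config/containers/systemd/whoami.container
[Unit]
Description=whoami test container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target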
Alternatively, Podman also supports Kubernetes configuration files (podman kube play), which is probably closer to what you have in mind, but I've never tried that myself, as the above is much simpler and better integrated with existing systemd service files.
You're probably thinking about systemd-nspawn. Technically yes they're containers, but not the same flavour of them. It's more like LXC than Docker: it runs init and starts a full distro, like a VM but as a container.
I moved from compose to using Ansible to deploy containers. The Ansible container config looks almost identical to a compose file, but I can also create folders, write config files, set permissions, etc.
Sure. Below is an example playbook that is fairly similar to how I'm deploying most of my containers.
This example creates a folder for samba data, creates a config file from a template and then runs the samba container. It even has a handler so that if I make changes to the config file template it will cycle the container for me after deploying the updated config file.
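Something along these lines; the paths, image, and share names here are placeholders rather than my exact setup:

---
- hosts: fileserver
  become: true
  tasks:
    - name: Create samba data directory
      ansible.builtin.file:
        path: /srv/samba/data
        state: directory
        mode: "0770"

    - name: Render smb.conf from a template
      ansible.builtin.template:
        src: smb.conf.j2
        dest: /srv/samba/smb.conf
        mode: "0644"
      notify: Restart samba container

    - name: Run the samba container
      community.docker.docker_container:
        name: samba
        image: dperson/samba:latest         # example image
        restart_policy: unless-stopped
        published_ports:
          - "445:445"
        volumes:
          - /srv/samba/smb.conf:/etc/samba/smb.conf:ro
          - /srv/samba/data:/share

  handlers:
    - name: Restart samba container
      ansible.builtin.command: docker restart samba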
I usually structure everything as an Ansible role, which just splits this sort of playbook into a folder structure instead. ChatGPT did a great job of helping me figure out where to put files, and it generally sped up the process of creating tasks for common things like setting up a cron job, installing a package, or copying files around.
I'm currently using YunoHost behind CGNAT with a WireGuard VPS bypass, but I plan on moving to a Dockerized setup soon because YNH is still on an outdated version of Debian. What do you recommend to keep my setup as similar to YNH as possible?