Docker Compose looks like an attractive way to deploy applications to servers: you may already be using it for local development; it deploys containers, so it can easily run open-source dependencies; and it can pack many containers onto a single machine, making it much cheaper than container-as-a-service options like ECS, Cloud Run, etc. It's much simpler than Kubernetes, more mainstream than Kamal, and, unlike Docker Swarm, appears to be actively maintained.
The reality is that there are many ways to use Docker Compose to deploy applications, and the documentation doesn't make it clear how to do it intelligently. Follow along as I figure it out.
Local example
In local development, you usually have `docker-compose.yml` and `.env` files on disk. Running `docker compose up` takes the values from `.env`, fills out placeholders in `docker-compose.yml` with those values, and sends commands to the local Docker daemon via its Unix socket. Let's make an example that uses lots of Docker Compose features, including image building, networking, and environment variable interpolation:
```yaml
# docker-compose.yml
name: demoproject
services:
  caddy:
    build:
      context: .
      target: caddy
    ports:
      - ${PUBLIC_PORT}:80
    networks:
      - backend
    restart: unless-stopped
  nginx:
    image: nginx:1.27.4
    networks:
      - backend
    restart: unless-stopped
networks:
  backend:
```
```shell
# .env
PUBLIC_PORT=8000
```
```dockerfile
# Dockerfile
FROM caddy:2.9.1 AS caddy
COPY Caddyfile /etc/caddy/Caddyfile
COPY index.html /usr/share/caddy/index.html
```
```html
<!-- index.html -->
<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Hello, world!</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
  </head>
  <body>
    <p>Hello, world!</p>
  </body>
</html>
```
```
# Caddyfile
:80 {
    root * /usr/share/caddy
    file_server
    handle /nginx/* {
        uri strip_prefix /nginx
        reverse_proxy http://nginx:80
    }
}
```
Now you can run `docker compose up --build` and visit 127.0.0.1:8000 to see `index.html`. If you visit 127.0.0.1:8000/nginx/, you'll see Nginx's welcome page.
Things are different in deployment
But now we want to run this app on a server. First, let's prove that our server exists:
```shell
$ ssh root@$IP docker ps  # replace $IP with your IP
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
Great, there's a server with the Docker daemon running. If we try to apply our same method of using Docker Compose, we would need to:

- Copy `docker-compose.yml`, `.env`, `Dockerfile`, and `Caddyfile` to the server
- Run `docker compose up -d --build` on the server
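Concretely, that would be something like the following sketch (the remote path `/srv/demoproject` and the `root` login are assumptions):

```shell
# Hypothetical copy-based deploy: sync the files, then build and run remotely
scp docker-compose.yml .env Dockerfile Caddyfile root@$IP:/srv/demoproject/
ssh root@$IP 'cd /srv/demoproject && docker compose up -d --build'
```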
This doesn't sound too bad, but if we were building a Python application instead of a single HTML file, we'd need to copy a lot of source files to the server and keep them in sync with our repo. That does not end up being very fun, so let's find another way.
We could use `docker compose push`, either locally or in CI, to push built images to an image registry. Then we can remove the build step from the "deployed" `docker-compose.yml` file, perhaps by overriding the `build` directive with Compose file merging. Then we're no longer copying source files to the server, only two compose files and a `.env`.
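As a sketch, that deployment override could pin a pushed image and drop the `build` directive with the Compose spec's `!reset` YAML tag (the registry name is made up, and `!reset` requires a reasonably recent Compose version):

```yaml
# docker-compose.deploy.yml (hypothetical override, merged after docker-compose.yml)
services:
  caddy:
    image: registry.example.com/demoproject-caddy:latest
    build: !reset null
```

Merging happens with `docker compose -f docker-compose.yml -f docker-compose.deploy.yml up -d`, or via the `COMPOSE_FILE` variable.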
A way to avoid copying files
What we really need is to get Docker Compose to send the appropriate commands to the server's Docker daemon. That shouldn't require copying files to the server, right? I noticed in the Docker docs that you can specify the Docker host with the `ssh://` protocol, which is a lot easier than what I was about to try. Let's prove it works:
```shell
$ # Our demo project is running locally
$ docker ps --format "table {{.Names}}\t{{.CreatedAt}}"
NAMES                 CREATED AT
demoproject-caddy-1   2025-03-20 10:51:49 -0400 EDT
demoproject-nginx-1   2025-03-20 10:51:49 -0400 EDT
$ # With this command, we're talking to the server's Docker daemon instead
$ DOCKER_HOST=ssh://root@$IP docker ps --format "table {{.Names}}\t{{.CreatedAt}}"
NAMES     CREATED AT
```
Excellent, let's just blindly try to run the app on the server like this.
```shell
$ DOCKER_HOST=ssh://root@$IP docker compose up --build
 ✔ nginx Pulled
Compose now can delegate build to bake for better performances
 Just set COMPOSE_BAKE=true
 ✔ caddy Built
 ✔ Network demoproject_backend    Created
 ✔ Container demoproject-nginx-1  Created
 ✔ Container demoproject-caddy-1  Created
Attaching to caddy-1, nginx-1
```
I did not expect that to work at all, but if I change `PUBLIC_PORT` from `8000` to the usual HTTP port `80`, I can visit `http://$IP/` and `http://$IP/nginx/` in a browser! I wonder where the Docker build step is happening, though: are we sending the Docker build context over the Internet to the server? I'll copy a 70 MB file into the image in the `Dockerfile` and see if it takes much longer (that would add up to a minute on my cheapo Internet plan).
Yes, I could see the 70 MB file get sent as build context:
```shell
# this kept climbing
 => => transferring context: 10.06MB
```
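If the remote build context gets large, a `.dockerignore` file next to the `Dockerfile` keeps it down; for a project like this, something along these lines would do (entries are illustrative):

```
# .dockerignore — paths excluded from the build context sent to the daemon
.git
*.md
deploy/
```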
Production vs Staging deployments
I ended up using essentially this same technique to deploy a staging and production version of the application I was working on but with some conveniences around it.
At first, I wanted to share a single `docker-compose.yml` between all deployed environments (staging, prod). I was going to take advantage of the fact that Docker Compose looks for a `.env` in your current directory by default, and have something like this:

```
├── docker-compose.yml
├── production
│   └── .env
└── staging
    └── .env
```
Since those `.env` files can configure Docker Compose itself in addition to filling in interpolation placeholders in `docker-compose.yml`, I wanted to write something like this:

```shell
# .env
COMPOSE_FILE="../docker-compose.yml"
DOCKER_HOST="ssh://root@$STAGING_IP"
MY_VAR=staging_value
# etc.
```
This had two problems. The first is that while your `.env` can set Docker Compose's "pre-defined" environment variables, Docker (without the "Compose") does not care about `.env`, so you can't set `DOCKER_HOST` there. Instead, you have to use the clunkier Docker contexts.
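For the record, creating such a context is a one-time step per environment, roughly like this (the context name is whatever you like):

```shell
# Create a named context pointing at the remote daemon over SSH
docker context create myproj-staging --docker "host=ssh://root@$STAGING_IP"
# Subsequent docker / docker compose commands now target that daemon
docker context use myproj-staging
```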
The more dangerous problem is that Docker Compose's environment variable precedence rules prefer variables from your shell over the `.env` file! This means my staging deployment accidentally received some configuration I had been using in local development. That is a big problem, so I had to make one more tweak.
The happy ending
Instead of using `.env` files to set environment-specific values and risking accidentally picking up values from my local development shell session, I decided to use Compose file merging to override with the more powerful YAML syntax.
```
deploy/
├── docker-compose.yml
├── production
│   ├── docker-compose.yml
│   └── .env
└── staging
    ├── docker-compose.yml
    └── .env
```
```shell
# staging/.env
COMPOSE_FILE="../docker-compose.yml:docker-compose.yml"
COMPOSE_REMOVE_ORPHANS="1"
```
This means when I `cd deploy/staging && docker compose $WHATEVER`, it will start with `deploy/docker-compose.yml` and then merge in any changes made in `deploy/staging/docker-compose.yml`. You could rename those files to be more distinctive if you wanted. The cool thing about Compose file merging is that you can add values to lists (like a container's environment variables), you can null out values (deciding not to run a certain service in staging, for example), and more. In my case, the staging Compose file only has to update a couple of values and leaves the majority the same as in production:
```yaml
name: myproject-staging # to distinguish from production at "myproject"
services:
  caddy:
    ports:
      - 127.0.0.1:8000:80 # staging and prod need to use different ports
  web:
    environment:
      - MY_VAR=value_1
      - MY_VAR2=value_2
      # etc.
  db:
    environment:
      - MY_VAR=value_1
      - MY_VAR2=value_2
      # etc.
```
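As an aside on the "null out values" point: the Compose spec's `!reset` YAML tag removes an element during merging, so a hypothetical service could be dropped from staging entirely like this (service name invented; requires a recent Compose version):

```yaml
# staging/docker-compose.yml (additional fragment)
services:
  metrics-exporter: !reset null # don't run this service in staging
```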
The resulting workflow to deploy to staging is:
```shell
# Get in the correct dir to use the correct .env file
cd deploy/staging
# Docker context sets DOCKER_HOST
docker context use myproj-staging
# Deploy the app
docker compose up --build -d
# Look at logs and verify the deploy was successful
docker compose logs --follow
# Reset the DOCKER_HOST to the local machine so you
# don't accidentally send changes to the remote server
docker context use default
```
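If you don't trust yourself to remember that sequence (I don't), it can be wrapped in a small script. This is an untested sketch that assumes the directory layout and context naming above:

```shell
#!/usr/bin/env sh
# deploy.sh <staging|production>
set -eu
ENV="${1:?usage: deploy.sh <staging|production>}"
cd "deploy/$ENV"
docker context use "myproj-$ENV"
# Switch back to the local daemon on exit, even if the deploy fails
trap 'docker context use default' EXIT
docker compose up --build -d
docker compose logs --follow
```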
I will keep this updated if I end up needing to make any changes to the way I deploy with Docker Compose. 👋