Magic caddy proxy for docker containers
Recently I have played a lot more with docker than I might have let on in my blog. It’s pretty much the main thing I have been doing lately, at least technology-wise. Among other things, I have moved pretty much everything I run or deploy into docker containers, preferably combining webservices with caddy (and thus letsencrypt) so that HTTPS is handled pretty much automatically.
This might also be guessable from some of the docker images I have recently created, among which are the following:
- alexanderjulo/caddy: Just another caddy docker image, based on alpine linux so that debugging can be done inside the container if necessary (busybox included)
- alexanderjulo/caddy-gen: a docker image based on docker-gen that automatically creates a caddy configuration file from containers that fulfill certain criteria
- alexanderjulo/caddy-proxy-flex: a not-so-automatic caddy proxy that can be used as an intermediate in certain cases
Just on their own these images probably do not sound like they make a lot of sense, but once you start deploying your containers to production you will probably run into some questions. One of the main problems: if you run a lot of webservices, you will want all of them on the default ports (80/443) so they are accessible to other people, but only one container can listen on either of those ports.
An easy solution would obviously be a manually set up web server, either in a container or directly on the host. This means adapting the configuration of this webserver whenever you decide to change a container.
Another solution would be nginx-gen, which runs an nginx and automatically updates it with a new configuration whenever a relevant container comes up or goes down. The problem with this setup is that it does not support automatic SSL certificate generation & renewal with letsencrypt. That can be solved by using a companion container, which unfortunately is rather complicated to set up and run.
The third option, and my personal favorite, is to make this whole thing much easier by using docker-gen and caddy.
Setting up caddy-proxy
In the most basic setup, you need three containers:

- A container running your service, expecting traffic on a port of your choosing (let’s call it `X`). It needs two environment variables set: `VIRTUAL_HOST`, which should contain the host you want the app to be available at, e.g. `www.julo.ch`, and `SERVER_PORT`, which is the port `X` where your service is available.
- A container running caddy that listens on ports 80 and 443 of the host. It needs a volume shared to the place where caddy expects the configuration file (if you are using my `alexanderjulo/caddy`, that would be `/srv`) to get its configuration, and optionally a second volume share to `/root/.caddy/letsencrypt` to back up the certificates to the host.
- A container that automatically generates a caddy configuration, e.g. `alexanderjulo/caddy-gen`. It needs a read-only volume share of the docker socket, so that it can listen to docker events and know about starting and stopping containers, and the caddy configuration directory from #2 shared to `/etc/caddy`. Additionally it needs an environment variable `LETSENCRYPT_EMAIL` that contains your email address for the SSL certificates. (A sketch of the configuration it generates follows below.)
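To make this more tangible, here is roughly what the generated configuration could look like for one matching container. The exact output depends on the caddy-gen template; the block below uses the classic Caddy 0.x syntax, and the container IP is made up:

```
app.example.com {
    proxy / 172.17.0.3:80 {
        proxy_header Host {host}
    }
    tls you@example.com
}
```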
While this might sound very complicated, it becomes rather simple once we put it into a docker-compose file:
```yaml
version: "2"
services:
  app:
    image: tutum/hello-world
    environment:
      - VIRTUAL_HOST=app.example.com
      - SERVER_PORT=80
  caddy-gen:
    image: alexanderjulo/caddy-gen:latest
    environment:
      # your email address for the letsencrypt certificates
      - LETSENCRYPT_EMAIL=you@example.com
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - config:/etc/caddy
  caddy:
    image: alexanderjulo/caddy
    command: caddy -restart=inproc -http2=false
    ports:
      # publish on the default HTTP/HTTPS ports of the host
      - "80:80"
      - "443:443"
    volumes:
      - config:/srv
      - ./ssl:/root/.caddy/letsencrypt
volumes:
  config:
    driver: local
```
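Assuming this file is saved as `docker-compose.yml`, bringing the stack up and checking that caddy picked up the generated configuration is the usual compose workflow:

```sh
# start all three containers in the background
docker-compose up -d

# follow caddy's output to watch it obtain certificates and serve traffic
docker-compose logs -f caddy
```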
You just have to make sure that the DNS entry for `VIRTUAL_HOST` points at your caddy so that it can get an SSL certificate; otherwise it will crash.
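A quick way to verify the DNS entry before starting the stack (`app.example.com` being the example host from the compose file above):

```sh
# should print the public IP of the host running caddy
dig +short app.example.com
```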
I also adapted the command setting for caddy with two options:

- `-restart=inproc`: Without this option, caddy restarts by replacing its process, which kills the container and leads to downtime upon every configuration change. With `inproc` the downtime is much smaller.
- `-http2=false`: This is due to a bug in caddy I discovered. As soon as it is fixed, this option can be removed.
While the `app` can run on any host, `caddy` and `caddy-gen` have to run on the same host because they share the configuration volume. This can potentially be mitigated by using a different volume driver.
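For example, a named volume backed by a shared NFS export would let `caddy` and `caddy-gen` live on different hosts. This is only a sketch; the NFS server address and export path are made up:

```yaml
volumes:
  config:
    driver: local
    driver_opts:
      # hypothetical NFS server holding the shared caddy configuration
      type: nfs
      o: addr=10.0.0.5,rw
      device: ":/exports/caddy-config"
```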
Upcoming
Now this is a very simple setup that might not scale well once you are running a lot of hosts. So for the future I have two more posts planned, which I will link here as soon as they are available:
- Scaling web services with caddy-proxy setups over many hosts
- Managing docker hosts and web services on a larger scale
If you are interested in either one of these posts or other related things, please let me know!