Hi! I'm Kelly, and this is my technical blog. I'm working on writing more random stuff here. Why? Because it never hurts to share ideas and I personally think my ideas are pretty good. Please feel free to contribute and start conversations in the comments as well as email me or comment on posts with ideas, suggestions and criticisms.
I have been doing software engineering as a career since about 2004 and playing with *nix operating systems since the late 90s. I go out of my way to learn as much as I can about everything. I believe every experience in life is valuable and that people should express themselves to a radical degree. Oh yeah, and I'm a Burner!
So about a year ago Dotcloud came out with a magical piece of software called Docker.io: a Go wrapper around Linux Containers (LXC). The first time I saw it I immediately jumped onto the Docker.io bandwagon. Why? Because the idea is amazing! I had one problem with it, though: the way Docker.io sets up your containers. You either have to set up your containers using their container-linking solution, which puts all the container IPs into environment variables, or use a service discovery tool like Skydock with Skydns. Both of those are Docker.io containers themselves and require a pre-configuration process.
The second I saw Skydock and Skydns I thought they were great and really intuitive, but I already had a network I wanted to add the containers to. At the time I had been using the port-forwarding feature provided by Docker.io and had exposed a ton of ports on the host OS, but this made it hard to work with services running on the same port, especially when it came to service discovery and playing with multiple Docker.io hosts. There had to be a better way.
I was looking at Pipework and trying to hack the virtual Ethernet interfaces to be configured by DHCP, but it felt like all my efforts were in vain. So then I had an idea: what if it did not matter what the IP was? What if I could let Docker.io do its thing and just add the virtual network to my existing network without the need for DHCP (for Docker.io)?
This solution listens to the Docker.io events stream on the host OS and then updates your Bind9 DNS server with nsupdate, performing a Dynamic DNS update to register your service. The best part is there are NO dependencies (aside from a network interface) for your guest OS or the way that you start your container. All this is possible thanks to Linux network namespaces, which allow me to configure the virtual Ethernet interfaces inside Docker.io containers from my host OS.
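To make the idea concrete, here is a minimal sketch of the event-to-DNS flow — NOT the actual docker_ddns script. The record name, server, zone, and TTL are illustrative assumptions; the real script reads the container's IP out of its network namespace.

```ruby
require 'json'

# Hypothetical sketch: given one event from Docker.io's events stream and the
# container's IP, build the nsupdate payload that registers the container.
def nsupdate_payload(event_json, ip, domain: "kellybecker.me", ttl: 60)
  event = JSON.parse(event_json)
  return nil unless event["status"] == "start"  # only register on "start"
  name = "container.#{domain}"                  # illustrative naming scheme
  ["server 10.1.0.1",
   "zone #{domain}.",
   "update delete #{name}. A",
   "update add #{name}. #{ttl} A #{ip}",
   "send"].join("\n")
end

payload = nsupdate_payload('{"status":"start","id":"8311c4c09153"}', "10.1.0.2")
puts payload
# In the real setup this text gets piped to: nsupdate -k /etc/bind/ddns.key
```

Piping a small command script like this into nsupdate is the standard way Bind9 Dynamic DNS updates are done.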
Before you try to implement my solution, note that this guide assumes you are using Bind9 DNS with Docker.io on an Ubuntu 12.04+ operating system. I already have a Bind9 DNS server with Dynamic DNS set up; for the purposes of this post I will be skipping over that.
To start I would recommend setting up your own network bridge. You might be able to use Docker's, but you will probably want to make some adjustments. I do this with the following stanza in my /etc/network/interfaces file:
auto docker0
iface docker0 inet static
    bridge_ports none
    bridge_fd 0
    address 10.1.0.1
    netmask 255.255.0.0
    network 10.1.0.0
    broadcast 10.1.255.255
    gateway 10.1.0.1
    dns-nameservers 10.1.0.1
    dns-search <search-domain>
    post-up route add default gw <external_interface> || /bin/true
To set this up there are a couple of prerequisites. Just to be sure I'm not picking up any extra settings from Docker.io implicitly when it creates its bridge, I usually delete its bridge and re-create it manually (make sure the Docker.io service is not running first):
ifconfig docker0 down
brctl delbr docker0
brctl addbr docker0
ifup docker0
If you used a configuration similar to my /etc/network/interfaces stanza above, then ifup will configure the bridge for you. If you restart your machine or run service networking restart, it will also handle the above automatically.
After you have configured your bridge with custom subnets, DNS, etc., you need to update your Docker.io configuration to use your custom bridge (though it has the same name, I'm sourcing the variable just in case it changes how Docker.io internally handles network configuration; it never hurts to be extra careful). To do this, open up your /etc/default/docker configuration file and add or alter the DOCKER_OPTS variable to contain -b=docker0 -dns 10.1.0.1.
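On Ubuntu the defaults file ends up looking something like this (a sketch; adjust the DNS IP to match your bridge address):

```shell
# /etc/default/docker
DOCKER_OPTS="-b=docker0 -dns 10.1.0.1"
```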
Once this has been completed you can take the docker_ddns file and place it in /usr/local/bin. You need to make sure you either set the environment variables for docker_ddns or edit the script itself. The script is written in Ruby, and the relevant lines and environment variables are:
25: ENV['DDNS_KEY']   ||= "/etc/bind/ddns.key"
26: ENV['NET_NS']     ||= "10.1.0.1"
27: ENV['NET_DOMAIN'] ||= "kellybecker.me"
28: ENV['DOCKER_PID'] ||= "/var/run/docker.pid"
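Because the script uses ||=, any variable you export before launching docker_ddns takes precedence over the in-script default. A quick illustration of that fallback behavior (the "example.org" value is just for demonstration):

```ruby
# ||= only assigns when the current value is nil: a variable exported before
# the script starts wins over the script's built-in default.
ENV.delete('DDNS_KEY')                     # ensure it is unset for the demo
ENV['NET_DOMAIN'] = "example.org"          # pretend the shell exported this
ENV['NET_DOMAIN'] ||= "kellybecker.me"     # script default; does NOT overwrite
ENV['DDNS_KEY']   ||= "/etc/bind/ddns.key" # not set, so the default applies
puts ENV['NET_DOMAIN']  # → example.org
```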
After you have that set up, you should look at an upstart script or some other method of starting docker_ddns with Docker.io. I like to use Monit for service and process status monitoring, so here is my docker_ddns.conf file for Monit:
check process docker_ddns with pidfile /var/run/docker_ddns.pid
    start program = "/usr/local/bin/docker_ddns" with timeout 60 seconds
    stop program = "/usr/bin/kill `cat /var/run/docker_ddns.pid`"
    if totalmem > 50.0 MB for 5 cycles then restart
    if 3 restarts within 5 cycles then timeout
    depends on docker
    group docker
I suppose you could also add the startup to your /etc/init/docker.conf, though that won't ensure the process does not die.
Once everything is set up and running again, you can tail the logs of docker_ddns and you should see something along the lines of:
I, [2014-02-18T09:32:55.978414 #1942]  INFO -- : Event Fired (8311c4c09153): create
I, [2014-02-18T09:32:56.173839 #1942]  INFO -- : Event Fired (8311c4c09153): start
I, [2014-02-18T09:32:56.429049 #1942]  INFO -- : Updated Docker DNS (8311c4c09153): container.kellybecker.me 60 A 10.1.0.2.
I, [2014-02-18T09:37:23.389485 #1942]  INFO -- : Event Fired (8311c4c09153): die
For reference, the script's usage is docker_ddns [ /path/to/log.file | - ]; using '-' will log output to standard out, and it defaults to '-'.