Docker allows software and dependencies to be bundled together in ‘containers’. This is a whole level up from a script that installs the libraries a piece of software needs to run: a container holds the full environment in its own file system, so anything you can install or set up on a server can be bundled into the container. The big advantage is that we end up with a portable unit which runs consistently from machine to machine. Think of Docker as lighter-weight than a VM (Docker containers don’t have their own kernels, whereas a VM would), but giving a similar level of separation, structure and consistency.
Docker has been around for a few years but it was only recently that I had a project which seemed to be a perfect fit. The project in mind had a complicated setup procedure for each new developer. Related to this was tight coupling of files and services on a single machine, meaning parts of it would have been difficult to scale at all, let alone quickly.
This post is a collection of (hopefully useful) articles and notes, gathered as I went through the initial learning curve of using Docker and porting the project over to a set of containers.
Getting Started
The guide linked straight from the Docker website was good for the initial setup procedure to get Docker running on my Mac. It showcases some of the overall concepts (creating and running Docker containers, and demonstrating Docker Hub). As tutorials go, this one is written to be accessible to those who aren’t too familiar with the command line. Whilst that perhaps makes the article a little bloated, it sure is easy to follow. I tend to find such articles nicer to read than those written by an author who assumes too much.
The tutorial finishes with the user creating a ‘whalesay’ container that uses the output of the fortune program to demonstrate a Docker container. I spent at least 10 minutes starting up containers and reading the various quotes; I expect a lot of other developers have done the same!
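From memory, the tutorial’s final image boils down to a Dockerfile along these lines (treat this as a sketch rather than the tutorial verbatim; docker-whale is the tag the tutorial suggests):

```
# Extend the base whalesay image so each run prints a random quote
# through an ASCII-art whale.
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
```

Build and run it with docker build -t docker-whale . followed by docker run docker-whale.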
A brief note about Docker Hub: users are able to make private repos as well as public repos. I like this a lot. One of the reasons I host a lot of code on BitBucket rather than GitHub is that BitBucket does not limit the number of private repos I can have (at least for those accessible to fewer than 5 users).
Creating a Web Server with Docker
Going through the initial setup didn’t really get me excited about Docker. It also didn’t demonstrate how I might get a container to run a service in the background, like the ones in my project that need porting.
Servers for Hackers has a good write-up on getting started with Docker, demonstrating some of the more useful features and how you might want to use Docker beyond a portable way to read out fortunes. The tutorial is a little out of date now (e.g. boot2docker is now docker-machine). The guide goes through the process of setting up a basic nginx server from a Docker container. Each step has a good level of detail. Now things are starting to get interesting!
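To give a flavour of where the tutorial ends up, the modern equivalent can be boiled down to a couple of commands (a sketch using the official nginx image; the port numbers are arbitrary):

```
# Run nginx in the background, publishing container port 80 on host port 8080.
docker run -d --name web -p 8080:80 nginx

# The default welcome page is now served from the container.
# (On OS X with docker-machine, use the VM's IP instead of localhost;
# more on that below.)
curl http://localhost:8080
```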
The article shows how Docker tracks changes and how that is the basis for making images. It also touches on linked containers, and how you could actually access a running container as a service by exposing ports (e.g. in the tutorial we expose port 80 to access our running web server). This proved to be particularly relevant as I expect the porting project will logically go into two distinct containers that will need to communicate with each other.
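That change-tracking workflow is worth seeing concretely. A minimal sketch (the container ID placeholder and the image name are illustrative):

```
# Make a change inside a throwaway container...
docker run -it ubuntu:14.04 bash
# (inside the container) apt-get update && apt-get install -y nginx; exit

# ...see exactly which files the container added or modified,
# then freeze the result as a reusable image.
docker diff <container-id>
docker commit <container-id> my-nginx
```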
One line from the article struck me as very true:
This is the unfortunate baby-step which everyone needs to take to first get their feet wet with Docker. This won't show what makes Docker powerful, but it does illustrate some important points.
I consistently find this to be the case in the initial part of learning any new technology—perseverance is key!
Post Tutorial Questions
After working through those two tutorials I found I had a basic enough understanding to start branching off and exploring more on my own. A few questions I had which I expect a lot of developers new to Docker might also have:
Using the Ubuntu image, what version am I getting?
You can pin a particular version via the image’s tag. For example, the FROM ubuntu line in a Dockerfile could become FROM ubuntu:14.04. See the official Ubuntu section on Docker Hub for available tags.
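A quick way to see exactly what a given tag gives you (14.04 here is just an example):

```
# 'ubuntu' with no tag means ubuntu:latest, which moves over time;
# pinning a tag makes the base version explicit and reproducible.
docker run --rm ubuntu:14.04 cat /etc/lsb-release
```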
Are containers and images the same thing?
No. The short explanation is that containers are instances of images: the image is the read-only template, and a container is a running (or stopped) copy of it with a writable layer on top.
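The distinction is visible in the CLI itself:

```
docker images   # the read-only templates stored locally
docker ps -a    # the instances created from them, running or stopped
```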
How does networking work to/from the container?
By default, new containers join Docker’s default bridge network unless specified otherwise. You can create your own bridge network for a set of containers for finer control. There is plenty of detail and some useful diagrams in the networking docs.
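Creating a user-defined bridge is a one-liner, and containers attached to it can reach each other by name (the network and container names here are illustrative):

```
# A user-defined bridge gives the containers on it name-based discovery
# and isolation from the default bridge.
docker network create appnet
docker run -d --name web --net appnet nginx

# Another container on the same network can resolve 'web' by name.
docker run --rm --net appnet busybox ping -c1 web
```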
You can verify a running container is part of a bridge network by running ifconfig inside the container. Run route and the IP shown for the default route is that of the Docker host.
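From the host side, the same information is available without entering the container at all:

```
# Shows the bridge's subnet and gateway (the Docker host's address on the
# bridge), plus every container currently attached to it.
docker network inspect bridge
```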
The gotcha here is that if you’re running Docker on OS X, you are running Docker inside a VM. This threw me off originally, as running ifconfig on the OS X command line did not show the IP address range I expected. What you will see is the IP address for the Docker VM (in my case it is on the subnet 192.168.99.x). If we made a container expose a web port, we’d then view it via that host IP address rather than localhost. An actual walk-through of such an example on OS X is given here.
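With docker-machine the VM’s address can be queried directly (‘default’ is the usual machine name; yours may differ):

```
# The IP of the VM that is really hosting the containers.
docker-machine ip default
# => e.g. 192.168.99.100

# Published ports are reached via that address, not localhost.
curl http://$(docker-machine ip default):8080
```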
What is considered best practice for separating data(bases) out of a container?
The loose aim is one process per container to make scaling and maintainability easy. The Dockerfile best practices page has some good advice on this.
In the project I originally wanted to move to Docker, this was achieved by having a separate container for MySQL and another for the main app, with the app container also having a mapped volume for a set of public web server assets. The advantage of the mapped folder is that it allows the host web server to access the flat files without having to go through a container.
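Mapping the assets folder is done with a volume flag at run time. A sketch (the paths and image name are hypothetical):

```
# Bind-mount a host directory into the app container so the host web
# server and the container see the same flat files.
docker run -d --name app \
  -v /srv/site/public:/var/www/public \
  my-app-image
```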
The MySQL Docker image README.md contains instructions and advice on how to bring up a MySQL container with a particular database schema. In short, any shell scripts and SQL files placed in a specific folder (/docker-entrypoint-initdb.d) are executed the first time the container starts against an empty data directory.
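In practice that looks something like this (the directory layout and password are illustrative):

```
# Mount local init scripts into the directory the official image scans on
# first start; any *.sql or *.sh files in there run against the new database.
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/schema":/docker-entrypoint-initdb.d \
  mysql:5.7
```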
How does disk space work in a container?
By default a container is not given a fixed allocation of its own; with the common storage drivers it draws from the same pool of disk space as the host machine (or, on OS X, the Docker VM).
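A quick check from inside a container shows this (the image tag is arbitrary):

```
# The root filesystem reported inside the container reflects the space
# available to the Docker host (or the Docker VM on OS X).
docker run --rm ubuntu:14.04 df -h /
```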
I have a container running, how can I connect and execute commands?
docker exec -i -t [container-id/name] bash, where the container ID/name can be found from docker ps. By starting an instance of bash on the container we can browse around the file system, view logs, edit files, etc. I found this particularly useful on the MySQL instance, where I needed to prod the database midway through a troublesome long-running query. Logging on to the container using this method, then running the MySQL CLI and looking at the processlist, allowed me to see what was going on.
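For the MySQL case specifically, that session looked roughly like this (the container name and credentials are illustrative):

```
# Open a shell on the running MySQL container...
docker exec -it db bash

# ...then, inside it, check what the server is busy doing.
mysql -uroot -p -e 'SHOW PROCESSLIST;'
```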
Further Thoughts
Fig is briefly mentioned in the second tutorial, but it has since been superseded by Docker Compose. I had a quick go at setting up a WordPress container (with a separate MySQL container) based on this tutorial. It seemed slick. I then created a docker-compose YAML file to auto-connect my project containers, which worked just as well. I expect I will come back to Docker Compose as a common part of my future WordPress dev cycle.
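A compose file for the WordPress pairing is short enough to include. A minimal sketch (service names, port and passwords are illustrative):

```
# docker-compose.yml: one service per container, brought up together
# with a single `docker-compose up`.
version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  wordpress:
    image: wordpress
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example
```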
There is an interesting Stack Overflow post that looks at the difference between Docker and Vagrant. Besides pointing out the differences (tl;dr being Docker containers are not full VMs so the comparison doesn’t make complete sense) it touches on the use cases where each is most suitable. The most interesting replies include the one written by the author of Vagrant and another written by the author of Docker. I will likely post in the future about Vagrant, mostly as an overview for VVV and WordPress development.
A good summary of the two technologies from that Stack Overflow post:
The short answer is that if you want to manage machines, you should use Vagrant. And if you want to build and run applications environments, you should use Docker.
Finally, this post has some early information on Docker. The explanation of the whole container metaphor is interesting, and explains the reasoning behind why a ‘build once, run anywhere’ platform is so appealing.