Docker: Using Linux Containers to Support Portable Application Deployment

Docker is an open source tool for running applications inside a Linux container, a kind of light-weight virtual machine. In addition to running applications, it offers tools to distribute containerized applications through the Docker Index -- or your own hosted Docker registry -- simplifying the process of deploying complex applications.

In this article I will describe the challenges companies face in deploying complex systems today, how Docker can be a valuable tool in solving this problem, and what other use cases it enables.

The deployment challenge

Deployment of server applications is getting increasingly complicated. The days when server applications could be installed by copying a few Perl scripts into the right directory are over. Today, software can have many kinds of requirements:

  • dependencies on installed software and libraries ("depends on Python >= 2.6.3 with Django 1.2")
  • dependencies on running services ("requires a MySQL 5.5 database and a RabbitMQ queue")
  • dependencies on a specific operating system ("built and tested on 64-bit Ubuntu Linux 12.04")
  • resource requirements:
    • minimum amount of available memory ("requires 1GB of available memory")
    • ability to bind to specific ports ("binds to port 80 and 443")

For example, let's consider the deployment of a relatively simple application: Wordpress. A typical Wordpress installation requires:

  • Apache 2
  • PHP 5
  • MySQL
  • The Wordpress source code
  • A Wordpress MySQL database, with Wordpress configured to use this database
  • Apache configured:
    • to load the PHP module
    • to enable support for URL rewriting and .htaccess files
    • the DocumentRoot pointing to the Wordpress sources

While deploying and running a system like this on our server, we may run into some problems and challenges:

  1. Isolation: if we are already hosting a different site on this server, and that existing site runs on nginx while Wordpress depends on Apache, we're in a bit of a pickle: both try to listen on port 80. Running both is possible, but requires tweaking the configuration (changing the port to listen on), setting up reverse proxies, etc. Similar conflicts can occur at the library level: if we also run an ancient application that still depends on PHP4, we have a problem, since Wordpress no longer supports PHP4 and it is very difficult to run PHP4 and PHP5 simultaneously. Because applications running on the same server are not isolated from each other (in this case at the filesystem and network level), they may conflict.
  2. Security: we're installing Wordpress, which does not have the best security track record. It would be nice to sandbox this application so that, if it gets hacked, it at least doesn't impact the other running applications.
  3. Upgrades, downgrades: upgrading an application typically involves overwriting existing files. What happens during the upgrade window? Is the system down? What if the upgrade fails, or turns out to be faulty? How do we roll back to a previous version quickly?
  4. Snapshotting, backing up: it would be nice, once everything is set up successfully, to "snapshot" the system, so that the snapshot can be backed up, moved to a different server and started up again, or replicated to multiple servers for redundancy.
  5. Reproducibility: it's good practice to automate deployment and to test a new version of a system on a test infrastructure before pushing it to production. This usually works by having a tool like Chef or Puppet install a bunch of packages on the server automatically and then, when everything works, running that same deployment script against the production system. This will work 99% of the time. The remaining 1% of the time, the package repository has been updated with newer, possibly incompatible versions of a package you depend on in the timespan between deploying to testing and deploying to production. As a result, your production setup differs from the testing setup, possibly breaking the production system. So, without taking on the burden of controlling every little aspect of your deployment (e.g. hosting your own APT or YUM repositories), consistently reproducing the exact same system across multiple environments (e.g. testing, staging, production) is hard.
  6. Constrain resources: what if our Wordpress goes CPU-crazy and starts to take up all our CPU cycles, completely blocking other applications from doing any work? What if it uses up all available memory, or generates logs like crazy and clogs up the disk? It would be very convenient to be able to limit the resources available to the application, like CPU, memory and disk space.
  7. Ease of installation: there may be Debian or CentOS packages, or Chef recipes, that automatically execute all the complicated steps to install Wordpress. However, these recipes are tricky to get rock solid, because they need to take into account many possible existing configurations of the target system. In many cases, these recipes only work on clean systems. Therefore, it is not unlikely that you will have to replace some packages or Chef recipes with your own. This makes installing a complex system not something you try during a lunch break.
  8. Ease of removal: software should be easily and cleanly removable without leaving traces behind. However, since deploying an application typically requires tweaking existing configuration files and scattering state (MySQL database data, logs) left and right, removing an application completely is not that easy.

So, how do we solve these issues?

Virtual machines!

When we decide to run each individual application on a separate virtual machine, for instance on Amazon's EC2, most of our problems go away:

  1. Isolation: install one application per VM and applications are perfectly isolated, unless they hack into each other over the network.
  2. Reproducibility: prepare your system just the way you like, then create an AMI. You can now instantiate as many instances of this AMI as you like. Fully reproducible.
  3. Security: since we have complete isolation, if the Wordpress server gets hacked, the rest of the infrastructure is not affected -- unless you litter SSH keys or reuse the same passwords everywhere, but you wouldn't do that, would you?
  4. Constrain resources: a VM is allocated a certain share of CPU cycles, available memory and disk space, which it cannot exceed (without paying more money).
  5. Ease of installation: an increasing number of applications are available as EC2 appliances and can be instantiated with the click of a button from the AWS Marketplace. It takes a few minutes to boot, but that's about it.
  6. Ease of removal: don't need an application? Destroy the VM. Clean and easy.
  7. Upgrades, downgrades: do what Netflix does: simply deploy the new version to a new VM, then point your load balancer from the old VM to the VM running the new version. Note that this doesn't work well for applications that store local state that needs to be preserved.
  8. Snapshotting, backing up: EBS disks can be snapshotted with the click of a button (or an API call), and snapshots are backed up to S3.

Perfect!

Except... now we have a new problem: it's expensive, in two ways:

  • Money: can you really afford to boot up an EC2 instance for every application you need? And can you predict the instance size you will need? If you need more resources later, you have to stop the VM to upgrade it -- or over-pay for resources you don't end up needing (unless you use Solaris Zones, as on Joyent, which can be resized dynamically).
  • Time: many operations related to virtual machines are typically slow: booting takes minutes, snapshotting can take minutes, creating an image takes minutes. The world keeps turning and we don't have that kind of time!

Can we do better?

Enter Docker.

Docker is an open source project, launched earlier this year by the people behind dotCloud, a public Platform-as-a-Service provider. From a technical perspective, Docker is plumbing (primarily written in Go) that makes two existing technologies easier to use:

  • LXC: Linux Containers, which allow individual processes to run at a higher level of isolation than regular Unix processes. The term used for this is containerization: a process is said to run in a container. Containers support isolation at the level of:
    • File system: a container can only access its own sandboxed filesystem (chroot-like), unless specifically mounted into the container's filesystem.
    • User namespace: a container has its own user database (i.e. the container's root does not equal the host's root account).
    • Process namespace: within the container, only the processes that are part of that container are visible (i.e. a very clean ps aux output).
    • Network namespace: a container gets its own virtual network device and virtual IP (so it can bind to whatever port it likes without taking up its host's ports).
  • AUFS: advanced multi-layered unification filesystem, which can be used to create union, copy-on-write filesystems, as the sketch below illustrates.
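
To get a feel for what a union, copy-on-write filesystem means in practice, here is a minimal sketch using an aufs mount directly. Docker manages all of this for you; the commands are purely illustrative and assume the aufs module is available on your kernel:

mkdir -p /tmp/ro /tmp/rw /tmp/union
# /tmp/ro is a read-only branch, all writes go to the copy-on-write branch /tmp/rw,
# and /tmp/union presents the merged view of the two
sudo mount -t aufs -o br=/tmp/rw=rw:/tmp/ro=ro none /tmp/union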

Docker can be installed on any Linux system with AUFS support and a 3.8+ kernel. Conceptually, however, it does not depend on these specific technologies and may in the future also work with similar ones, such as Solaris zones or BSD jails, using ZFS as a filesystem, for instance. Today, however, your only choice is Linux 3.8+ with AUFS.
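
To quickly check whether a machine meets these requirements, something like the following should do (a minimal sketch; the exact output will of course vary per system):

$ uname -r                     # kernel version, should report 3.8 or newer
$ grep aufs /proc/filesystems  # prints a line mentioning aufs if AUFS support is available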

So, why is Docker interesting?

  • It's very light-weight. Whereas booting up a VM is a big deal, taking up a significant amount of memory, booting up a Docker container has very little CPU and memory overhead and is very fast, almost comparable to starting a regular process. And not only is running a container fast; building an image and snapshotting the filesystem are as well.
  • It works in already virtualized environments. That is: you can run Docker inside an EC2 instance, a Rackspace VM or VirtualBox. In fact, the preferred way to use it on Mac and Windows is using Vagrant.
  • Docker containers are portable to any operating system that runs Docker. Whether it's Ubuntu or CentOS, if Docker runs, your container runs.

So, let's get back to our previous list of deployment and operation problems and see how Docker scores:

  1. Isolation: Docker isolates applications at the filesystem and networking level. In that sense it feels a lot like running "real" virtual machines.
  2. Reproducibility: prepare your system just the way you like it (either by logging in and installing software with apt-get, or by using a Dockerfile), then commit your changes to an image. You can now instantiate as many instances of it as you like, or transfer the image to another machine to reproduce exactly the same setup.
  3. Security: Docker containers are more secure than regular process isolation. Some security concerns have been identified by the Docker team and are being addressed.
  4. Constrain resources: Docker currently supports limiting CPU usage to a certain share of CPU cycles, and memory usage can also be limited. Restricting disk usage is not directly supported yet (see the sketch after this list for the CPU and memory flags).
  5. Ease of installation: Docker has the Docker Index, a repository of off-the-shelf Docker images that you can instantiate with a single command. For instance, to use my Clojure REPL image, run docker run -t -i zefhemel/clojure-repl and it will automatically fetch the image and run it.
  6. Ease of removal: don't need an application? Destroy the container.
  7. Upgrades, downgrades: same as for EC2 VMs: boot up the new version of the application first, then switch your load balancer over from the old port to the new one.
  8. Snapshotting, backing up: Docker supports committing and tagging of images, which, unlike snapshotting on EC2, happens instantly.
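
As a concrete illustration of point 4, docker run accepts flags to constrain a container's resources. A minimal sketch, assuming a Docker version that supports the -c (CPU shares) and -m (memory limit) flags; accepted units and exact behavior depend on your Docker version and the kernel's cgroup support:

# give the container a relative CPU share and cap its memory usage
docker run -c 512 -m 512m -t -i ubuntu /bin/bash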

How to use it

Let's assume you have Docker installed. Now, to run bash in an Ubuntu container, just run:

docker run -t -i ubuntu /bin/bash

Depending on whether you have the "ubuntu" image downloaded already, Docker will either download it or use the copy already available locally, then run /bin/bash in an Ubuntu container. Inside this container you can now do pretty much all your typical Ubuntu things, for instance install new packages.

Let's install "hello":

$ docker run -t -i ubuntu /bin/bash
root@78b96377e546:/# apt-get install hello
Reading package lists... Done
Building dependency tree... Done
The following NEW packages will be installed:
  hello
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 26.1 kB of archives.
After this operation, 102 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main hello amd64 2.7-2 [26.1 kB]
Fetched 26.1 kB in 0s (390 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package hello.
(Reading database ... 7545 files and directories currently installed.)
Unpacking hello (from .../archives/hello_2.7-2_amd64.deb) ...
Setting up hello (2.7-2) ...
root@78b96377e546:/# hello
Hello, world!

Now, let's exit and run the same Docker command again:

root@78b96377e546:/# exit
exit
$ docker run -t -i ubuntu /bin/bash
root@e5e9cde16021:/# hello
bash: hello: command not found

What happened? Where did our beautiful hello command go? As it turns out, we just started a new container based on the clean ubuntu image. To continue from our previous one, we have to commit it to a repository. Let's exit this container and find out the ID of the container we launched earlier:

$ docker ps -a
ID                  IMAGE                   COMMAND                CREATED              STATUS              PORTS
e5e9cde16021        ubuntu:12.04            /bin/bash              About a minute ago   Exit 127
78b96377e546        ubuntu:12.04            /bin/bash              2 minutes ago        Exit 0

The docker ps command gives a list of currently running containers; docker ps -a also shows containers that have already exited. Each container has a unique ID, more or less analogous to a git commit hash. The listing also shows the image the container was based on, the command it ran, when it was created, its current status, and the ports it exposed and their mapping to the host's ports.

The one at the top is the second container we launched, without "hello" in it; the bottom one is the one we want to keep and reuse. So let's commit it and create a new container from it:

$ docker commit 78b96377e546 zefhemel/ubuntu
356e4d516681
$ docker run -t -i zefhemel/ubuntu /bin/bash
root@0d7898bbf8cd:/# hello
Hello, world!

What I did here was commit the container (by its ID) to a repository. A repository, analogous to a git repository, consists of one or more tagged images. If you don't supply a tag name (as I didn't), the image will be tagged "latest". To see all locally installed images, run docker images.
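
A couple of related commands are useful for housekeeping. A small sketch, reusing the container ID from the listing above (the image name in the last command is hypothetical):

$ docker images                  # list all locally available images
$ docker rm e5e9cde16021         # remove the stopped container we no longer need
$ docker rmi zefhemel/old-image  # remove a local image you no longer need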

Docker comes with a few base images (for instance ubuntu and centos) and you can create your own images as well. User repositories follow a GitHub-like naming model: your Docker username, followed by a slash and then the repository name.

So far we have seen one way of creating a Docker image -- the hacky way, if you will. The cleaner way is using a Dockerfile.

Building images with a Dockerfile

A Dockerfile is a simple text file containing instructions on how to build an image from a base image. I have a few of them on GitHub. Here's a simple one for installing and running an SSH server:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo "root:root" | chpasswd
EXPOSE 22

This should be almost self-explanatory. The FROM command defines the base image to start from; this can be one of the official images, but could also be the zefhemel/ubuntu image we just created. The RUN commands are commands that are run to configure the image. In this case, we update the APT package index, install openssh-server, create a directory, and then set a very poor password for the root account. The EXPOSE command exposes port 22 (the SSH port) to the outside world. Let's see how we can build and instantiate this Dockerfile.

The first step is to build an image. In the directory containing the Dockerfile run:

$ docker build -t zefhemel/ssh .

This will create a zefhemel/ssh repository with our new SSH image. If this was successful, we can instantiate it with:

$ docker run -d zefhemel/ssh /usr/sbin/sshd -D

This differs from the earlier command: -d runs the container in the background, and instead of running bash, we now run the sshd daemon (in foreground mode, which is what the -D flag is for).

Let's see what it did by checking our running containers:

$ docker ps
ID                  IMAGE                   COMMAND                CREATED             STATUS              PORTS
23ee5acf5c91        zefhemel/ssh:latest     /usr/sbin/sshd -D      3 seconds ago       Up 2 seconds        49154->22

We can now see that our container is up. The interesting bit is under the PORTS header. Since we EXPOSEd port 22, this port is now mapped to a port on our host system (49154 in this case). Let's see if it works.

$ ssh root@localhost -p 49154
The authenticity of host '[localhost]:49154 ([127.0.0.1]:49154)' can't be established.
ECDSA key fingerprint is f3:cc:c1:0b:e9:e4:49:f2:98:9a:af:3b:30:59:77:35.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:49154' (ECDSA) to the list of known hosts.
root@localhost's password: <I typed in 'root' here>
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.8.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@23ee5acf5c91:~#

Success once more! There is now an SSH server running and we were able to log in to it. Let's exit SSH and kill the container before somebody on the outside figures out our password and hacks into it.

$ docker kill 23ee5acf5c91

As you will have seen, our container's port 22 was mapped to port 49154, but that port is fairly arbitrary. To map it to a specific port, pass the -p flag to the run command:

docker run -p 2222:22 -d zefhemel/ssh /usr/sbin/sshd -D

Now, the container's port 22 will be exposed on host port 2222, provided that port is available. We can make our image slightly more user-friendly by adding the following line at the end of the Dockerfile:

CMD /usr/sbin/sshd -D

CMD signifies a command that is run not when building the image, but when instantiating it: when no extra arguments are passed, the container will execute /usr/sbin/sshd -D. Now we can just run:

docker run -p 2222:22 -d zefhemel/ssh

And we'll get the same effect as before. To publish our newly created marvel, we can simply run docker push:

docker push zefhemel/ssh

and, after logging in, it will be available for everybody to use with that same docker run command as before.

Let's circle back to our Wordpress example. How would Docker be used to run Wordpress in a container? To build a Wordpress image, we'd create a Dockerfile that does the following (a rough sketch follows the list):

  1. Installs Apache, PHP5 and MySQL
  2. Downloads Wordpress and extracts it somewhere on the filesystem
  3. Creates a MySQL database
  4. Updates the Wordpress configuration file to point to the MySQL database
  5. Makes Wordpress the DocumentRoot for Apache
  6. Starts MySQL and Apache (e.g. using supervisord)
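
Such a Dockerfile might look roughly like the sketch below. This is not a complete, tested recipe: the Wordpress tarball, the Apache vhost, the supervisord configuration and the setup script are hypothetical files assumed to sit next to the Dockerfile, and database initialization is glossed over.

FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2 libapache2-mod-php5 php5-mysql mysql-server supervisor
# enable URL rewriting so .htaccess files work
RUN a2enmod rewrite
# ADD auto-extracts local tar archives into the target directory
ADD wordpress.tar.gz /var/www/
# an Apache vhost pointing the DocumentRoot at the Wordpress sources
ADD apache-wordpress.conf /etc/apache2/sites-enabled/000-default
# supervisord configuration that runs mysqld and apache2 in the foreground
ADD supervisord.conf /etc/supervisor/conf.d/wordpress.conf
# hypothetical script that creates the MySQL database and writes wp-config.php
ADD setup-wordpress.sh /usr/local/bin/setup-wordpress.sh
RUN sh /usr/local/bin/setup-wordpress.sh
EXPOSE 80
CMD /usr/bin/supervisord -n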

Luckily, various people have already done this. For instance, John Fink's GitHub repository contains everything you need to build such a Wordpress image.

Docker use cases

Besides deploying complex applications easily in a reliable and reproducible way, Docker has many more uses. Here are some interesting Docker uses and projects:

  • Continuous integration and deployment: build software inside a Docker container to ensure builds are isolated from one another. Built software images can automatically be pushed to a private Docker repository and deployed to testing or production environments (a rough sketch follows this list).
  • Dokku: a simple Platform-as-a-Service built in under 100 lines of Bash.
  • Flynn, and Deis are two open source Platform-as-a-Service projects using Docker.
  • Run a desktop environment in a container.
  • A project that brings Docker to its logical conclusion is CoreOS, a very light-weight Linux distribution, where all applications are installed and run using Docker, managed by systemd.
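
For the continuous integration use case, the workflow could look roughly like this (a sketch; registry.example.com, myapp and run-tests.sh are hypothetical names, and pushing to a private registry assumes one is set up and the image is tagged with its hostname):

# build an image for the current revision of the application
docker build -t registry.example.com/myapp .
# run the test suite inside a throw-away container
docker run registry.example.com/myapp ./run-tests.sh
# if the tests pass, push the image to the private registry and deploy from there
docker push registry.example.com/myapp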

What Docker is not

While Docker helps in deploying systems reliably, it is not a full-blown software deployment system by itself. It operates at the level of applications running inside containers; which containers to install on which server, and how to start them, is outside Docker's scope.

Similarly, orchestrating applications that run across multiple containers, possibly on multiple physical servers or VMs, is beyond the scope of Docker. To let containers communicate, they need some kind of discovery mechanism to figure out at which IPs and ports other applications are available. This is very similar to service discovery across regular virtual machines; a tool like etcd, or any other service discovery mechanism, can be used for this purpose.
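
As a purely hypothetical sketch of what this could look like with etcd, using its command-line client (the key name and address are made up):

# the Wordpress container registers where it can be reached
etcdctl set /services/wordpress 10.0.0.5:8080
# another container looks the address up before connecting
etcdctl get /services/wordpress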

Conclusion

While everything described in this article was possible before, using raw LXC, cgroups and AUFS, it was never this easy or simple. This is what Docker brings to the table: a simple way to package complex applications into containers that can be easily versioned and distributed reliably. As a result, it gives light-weight Linux containers roughly the same flexibility and power as the "real" virtual machines widely available today, but at a much lower cost and in a more portable way. A Docker image created with Docker running in a Vagrant VirtualBox VM on a MacBook Pro will run just as well on EC2, on the Rackspace Cloud or on physical hardware, and vice versa.

Docker is available for free from its website. A good place to get started is the interactive getting started guide. According to the project's roadmap, the first production-ready version, 0.8, is expected to be ready in October 2013, although people are already using it in production today.

About the Author

Zef Hemel is Developer Evangelist and member of the product management team at LogicBlox, a company developing an application server and database engine based on logic programming, specifically Datalog. Previously he was VP of Engineering at Cloud9 IDE, which develops a browser-based IDE. Zef is a native of the web and has been developing web applications since the 90s. He's a strong proponent of declarative programming environments.

Community comments

  • Nice

    by Csaba Okrona,

    Thanks for the article, it's a really nice intro to Docker.
    I've written a sample walkthrough to get a Django app up and running with docker: ochronus.com/docker-primer-django/

  • First Class article

    by Mark Stuart,

    Have not seen a better article summarising the benefits of Docker; this article is first class. Thank you for sharing your experience and insight.

  • Excellent one!

    by Muhilan Mg,

    This is an excellent article about intro to docker !! Thanks for this !
