
Q&A with Baruch Sadogursky on the Challenges of Managing the Docker Container Lifecycle

Baruch Sadogursky, developer advocate at JFrog, has given talks at several conferences, including DevOps Days Kiel last May, on the challenges of controlling and tracking the flow of Docker images from development to production and the solutions proposed by JFrog.

InfoQ interviewed Sadogursky to better understand some of the challenges in managing the lifecycle of Docker containers.

InfoQ: Do you see many engineering teams today using Docker images as build artifacts and promoting them through their deployment pipeline?

Sadogursky: We see huge interest in Docker these days. People try to implement promotion pipelines with Docker the way they did with their previous stacks, but that doesn't always work. Some of Docker's architectural decisions make it hard to do the right thing, and some of Docker's biggest advantages also make it easy to do the wrong thing.

InfoQ: And for those that do, are they actually deploying the images in production, or just using them as fast and easy replicas of their actual VM-based production instances?

Sadogursky: The number of teams actually using Docker in production is dramatically smaller than the number that "play" with it, run proofs of concept with it, or use it in their development environment. We believe one of the reasons is that adding one more opaque layer of abstraction introduces an uncertainty that production-level artifacts can't afford.

InfoQ: Can you explain briefly why you advocate using Docker images as artifacts instead of Dockerfiles (with dependency management) + application binaries (such as a WAR file)?

Sadogursky: The whole theory is explained in this blog post, but in two words – there is no way to ensure that if you build your images from Dockerfiles multiple times (say, once per environment) you will end up with the same artifact. And that's because of the nature of the Dockerfile – most of its commands pull in external dependencies, usually without pinning them to a stable version.
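To illustrate that non-reproducibility, consider a minimal Dockerfile (the package and artifact names here are hypothetical, for illustration only); building it on two different days can yield two different images from the exact same file:

```dockerfile
# The tag "ubuntu:22.04" is mutable: it accumulates security updates,
# so the same tag can point to different image digests over time.
FROM ubuntu:22.04

# No version specified: apt-get installs whatever is "latest"
# in the package repository at build time.
RUN apt-get update && apt-get install -y curl

# The WAR is fetched from a URL whose content may change between builds
# (a hypothetical artifact location).
ADD https://example.com/app/latest/app.war /opt/app/app.war
```

Every one of these three lines can resolve differently per build, so two environments built from this file are not guaranteed to run the same bits.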

InfoQ: What kind of information should be included in a Dockerfile and what should be left out, in your opinion?

Sadogursky: Try to lock down the versions of all the dependencies, as much as possible. For some dependencies it's possible, e.g. running apt-get with an explicit version number; for others it's not (e.g. Ubuntu base images accumulate security updates under the same version tag), but try.
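Following that advice, here is a sketch of a Dockerfile with versions locked down as far as Docker allows (the digest, package versions, and filenames are made-up placeholders):

```dockerfile
# Pin the base image by digest rather than by tag,
# so it cannot silently change underneath you.
FROM ubuntu@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

# Pin the package version explicitly in apt-get
# (package=version syntax).
RUN apt-get update && apt-get install -y curl=7.81.0-1ubuntu1

# Copy a specific, versioned artifact that was built exactly once.
COPY app-1.4.2.war /opt/app/app.war
```

Even here, full reproducibility is best-effort – apt-get update still fetches live package indexes – which is why Sadogursky recommends promoting the built image itself rather than rebuilding from the Dockerfile per environment.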

InfoQ: What are the challenges in promoting Docker images through the pipeline?

Sadogursky: With most Docker registries, the current mode of promotion is: pull the image, retag it for the registry (or repository) of the next step in the pipeline, and push it back. Pretty silly, isn't it?
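That pull/retag/push dance looks something like the following (the registry hostnames, image name, and tag are hypothetical):

```shell
# Pull the candidate image from the dev registry...
docker pull registry.dev.example.com/myapp:1.4.2

# ...retag the same image for the staging registry...
docker tag registry.dev.example.com/myapp:1.4.2 \
           registry.staging.example.com/myapp:1.4.2

# ...and push the identical bits back over the network.
docker push registry.staging.example.com/myapp:1.4.2
```

Every promotion stage re-transfers the image layers over the wire, even though nothing about the artifact has changed – which is the inefficiency Sadogursky is pointing at.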

InfoQ: How do you see the state of Docker registries today (Docker's and competitors)? What are the main differences between them?

Sadogursky: That's a very broad question. There are tons of criteria the registries can be compared on. One of the most interesting is whether you need additional tools for the rest of your pipeline. Docker is a container technology; containers contain something. And if you can manage this "something" in the same artifact repository as your container images, you can establish traceable metadata between the builds that created the image and the builds that created whatever is inside the container.

InfoQ: How do you avoid a dependency management hell if you need to ensure traceability between source code, application binaries versions and Docker images versions?

Sadogursky: The "build once, promote immutable binary" actually solves this problem as well. You run the dependency management systems once, so the traceability to the source is the same once the binary is created, all the way to production.

InfoQ: From a security point of view, what are your recommendations in terms of ensuring the Docker images that get promoted to production are not vulnerable?

Sadogursky: No surprises here. You should use a security scanner. Just make sure you use one that can scan not only the Docker image itself, but also what's inside the image, what's inside the artifacts that are inside the image, and so on. Any of those layers can expose your container to security vulnerabilities at runtime.

InfoQ: Can a Docker-images-based promotion pipeline help teams achieve immutable infrastructure? What other pieces of the puzzle are needed to support that kind of "build & forget" approach (from a patching and dependency management perspective) to infrastructure?

Sadogursky: A Docker-images-based approach is all about immutable infrastructure. The idea of creating an immutable image as early as possible is exactly the methodology of immutable infrastructure. Once you have a good CI pipeline, every source change triggers a set of CI builds and promotions, ending up in a new set of Docker containers – that's immutable infrastructure at its best.

InfoQ: How do configuration management tools like Chef or Puppet fit in that kind of scenario?

Sadogursky: That's the billion-dollar question, and I am not sure anyone can answer it yet. It's very interesting to see how mutable-infrastructure software reinvents itself in the immutable-infrastructure world. Maybe Chef Habitat is the first step in that direction? We'll see.
