Top Docker Misconceptions

Based on his experience as a system administrator evaluating Docker, Matt Jaynes has written on the DevOps University website about the top Docker misconceptions, warning against adopting Docker at small scale or without a solid infrastructure foundation, and suggesting alternatives for improving the deployment process.

Benjamin Wootton, CTO of Contino, had already warned about the significant operations overhead of a microservices-based architecture in his article "Microservices - Not a Free Lunch". Drawing on his experience with server setups that deliver web applications, Matt raises the same warning, adding that using Docker safely in production requires systems administration expertise:

At the moment, you need more systems expertise to use Docker, not less. Nearly every article you will read on Docker will show you the extremely simple use-cases and will ignore the complexities of using Docker on multi-host production systems. This gives a false impression of what it takes to actually use Docker in production.

If you do not want to have to learn how to manage servers, you should use a Platform-as-a-Service (PaaS) like Heroku. Docker is not the solution.

Matt recommends starting with role-based containers (app, db, cache, ...), adopting them only for the roles where they make sense, and only if the project has a solid infrastructure foundation:

If you have critical holes in your infrastructure, you should not be considering Docker. It would be like parking a Ferrari on the edge of an unstable cliff.
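
As an illustration of that starting point, here is a minimal sketch, using Python's standard library to drive the Docker CLI, that creates one container per role. The application image name and version tags are hypothetical; redis and postgres are the official images on Docker Hub.

```python
# One container per role (cache, db, app), created only for the roles
# that make sense. Uses only `docker run -d --name` and `-e`.
import subprocess

ROLE_COMMANDS = [
    # cache role: official Redis image
    ["docker", "run", "-d", "--name", "cache", "redis:7"],
    # db role: official PostgreSQL image (requires a password to start)
    ["docker", "run", "-d", "--name", "db",
     "-e", "POSTGRES_PASSWORD=example", "postgres:16"],
    # app role: hypothetical application image, pinned to an explicit version
    ["docker", "run", "-d", "--name", "app", "myorg/webapp:1.4.2"],
]

for command in ROLE_COMMANDS:
    subprocess.run(command, check=True)  # fail fast if a container cannot start
```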

He also suggests several optimizations that can improve performance and consistency in the deployment process without adopting Docker:

  • Using configuration management tools (Ansible, Puppet, ...) to create and manage servers easily, particularly in the cloud (sketched after this list).

  • Creating cloud images, instead of provisioning from scratch, to launch new servers faster. Both the base image and the servers already running can still be provisioned with configuration management tools (sketched after this list).

  • Pinning versions, using explicit versions for packages and dependencies to ensure the software does not change from environment to environment or over time (sketched after this list).

  • Deploying applications using git or rsync, so the servers can be updated with minimal downloads, similar to Docker's image layer caching (sketched after this list).

  • Deploying applications using packages when the application build or preparation process takes a long time. Using a pre-built package such as a zip, rpm, or deb speeds up deployment across servers (sketched after this list).
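
For the configuration management suggestion, a minimal sketch of driving a provisioning run from Python through Ansible's standard ansible-playbook CLI; the inventory file name ("production") and playbook name ("site.yml") are hypothetical placeholders.

```python
# Run a hypothetical playbook against a hypothetical inventory;
# ansible-playbook applies the same configuration to every server.
import subprocess

subprocess.run(
    ["ansible-playbook", "-i", "production", "site.yml"],
    check=True,  # stop if provisioning any server fails
)
```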
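For the cloud image suggestion, a minimal sketch assuming AWS and the boto3 library: it bakes a reusable base image (AMI) from an instance that has already been provisioned, so new servers can be launched from the image instead of being set up from scratch. The instance ID, region, and image name are hypothetical.

```python
# Bake a base image from an already-provisioned server.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical, already-provisioned server
    Name="web-base-image",             # name for the reusable base image
)
print("New base image:", response["ImageId"])
```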
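For version pinning, a minimal sketch in Python: a startup check that the installed packages match the pinned versions, so every environment runs identical dependencies. The package names and version numbers are hypothetical.

```python
# Verify installed dependency versions against the pinned ones.
from importlib.metadata import version

PINNED = {"requests": "2.31.0", "redis": "5.0.1"}  # hypothetical pins

for package, expected in PINNED.items():
    installed = version(package)
    assert installed == expected, f"{package}: expected {expected}, got {installed}"
```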
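For git/rsync deployment, a minimal sketch that pushes only changed files to each server with rsync, mirroring the incremental behavior Docker achieves with image layer caching. The host names, user, and paths are hypothetical.

```python
# Sync the application directory to each server; rsync only transfers
# files that changed since the last deploy.
import subprocess

SERVERS = ["web1.example.com", "web2.example.com"]  # hypothetical hosts

for host in SERVERS:
    # -a preserves permissions, -z compresses, --delete removes stale files
    subprocess.run(
        ["rsync", "-az", "--delete", "./app/", f"deploy@{host}:/srv/app/"],
        check=True,
    )
```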
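For package-based deployment, a minimal sketch that builds the application once into a single archive, so deploying to each server is just a copy and an unpack. The directory and archive names are hypothetical.

```python
# Build the prepared application directory into one release.tar.gz,
# so the expensive build step runs once instead of on every server.
import shutil

archive = shutil.make_archive("release", "gztar", root_dir="build/")
print("Built package:", archive)
```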

For multi-host production environments, Matt recommends relying on these optimizations for as long as possible, adopting Docker only once its extra benefits are truly needed; currently, only large-scale deployments see Docker's benefits outweigh the added complexity. However, at the rate the project is progressing, that could soon change.
