FutureOps with Immutable Infrastructures and Built-in Failure Recovery
Mitchell’s vision encompasses repeatable environments (through configuration management tools), extremely fast deployment times (through pre-built static images), and fast service orchestration (through decentralized orchestration tools). In this scenario, provisioning a new server behaves no differently from replacing a failed one.
This vision relies on the idea of immutable infrastructures, where machines are configured at startup and never modified again. Any later environment change results in a new machine deployment that replaces the outdated immutable machine (with some caveats, such as complex database server changes or small application changes like a CSS modification). This idea has been deemed utopian by some due to the large number of external dependencies in any system today.
For Mitchell, those issues are accentuated by current configuration management tools such as Puppet or Chef. Repeatable deployments of the same server are hard to guarantee due to dependencies on packages, network availability, or changes in the environment descriptions (cookbooks in Chef or manifests in Puppet). The key to predictability, says Mitchell, is to use machine images (binaries) that have been pre-built and tested, akin to software binaries compiled from source code.
According to Mitchell, machine images gained a bad reputation in the past due to the difficulty of maintaining them. But with current configuration management tools, images can now evolve easily and be built in a continuous integration style. New tools such as Packer simplify the task further by building images for multiple hypervisors (VirtualBox, VMware, etc.) based on a single set of templates and environment descriptions.
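As a rough sketch of what such a template looks like, a Packer JSON template declares one or more builders (one per target platform) alongside shared provisioners; the ISO URL, checksum, and `setup.sh` script below are illustrative placeholders, not values from the talk:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "http://example.com/ubuntu.iso",
      "iso_checksum": "none",
      "ssh_username": "packer"
    },
    {
      "type": "vmware-iso",
      "iso_url": "http://example.com/ubuntu.iso",
      "iso_checksum": "none",
      "ssh_username": "packer"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "setup.sh" }
  ]
}
```

Running `packer build template.json` would then produce an image for each builder from the same provisioning steps, which is what makes the images reproducible and testable like compiled binaries.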
However, service discovery and orchestration tasks (like setting up load balancing) still have to take place after deployment of the image (as opposed to being part of the deployment process itself). Serf is another tool developed by Mitchell to help in this domain. According to Mitchell, Serf was designed to support failure detection and recovery by relying on loosely coupled agents and gossip-based membership (a new agent must contact an existing one to join the system). Similarly, an agent might detect a failing node and “gossip” the news to other agents, which then decide whether it needs to be replaced.
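The join-via-gossip workflow can be illustrated with the Serf CLI; the node names and addresses here are hypothetical, and the binary must already be on each machine:

```shell
# On the first machine: start an agent; it forms a cluster of one.
serf agent -node=web1 -bind=10.0.0.1:7946

# On a new machine: start an agent, then contact any existing
# member to join; membership then spreads via gossip.
serf agent -node=web2 -bind=10.0.0.2:7946
serf join 10.0.0.1

# On any member: list the cluster as this node currently sees it,
# including nodes the gossip protocol has marked as failed.
serf members
```

Because any single live member is enough to join or to learn of a failure, there is no central coordinator to provision or to lose.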
The main benefits, says Mitchell, include the speed of orchestration and the simplification of the configuration management process: only the Serf agent service needs to be configured during machine image generation, and Serf then starts automatically at machine startup.
During the Q&A Mitchell also mentioned that he sees no problem in the cohabitation of Docker (for application containers), Packer (for common infrastructure) and Serf (for service orchestration).
Sarah Howe Jul 06, 2015