
Design for Continuous Evolution: Immutable Model Is Key for Robustness


During his keynote at QCon New York, Eric Brewer described how advancing from continuous delivery to fast and stable continuous evolution requires a discrete construction step to define an immutable model of the system. Brewer’s compute infrastructure design team at Google uses the Kubernetes package manager Helm to construct and safely validate new deployment models, prior to attempting real deployment, although the concepts are technology agnostic. The patterns Brewer discussed, including the reliance on immutability at each step in the deployment chain, should be key design considerations for any cloud-native software.

The deployment process can be improved by inserting a discrete step to define the construction of an immutable model of the system, prepared for deployment. Relying on immutability at each step in the deployment chain allows for a fast and stable evolution of the entire system. Improving the evolution of software at Google and the associated design issues were the focus of Eric Brewer’s keynote at QCon New York. While Google uses the Kubernetes package manager Helm, the concepts leveraged by his team are technology agnostic and the patterns apply to any cloud-native software.

Brewer described how working with immutable building blocks provides testable, repeatable encapsulations of functionality, which build upon each other at each step in the deployment process. As shown in Brewer’s slide below, binaries are packaged within containers, and containers are co-located within pods. Replicated pods become the building blocks for services. The new construction step defines the model of services used to build the system, prior to deployment.
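The layering above can be sketched in a few lines. This is a minimal illustration (not Google's tooling, and the image tag is made up): frozen Python dataclasses stand in for the immutable, composable building blocks at each layer of the deployment chain.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Container:
    image: str            # a binary packaged as a container image

@dataclass(frozen=True)
class Pod:
    containers: tuple     # containers co-located in one pod

@dataclass(frozen=True)
class Service:
    name: str
    pod: Pod
    replicas: int         # replicated pods back the service

# Each layer builds on the one below; frozen=True rejects any mutation,
# so a constructed Service is a testable, repeatable encapsulation.
web = Service("frontend", Pod((Container("web:1.4.2"),)), replicas=3)
```

Because every layer is immutable, a `Service` value can be compared, cached, and tested exactly as constructed, which is what makes it a reliable building block for the next step.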

Brewer acknowledged that "construction" is a new verb in the deployment process, and broke it into two steps. First, define a model of what will be deployed, specifying the topology, system composition, and physical resources. Second, use that model to construct the graph of what will be deployed, separate from the knowledge of how to create individual components. Offline construction of the deployment graph is zero-risk, completely segregated from implementation. While not error-free, it is deterministic, repeatable, and testable, prior to real deployment.
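The two steps can be sketched as a pure function from model to graph, plus an offline validation pass. All names here (`construct`, `validate`, the component names) are hypothetical illustrations, not Brewer's actual framework:

```python
def construct(model):
    """Step two: expand the declarative model into a deployment graph.
    A pure function, so the same model always yields the same graph."""
    return {name: sorted(spec.get("depends_on", []))
            for name, spec in model.items()}

def validate(graph):
    """Offline check, run before any real deployment: every dependency
    named in the graph must itself be a node in the graph."""
    missing = [dep for deps in graph.values()
               for dep in deps if dep not in graph]
    return missing == []

# Step one: a declarative model of topology and composition.
model = {
    "frontend": {"depends_on": ["api"]},
    "api":      {"depends_on": ["db"]},
    "db":       {},
}

graph = construct(model)
assert construct(model) == graph   # deterministic and repeatable
assert validate(graph)             # testable, with zero deployment risk
```

Because nothing here touches real infrastructure, the whole construction step can run in ordinary unit tests, which is the point of keeping it segregated from implementation.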

Brewer emphasized that construction code runs at construction time, not deployment time. This requires a new declarative framework in which to write the construction code: a language scoped solely to the task of construction, rather than the general-purpose scripting languages typically used to automate deployment tasks. Brewer likes to refer to the process as construction because each deployment type takes arguments, much like an object's constructor. In this case, the types are system components such as a front-end or a monitoring back-end, and the arguments determine what kind of front-end to construct. Construction can also be additive: one type's constructor may invoke other constructors, relying on primitive types such as a load balancer, auto-scaler, or disk.
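The constructor analogy can be made concrete. In this hedged sketch (all function names and parameters are invented for illustration), a `frontend` type constructor takes arguments and additively invokes constructors for primitive types:

```python
# Primitive-type constructors.
def load_balancer(port):
    return {"kind": "load_balancer", "port": port}

def auto_scaler(min_replicas, max_replicas):
    return {"kind": "auto_scaler", "min": min_replicas, "max": max_replicas}

def frontend(variant, port=443):
    """A deployment-type constructor: its arguments determine what kind
    of front-end to construct, and it composes primitive constructors."""
    return {
        "kind": f"{variant}-frontend",
        "lb": load_balancer(port),
        "scaler": auto_scaler(2, 10),
    }

web = frontend("web")   # construction time: no deployment happens here
```

Invoking `frontend("web")` only builds a description; like an object constructor, it returns a value that a later deployment step may act on.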

Defining the intent of a deployment facilitates a process to observe discrepancies between the current system and the desired state. Regardless of why the state (or the intent) changed, a robust system can take action to ensure the intended deployment goal is achieved and maintained.
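A toy reconciliation loop shows the idea. This is an assumed sketch, not Google's implementation: compare observed state against the declared intent and emit whatever actions close the gap, regardless of which side changed.

```python
def reconcile(desired, observed):
    """Diff intent against reality and return corrective actions."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    return actions

desired  = {"frontend": 3, "api": 2}   # the declared intent
observed = {"frontend": 1, "api": 2}   # the current system state

assert reconcile(desired, observed) == [("scale_up", "frontend", 2)]
```

Running such a loop continuously is what lets a robust system maintain, not just achieve, its intended deployment goal.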

The ironic lesson is that immutability is a necessary foundation for a stable process of software evolution: the more things change, the more they need to stay the same.
