
The Birth of Continuous Delivery and DevOps

At the GOTO Amsterdam 2014 conference, agile coach Dan North shared his experience as part of a build team on a client project back in 2005. The team introduced several technical and cultural practices that later became core tenets of the Continuous Delivery book and of the DevOps movement; for instance, bridging the gap between the development and ops teams was critical to that project's success.

(InfoQ reached out to the team members for highlights of that period; the quotes in this story come from them.)

Tasked with reducing the testing bottleneck the organization was facing (testers waiting a long time for releases, followed by frenetic testing in reduced time frames), Dan's team realized that the core initial problem was the slow and unpredictable provisioning and deployment of controlled environments (what today would be called snowflake environments) in sufficient numbers.

Trying to set up our QA machines in the US when they vanished mid-terminal session. A few phone calls later and I find out they are on a truck being moved across the country to a new data centre!

One of the first steps the team took was to put all the artifacts and configurations involved in the deployment under version control, including the WebLogic application server container as well as the deployment code itself. At a time when manual steps were often the only means of installing or deploying application components, automating those steps (for example by templating the XML files generated by WebLogic's UI) allowed the team to deterministically reproduce servers with the required configuration (OS, packages, application server, environment settings) for a given application, an imperative implementation of infrastructure as code.
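The templating approach can be sketched as follows. This is a minimal illustration, not the team's actual scripts: the file names, placeholder syntax, and settings are assumptions, and real WebLogic config files are far larger.

```shell
#!/bin/sh
# Render a version-controlled config template into a concrete,
# environment-specific config, so the same template deterministically
# reproduces any environment instead of hand-edited snowflake servers.
set -e

# Environment-specific values (in practice these might live in a
# per-environment properties file, also under version control).
ENV_NAME="qa"
DB_HOST="qa-db.example.internal"
LISTEN_PORT="7001"

# A stand-in for the XML that WebLogic's UI would otherwise generate
# by hand; placeholders mark every environment-specific value.
cat > config-template.xml <<'EOF'
<server>
  <name>@ENV_NAME@</name>
  <listen-port>@LISTEN_PORT@</listen-port>
  <db-host>@DB_HOST@</db-host>
</server>
EOF

# Substitute the placeholders to produce this environment's config.
sed -e "s/@ENV_NAME@/$ENV_NAME/" \
    -e "s/@LISTEN_PORT@/$LISTEN_PORT/" \
    -e "s/@DB_HOST@/$DB_HOST/" \
    config-template.xml > config-qa.xml

cat config-qa.xml
```

Because both the template and the per-environment values are versioned, rebuilding a lost QA server becomes re-running the script rather than reconstructing settings from memory.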

The automation advancements were met with initial skepticism by the Ops team. Breaking down the silos required direct communication (having an ally in the other team helps, says Dan) and incremental steps that were digestible. For instance, in the early stages all the newly automated (previously manual) steps still required explicit approval from an Ops person. Another example Dan recalled of treating Ops as a customer was the choice of language for the "Conan the Deployer" code: the team went along with Ops' choice (shell scripting), even though it was the team's least preferred option.

Dan somehow convincing them that we _could_ use Conan to deploy.

Several of the team members became active promoters of Continuous Delivery and the DevOps movement: Chris Read took part in the first DevOpsDays back in 2009, Jez Humble and Dave Farley (not a build team member, but tech lead for several other teams) wrote the Continuous Delivery book, Sam Newman presented tutorials on Continuous Delivery, and Julian Simpson became known as The Build Doctor.

My strongest memory is of the Conan team being the hairiest, stinkiest, most foul-mouthed, yet effective team I've ever come across.

Many of the techniques that the team introduced in this project later featured in the Continuous Delivery book, namely single deployable artifacts, blue-green deployments, and self-service deployments.

Best of all, no more weekend releases because of blue/green deployments.
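At its core, a blue-green deployment keeps two complete environments side by side and flips traffic between them, which is what removes the weekend-release pressure: the new version is installed and checked while the old one still serves users, and the cutover (and rollback) is a single switch. A minimal sketch using a symlink as the switch (directory and file names are assumptions; in practice the switch is often a load balancer or router):

```shell
#!/bin/sh
# Minimal blue-green switch: two complete releases live side by side,
# and a "current" symlink flips traffic between them atomically.
set -e

mkdir -p blue green
echo "release 1.0" > blue/version.txt    # the version currently live
echo "release 1.1" > green/version.txt   # the new version, staged

# Traffic initially points at blue.
ln -sfn blue current

# Deploy and smoke-test green while blue still serves users...
# then flip the switch. Rollback is simply flipping it back.
ln -sfn green current

cat current/version.txt
```

The key property is that the flip is instantaneous and reversible, so releases no longer need a weekend-sized maintenance window.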

A 2006 paper titled “The Deployment Production Line” had already delved into some of the issues and solutions involved in automating a build pipeline.

Do you have similar pre-DevOps automation and collaboration stories to share with the community?
