The main benefit behind continuous delivery is lower-risk releases; comprehensive test automation and continuous integration are the practices that have the biggest impact on IT performance. Research into continuous delivery and IT performance shows that implementing continuous delivery practices leads to higher IT performance, and that high performers achieve both higher tempo and higher levels of stability, says Jez Humble.
Jez Humble will talk about implementing continuous delivery at the Lean IT Summit 2017. InfoQ will cover this conference with Q&As, summaries and articles.
The Lean IT Summit 2017 will be held on March 14-15 in Paris, France:
Lean IT provides validated management practices to tackle the many challenges digital brings to the workplace. The summit will explore three themes on how Lean in IT/Digital can:
- Delight your customers and users
- Increase velocity and agility to have a competitive advantage
- Move your organization beyond Taylorism and develop intense collaboration with your customers, employees and suppliers
InfoQ interviewed Jez Humble about ongoing research on the relationship between continuous delivery and IT performance, the benefits that organizations get from applying continuous delivery, and the impact of continuous delivery practices on performance, and asked him for advice for organizations that want to apply continuous delivery.
InfoQ: What made you decide to research the relationship between continuous delivery and IT performance?
Jez Humble: Since the continuous delivery book came out in 2010, it has gone from being a controversial idea to being mainstream, even in large financial services companies and government. However, I wanted to understand which bits of it are important and why, and to try and build a quantitative framework to explore the benefits. The work that I do on DevOps Research and Assessment with my business partners Dr Nicole Forsgren and Gene Kim, along with the amazing team at Puppet, has far exceeded my wildest expectations.
We’ve found a statistically valid way to model IT performance, shown that implementing continuous delivery practices leads to higher IT performance, and shown that high performers achieve both higher tempo - measured in terms of release frequency and lead times - and higher levels of stability.
InfoQ: Which benefits do organizations get from applying continuous delivery?
Humble: As I describe on my continuous delivery website, the main benefit, and the original driver behind continuous delivery, is lower-risk releases. When Dave Farley and I wrote the book, one of our biggest goals was to enable people to release new versions of complex, enterprise systems during normal business hours rather than having to perform complex, orchestrated and (typically) manual releases that required downtime during evenings and weekends.
However, these same techniques also enable higher software quality, faster time-to-market, more frequent releases, and lower ongoing delivery costs. By substantially reducing the transaction cost of pushing out a change, we shift the economics of the software delivery process so that it becomes viable to work in small batches. Thus continuous delivery principles and practices are also a prerequisite for many of the techniques advocated by the lean startup movement such as A/B testing and quickly delivering and rapidly evolving MVPs based on feedback from users.
Perhaps most interestingly, our research shows that continuous delivery reduces burnout, making for happier teams, and improves your organizational culture. That’s not to say it’s a panacea - continuous delivery requires a substantial investment and is only suitable for products and services that are likely to evolve significantly over their lifetime.
InfoQ: Which continuous delivery practices have the biggest impact on IT performance?
Humble: Comprehensive test automation and continuous integration are the biggest, with version control of both code and infrastructure a smaller but still important contributor. Continuous delivery also has a significant impact on IT performance, explaining nearly 1/3 of the variance.
However, these technical practices require investment, and even now, more than 15 years after Extreme Programming talked about their importance, plenty of teams haven’t adopted them. Many organizations still don’t have reliable, comprehensive, maintainable test automation for their mission critical services. Many teams that say they are practicing continuous integration aren’t. I have a test to see if people are actually practicing continuous integration: are developers pushing to a shared trunk/master at least daily? Does every commit cause the automated build and tests to run? And when the build fails, is it typically fixed within ten minutes? Most people can’t answer "yes" to all three of these questions. Continuous integration and test automation are hard and require substantial investment, but the data shows clearly the enormous impact they have on software delivery performance.
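Humble's three-question test implies a fast, fully automated build-and-test cycle triggered on every push to trunk. The sketch below is a hypothetical illustration of such a cycle as a shell script, not something from the interview; the `build` and `run_tests` functions are stand-ins for a real compiler invocation and test suite.

```shell
#!/bin/sh
# Minimal sketch of the per-commit CI cycle Humble describes:
# every push to the shared trunk runs the build and the full
# automated test suite, and the whole cycle must stay fast
# (a broken build should be fixable within ~ten minutes).
set -e    # abort immediately if any step fails ("red build")

build() { echo "compiling..."; }          # stand-in for the real build
run_tests() { echo "running test suite..."; }  # stand-in for real tests

START=$(date +%s)
build
run_tests
ELAPSED=$(( $(date +%s) - START ))
echo "green build in ${ELAPSED}s"
```

With `set -e`, any failing step stops the pipeline and signals a broken build; keeping `ELAPSED` small is what makes the ten-minute fix rule achievable in practice.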
InfoQ: Are there also practices which seem to have less or no impact? Do we know why that is the case?
Humble: We found that having changes approved by people external to the development team substantially reduces tempo, while having negligible positive impact on stability. Internal approval processes such as pair programming and code review are much more effective. Our hypothesis is that it’s just very hard to understand the impact of code changes through inspection. It’s much better in terms of risk management to rely on test automation supplemented by lightweight intra-team approval processes.
Another thing which will be surprising for some is that there is no correlation between QA primarily creating and maintaining acceptance tests and IT performance. It’s much better to have developers do it, or to have them work with testers. Our hypothesis here is that when developers are involved in creating and maintaining automated tests, it exerts a force on the software that makes it more testable, and that this in turn makes the tests more reliable. My personal experience is that when developers aren’t involved in the test automation, the tests end up being very expensive to maintain, and that they fail a lot because developers don’t care about them.
InfoQ: What’s your advice to organizations that want to apply continuous delivery?
Humble: Continuous delivery is a journey, not a destination, and it’s fundamentally about continuous improvement. Start by defining and communicating the measurable business goals you want to achieve, and then make sure teams have the time and resources they need to succeed. Getting better at delivery typically involves substantial investment in test and deployment automation along with re-architecture to build systems that can be easily tested on developer workstations and deployed in a simple fashion without (say) lots of orchestration. So you need to be ready for that. It took Amazon four years to move from a monolithic architecture to a service-oriented architecture which could be deployed continuously.
However, one of the great things about continuous delivery is that in many cases you can make big local improvements without a ton of work. My original experience with continuous delivery, back in 2005, was on a team of a handful of people automating the deployment of a large application; we took the deployment from being a multi-day process to something that could be done in less than an hour, with sub-second downtime and fully automated sub-second rollback using the blue/green deployment pattern. We achieved that in a matter of a few months, and it was all done using bash and cvs. That brings me to my other point: people tend to place undue focus on tooling. Avoiding the bad tools is definitely important, but there are so many good (and free!) tools out there that you shouldn’t spend a great deal of time and attention arguing about which ones to use. Focus on architecture and culture instead.
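The blue/green pattern Humble mentions can be sketched in a few lines of shell. This is a generic illustration, not a description of the 2005 project: the directory layout and the symlink-based cutover are assumptions. Two complete environments live side by side, and flipping a symlink performs both the release and the rollback near-instantly, which is what makes sub-second downtime possible.

```shell
#!/bin/sh
# Sketch of a blue/green deployment switch (paths are hypothetical).
# The web server serves whatever $CURRENT points at; a symlink flip
# performs the cutover, and flipping back is the rollback.
set -e

RELEASES=/srv/app
CURRENT=$RELEASES/current            # symlink the web server follows

deploy() {
    new_color=$1                     # "blue" or "green"
    # ... copy the new version into $RELEASES/$new_color and
    #     smoke-test it here, while the old color keeps serving ...
    ln -sfn "$RELEASES/$new_color" "$CURRENT"   # near-instant cutover
}

rollback() {
    old_color=$1
    ln -sfn "$RELEASES/$old_color" "$CURRENT"   # flip back instantly
}
```

Because the old environment is left untouched, rollback is the same cheap operation as release, which is exactly why the pattern de-risks deployment.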