Continuous Delivery with Continuous Design: Completing the Cycle
Innovations in software delivery and product ideation don't always influence each other. However, the rapidly increasing appetite for new product features, coupled with the shrinking lifetime of products and even business models, necessitates joining the cycles of continuous design and continuous delivery into a holistic approach to delivery. For lack of a better name, we'll refer to it in this article as the innovation cycle; its stages are as follows:
- The selection of an idea
- The refinement of the idea into a testable hypothesis
- The selection of features to implement the idea
- The development and testing of these features
- The development and testing of the measures that will demonstrate whether the hypothesis is true
- The deployment of the idea into production
- The measurement of the success or failure of the idea
- The repetition of the cycle
Not surprisingly, supporting such an approach requires applying innovations in metrics and analytics, architecture, testing, experimental design, and continuous design, as well as the processes and tools supporting continuous delivery. In this article, we'll introduce the crucial innovations required to keep this cycle going. Each of the topics addressed below could fill an article or more in its own right, so consider this survey piece a teaser that hopefully motivates you to learn more about the topics.
Metrics and Analytics
Metrics are a risky business. It is very easy to develop a metric that ends up incentivizing the wrong behavior. Despite the risks, however, metrics are a key aspect of the cycle described above. Ideas for inclusion in the software system are defined not just in terms of what they'll do but also in terms of how we'll know whether the idea was a good one. Let's consider a specific example. Someone in the marketing department thinks that sales on their website will increase by 20% if the buying flow is changed. The idea in this example is the changed flow, and the testable hypothesis is that users of the new flow will be 20% more likely to buy than those using the old flow. In this case, the metric is pretty simple: we need to measure the number of users in each flow and whether or not a sale results. We can randomly assign users to the old or new flow, and then assess whether the new flow achieved its objective. To implement this, we need to make sure we are tracking both users per flow and successful sales. At the end of the testing phase, we know whether the marketing idea was any good.
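The bookkeeping this example needs, randomly assigning users to a flow and counting visits and sales per flow, can be sketched in a few lines. The names below (`assign_variant`, `record_sale`, the in-memory `events` counter) are illustrative, not taken from any particular analytics library:

```python
import random
from collections import Counter

# Hypothetical in-memory event store; a real system would persist
# these events to an analytics pipeline rather than a dict.
events = Counter()

def assign_variant(user_id, variants=("old_flow", "new_flow")):
    """Randomly assign a user to one of the buying flows (a 50/50 split)."""
    variant = random.choice(variants)
    events[(variant, "visit")] += 1
    return variant

def record_sale(variant):
    """Record that a user in the given flow completed a purchase."""
    events[(variant, "sale")] += 1

def conversion_rate(variant):
    """Sales divided by visits for one flow."""
    visits = events[(variant, "visit")]
    return events[(variant, "sale")] / visits if visits else 0.0
```

With these counters in place, comparing `conversion_rate("new_flow")` against `conversion_rate("old_flow")` at the end of the testing phase tells us whether the new flow really converts better.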
Sometimes it isn’t quite so obvious what the right metric is or what type of analysis is needed to establish the value of an idea. Ideas that attempt to improve productivity, for example, can be difficult to measure. However, the feedback provided to the owners of the system on the success or failure of the idea is vital in prioritizing new features and helping to guide the evolution of the system.
Designing the testable hypothesis is effectively designing a set of experiments that will demonstrate whether the feature delivers the desired outcome. So we start by determining what the outcome of the feature should be and identifying how that feature affects the behavior of the system. The key to designing the experiment is to ensure that the feature's true impact is measured. In the buying-flow example above, measuring and comparing customers using the old and new flows concurrently provides a more legitimate comparison than, for example, comparing against historical data.
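To judge whether an observed difference in conversion rates is real or just noise, one common choice (our choice here for illustration; the approach doesn't prescribe a particular test) is a two-proportion z-test, which needs nothing beyond the standard library:

```python
import math

def two_proportion_z(sales_a, visits_a, sales_b, visits_b):
    """Z-statistic for the difference between two conversion rates.

    Uses the pooled-proportion normal approximation; |z| > 1.96
    roughly corresponds to significance at the 5% level.
    """
    p_a = sales_a / visits_a
    p_b = sales_b / visits_b
    pooled = (sales_a + sales_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se
```

For example, 100 sales in 1,000 visits for the old flow versus 120 in 1,000 for the new one yields z ≈ 1.43, short of the conventional 1.96 threshold, so we would keep the experiment running rather than declare victory.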
Evolutionary Architecture and Emergent Design
Making the cycle efficient requires, in part, the ability to rapidly adapt the system to try out new ideas. While in an ideal world we could predict all the possible ways we might want to change the system, in the real world things change in essentially unpredictable ways. The expectations of users, the business environment, the competitive landscape, and regulations all change outside the control of an organization. So we need to be able to accommodate changes whether or not they were anticipated. Evolutionary architecture and emergent design are agile software development approaches that make it as easy and safe as possible to make unpredicted changes.
Testing, covered in the next section, supports both evolutionary architecture and emergent design. Emergent design focuses on keeping the code maintainable and clean through recognizing opportunities for refactoring that arise as new functionality is added to the system. In contrast, evolutionary architecture focuses on the structural elements of a system, including technology choices and the design of the interfaces between the different structural elements.
Objectives for evolutionary architecture in this context include the ability to develop different parts of the system independently, even when they interact with each other. We accomplish this through proper encapsulation of functionality coupled with contract tests that document the assumptions each system makes about the other's behavior. Providing these tests to each development team gives early warning of the potential incompatibilities that inevitably arise as systems evolve. Other objectives of evolutionary architecture address data changes, message changes, and exchanging one implementation of a component for another.
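A contract test simply encodes the consumer's assumptions as assertions run against the provider. The endpoint shape and field names below are invented for illustration, and the provider is faked with a plain function; in practice this would call the real service:

```python
def get_order_status(order_id):
    """Stand-in for the provider's API; a real test would make an
    HTTP call here. Field names are illustrative only."""
    return {
        "order_id": order_id,
        "status": "shipped",
        "items": [{"sku": "A1", "qty": 2}],
    }

def test_order_status_contract():
    """The consumer assumes these fields exist with these types.

    Run against each new provider build, a failure here is the early
    warning that the provider has evolved incompatibly.
    """
    response = get_order_status("42")
    assert isinstance(response["order_id"], str)
    assert response["status"] in {"pending", "shipped", "delivered"}
    for item in response["items"]:
        assert isinstance(item["sku"], str)
        assert isinstance(item["qty"], int)
```

The value of such a test is that it belongs to the consumer but runs in the provider's build, so an incompatible change fails fast, before deployment.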
Testing
In the evolutionary architecture section, we described one important style of testing that supports the innovation cycle. However, many other forms of testing are critical as well. Comprehensive and effective unit testing is crucial to support emergent design and, more generally, to provide confidence that changes supporting additional functionality don't break existing functionality. Similarly, a comprehensive regression test suite provides the same kind of protection at a different level of granularity.
A comprehensive test strategy for a system involved in this innovation cycle should address the need to quickly ensure that new functionality works and that desired old behavior has not been compromised. This should include technical testing (load, environment, etc.) in addition to functional testing. It is also important that the functionality supporting the measurement of the idea be covered by the test strategy. To keep the cycle going, test automation is crucial; manual testing cycles are generally too long. Manual testing effort should be focused on critical areas and should be much more exploratory in nature.
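The measurement code deserves the same automated testing as the features themselves: if the metric is buggy, the whole experiment is worthless. A minimal sketch of such a unit test, with the helper defined inline so the example is self-contained (names are illustrative):

```python
def conversion_rate(sales, visits):
    """Fraction of visits that resulted in a sale."""
    if visits == 0:
        return 0.0  # an empty experiment arm should not crash reporting
    return sales / visits

def test_conversion_rate():
    assert conversion_rate(0, 0) == 0.0     # no visits: no division error
    assert conversion_rate(5, 100) == 0.05  # ordinary case
    assert 0.0 <= conversion_rate(3, 3) <= 1.0

test_conversion_rate()
```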
Continuous Design
A heartening trend is the focus being placed on customer and user experience design. As has happened in the past with other aspects of software development, user-centered design is learning to work in a more agile fashion. Just as with architecture and application design, some amount of up-front thinking is needed to set the framework for the user experience. However, again as with architecture and design, the focus should be on setting as little initial framework as possible, allowing the design to evolve along with changing user needs and expectations.
Continuous design as an approach allows the user experience to exploit insights gained during the evolution of the system. It also focuses our attention on the value a feature will deliver to users of the system. Hopefully, a user-centered approach protects us from the feature bloat that so often plagues our applications.
Continuous Delivery
As with the other topics in this survey article, continuous delivery is a topic complex and comprehensive enough to fill volumes, not just a few paragraphs. However, continuous delivery provides the mechanism by which our innovation cycle operates. The principle of continuous delivery most relevant here is the ability to deploy rapidly into production, shortening the cycle time between an idea and feedback on its value.
Achieving rapid deployment requires many continuous delivery techniques, including infrastructure automation, build automation, deployment and rollback automation, data migration automation and of course test automation mentioned previously. Each of these techniques is necessary to support the rapid development of new features, rapid testing of the new system, safe and rapid deployment of the application into production, and safe and rapid rollback in case either the system isn’t working as expected or if the feature turns out to be a bad idea.
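One common mechanism behind safe, rapid rollback is a runtime feature toggle: flipping a flag off retires a bad idea without a redeploy. A minimal sketch, where the flag store and function names are illustrative (real systems read flags from a configuration service, not a module-level dict):

```python
# Hypothetical flag store; in production this would be backed by a
# configuration service so flags can change without a redeploy.
FLAGS = {"new_buying_flow": True}

def is_enabled(flag):
    """Look up a feature flag; unknown flags default to off."""
    return FLAGS.get(flag, False)

def checkout(cart):
    """Route each checkout through whichever flow is currently enabled."""
    if is_enabled("new_buying_flow"):
        return new_flow_checkout(cart)
    return old_flow_checkout(cart)

def new_flow_checkout(cart):
    return f"new-flow order for {len(cart)} items"

def old_flow_checkout(cart):
    return f"old-flow order for {len(cart)} items"
```

If the experiment shows the new flow hurts sales, setting `FLAGS["new_buying_flow"] = False` is the rollback: the code stays deployed, but users stop seeing the feature.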
The Innovation Cycle
Ultimately, the goal of this innovation cycle we’re describing is to allow organizations to more readily test ideas. Without this approach, organizations invest a lot in implementing an idea, meaning that fewer ideas are tried and the organization becomes more vested in the success of a particular idea. By lowering the cost, time and risk of experimenting with new ideas, this innovation cycle allows organizations to try many more things, increasing the probability of finding a hidden gem of an idea.
Evolutionary architecture and comprehensive testing, as well as other agile software development practices, reduce the development time needed to implement the idea. Continuous design focuses attention on how the feature should integrate with the other components of the system and also provides the perspective to think about how to measure the success of the idea. Experimental design and proper use of metrics implements the feedback loop needed to assess the success or failure of the idea. Continuous delivery provides the mechanism that enables all of these to work together to implement the innovation cycle.
Decreasing the risk and cost of experimentation gives organizations the basis to continually evolve the services they offer their employees and customers. We think that completing the cycle of innovation provides organizations with a compelling competitive advantage.
About the Author
Dr. Rebecca Parsons is ThoughtWorks' Chief Technology Officer. She has more than 20 years' application development experience, in industries ranging from telecommunications to emergent internet services. She has extensive experience leading the creation of large-scale distributed object applications, services-based applications, and the integration of disparate systems. She is also currently the Chair of the Board of the Agile Alliance.
Before coming to ThoughtWorks she worked as an assistant professor of computer science at the University of Central Florida. She also worked as a Director's Postdoctoral Fellow at the Los Alamos National Laboratory, researching issues in parallel and distributed computation, genetic algorithms, computational biology, and non-linear dynamical systems. She spent her sabbatical from ThoughtWorks working with UNICEF's Innovation Lab in Kampala, Uganda in 2010.
Rebecca received a Bachelor of Science degree in Computer Science and Economics from Bradley University, and a Master of Science and Ph.D. in Computer Science from Rice University.