
Q&A with Eberhard Wolff On the Book “A Practical Guide to Continuous Delivery”

Posted by Dylan Raithel on Dec 14, 2017. Estimated reading time: 7 minutes

Key Takeaways

  • Continuous Delivery (CD) creates rapid feedback loops that often result in a simpler technical solution that is more resilient to failure, and in a simpler overall application architecture.
  • The value-add of CD will be realized more quickly in new software projects, while the value for existing projects needs to be assessed on a case-by-case basis.
  • CD should ideally integrate with the rest of the testing cycle. Load-testing, capacity testing, and canary deployment all fit into a well-rounded CD pipeline, allowing people to focus more on edge-cases and acceptance testing that might be less easily quantified.
  • Roll-backs, canaries, and blue-green deployments become increasingly complex with larger systems, whereas systems composed of smaller, modular components can take advantage of CD and integration more readily. Conversely, successful CD tends to produce systems designed this way.

InfoQ: Thanks for speaking with InfoQ. Can you briefly introduce yourself, your background on CD and why you wrote this book?

My name is Eberhard Wolff. I am a Fellow with innoQ in Germany. We do software projects. My own focus is not so much on projects as on consulting and training.

I wrote the book because in my opinion CD is currently the most important tool to achieve increased productivity and higher quality when developing software. However, people are often struggling to make CD happen. That is why the book gives a very practical guide to CD with lots of concrete technology examples.

That helps to get hands-on experience with the technology and then to start implementing CD. The book comes with an executable example application that includes an implementation for every step in a CD pipeline. The reader can use this setup to gain some experience with the technologies.

InfoQ: What’s the analysis an organization or project team needs to make in order to decide when and what to implement on CD, and what are those factors?

The obvious and original goal of CD is to improve time to market for new features and thereby to get better business results. But there is more to CD: Constantly testing the software with reproducible results and a high degree of automation improves the quality of the software. Deploying more often and automating deployment decreases the risk of the deployment. This has a positive impact on software development and IT. These benefits might be reason enough to implement CD.

How far you can go with CD depends on the buy-in from business as well as software development, operations, and QA. With limited buy-in from business you won’t be able to get better time-to-market. With limited buy-in from Ops you won’t be able to extend the automated pipeline to go directly into production. Still, even a limited implementation of CD will be worth it, and of course it can always grow.

InfoQ: Can you describe the impact CD has had on early industry adopters and where it’s fit into larger, more mature organizations that might not have adopted CD as a part of their development life cycle yet?

The early adopters were looking for a more agile way to work. CD is the logical conclusion. It has its roots in the agile movement. It is hard to see the value in working in iterations if the results cannot go into production because CD is not implemented.

More mature organizations are currently on the path to more agility. To get all the benefits, CD alone is not enough: the organizations need to adopt Agile, too, and therefore have to reorganize. IT is often driving the change. My book shows how to initiate the change and how to really implement it at the technical level. Once you actually get changes out much quicker, the conversation about agility changes.

InfoQ: Can you detail some examples where you’ve seen CD implemented or integrated too much into an application’s architecture? What is that point, and what are some warning signs?

Often people think that CD is about automating deployment. But at the core CD is about tests and reliability. I think you can never have too many tests or too much reliability.

The impact on the architecture is a split into separately deployable modules: microservices. CD is a big driver for adopting microservices. That is the reason why I focus a lot on architecture and microservices nowadays, and why I have written a book about microservices, too. But too often microservice architectures are designed in a way that, in the end, everything still has to be deployed together. Then the benefits associated with CD cannot be achieved.

So I think the problems are not about putting too much emphasis on CD but rather about not achieving the goals or focusing on just automating deployment and not taking the tests into consideration, too.

InfoQ: Can you detail the impact CD has had on testing applications, testing on meaningful infrastructure, and managing the software development lifecycle?

CD puts even more focus on test automation. Even acceptance tests should be automated, although that is notoriously difficult to do. In the end this makes testing much quicker. Because the CD pipeline is executed frequently, smaller batches of changes are tested. That makes testing easier and provides faster feedback. So CD has not just brought more attention to tests but has also improved the approach to testing.

InfoQ: Why are acceptance tests so important?

Acceptance tests are done to ensure that the customer accepts the software. Often the customer tests the application manually. Such manual tests can take days or weeks and are error-prone. So acceptance tests offer huge potential for improvement: if they are automated, the whole pipeline becomes a lot quicker and much more reliable.

However, acceptance tests are often hard to automate because the customer has to understand and trust the tests. Otherwise they will just continue their manual testing. The book shows a technology for implementing behavior-driven development (BDD) to improve the collaboration with customers on acceptance tests. It also shows UI-based tests as an alternative. In the end, the focus should be more on collaboration and trust and not so much on the technology.
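The given/when/then structure that BDD tools encourage can be illustrated even in plain code. The sketch below is a hypothetical example (the `Cart` class and its behavior are invented for illustration); a real project would use a BDD tool such as Cucumber or behave so that the customer can read and trust the scenarios:

```python
class Cart:
    """Toy shopping cart, invented purely for this illustration."""
    def __init__(self):
        self._items = []  # list of (name, price) pairs

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_checkout_shows_the_order_total():
    # Given a cart containing two articles
    cart = Cart()
    cart.add("book", 30.0)
    cart.add("pen", 2.5)
    # When the customer checks out
    total = cart.total()
    # Then the total reflects both articles
    assert total == 32.5
```

The comments mirror the given/when/then wording a customer would see in a BDD tool; the point is scenarios the customer understands and trusts, not the specific framework.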

InfoQ: Can you talk about exploratory testing, canaries, and capacity tests as they relate to CD platforms?

Exploratory testing is almost separate from the CD platform. An example might be tests regarding the usability or performance of a registration process. The test team works on these issues with manual exploratory testing. Because most tests are now automated, the testers can focus on this kind of work. Exploratory tests do not stop software from being released; their intention is to polish the software.

Canary releasing means that the software is installed on just a few servers in a cluster. Only after it has proven itself on those servers is it installed on the others. This is typically implemented with deployment automation, but canary deployment is a pattern and can be implemented in many different ways in the deployment process. Canary releasing mitigates risk during deployment: if the deployment goes wrong, only a few servers need to be rolled back.
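The pattern itself can be sketched in a few lines, independently of any particular deployment tool. Everything here (`deploy`, `rollback`, `check`) is a stand-in for real automation steps, not any specific tool's API:

```python
def canary_deploy(servers, version, deploy, rollback, check, canary_count=2):
    """Install `version` on a small canary subset first; only roll out to
    the rest of the cluster if the canaries pass the health check."""
    canaries, rest = servers[:canary_count], servers[canary_count:]
    for server in canaries:
        deploy(server, version)
    if not all(check(server) for server in canaries):
        for server in canaries:
            rollback(server)  # only the canaries need to be rolled back
        return False
    for server in rest:  # canaries look healthy: complete the rollout
        deploy(server, version)
    return True
```

If the health check fails, the blast radius is limited to the canary servers; the rest of the cluster never saw the new version.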

Capacity tests can show that there is a performance problem in the application. Often testing is just concerned with finding problems in the logic. But a performance problem might also make it impossible to run the software in production. It is just another stage in the CD pipeline.

However, capacity tests run in a test environment that is often not as powerful as the production environment and often only contains test data. Therefore, it is important to implement measures like canary deployments in case a performance problem slips through the capacity-test phase.
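As a pipeline stage, a capacity test boils down to generating load and gating on a measured threshold. A minimal sketch, assuming the operation under test can be called directly and using the 95th-percentile latency as the gate (the function name and thresholds are illustrative):

```python
import time

def capacity_stage(operation, requests=100, p95_limit_s=0.05):
    """Run `operation` repeatedly and pass the stage only if the
    95th-percentile latency stays under the limit."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    p95 = latencies[max(0, int(0.95 * len(latencies)) - 1)]
    return p95 <= p95_limit_s  # False would fail the pipeline stage
```

A real stage would run against a deployed test environment rather than an in-process call, but the shape is the same: measure, compare against a limit, and fail the pipeline if the limit is exceeded.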

InfoQ: Can you talk about load testing frameworks as they relate to CD and some of the unique value CD provides to already-useful load testing frameworks?

The book gives an introduction to Gatling. Gatling provides a DSL for writing the load tests, so it supports the goal of automating all the steps to production. Gatling setups can be distributed to make sure enough load is generated.
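The core idea behind such a DSL — many simulated users running a scenario concurrently — can be sketched in plain Python. This is only the concept, not Gatling's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_load(scenario, users=10, requests_per_user=20):
    """Run `scenario` from many simulated users in parallel and return
    the number of failed requests (any exception counts as a failure)."""
    def user_session(_):
        failures = 0
        for _ in range(requests_per_user):
            try:
                scenario()
            except Exception:
                failures += 1
        return failures

    with ThreadPoolExecutor(max_workers=users) as pool:
        return sum(pool.map(user_session, range(users)))
```

Real tools like Gatling add ramp-up profiles, pacing, assertions on response times, and distribution of load generation across several machines on top of this basic shape.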

CD helps load tests to not just focus on performance but also to make sure that no errors in the logic are introduced during performance optimization by executing the full test suite including the acceptance tests.

Also, CD introduces infrastructure automation, which makes it easier to set up environments for capacity tests. Setting up environments for capacity tests in particular can otherwise prove complicated because capacity tests can take a long time. Running the tests on multiple environments in parallel might help to solve that problem.

InfoQ: What are the common pitfalls with CD platforms where they risk falling short on developer productivity?

CD by definition is cross-organizational: it spans product development, software engineering, operations, and QA. Getting collaboration across all these departments is usually hard. However, I think there is a lot to be gained. Even with limited collaboration you can still improve automation and tests and thereby get better quality alongside increased productivity. You can analyze how software is currently put into production and invest where the most is to be gained. It is always a great idea to analyze your current processes and figure out how to improve them. That does not have to be a strategic investment; it can be small improvements.

InfoQ: What might software engineers need to build in order to implement blue-green deployments and rollbacks without using CD tools? Can you provide examples of where CD excelled at enabling this functionality?

Blue-green deployment and rollbacks are features of the deployment infrastructure; Kubernetes and PaaS offerings like Cloud Foundry, for example, support them. Blue-green deployments and rollbacks also depend on the size of the software that is deployed: it is basically impossible to enable them for very large systems. So blue-green deployment is another reason to split a system into smaller units that can be deployed separately - microservices.
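The mechanics can be sketched as a router over two identical environments. The class and method names here are illustrative, not any specific platform's API:

```python
class BlueGreenRouter:
    """Two identical environments; deploy to the idle one, then switch
    traffic atomically. Rolling back is just switching back, because the
    previous version is still running untouched."""

    def __init__(self, initial_version="v1"):
        self.live, self.idle = "blue", "green"
        self.versions = {"blue": initial_version, "green": initial_version}

    def deploy(self, version):
        self.versions[self.idle] = version  # install on the idle environment

    def switch(self):
        self.live, self.idle = self.idle, self.live  # new version goes live

    def live_version(self):
        return self.versions[self.live]
```

This also shows why system size matters: blue-green requires keeping two full copies of the deployed unit running, which is exactly what becomes infeasible for very large monoliths.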

About the Book Author

Eberhard Wolff is a Fellow at innoQ in Germany and has more than 15 years of experience as an architect and consultant working at the intersection of business and technology. He gives talks and keynotes at international conferences, has served on multiple conference program committees, and has written more than 100 articles and books. His technological focus is on modern architectures—often involving cloud, Continuous Delivery, DevOps, microservices, and NoSQL. He is the author of Microservices: Flexible Software Architecture.
