Continuous Deployment: Easier Said Than Done
Continuous deployment is often described as an Agile or Lean technique in which all code written for an application is immediately deployed into production. There are numerous perceived benefits to this technique, including reduced cycle time and reduced time to market for bug fixes and new features. But is it as easy as it sounds?
Jim Bird suggested that most of the changes people talk about when they mention continuous deployment are trivial: minor tweaks, cosmetic fiddling, or small bug fixes. Anything larger than that requires a relatively detailed and careful approach.
According to Jim,
Schema changes can’t be made continuously. Bigger functional changes can’t and shouldn’t be made continuously, even with dark launching. Etsy for example (one of the companies used as a poster child for Continuous Deployment), doesn’t continuously deploy bigger public-facing features. They take their time and design them and prototype them and test them and review them and plan them out with operations and customer support and product management like any sane organization.
Jo Liss mentioned that the real challenge with continuous deployment is the non-zero cost of reverting a change. According to Jo, with continuous integration the limit on how often you integrate is mostly technical, but this is not the case with continuous deployment, where the cost of reverting a change can be huge.
But once you have deployed to a production site with users and valuable data, reverting is expensive, because you may have to:
- Migrate the database back to the previous schema and conventions.
- Think about what happens to users who are using your site right now and having the application changed under their feet (potentially causing links to break, and Ajax requests to fail).
- If it’s a bug (rather than just a decision you’d like to reverse), you might even have to email users who were affected by the problem, or deal with support requests.
Likewise, Eric Ries suggested that one of the biggest challenges with continuous deployment is being ready to release all the time.
On the one hand, this is the ultimate in customer responsiveness. On the other hand, it is scary as hell. With staged releases, time provides a (somewhat illusory) safety net. There is also comfort in sharing test responsibility with someone else (the QA team).
So how can a team ensure that they can realize the benefits of continuous deployment?
- Don't push features; build in response to signals from customers
- Code in small batches
- Prefer functional tests over unit tests whenever possible
- Implement alerts and monitoring both at system and application level
- Tolerate unexpected errors exactly once and fix them immediately
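The last practice above, tolerating unexpected errors exactly once, can be sketched with a small error tracker. This is an illustrative sketch, not a prescribed implementation; `OnceOnlyErrorTracker` and the `alert` callback are hypothetical names. The first occurrence of a given error triggers an alert so it can be fixed immediately; a repeat of the same error is treated as a failure of that commitment.

```python
import hashlib

class OnceOnlyErrorTracker:
    """Alert on the first occurrence of each error; fail loudly on repeats."""

    def __init__(self, alert):
        self.alert = alert   # callback, e.g. page the on-call engineer
        self.seen = set()

    def record(self, exc: Exception) -> None:
        # Fingerprint the error by its type and message.
        digest = hashlib.sha1(
            f"{type(exc).__name__}:{exc}".encode()
        ).hexdigest()
        if digest in self.seen:
            # Seen before: tolerance has been used up; this must not recur.
            raise RuntimeError(f"Repeated unexpected error: {exc!r}")
        self.seen.add(digest)
        self.alert(exc)      # first occurrence: alert so it gets fixed now
```

The point of the fingerprint is that each distinct error gets exactly one free pass; seeing it a second time means the immediate fix never happened.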
Jo suggested that one should push fewer commits to the server. According to her, a reasonable deployment delay is on the order of five hours to two days.
So if you can sit on your hands for a while instead of succumbing to the temptation to be hyper-aggressive and deploy immediately, then you might be able to avoid most of those nasty 5% of changes you really wish you hadn’t pushed to the production server, while losing only an insubstantial amount of early user feedback.
As Jim summed it up,
Yes there’s a lot to learn from Continuous Deployment, about streamlining and simplifying release and deployment, and reducing risk by breaking work down into smaller and smaller pieces and tying all of this together with ops monitoring and metrics. But it’s not the “Holy Grail of Devops”, or at least it shouldn’t be.
Continuous deployment requires discipline
* Continuous deployment doesn't necessarily mean releasing every check-in to version control. A key pattern when implementing continuous deployment is the deployment pipeline, which puts each build through a series of tests to validate it is capable of being deployed without breaking everything. This is described in more detail here: www.informit.com/articles/article.aspx?p=1621865
* Continuous deployment means checking in on mainline. Even though you might not be exposing new features to users, you can still check them in to mainline incrementally and release code containing them, without exposing those code paths to users. You can even perform large-scale changes incrementally. See the feature toggle and branch by abstraction patterns.
* DevOps and continuous deployment share the fact that they require both automation of the build, deploy, test and release process and great collaboration between everybody involved in the delivery process. These things both do a great deal to reduce the risk of releases.
* Some systems shouldn't be released very frequently - for example embedded systems, user-installed enterprise products, and so forth. In this case the same techniques and patterns apply, so that IT can release on demand - in other words, so that IT isn't the constraint on the ability of the business to go from concept to cash. This idea is called continuous delivery. Continuous deployment is when you take this one step further by pressing the "release" button on every build that is found to be suitable for deployment.
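The feature toggle pattern mentioned above can be sketched briefly. This is a minimal illustration with made-up names (`TOGGLES`, `new_checkout`): code for an unfinished feature ships to production on mainline, but the new code path stays dark until the toggle is flipped.

```python
# Toggle state would normally come from configuration at deploy time;
# a dict stands in for that here.
TOGGLES = {"new_checkout": False}

def is_enabled(name: str) -> bool:
    return TOGGLES.get(name, False)

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    # Work in progress; safe to release because the toggle keeps it dark.
    return sum(cart)

def checkout(cart):
    # The incomplete feature is in production, but not exposed to users.
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Because both paths live on mainline, large changes can be built and released incrementally without a long-lived branch.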
Re: Continuous deployment requires discipline
As build/deployment automation is something I've done a few times now for different companies, this is a subject I am particularly interested in.
I especially appreciated the last point: Continuous delivery vs deployment. Thanks for the clarification.
Some clarification re:Etsy and features
This is all true. :) But it's worth mentioning that rollouts of public-facing features almost *always* involve a percentage ramp-up. Meaning, a new feature is shown to a small percentage (1-5%) of members on the site to begin with, and gradually ramps up as we gain confidence in the feature, its performance, any edge cases, etc. The "planning out with operations and product and customer support" refers to the "Go or No-Go" meeting we have before the roll-out happens with features that are large enough to warrant it.
It's assumed that all features that go fully public:
- have had Ops involved from the beginning of the development
- have actually been "launched" for internal Etsy staff for some number of weeks (sometimes even longer), so they're technically in production before the public ramp-up starts
- customer support, community, product, design, development all have regular pow-wows during the process from idea to 100% public in production
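A percentage ramp-up like the one described above is commonly implemented by hashing each user to a stable bucket. This is an illustrative sketch (not Etsy's actual mechanism): a user's bucket never changes, so raising the rollout percentage adds new users without flipping the feature off for anyone who already has it.

```python
import hashlib

def bucket(user_id: str) -> int:
    """Map a user to a stable bucket in [0, 100)."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def feature_enabled(user_id: str, rollout_percent: int) -> bool:
    """Show the feature to users whose bucket is below the rollout level."""
    return bucket(user_id) < rollout_percent
```

Starting at `rollout_percent=1` and ramping toward 100 gives exactly the gradual, confidence-building exposure the comment describes.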
Hope that helps. :)
Continuous Deployment - The best part of Agile
I think the area that was skipped in this article is that big architecture changes will mostly happen during the initial 50% of the development process, while the product is in beta. This works fine if the users understand that they shouldn’t be entering real data into the application yet, because upon release of the application they may need to re-enter the data. I would typically release the application to production once I complete the most important 80% of the functionality. If you did your job properly, the remaining 20% will be the small stuff. One thing to keep in mind: your user testing group should represent the full range of users. This will minimize your risk of big changes when you complete the last 50% of the functionality.
One example I can think of is when I created a new application to store measurements for an upcoming CMMI Level 3 assessment. My initial requirements were garbage, to be honest. I recognized that right away in how the users would state some functionality they wanted, and then at our next meeting it would change. The business analysts on the team were actually brilliant when it comes to our company's overall CMMI requirements, but they had great difficulty stating the subset of functionality required for this new application. A prior development team had actually failed at this task before I was assigned to take over the project. I started development from scratch the month it was supposed to be released, and halfway through my development process the CEO was requesting weekly status reports, so there wasn’t time for mistakes.
I worked on the functionality required first and gave weekly demos remotely to my main users since I worked in a different state. A number of times I had to drastically change the database design to take into account what the users wanted. This is also why only senior developers should be using Agile. If it takes you longer than a few days to drastically change several database tables and their associated webpages you should consider hiring a better senior developer. It’s not that difficult to do. You actually want these kinds of changes as early as possible in the development effort. Once you complete 50% of the functionality you should not be making big changes anymore. If you are, you need to add more types of users to your group of beta testers. One trick I found useful is to remotely meet one-on-one with some critical users if you think they are not testing the application enough. They really respect the attention and it also strengthens their feeling of ownership.
Also the application at www.compassrising.com was really helpful during the development process. Their application allows you to capture user requests for functionality and easily convert them into requirements. Incidentally I completed the primary 80% of functionality for my application within 3 months, at which time the users started entering production data and it was in time for our upcoming assessment. The users were actually very surprised and very impressed with the application.
I agree with Eric Ries on functional tests
I agree with Eric Ries on the "functional tests" point, but I suggest that these can be scheduled as agents/jobs and executed non-stop.
My name is Valeriano Cossu and I work for IBM Brno.
Re: Continuous deployment requires discipline
Well, that's why they need to go read a copy of the book :)
It's about practices and tools!
1. Database Refactoring: Use Liquibase or build your own database versioning.
2. Continuous Hot-deploy: You can use LiveRebel.
3. Remote deploy: Hudson + SCP Ant Script
Hope that helps!
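The "build your own database versioning" option from point 1 can be sketched minimally. This is an illustrative toy using SQLite, not a substitute for Liquibase: each migration has a number and is applied exactly once, in order, with the current version recorded in the database itself.

```python
import sqlite3

# Illustrative migrations, keyed by version number.
MIGRATIONS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}

def migrate(conn: sqlite3.Connection) -> int:
    """Apply any pending migrations and return the resulting version."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)"
    )
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])
            conn.execute(
                "INSERT INTO schema_version VALUES (?)", (version,)
            )
            current = version
    conn.commit()
    return current
```

Running `migrate` on every deploy is safe because already-applied migrations are skipped, which is the essential property the real tools provide.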
It's really "just" the people side that's hard
It's hard to get developers to increase their level of responsibility. It's equally hard to get testers to start thinking automate, automate, automate. Developers have a much easier time automating everything, but a tendency not to verify and not to think enough about compatibility. Testers don't seem to mind manual tasks and always seem to think it's OK to verify bugs manually. Product owners seem to think they can push more onto a team just because it has become more agile.
So training everyone to understand how to work in a continuous environment is genuinely hard. It's hard because it's about people and behaviors acquired over a long period of time.
Relational database changes are hard as well. But even here it's hard because it requires people to take responsibility, think about backward compatibility, and change things in steps. Using a DB versioning tool like Liquibase or Flyway isn't hard. Setting up a test pipeline that continuously tests backward compatibility is easy as well. Once you have that, it's all about learning and training.
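"Change in steps" usually means the expand/contract pattern. A minimal sketch, with an assumed `accounts` table and an illustrative rename of `login` to `username`: instead of renaming the column in one deploy, the change is split so old and new application code can run against the same schema during the rollout.

```python
import sqlite3

def expand(conn: sqlite3.Connection) -> None:
    # Step 1 (deploy N): add the new column; old code keeps using "login".
    conn.execute("ALTER TABLE accounts ADD COLUMN username TEXT")

def backfill(conn: sqlite3.Connection) -> None:
    # Step 2: copy existing data; new code now writes both columns.
    conn.execute(
        "UPDATE accounts SET username = login WHERE username IS NULL"
    )

def contract(conn: sqlite3.Connection) -> None:
    # Step 3 (a later deploy, once nothing reads "login" anymore):
    # drop the old column. Requires SQLite 3.35+ for DROP COLUMN.
    conn.execute("ALTER TABLE accounts DROP COLUMN login")
```

Between steps, every deployed version of the application works against the current schema, which is exactly the backward compatibility a continuous test pipeline should verify.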