
Does Continuous Production Lead To Extreme Agility?

by Ben Hughes on Feb 08, 2008 |

Paul Duvall of Stelligent recently posted a round-up of the activities required to extend Continuous Integration into Continuous Production: the practice of deploying software continuously, instead of batching it up into releases.

The article goes on to describe common practices in Continuous Production that build on the familiar Continuous Integration tasks (build, integrate, test):

  • Continuous Database Integration/Migration
  • Automatically promoting build artefacts through the Development, QA, Staging and Production environments
  • Remote deployments (using frameworks such as SmartFrog & Capistrano)
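The promotion practice above can be sketched as a small pipeline. This is a minimal illustration only, not code from the article; the names (`ENVIRONMENTS`, `deploy_to`, `run_smoke_tests`, `promote`) are all hypothetical, and a real pipeline would delegate the deploy step to a CI server plus a tool such as Capistrano or SmartFrog:

```python
# Sketch of automated artefact promotion through environments.
# All names here are illustrative, not from any particular tool.

ENVIRONMENTS = ["development", "qa", "staging", "production"]

def deploy_to(env, artefact):
    """Stand-in for a remote deployment (e.g. a Capistrano task)."""
    print(f"deploying {artefact} to {env}")
    return True  # assume the deploy succeeded

def run_smoke_tests(env):
    """Stand-in for post-deploy verification in the target environment."""
    return True

def promote(artefact):
    """Push one build artefact through each environment in order,
    stopping at the first environment where deploy or verification fails."""
    for env in ENVIRONMENTS:
        if not deploy_to(env, artefact):
            return env  # promotion halted here
        if not run_smoke_tests(env):
            return env
    return "production"

promote("app-1.4.2.tar.gz")  # reaches production only if every stage passes
```

The point of the sketch is that promotion is a pure pipeline: the same artefact flows through every environment, and a failure anywhere stops it from reaching users.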

So how do these practices affect the product lifecycle and, in turn, give the organisation greater agility? Chris May blogs:

'Release early, Release often' doesn't become any less effective, for any value of often. The smaller and quicker the releases, the less chance of regression, the faster features get to users, and the sooner feedback comes back to the team. Basically, they [Flickr] release pretty much every feature and bug-fix as soon as it's complete – they don't really bother with 'batching' releases like we do.
Tim O'Reilly commented in his 2005 article "What Is Web 2.0":
[Following]...The open source dictum, "release early and release often" in fact has morphed into an even more radical position, "the perpetual beta," in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time.
This notion looks to be a fundamental behaviour of a truly agile business. ZDNet published a 2005 article, "Why Microsoft Can't Best Google", which states:
Microsoft’s business model depends on everyone upgrading their computing environment every two to three years. Google’s depends on everyone exploring what’s new in their computing environment every day.
This demonstrates that the form in which the organisation releases its products can create constraints in the way the organisation responds to the changing needs of the customer.

What experience of Continuous Production do InfoQ readers have? Does it really afford the regular project team (and hence the host organisation) extra agility, or is the cost-benefit difficult to justify for all but the most affluent of these organisations?


It depends.... by Amr Elssamadisy

Many corporate customers don't want their applications changing on them regularly because there is a significant training overhead.

Re: It depends.... by Jason Yip

The training overhead issue only applies for habitual usage applications. And the significant training overhead is showing up because (1) the application was not designed with learnability in mind, and (2) too many changes are being introduced at once instead of gradually.

Non-habitual use applications require re-learning all the time and should be designed to be inherently intuitive to use (i.e. to approach zero learning time).

Re: It depends.... by Dave Rooney

Exactly, Jason. If a user needs to learn about one new feature, it doesn't take very long. If they have to learn about 10, it's going to take much longer... perhaps 10 times as long! ;)

I'm actually quite tired of hearing the argument that "the users don't like the application changing on them". This comes from the same people who scratch their heads wondering why the users are complaining that they don't get new features often enough.

I've been doing some work with a startup that deploys to production as often as is humanly possible. This isn't as much an indication of the group's agility as it is essential for their business. If something is messed up in the release, you simply revert to the last released version. That's not a big deal, since it was probably only one feature ago.

Perhaps the key motivation here is fear. With batched releases, people are fearful of "screwing up", for lack of a better term, since a rollback would mean that many features wouldn't be delivered. The startup, by contrast, fears not being able to show that it can deliver a product with features that people want, and can tolerate one or possibly two features being rolled back while they are fixed.

My experience has been that if the deployment process is relatively simple, delivering as often as possible really focuses the whole team. It doesn't have to be stressful, either. You build something, then you release it. Simple as that.
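The build-then-release-then-revert loop described above can be sketched in a few lines. This is an illustrative model only, assuming a release-history scheme like Capistrano's (each release kept separately, with a "current" pointer flipped between them); the names `deploy`, `rollback`, `releases` and `current` are hypothetical:

```python
# Minimal sketch of deploy-with-rollback: keep a history of releases
# and a pointer to the one users are running. All names are illustrative.

releases = []          # history of deployed versions, newest last
current = None         # the version users are running

def deploy(version):
    """Release a new version and make it current."""
    global current
    releases.append(version)
    current = version

def rollback():
    """Revert to the previously released version, if there is one."""
    global current
    if len(releases) >= 2:
        releases.pop()            # discard the bad release
        current = releases[-1]    # point back at the last good one
    return current

deploy("r101")
deploy("r102")        # suppose this release turns out to be broken
rollback()            # users are back on r101, only one feature ago
```

Because each release contains roughly one feature, rolling back is cheap: the distance between "broken" and "last known good" is always small.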

Dave Rooney
Mayford Technologies

Re: It depends.... by Javid Jamae

The training overhead issue only applies for habitual usage applications.


I'm not familiar with the term "habitual usage application", and Google didn't provide me with any insight. Could you shed some light on what you mean?

The type of application may be an important factor, but I think there are other important factors to consider as well:

- The ability for the customer to accept change
- The scope of the change
- The impact on the deployment environment

Some users can't afford frequent changes (no matter how small) without appropriate training and preparation. Think about day traders who can't take their eyes off of the screen for a second. They can't afford the time it takes to learn new or changed features during trading hours. I've just never seen domain-specific business software be anywhere near zero-learning time.

If you're refining existing features, the changes can usually be small (e.g. change the text on a button). But if you're creating new features, the scope of change is usually bigger (e.g. add a new price forecasting feature). Even the Web 2.0 services listed in the article don't release everything continually. They usually batch up cohesive sets of features and release them together. For example, Flickr rolled out their new RIA image organizing feature in one fell swoop. I'm sure they've been continually refining it since then.

Depending on your deployment model, some features require changes to the environment (hardware, software, libraries, licenses). These things also require planning. If you have a GUI application (not Web-based), are you really going to ask each user to upgrade every few days? Probably not.

Regardless of what your iteration/release cycle looks like, I think it makes sense to have overlapping shorter iterations (1-2 days) to roll out fixes and small changes continually.

Re: It depends.... by Jeff Santini

"This isn't as much an indication of the group's agility"

It sounds like a pretty good measure of a group's agility to me!

Re: It depends.... by Dave Rooney

"This isn't as much an indication of the group's agility"

It sounds like a pretty good measure of a group's agility to me!

Sorry - should have said, "...the group's focus on agility..." instead to convey what I meant!

Dave Rooney
Mayford Technologies

Day traders? by Will C

"day traders who can't take their eyes off of the screen for a second" - would those day traders prefer to lift their eyes for an hour to learn one new feature a week, or take three whole days off each quarter to learn 24 new features, and not have the benefit of those features for up to 3 months after they were ready?

Note that where users' well-being or money is at stake, you can't afford the "continuous-beta" releases to be buggy.
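One common answer to that concern is to gate every candidate release on automated checks, so a buggy build never reaches production at all. The sketch below is hypothetical, not from the article; the check names and the `gate` function are illustrative:

```python
# Hypothetical release gate: a build is promoted to production only when
# every automated check passes. Check names below are illustrative.

def gate(checks):
    """Return True only if every (name, passed) check succeeded."""
    failures = [name for name, passed in checks if not passed]
    if failures:
        print("release blocked by:", ", ".join(failures))
        return False
    return True

checks = [
    ("unit tests", True),
    ("regression suite", True),
    ("smoke test in staging", True),
]
print("release" if gate(checks) else "hold")
```

The gate makes "continuous" and "safe" compatible: releases still go out one feature at a time, but only after the same battery of checks that a quarterly release would get.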

Re: Day traders? by Dave Rooney

"day traders who can't take their eyes off of the screen for a second" - would those day traders prefer to lift their eyes for an hour to learn one new feature a week, or take three whole days off each quarter to learn 24 new features, and not have the benefit of those features for up to 3 months after they were ready?

I remember reading an article not long after I heard about XP back in late 2000 that described how programmers literally worked side-by-side with the traders. As the traders needed new analyses of data, the programmers would build and deploy a very quick application for the specific need.

I'll see if I can find the article again.

Dave Rooney
Mayford Technologies

Re: Day traders? by Dave Rooney

I can't find the article, but here's a blog entry talking about essentially what I had read:

beingextreme.blogspot.com/2005/10/my-first-real...

Dave Rooney
Mayford Technologies

Re: It depends.... by Geoffrey Wiseman

I dunno if I buy that. Like continuous integration, frequent changes might make each change smaller and easier to absorb, whereas large batches of changes might require significant retraining.

It might be a different story if frequent change resulted in a lot of churn: changing one feature repeatedly, requiring people to come to terms with "the latest" change over and over.

It's also tricky if the change is subtle - not visible, but important. So the way you introduce changes might become important.

