
Jez Humble on Continuous Delivery


2. Jez really needs no introduction, but he is the author of Continuous Delivery and he is also a principal at ThoughtWorks Studios, and I think I got that right. So what does a principal at ThoughtWorks do?

It is kind of a made-up name. We have principals who are the second highest people at ThoughtWorks, but what I do in particular is a few things: writing, speaking at conferences, speaking to our customers, doing some consulting on and off, and helping out with sales.

   

3. It sounds easy but it is not. As developers we need to focus on continuous integration, and indeed InfoQ has covered that quite a bit, but I wanted to step back just a little bit and talk about the overall deployment pipeline. Can you give us an overview?

Yes, absolutely. One of the problems that we've experienced again and again working on large scale Agile projects at ThoughtWorks is that development teams start doing Agile and have these iterations, but in terms of getting software delivered to production, or released to users in the case of products or embedded software, there is what is sometimes called "the last mile", the bit from dev complete to release, and that is often the most high risk and volatile part of a delivery process. It goes back to the fact that you can be done with a bit of functionality (dev complete), but you are not really done-done, in the sense that you haven't tested it in a production-like environment under realistic loads, and you can find all kinds of problems when you do that, around your architecture and so forth.

And then, at a slightly higher level, if it takes a long time to get stories released to users, it takes a long time to get feedback on whether what you are doing is actually valuable. One of the things that has been talked about a lot in Silicon Valley is the lean startup and this idea of creating a minimum viable product and iterating rapidly, so Continuous Delivery is essential when you want to get fast feedback on your business idea, on your hypothesis, in order to be able to actually iterate and produce something valuable.
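To make the deployment pipeline idea concrete, here is a minimal sketch in Python. It is an illustration of the concept only, not code from the book: the stage names and stubbed checks are assumptions, and a real pipeline would run each stage on a CI server against real environments.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReleaseCandidate:
    revision: str
    passed: List[str] = field(default_factory=list)

def commit_stage(rc: ReleaseCandidate) -> bool:
    # Fast checks: compile, unit tests, static analysis (stubbed here).
    return True

def acceptance_stage(rc: ReleaseCandidate) -> bool:
    # Automated acceptance tests in a production-like environment (stubbed).
    return True

def capacity_stage(rc: ReleaseCandidate) -> bool:
    # Performance tests under realistic load (stubbed).
    return True

PIPELINE: List[Callable[[ReleaseCandidate], bool]] = [
    commit_stage, acceptance_stage, capacity_stage,
]

def run_pipeline(rc: ReleaseCandidate) -> bool:
    # Each gate is more production-like than the last; a failure stops the line.
    for stage in PIPELINE:
        if not stage(rc):
            print(f"{rc.revision}: failed {stage.__name__}; stop and fix")
            return False
        rc.passed.append(stage.__name__)
    print(f"{rc.revision}: releasable on demand ({', '.join(rc.passed)})")
    return True

run_pipeline(ReleaseCandidate(revision="a1b2c3d"))
```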

   

4. So feedback is absolutely critical?

Yes. It's all about optimizing for fast, rich feedback and multiple feedback loops: from users to the business at the highest level, but also all these other feedback loops, from the unit tests to developers, from operations back to development and testing and so forth. There are all these different feedback loops, and it's about very rich and fast feedback.

   

5. Last night at one of the receptions I had the pleasure of meeting two guys from the Department of Defense at the Pentagon, and we were talking about continuous deployment, not necessarily delivery but deployment. First of all, there is a distinction, right?

Continuous deployment is when you release every good build to production, and that really is something that applies to web sites and software-as-a-service systems. Continuous Delivery is kind of a superset, in the sense that continuous delivery is about being able to release on demand and to be able to do push-button releases that are low risk. That may mean continuous deployment, but it may not; particularly in the case of products or embedded systems it doesn't make sense to do continuous deployment in the sense of continuous release to users, but it still makes sense to do continuous delivery. You still want to go as far down the pipeline as you can; you still want to be continually taking builds and running them in production-like environments, which in the case of embedded systems means on the real electronics, as frequently as you can. So Continuous Delivery applies to any kind of system that involves software.
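The distinction can be summarized in a few lines of illustrative Python: the pipeline is identical either way, and only the final promotion step differs. The AUTO_DEPLOY flag and the function names here are hypothetical, not part of any particular tool.

```python
# Hedged sketch: continuous deployment vs. continuous delivery differ only
# in whether promotion to production is automatic or push-button.

AUTO_DEPLOY = False  # True: continuous deployment; False: continuous delivery

def promote_to_production(revision: str) -> None:
    print(f"deploying {revision} to production")

def on_good_build(revision: str) -> None:
    # Called when a build has passed every stage of the pipeline.
    if AUTO_DEPLOY:
        promote_to_production(revision)  # every good build is released
    else:
        print(f"{revision} is releasable; awaiting push-button approval")

on_good_build("a1b2c3d")
```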

   

6. So that was their concern: "am I required to deploy all the time?" They are working on very mission critical systems, and frankly they were scared at the very thought of continuous deployment. So is risk one of the criteria for considering not deploying all the time?

Actually, continuous delivery is all about risk management and increasing the stability of your systems. That really is one of the key value propositions, apart from the feedback thing. One of the things that I like to point out is that when Flickr was acquired by Yahoo!, Flickr were deploying more or less ten times a day, and Yahoo! obviously had a more traditional process. They did some stats and worked out that Flickr actually had higher uptime than Yahoo!, because this stuff requires discipline. What you are doing when you implement a deployment pipeline is constantly validating your system against realistic scenarios, and that gives you much better transparency into the risk of the changes you are making, and that is really important.

This stuff is really about reducing risk, increasing transparency, and constantly validating what you are doing. In mission critical systems that is really important: continuous delivery allows you to constantly validate against what is actually going on in real life, getting you fast feedback. So I think if you are building mission critical systems this stuff becomes more important.

   

7. Right. So you are working the problems out as you go, and your releases have fewer problems coming through the pipeline.

Exactly, because all the practices in continuous delivery are about making sure that when you release, if there is a problem, root cause analysis is really simple. We have practices like building the binary once at the beginning and then taking that same binary all the way through to production. That removes the binary as a source of problems in your release process, because you know it's the same binary you tested all those times, all the way through the pipeline. Infrastructure as code, managing the configuration of your environments from source and testing it all the time, allows you to remove your infrastructure configuration as a source of risk in the delivery process.

And the same is true with database deployments and configuration management, all this stuff: writing scripts so that the same scripts you use to deploy to testing environments are the ones that deploy to production. It's all about testing every single part of the release process from as early on in the project as possible, and as frequently as possible, so you remove those sources of risk early on.
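As a rough illustration of the build-once principle, here is a hedged Python sketch: the artifact is fingerprinted at build time, and the same deploy function refuses to deploy anything but those exact bits to any environment, so only configuration varies. The helper names and config values are hypothetical.

```python
import hashlib

def fingerprint(artifact: bytes) -> str:
    # Computed once, at build time, and carried through the pipeline.
    return hashlib.sha256(artifact).hexdigest()

def load_config(environment: str) -> dict:
    # Per-environment configuration, kept in version control (illustrative).
    return {"test": {"db": "test-db"}, "prod": {"db": "prod-db"}}[environment]

def deploy(artifact: bytes, expected_sha: str, environment: str) -> None:
    # Refuse to deploy anything except the artifact that was tested upstream.
    if fingerprint(artifact) != expected_sha:
        raise ValueError("artifact does not match the tested build")
    config = load_config(environment)
    print(f"deploying {expected_sha[:12]} to {environment} with {config}")

artifact = b"...compiled build output..."
sha = fingerprint(artifact)
deploy(artifact, sha, "test")
deploy(artifact, sha, "prod")  # identical bits, different configuration
```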

   

8. Ok, so that is test driven development really, right?

Absolutely. It's basically applying the original principles of XP to the whole delivery process, from beginning to end. In terms of principles there is not a lot that's new in the book; it's not a massive original leap. It's all stuff that we've been talking about since Agile and even before that. What I think is new is a bunch of practices and patterns, like the deployment pipeline, that we've proven to be successful over the last 5 to 10 years at ThoughtWorks, and also a bunch of tools that have come out in this space, like Puppet and Chef, that help you manage the whole stack. The F5 Networks people just released a VM that allows you to simulate your load balancing configuration in a test environment, and what we are finding now is that you can actually validate the whole stack way before you have to release to production.

   

9. Test Driven Development done wrong: let's talk about unit testing, maybe. What would be a bad example of unit testing?

Unit testing I think is really important. One of the things I think is essential in order to be able to do these validations is that you have automated tests at every level, whether that is the unit test level or the automated acceptance test level; you need to have these validations automated. One of the constraints on the ability to do continuous delivery is if you are doing manual regression testing. My colleague Neal Ford has a joke that "when humans do things that computers could be doing instead, all the computers get together late at night and laugh at us", and in terms of regression testing that is absolutely true. So yes, you need tests at the unit level and the acceptance level in particular that are automated.

In my experience the only way to create an effective set of automated unit tests is through TDD, and so that is a really key principle. And it's one that is not sufficiently employed. I facilitated a panel on continuous deployment in Silicon Valley a couple of months ago, and when I asked how many people were doing TDD, it was something like 25% of the audience. I was pretty shocked, because I thought that in Silicon Valley people were going to get it, but it's not really the case, and it's the same in enterprise software; it's one of those memes that hasn't really caught on as much as it should have done.

So, on TDD done wrong: the first thing is that you should very strongly consider doing TDD at all, because not doing it is a big problem. But there are really important practices around how to do it right and how to do it wrong, particularly around creating maintainable suites of tests. It is very easy to do TDD in a bad way, where you create suites that aren't maintainable and that break all the time, not because the system under test is buggy, but because you've written the tests in a brittle way. One of my favorite books that came out in the last couple of years was Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce, which actually won the Gordon Pask award a couple of years ago here at the Agile conference.
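As a tiny example of the test-first style, and of keeping tests pinned to observable behavior rather than implementation details (which is what keeps suites maintainable), here is a self-contained unittest sketch; the function and its spec are invented for illustration.

```python
import unittest

def parse_version(tag: str) -> tuple:
    """Turn a tag like 'v1.2.3' into the tuple (1, 2, 3)."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

class ParseVersionTest(unittest.TestCase):
    # Written before the implementation; asserts behavior only, so the
    # internals of parse_version can be refactored without breaking tests.
    def test_strips_leading_v(self):
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

    def test_handles_tags_without_prefix(self):
        self.assertEqual(parse_version("2.0.1"), (2, 0, 1))

if __name__ == "__main__":
    unittest.main()
```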

   

10. Recently on your blog you talked about feature branching. Talk to us about feature branching.

Back in the old days, before distributed version control systems, one of the things we found in many large organizations is that they would do their development on branches and integrate into mainline very infrequently, at the end of the release. When that happened and you integrated all these branches: a) it would take ages to merge, and that would be extremely painful, and b) when you finished merging, the system wouldn't even work. That was the source of a lot of the pain of integration that the original continuous integration stuff was designed to solve.

So we've been beating this drum for many years: "Don't use these source control tricks." Branching is fine; there is no problem with branching. It's just this practice of feature branching, where developers don't merge into trunk regularly, which is problematic, and we still see that today, frankly, a lot more than we should. One of the interesting things that has happened in the last few years is the rise of distributed version control systems, and it's something people at ThoughtWorks have been using since the early days. I started writing the book back in 2006, and I've used Git and Mercurial almost exclusively for the last 2-3 years.

So I am a big fan of DVCSs, and one of the points that a lot of fans of DVCSs use to talk about the benefits of the tools is feature branching and the ease of merging. So I am conflicted on this, because with a DVCS, from a purely semantic point of view, every time you are working on a developer workstation it's a branch, by definition. So yes, you are always working on a branch. I like to say feature branching is evil, but that is a sound bite. The real point is you don't want to keep too much inventory away from mainline. You want to make sure everyone on your team is constantly merging into mainline, which in the case of a DVCS is a conventionally designated central repository, which is the start of your build pipeline.

That is where the binaries are created that get taken into production. So really the point I am trying to make is: yes, you are working on feature branches, and that is OK; the point is you want to be merging regularly into mainline, and obviously when you do that you have to pick up other people's changes as well and merge those in. But you want to make sure there is not too much inventory on those branches, not more than you can read and make sense of pretty easily. And there are a number of reasons why it's problematic, not just because of the integration problem, but also because it discourages refactoring.

If a bunch of people have stuff on branches and someone refactors, yes, they should tell you when they are going to do it, and that is important. But if they tell you and you've got weeks' worth of stuff that is not merged, then telling you is great, but it doesn't solve the problem: you still have to merge in weeks' worth of stuff.
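One way to keep an eye on that inventory is to measure how far each branch has drifted from mainline. The following Python sketch shells out to git; it assumes git is on the PATH, a repository in the current directory, and a mainline branch called master, and the threshold is arbitrary.

```python
import subprocess

MAINLINE = "master"
MAX_UNMERGED_COMMITS = 10  # arbitrary "too much inventory" threshold

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

branches = git("for-each-ref", "--format=%(refname:short)",
               "refs/heads").splitlines()
for branch in branches:
    if branch == MAINLINE:
        continue
    # Count commits on the branch that mainline does not have yet.
    unmerged = int(git("rev-list", "--count", f"{MAINLINE}..{branch}"))
    status = "OK" if unmerged <= MAX_UNMERGED_COMMITS else "MERGE SOON"
    print(f"{branch}: {unmerged} commits ahead of {MAINLINE} [{status}]")
```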

   

11. And manual testing, when does that come into play? That is where human error comes in, where we have the best chance of introducing errors. So, automated testing versus manual testing?

Absolutely. Brian Marick has this great diagram, his test quadrant diagram, where he divides tests into four quadrants according to whether they are developer facing or customer facing, and according to whether they validate the technical part of the system or the user facing part of the system; I am not sure if I've got it exactly right, but it's something along those lines. The point is that in one part of the quadrant you've got the unit tests and the component tests, and in the user facing part you've got the acceptance tests; then down in the bottom right there are the cross functional tests: security, performance, availability and so forth, and in the top right there are things like showcases, exploratory testing and usability testing.

That stuff, showcases, usability testing and exploratory testing, is what humans are good at, and that is what your testers should be spending most of their time on, because that is where you need imagination and cleverness and smarts. If you are using humans to do the other stuff on the left hand side, that is really problematic, because it's error prone. The days when you could have these massive acceptance test scripts that people repetitively go through, those days are gone, I think, in the case of strategic software. In reality people still do it, but I think as we start reducing the lead time and cycle time of our projects it's going to be too big a constraint. So yes, all this stuff on the left side should be automated.

We are starting to see more tools for doing things like performance testing and security testing in an automated way; it's still hard, but these practices are coming forward. Acceptance testing, creating maintainable suites of automated acceptance tests, is still hard, but again, Dave and I talk about some of the practices around that in the book, Continuous Delivery, and we are starting to blog more about this stuff. It's something we know is possible because we've done it successfully on projects at ThoughtWorks, but the practices and the tools are still evolving.
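One practice that helps keep acceptance suites maintainable is layering: the tests speak a domain language, and a thin driver hides how the system is actually exercised. This is a generic sketch with an in-memory stand-in driver rather than a real HTTP or UI binding, so the names here are illustrative.

```python
class AppDriver:
    """Adapter between the test's domain language and the system under test.

    In a real suite this would drive a deployed instance over HTTP or the UI;
    here it is an in-memory stand-in so the example is self-contained."""
    def __init__(self):
        self._orders = []

    def place_order(self, sku: str, qty: int) -> None:
        self._orders.append((sku, qty))

    def order_count(self) -> int:
        return len(self._orders)

def test_customer_can_place_an_order():
    # The test reads as a user journey, not as UI plumbing.
    app = AppDriver()
    app.place_order(sku="BOOK-1", qty=2)
    assert app.order_count() == 1

test_customer_can_place_an_order()
print("acceptance test passed")
```

The payoff of this layering is that when the delivery mechanism changes, only the driver changes; the tests themselves stay stable.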

   

12. So we are coming down to the commit stage, what happens when the tests fail?

Firstly, one of the things we talk about is the importance of getting feedback in 5 minutes or less. It's not going to be comprehensive feedback, but it's going to be some indication: is my system still working? Obviously the first thing is that people have to find out, and the next thing is that you actually have to stop and fix it. There is this concept from lean, the andon cord, which you pull when you see there is a problem, and everything stops. So at that point someone needs to pony up and say: "I am actually going to volunteer to fix this", and that is the main thing. People talk about continuous integration and often think it's about the tool. It's not, it's about the practice, and the key things are: a) you have to get the feedback, and then, crucially, b) people have to act on it. I am sure we've all been to places where there is a CI server and it's red and no one is paying any attention to it; at that point you are not doing continuous integration.
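A commit stage along those lines might look like the following sketch, which enforces both outcomes: the build goes red if the fast suite fails, and it also goes red if feedback stops arriving within the five-minute budget. The test command and directory layout are assumptions.

```python
import subprocess
import sys
import time

FAST_SUITE = [sys.executable, "-m", "pytest", "tests/unit", "-q"]  # illustrative
TIME_BUDGET_SECONDS = 5 * 60

start = time.monotonic()
result = subprocess.run(FAST_SUITE)
elapsed = time.monotonic() - start

if result.returncode != 0:
    # Red build: stop the line; someone must own the fix before new work lands.
    sys.exit("commit stage RED: stop the line and fix it")
if elapsed > TIME_BUDGET_SECONDS:
    sys.exit(f"commit stage too slow ({elapsed:.0f}s): move slow tests downstream")
print(f"commit stage GREEN in {elapsed:.0f}s")
```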

   

13. That really speaks to the importance of tools, I guess.

You have to have the tools; the tools are useful, but the important thing is the human factor.

   

14. Just taking it a step further, we're down to the release. What happens when the release fails, given there are a couple of reasons we might have gotten this far?

The moment the release fails, the first thing to do is restore service. You have to focus on restoring service; certainly for anything critical, that is the first priority. But then the important follow-on to restoring service is doing root cause analysis, actually working out why it happened and putting guards in place to prevent it happening again, which will certainly mean tests at some point, so that the problem can never occur again. This again speaks to the importance of automating everything, because those tests may be on your code, and they may also be about your infrastructure configuration. Being able to test your infrastructure configuration is one of the key things that comes out of the infrastructure as code movement, being able to do BDD on infrastructure using tools like Cucumber and Puppet. But yes, I think first restore service, then do the root cause analysis.

John Allspaw did a really great talk at Velocity this year which is very well worth checking out; he talks a lot about creating reliable systems and doing things like root cause analysis, and about some of the practices around making sure you can restore service fast and build resilient systems and so forth.
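Here is one hedged illustration of turning a root cause finding into a permanent guard: if an outage traced back to a bad configuration value, a test added to the pipeline makes that failure mode impossible to reintroduce silently. The file name, key and threshold are all hypothetical.

```python
import json
import unittest

class ProductionConfigGuards(unittest.TestCase):
    def setUp(self):
        # Environment configuration kept in version control (hypothetical path).
        with open("config/production.json") as f:
            self.config = json.load(f)

    def test_connection_pool_large_enough(self):
        # Regression guard from an incident review: pool exhaustion once took
        # the site down; fail the pipeline if the limit ever drops again.
        self.assertGreaterEqual(self.config["db_pool_size"], 50)

if __name__ == "__main__":
    unittest.main()
```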

   

15. You mentioned Puppet; what do you think of the whole DevOps movement?

I am very excited about it. I think it's very interesting. Just before I came here I was watching a talk by Patrick Debois, who really founded the movement, and Julian Simpson about some of the latest tool advances in this space; they were talking about Vagrant for managing virtual machines, and Chef and Puppet and so forth. DevOps has two components really. In my opinion DevOps has resisted definition on purpose; it's kind of an anti-movement in some respects, because I think they want to focus firstly on the cultural side of things, this idea of development, operations and testing collaborating very closely all the way through the delivery cycle. A lot of the problems in releasing software reliably come from the fact that developers are measured on how fast they can deliver stuff, while operations are measured on the stability of the production systems. And so they come into conflict.

I think one of the primary messages of continuous delivery is that it's not a zero-sum game. This is why I like to talk about the Flickr example: even though they were releasing more frequently, the stability of their production systems also increased. So you can achieve both of these things, and DevOps talks a lot about how you do that, both through collaboration and also through the application of Agile techniques, like infrastructure as code, test driven development and refactoring, to infrastructure. So those are really the two components.

There was a blog entry about DevOps which talks about culture, automation, measurement and sharing, and that is a good way to think about it simply. Any time that you are doing continuous delivery in an organization which has an operations department, you need to be thinking about DevOps. It's crucial to enabling continuous delivery.

   

16. I just wanted to find out what you are up to, what you are currently working on or perhaps what you are interested in right now, maybe aside from continuous delivery?

One of the things that has piqued my interest recently is the lean startup stuff that Eric Ries has been working on, partly as a result of moving to San Francisco and actually seeing a lot of this stuff happening around me. So I did a talk on Tuesday at Agile 2011 about taking the lean startup to the enterprise; obviously most of our customers at ThoughtWorks are enterprises. And it fits nicely with continuous delivery, because one of the key questions of the lean startup is how you innovate and produce novel products and services under conditions of extreme uncertainty.

And that is a problem enterprises face as well, particularly now that we have a boom in Silicon Valley; these people are going to eat enterprises for lunch if enterprises aren't ready to respond rapidly to changing market conditions. So Eric Ries has come up with this whole methodology, and it's stuff that's been going on in Silicon Valley and other places for a long time, but it hadn't necessarily been codified.

Steve Blank wrote a book, The Four Steps to the Epiphany, about customer development, which is a key element of this stuff, and you can think of it as a cycle. There is a customer development side, where you have ideas and you work out what you should build, and then there is a continuous delivery part, where you build stuff and get feedback from users on what you've built and whether it's valuable, and that goes back into validating your ideas, iterating on your ideas, maybe finding out your whole business hypothesis was flawed and then pivoting your business idea. So I think it's fundamentally the application of the scientific method to the process of innovation, and that is something a lot of enterprises could benefit from.

It touches a lot of different parts of the enterprise, from the PMO to the delivery process to the work of operations, and it fits into continuous delivery and DevOps, so that is one of the things I am particularly interested in right now.

   

17. Very interesting. This is a very non-technical question. The venture capital community, are they becoming aware of the lean startup? Because I can envision a day when we are populated again with tons and tons of startups.

Absolutely, and that is happening right now. The VCs are very much involved with and interested in all this. Steve Blank has been working with the VCs for a number of years now, and I think VCs are interested in it because it reduces the risk of their investments: if you can have a more scientific approach to the management of startups, and you can get a higher success rate, or at least faster failure, that is very valuable. And one of the points I'd like to make is that the business within enterprises is effectively acting as a venture capitalist. They may not think they are, but they actually are. One of the problems is that projects get measured in terms of their success based on delivering on scope, on time and on budget.

That is not actually a good measure of the success of a project. A good measure of the success of a project is: did we actually make money? That often isn't even taken into account. We've certainly worked on projects within ThoughtWorks where we've done a great job, the customer has been very happy, we've delivered the project or the service or whatever, and then we come back a year or two later and it has died, because it turned out that people didn't actually want it. And so people need to actually start measuring this stuff: not just was it delivered in an acceptable way, meeting the constraints, but was it actually valuable to people. This goes back to the first principle of the Agile manifesto.

Our first job is to deliver high quality, valuable functionality to our users; I can't remember the exact wording of it, but this is what enterprises need to be focusing on. The people in the business who pay for these projects are VCs; they don't think about themselves like that, but they are.

   

18. You brought up ThoughtWorks, and there is the old expression "eating your own dog food." I think it's pretty well understood that ThoughtWorks eats its own dog food, but can you shed a little light on how ThoughtWorks integrates this into its own practices for Studios?

Absolutely. I worked for the last three years within Studios; I was the product manager for our continuous integration and release management tool called Go, and we were very heavily into that, to the extent that we built Go using Go, and when we had a good build of Go it would redeploy itself in order to rebuild itself again, so it's kind of meta. So we built that into our process. We have other tools, Mingle for project management and Twist for test automation, and we have a big internal grid of boxes that we use for building and testing those, using Go. On the delivery side of the products we very strongly dog food: we use Twist to write the automated functional tests for Go, we use Mingle to manage the projects and so forth.

We also like to use those within ThoughtWorks, but ThoughtWorks is very proud of its objectivity and of doing the right thing for customers. So ThoughtWorks consultants will never shill our products, actually to the extent of overcompensating and saying we are not even going to push these tools. So it's kind of interesting that the harshest criticism we get for our products is from within ThoughtWorks; people outside ThoughtWorks are much politer about our stuff than other ThoughtWorkers, which is kind of interesting. And you know that if ThoughtWorkers like what you've done, then you've done something really good.

As long as you've got a thick skin and you can survive that criticism, it's really useful, so yes, absolutely. We now have a continuous delivery practice within ThoughtWorks. Continuous delivery is something that appeals to executives; you can take this message to executives and they love it and are interested in it, and we're actually doing customer development on developing this practice and delivering offerings to users. So we want to take this stuff and use it to build our business, and this has always been something we've done within ThoughtWorks: being early adopters, testing this stuff out, finding what is valuable, but always subordinating it to doing the right thing for our customers.

We have a technology radar, one for this quarter just came out, where we talk about what is new in the technology landscape, to what extent it's tested and to what extent we would recommend it: "the practices are solid around this, this is well understood technology, you can use it", or "this is stuff that is new, trial it, don't necessarily put it into practice". So we always want to make sure that we are doing the right thing.

   

19. It's a learning experience for you, a way to keep your head in the game, basically. Have there been any major failures in that process, in the products, or has the continuous deployment and integration actually been very successful?

Continuous integration is kind of a no-brainer; it's one of those practices that almost universally makes a big positive difference in terms of bang for your buck. But you always need to be careful about all these practices. You can never say "this is always right for you"; you always have to understand the context, and the human element in particular. There is a well-worn saying that most failures are people failures, not technology failures; all problems are people problems, and that is absolutely true. Any time you recommend something you have to specify the context in which it applies.

   

20. And you need to adapt to that context.

Absolutely, which is the root of Agile; again, it's the scientific method. One of the things that we say about continuous delivery, for example, is that you shouldn't drop everything and implement continuous delivery all at once. You should always be incremental about these things: take a pilot project, something which is strategically important, but where people aren't working weekends and nights to deliver. Try out these techniques, see if they work for you, apply the scientific method: you have a hypothesis, you test it, you get the results back, and you iterate the cycle. You need to do these things in an incremental way. So that is very important; there is no sense in which you can say about any of these things: "You're going to do this, it's going to be a silver bullet, it's going to solve all your problems." That is crap.

And there is the tool vendor version of this, which is "if you buy our tool everything will be fine." We resist that even as a tool vendor. Someone comes to me and says: "We want to do continuous delivery, can we have your tool?" You always want to step back and say: "Listen, the most important thing is the organizational part of it. Let's focus on that. Yes, the tool will enable it, but you need to focus on the organizational element."

   

21. Get the philosophy part right first, then put it into practice. Just as an observation, I've noticed that the tool vendors each have maybe a part of the solution, and they are talking more to each other, because each one enables the other, especially when you are talking about ALM. Is that your observation as well, or is ThoughtWorks trying to solve the entire problem front to back?

No, we are not; we are not big enough to do that. Yes, I think there's a lot of collaboration between vendors, and there is this marketing term "co-opetition" which I think applies, even though it's a horrible mangling of the English language; but you always need to be aware of what is happening. All tools exist within an ecosystem, and you're only ever going to solve part of that ecosystem's problem. So for example with our stuff within Studios: if you look at Go, it is kind of an orchestration layer around the deployment pipeline, but it relies on build tools, deployment tools, testing tools, infrastructure management tools, project management tools; it has to tie into all these things. We are never going to solve this whole problem; we don't want to, it's not the right thing to do. We need to tie into that stuff, and all these tools exist within the context of a process.

One of Studios' main value propositions is: you have a process, and we are not going to mandate the process; we are going to adapt to the process you use. If you look at some of the full stack tools, where they do try to solve the problem end to end, they sell you on these beautiful graphs and reports that you get. Those don't work unless everyone uses the process that they mandate, and this is problematic, because it might not be the right process for you, and it relies on everyone filling in all these boxes; as a developer there is nothing people hate more than having to fill in 20 boxes before you can check in any code.

People get around it by filling it full of crap, and then you get the graphs but they are meaningless. So our whole thing is: we are going to adapt to your process, and you can still get the pretty graphs, by configuring the tools according to your process, and we'll still gather that information for you. I think we're kind of unique in that space in doing that, but we think it's the right way to do it.

   

22. We are pretty much at the end of our time, but as a couple of parting words, do you have any advice for the enterprise architect community that's taking a look at continuous delivery now?

Yes. In terms of enterprise architecture there are two elements: there is the element of what it actually means to be an architect, and there is the value of architecture as an engineering practice. We have always advocated architects who are practitioners, who know how to code and actually do code, and again it's about the feedback loop. You put an architecture in place; that architecture will change as the product evolves, as the service evolves, and as the business plan changes, that will have an impact on the architecture as well. So it's important that architects are always involved and actually writing the code that implements the architecture, so you get feedback.

In an enterprise context, one of the things we did recently which was quite interesting was getting all the architects from all the regions together for a session where everyone just talked about what they were doing. Really the benefit of that was getting a shared understanding among all the architects of what they were doing and the implications of it, and then meeting regularly after that to touch base and drive that feedback loop. So in terms of the human element, I think architects should be practitioners; actually coding is important. In terms of architecture itself, there is this misconception that Agile removes the need for architecture, and that is really not true.

Again, what we talk about is just enough architecture up front. You always need architecture because, apart from anything else, there is the standardization side of it, and there is also the fact that architecture is what defines the cross functional attributes of your system: performance, availability and so forth. It's important to get that right, but you won't get it right the first time; that is just the nature of complex systems. So again, the deployment pipeline is useful because it allows you to validate your architecture from early on, and if you can validate your architecture by running performance tests, availability tests, these kinds of things, from early on, then you can actually make the changes to your architecture that you need to make sure it's actually the right one.

And again it's the feedback loop, constantly validating and refining your architecture, because those changes are expensive to make late. You want to make them early on, when they are cheap to make, which is part of the value proposition.
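As a final illustration, validating a cross functional attribute early can be as crude as a scripted latency check running in the pipeline against a walking skeleton. Everything here (the endpoint, sample size and threshold) is a placeholder, not a recommendation of specific values.

```python
import time
import urllib.request

ENDPOINT = "http://staging.example.com/health"  # hypothetical environment
MAX_P95_MS = 200   # illustrative requirement from the architecture
SAMPLES = 20

timings = []
for _ in range(SAMPLES):
    start = time.monotonic()
    urllib.request.urlopen(ENDPOINT, timeout=5).read()
    timings.append((time.monotonic() - start) * 1000)

# Crude 95th percentile over the sample; a real capacity stage would use a
# proper load testing tool under realistic concurrency.
p95 = sorted(timings)[int(SAMPLES * 0.95) - 1]
print(f"p95 latency over {SAMPLES} requests: {p95:.1f} ms")
assert p95 <= MAX_P95_MS, "architecture does not meet the latency requirement"
```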

Oct 14, 2011


Community comments


              • I still can't agree

                by Adam Nemeth,


I know that ThoughtWorks doesn't put too much emphasis on thought, as I was unhappy to realize when I was working with them (let's not get into details). But this "code is the only thing which matters" should be stopped. Now. I mean, what I read here and in the other CD interview is just crazy.

First off, if a defense system fails, people die. Also, you can't test your doomsday machines in the field, for obvious reasons.

If unit tests are your only way to know whether they will work or not, you're seriously in trouble. If you automate a bit more than is essential, you'll have problems. Like, you trust a binary, and the only thing you don't realize is that it was compiled for a different architecture/java version/anything than what you have at the end, and since you can't test it in production, you won't notice until it's too late. Backup systems are good examples, as they're usually automated to the maximum, and I guess around half of them fail when the time comes. Have you heard of RIM Blackberries lately?

So, you'd better hand-check everything twice _after_ the automatic tests have run, and don't just assume that they were correct.

This is not true for enterprise development. In enterprise development, you just shouldn't disappoint your users, and shouldn't ship inferior code to them (you shouldn't test on them), as they rely on you. But at least they don't die. You have a second chance.

People don't rely on startups (actually, they do after a while, but a startup is about taking risks on both sides). Therefore, this guinea-pig experiment of putting things into production to test them can work there. Hence the name "Lean Startup".

Also, you should hand-check sometimes. Like, before each commit, or once a day. Not everything; this isn't regression testing, just the code you worked on before. Oh, and despite what's said here, a half-built module shouldn't be in trunk, even if it's thoroughly unit-tested. Better not to take worthless risks just because branches are said to be uncool. It depends on your development pipeline and system architecture of course, but once you learn how to do software design well, branches won't be as much of a problem.

If performance problems of an architecture come out only in real life, why on Earth do you call yourself an architect? An architect should have a clear big picture. This doesn't work if you think only in code. You have to zoom out. No, the sum of the parts is not the whole, but everything has different side views and resolutions. There's a huge library of forgotten knowledge about how to do this well. It's called software design and modeling.

If people don't model, they don't have a picture bigger than themselves and their code, and they lose their ability to abstract meaningful details from technical nonsense. When they don't see things at a macroscopic level, they don't understand their failures. And code is expensive. What's told here is that you should build things up, watch them fail, then try to figure out why. This is like defining darkness as the standard.

We're full of half-truths, full of misunderstanding of context (I don't want to see agile-developed Mars rovers, nor lean defense systems, where we wait for someone to die from our mistake so we can learn how to do it for real this time), and this wouldn't be a problem if people didn't take it so seriously.

We're enterprise developers. We work for large companies, and the software we build defines the [work]life of thousands of people. They're not guinea pigs and they don't behave like robots (automated acceptance tests) do. We're responsible for them.

I know a lot of you f.ing don't care about them. I also happen to know that you don't meet them. I also know that you're sometimes implementors and testers, but not designers, as your managers (POs) don't tell you what problem they have; they tell you a solution. You only figure out how to implement that solution, and you call that design, as you have no better name for it.

You start to concentrate on technical excellence out of frustration, as the software isn't defined by you. You start to think about how to excel at things which don't really matter. You start to have a competition on "who's agile enough", then on "what percentage is automated", and all of these have no direct effect on the end result, but you lost direct effect years ago.

We have to stop this. If anything should come after Agile, it should be this. But first, we have to stop this pouring out of half-truths: that Agile is the Silver Bullet. Or TDD. Or Lean. Or CD. That you should concentrate on the delivery pipeline instead of actually delivering. Or that you should concentrate on the tests of the code, instead of the code itself.

              • Re: I still can't agree

                by Jez Humble,


I'm not sure which video you watched, because I don't find myself disagreeing with almost anything you say (apart from the ad hominems and your characterization of agile, which doesn't match any sensible real-life version of it that I recognize).



                Yes, you should do design and modelling. Yes, you should hand-check, and not rely purely on your automated tests. No, you shouldn't use production as a "guinea pig" testing environment. Of course we should be meeting our users and customers and getting feedback from them continuously from early on. That's the whole reason we release early and often, and why we need to use a combination of automated and manual tests to make damn sure our software is high quality from early on in the project. Of course there is no silver bullet.

                Continuous delivery is a reaction to the many enterprise projects that fail - go over budget and over schedule - because people left integration and deployment to production systems and testing to the end of the project, and found out that the system didn't work as expected or perform as expected. This is by far the most common failure mode of projects. The other problem continuous delivery attempts to mitigate is projects which are "successful" - on scope, budget and time - but fail to deliver value to the business. As the Standish group reported in 2002, over 50% of features are "never or rarely used". Both of these issues can be mitigated substantially by early and continuous delivery of valuable functionality.

                The only thing you say that I think is obviously wrong is your claim that we shouldn't "build things up, and watch them failing, then try to figure out why". There is a name for this process. It is called the scientific method. I'm not sure what you propose instead: perhaps making something up, hoping it works, and then denying any responsibility when it goes wrong (which seems to be quite common in the world of enterprise software). Real computer systems are inherently complex, and thinking you can perfectly predict their behavior in advance is the acme of foolishness.

                Finally I want to address the canard that agile is not for mission critical projects. As discussed here, incremental and iterative methods have been used in the defense industry for decades, including in the space shuttle and polaris programs. Waterfall was actually an aberration caused by a misunderstanding of Royce's original paper. There's nothing new or modern in agile - it's simply making sure you maximize feedback loops based on real data from working software and real users.

              • Re: I still can't agree

                by Adam Nemeth,


                Let me answer in chunks:

                Agile bashing

I don't bash Agile. I bash the enterprise notion of Agile; I bash what I've seen at certain enterprises, and I've collected some of my problems with their approach here. Coincidentally, most of the things mentioned there came as the advice of 3 TW consultants. After listening to them, and looking at some TW publications (including on InfoQ), I came to the conclusion that I can't agree with TW.

This wouldn't be a problem if what they did improved the situation, but by the measures that matter to me, it made things worse. Long story short: agileness, unit test coverage, the number of catastrophic bugs in production and the number of regressions per release went up; user satisfaction and the number of features per release went down.

Of course: "You don't do Agile enough," etc. The company became famous for being Scrum. Coincidentally, their demise started the same year as their conversion to Agile.

                This doesn't mean Agile is wrong, bear with me.

                Build to Fail vs. "the" scientific method

Building something completely only for it to fail is anything but scientific. While trial and error is part of the scientific toolbox, it's far from being the only tool, and it's not "the" scientific method. Usually when scientists try something out:

a) they have a 90% likelihood that it'll succeed
b) they don't try out the whole thing, only its relevant parts (from certain viewpoints; whether it causes unexpected problems in humans is just one viewpoint)

                Hence, the scientific method is waterfall-ish: you check everything twice on paper before building a more elaborate model, and you iterate this until the model is so realistic that it actually becomes reality.

                It's very important to note that in science, the first iterations are not executable.

A very good example is pharmaceutics: if they went with the build-and-fail method at first, people would die. They have an expectation of how a drug works long before they give it to you (even as part of an experiment), and they don't commit themselves to the drug until they're confident it'll work. This "don't commit yourself to your code until you're confident it'll work" is what's missing in certain Agile practices, seemingly CD too.

People with Agile practices do commit themselves to the first implementation: that's the only thing they have. I'd rather throw ideas away while they're still just diagrams on paper, so that such an implementation is never made at all.

                Getting feedback and release (design vs release)

The whole thing I'm arguing is that if production code is your only feedback method, you're f.ing it up, and that certain Agile approaches (including TW employees or the CD book) tell us to do exactly that. I bet I can draw diagrams faster than any developer can code a full implementation, and I can also weed out at least 30% (more like 70%) of the errors earlier with diagrams and paper mockups than by using code as feedback. In my approach, most of the parts the customers don't like are never implemented even once! And you're telling me that I should deliver a working copy into the trunk, and use the functionally half-done, yet theoretically complete implementation as the feedback mechanism.

                My statement is: Quick feedback loops and production-code-as-feedback are contradictory needs

What I'm not saying is that model-only feedback is sufficient; but you can draw multiple diagrams and mockups, go back and forth to the customer with them, and then do the same in production-ready, unit tested code. Hence, CD is not the tool for quick feedback loops, despite repeated statements that it is.

                Proposal for modeling instead of trial-and-error

If we've learnt anything from Pattern Languages, we've learnt that solutions come as counterparts of their problems. A pattern, in its original sense, is a (problem, solution) pair stuck together. A pattern is a tree with named nodes, while a Pattern Language is a forest of related trees.

UML is, in some sense, a pattern language. It's a pattern language on how to look at things: "If you have an integration problem, use a component diagram." "If you have trouble understanding how the users interact in a process, use an activity diagram." "If you have trouble understanding how objects interact in a distributed system, use a sequence diagram." Of course, this is a very high-level, not-so-elaborate model of UML, and UML is not the only modeling method available. (Another bit of TW bashing, the last one: they seem to have a NIH approach to modeling tools, especially UML...)

Nowhere is it said that "whatever problem you have, use code to test it". Except for... well, a lot of places. But this pair doesn't fit.

Also, design iteration is about the iteration of resolutions: imagine it as a drawing of a person. To capture how he stands, just a blurry shape is needed: you build a skeleton according to the posture. Then the image gets less blurry, and you start to distinguish body parts. It gets less blurry still, and you go on to more elaborate details. Sometimes you have to realize that what you thought was his arm is, in fact, part of his coat. That's fine.

Continuous Delivery is about drawing with a pen. You don't really draw a skeleton; you just make sure that anything you do can be seen at once by the customer. Since it's a pen drawing, if you make a mistake you can't go back, as you did a lot of details at once. And the customer has to accept it, as he has a lot of money in it by the time he realizes it's messed up. You didn't show him the pencil version, because you don't have a pencil.

(If you do it with a pencil, you cannot do CD, since you simply aren't delivering code continuously; you're delivering pencil drawings.)

I try to always remember that the essence of a problem has only one essential solution. How this solution is implemented varies, yet the two essences seem to go in pairs. Yet I cannot understand the problem right on the trunk, I cannot understand the problem right in code+tests, and I cannot aim for doing something which is executable and deployable at once.

But doing something which is executable and deployable at once might be Agile, might be iterative, might be CD, but it's not scientific.

                In the other interview, you're telling us to create the deployment system on day one, when we don't even recognize the shape of the cloud.

                Agile vs Iterative

Agile processes are a subset of iterative processes. While iterative solutions are indeed common in some fields, I'd like to quote Peter Gluck, a PM for NASA:


                Peter: I am not aware of anybody using agile development here.

                (Greene & Stellmann: Beautiful Teams, O'Reilly, 2010, p. 235)

Unfortunately, I cannot make such a quote from Northrop Grumman's chief engineer, Neil Siegel, but what he describes in the same book looks pretty much like Waterfall. He's doing command-and-control systems for the US Army. I also cannot quote Boeing on paper, as I've only heard from people working there that "we do Waterfall for a reason" (oh, I found an article: Niels Jorgensen dissects the Waterfall at Boeing here, telling us that the design itself was iterative, but the implementation wasn't).

But there's nothing wrong with iterative processes; we have formal iterative processes as well. Boehm, the inventor of the Spiral model, worked for DoD/DARPA when he created it (Wikipedia). RUP was also first done for a high-risk system (as far as I remember, a kind of air traffic control, but I can't find a reference), and it's first and foremost iterative.

                Conclusion

I think I made it pretty clear that CD approaches can and will lead to software which is not suitable for the user (it might be suitable for the customer, though). I hope I made it pretty clear that Agile is not feasible in certain situations, especially when it's a matter of life and death, and that the build-and-fail method is as far from scientific as it could be. No science ever used it as its primary method, only as part of its modeling toolset, to check presumptions which were correct in lower-resolution models. I showed that quick feedback contradicts the CD methodology, and that Agile is not the same as Iterative.

Personally, I'm really sad about this patchwork of knowledge, where facts are taken into broader or narrower contexts, thereby becoming half-truths, and I'm really sad when engineers (or mathematicians, for that matter) do this, creating misunderstanding. I fail to recognize the consistency behind these words: it just doesn't add up, I feel.

The words agile coaches say, and the words people write down in a book or say in an interview, have effects, just like the code we write has effects. They have effects on developers, and hence on the software we create. Everyone who says them, and everyone who reads them, is responsible. Sometimes we have to give negative feedback, saying "hey, please don't do that, can't you see what's coming from that?". And I've seen projects fail after such thoughts, and I see how CD brings us further from our stated goals: namely, user satisfaction and rapid feedback.

I don't say I have the silver bullet, or anything close to that, and I also feel modeling alone is not the solution. The solution lies somewhere in understanding the problem and solving the right problem; yet delivery is not the problem: the problem is users not getting what they need.

                That which is Below corresponds to that which is Above, and that which is Above, corresponds to that which is Below, to accomplish the miracles of the One Thing

                (Tabula Smaragdina, or Secret of Secrets, an ancient text in circulation since the early Middle Ages)

If we don't have consistency, or we mis-aim, we may shoot far from our goals even if we had a silver bullet. Also, no one can escape the organizations they're in, or the things they write, in the same way that software cannot escape the thoughts of its creators. But perhaps the ancient rules shouldn't be used just to describe a situation: they should be used to actually create and build wonderful systems, truly miracles of the One Thing.

              • Re: I still can't agree

                by Jez Humble,


Clearly you're not going to change your mind on any of this. However, you are attacking a straw man, not the views that I actually hold. So I am just going to point out a few areas where you're misrepresenting my views, and a few other areas where you've just made a mistake:

                * There is "a" scientific method, as set out by Descartes in his Discourse on Method, and elucidated by many scientists and philosophers following him. Go and check it out (you might start with looking up "scientific method" on Wikipedia). It doesn't mean you build something only to fail. It means you come up with a hypothesis based on experience (modeling), create an experiment to test your hypothesis, and then work out what to do next. Iterative, incremental development.
                * While I agree it's a good idea to do modeling first in software (just one example of where you claim I hold a position that I actually don't), you don't seem to have a good understanding of how the life sciences industry works. Check out this recent talk from a program manager at Genentech, slide 7. It's a highly iterative and incremental process, and they do in fact start by running experiments in the lab using target molecules. Genentech are experts at applying lean - remember, lean won the war in the manufacturing and industrial product design industries decades ago.
                * "if production code is your only feedback method, you're f.ing it up". I agree. Nowhere in my book or in real life do I say you should do this. All I say is that code delivers no value until it's in production. But in an enterprise context you should have multiple feedback loops from unit tests, automated acceptance tests, automated performance tests, manual exploratory testing, manual usability tests, showcases to the customer, and manual and automated security tests, before you put anything in production. If you're not increasing the stability of production, you're not doing it right.
                * "Continuous Delivery is about doing drawing with a pen". No. You create a walking skeleton and deploy it to production (making sure you're doing all your validations, manual and automated, with every release). At this point it's very cheap and easy to change your code and your architecture, because you've only done a week or two worth of work. The whole point is to create a pencil diagram - a hypothesis that you can test, and that you can easily modify based on real data from a production system and feedback from your customer. Then you develop more functionality incrementally.
                * While Peter Gluck may not have had people on his teams doing agile, the references I cite earlier make it clear that other teams within NASA and the defense industry were. Trying to claim that agile can't succeed on mission-critical systems just isn't a credible position.
                * "I can't agree with TW". People within TW disagree with each other. We're not monolithic. And sometimes we screw it up too. But we have plenty of satisfied customers, and have delivered many successful systems that have delighted users. Unlike IBM and Accenture, we don't have a dominant market position. If we couldn't beat the industry average by some margin, we would not exist. We would have gone bust. We only survive - and grow, as we continue to do - on delivering results.
                * "The solution lies somewhere in understanding the problem, and solving the right problem, yet delivery is not the problem: the problem is about users not getting what they need". I couldn't agree more. But the only way to find out whether users are getting what they need is to show them real software and get their feedback, hence early and continuous delivery of valuable software. Fortunately, as a nice side effect, we fix the delivery problem.

              • Re: I still can't agree

                by Alen Milkovic,

                Well, the guy has a point. Focusing on delivering a little piece of a big system into production is not cost effective. You get a lot of changes. You listed a lot of things one does in Agile to ease the pain of changes, but it still costs a LOT more to change something in production than on a drawing board. And sometimes one has to figure out how all the different parts are going to work together before you start coding. Changing how different parts work together will cost a fortune after all the parts have been completed.
                We use Agile and have started delivering sketches in one sprint and implementing the functionality in the next. With this we deliver something and get fast feedback. We also try to break the project down into separate, independent pieces so we can deliver as soon as possible. But sometimes we have to think hard and long before we code, so we don't end up with something far from optimal that we have to change many times.

                Please people, rediscover think first, code later :)

              • Re: I still can't agree

                by Jez Humble,

                Hi Alen

                First of all, I am not advocating that we shouldn't model and think before coding. I know Adam is trying his best to ignore the fact that I am not some dogmatic agilista on that front, but I actually think a little bit of design and modeling up front is a fine idea, and I certainly would never advocate "changing how different parts work together… after all the parts have been completed." Indeed, the whole point of continuous integration is to detect these kinds of problems early, when they are cheap to fix.

                However, I must disagree with your statement that "Focusing on delivering a little piece of a big system into production is not cost effective". That may have been true ten years ago, but in the last ten years many tools and patterns have come into being that dramatically reduce the transaction cost of releases. That's what my book Continuous Delivery is about. Without a shadow of a doubt, it is now far more cost effective to deliver incrementally - assuming you design your software development process with this in mind. Reducing batch size has enormous economic advantages that massively outweigh the small extra transaction cost of more frequent releases. If you want a thorough and well-documented book that demonstrates this scientifically, I strongly recommend Don Reinertsen's The Principles of Product Development Flow.
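
                As a toy illustration of that batch-size argument (the numbers here are invented, not Reinertsen's): if each release carries a fixed transaction cost and every finished-but-unreleased feature accrues a cost of delay, total cost is minimised at a small batch size - and automation that shrinks the transaction cost pushes the optimum smaller still.

                # Toy batch-size economics (invented numbers, for illustration only).
                TRANSACTION_COST = 5.0   # fixed cost of performing one release
                HOLDING_COST = 1.0       # cost of delaying one feature by one iteration
                FEATURES = 100           # total features to deliver

                def total_cost(batch_size: int) -> float:
                    releases = FEATURES / batch_size
                    # On average a feature waits half a batch before it is released.
                    delay = FEATURES * (batch_size / 2) * HOLDING_COST
                    return releases * TRANSACTION_COST + delay

                for batch in (1, 2, 5, 10, 25, 50, 100):
                    print(f"batch size {batch:3d} -> total cost {total_cost(batch):7.1f}")
                # The optimum sits at a small batch; cheaper releases move it smaller still.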

              • Re: I still can't agree

                by Alen Milkovic,

                I think we agree. The "not cost effective" part was within the context of rushing it out without modeling up front.

              • Re: I still can't agree

                by Adam Nemeth,

                Hello,

                My problem is that you're still thinking in buzzwords and absolutes. What I referred to as a 'patchwork of knowledge' turns into this: lean didn't win the manufacturing industry, it's a buzzword there. And I guess you still mix up Agile with iterative - it wouldn't be fun if the space industry preferred code over documentation, just for example. The healthcare industry is still pretty waterfall-ish: they don't give drugs to mice until it works in the computer, do they? Even according to the slide you mentioned, it's waterfall.


                And I'm not personally attacking you - as an individual, you do whatever you want. But I saw what happened when people read Bob Martin's Clean Code, and I can see what will happen when a lot of people spread out to implement your ideas. And they'll use it as an excuse not to think. It didn't matter that Bob Martin is an author of one of the UML books, and it didn't matter that he even told people in a blog post to use UML. "Code is what matters".

                And in a certain sense, they're right, as they're masons, not engineers. To a mason, it's the brick that matters. When you're given a state machine (which form leads to which form) and each form separately as a UX/visual design, the design and architecture are pretty much done by then. You're just asked to implement other people's design. Hence, as a mason, you may draw sketches on a whiteboard, but the only question is how fast and reliably you deliver the implementation.

                What I proposed is that we should quit this cycle and harmonize design with implementation.

                Also, you seemingly ignore the contradictions I mention: by concentrating on the delivery of something before that something exists, you're making a commitment to a deployment architecture, which will shape your production architecture as well. This early commitment doesn't reflect the problem domain; hence, you're messing it up early.

                Hence, the deployment system stands on its own (as it was built first), and everything is done according to it. If that's not the wrong aim, then what is?

                Also, it seems that you ignore the part about distilling the problem into its core essence; you say that the only way to know whether a problem is solved is to produce a solution. I can't agree. What I was saying is that you should first drop all the unrelated issues from the problem - deployment, computer architecture, programming language - distill a solution to the distilled problem, and then build these back in.

                You say that the emphasis is on the assembled, deployed software, and I say: f.ck the deployment, understand the problem, and bring out a solution first; it doesn't matter if it can't yet be deployed to millions. You want to have it integrated from day one, while I'd do it standalone; the demo could be deployed any old way and doesn't have to resemble the production deployment.

                But I guess you're sold on the buzzwords. You resonate well with people who don't try to find consistency in your words and don't have a clear picture of what's going on. Most enterprise developers - but especially IT managers - are that type. While you claim you don't talk about modeling because it's natural, in fact the people who try to implement your book won't model, because you don't mention it.

                And what your book implies to them is that you can skip reasoning (part of "the" scientific method), and that instead of running the experiment on something you're mostly sure of (as described by Descartes), the only real thing is the software.

                Descartes was the founder of rationalism, the methodology of reasoning. I used reasoning to show that CD cannot be "the" scientific method. It's funny, as I used Cartesian methods to show it. Of course, using Descartes as a buzzword and understanding the essence of Descartes' philosophy are two entirely different things.

                I don't want to convert you; you are what you are. I just see that horde of people who'll take your writings as another legitimation of their "Holy Code" approach, and if there's anything that would be harmful to software in general, and to how people experience software in their daily lives, it is this. That's why I argue and try to show that CD shouldn't be treated as a primary method.

              • Re: I still can't agree

                by Jez Humble,

                Well, judging by the ad hominem attacks (apparently I can only think in "buzzwords" and I can't possibly be expected to understand philosophy), I have got under your skin. However you are still attacking a position that I don't actually hold.

                For example I have repeatedly said that design and modeling before writing code are important, and I would never claim that you should "skip reasoning" or "concentrat[e] on the delivery of something, without that something existing." I agree with your statement "we should quit this cycle, and harmonize design with implementation." Design without implementation is blind; implementation without design is chaos.

                Where I do disagree with you is the statement that "you should first drop all the unrelated issues from the problem - deployment, computer architecture, programming language - distill a solution to the distilled problem, and then build these back". Real computer systems are complex, and involve trade-offs. Any real-life system has constraints - performance, consistency, quality (to the user), functionality, schedule, scope. You can't solve the problem without addressing these trade-offs, which means paying attention to the "unrelated issues". And if you're doing something even remotely complex, you can't possibly know if the solution you design will work in real life until you deploy it to a production-like environment and test it using realistic loads with real data sets. Yes, you can and should have an educated guess, but I have seen too many "educated guesses" fail in real life to be arrogant enough to assume my design will hold up based purely on deductive reasoning.
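
                To make that concrete, here is a toy version of such a check (the URL, concurrency and latency threshold are hypothetical, purely to show the idea): fire realistic concurrent load at a production-like deployment and measure whether latency actually holds up, instead of deducing that it will.

                # Toy realistic-load check against a production-like environment.
                # The URL, concurrency and threshold are hypothetical examples.
                import time
                import urllib.request
                from concurrent.futures import ThreadPoolExecutor

                URL = "http://staging.example.com/health"  # production-like deployment
                REQUESTS = 200
                CONCURRENCY = 20
                MAX_P95_SECONDS = 0.5

                def timed_request(_: int) -> float:
                    """Issue one request and return its wall-clock latency."""
                    start = time.perf_counter()
                    with urllib.request.urlopen(URL, timeout=5) as resp:
                        resp.read()
                    return time.perf_counter() - start

                with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
                    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

                p95 = latencies[int(len(latencies) * 0.95) - 1]
                print(f"p95 latency under load: {p95:.3f}s")
                assert p95 <= MAX_P95_SECONDS, "design did not hold up under realistic load"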

                For me, iterative, incremental development is the essence of agile (I am not really so bothered about the other parts, and especially not scrum), the essence of the continuous delivery approach, and the essence of the scientific method applied to software development. And they have been proven to work in every domain, including mission critical systems, defence, and biotech (despite your wildly contorted attempts to describe what are clearly iterative, incremental approaches as "waterfall").

                Finally, yes, people may take this as support for the "holy code" approach, but people may take what you say as the "holy design document" approach. Neither will work. The devil is in the details - which you have repeatedly either failed to acknowledge or address.

              • Re: I still can't agree

                by Bedwyr Humphreys,

                Seeing as this started off with a dig at ThoughtWorks from Adam (I wonder what you did to him!?) I think Jez has been more than patient.

                Adam, why don't you read the book? It's a good read, but more pertinently, although it has strong views on what you should be doing, it is mostly pragmatic and offers alternatives to many of the practices it espouses.

              • Re: I still can't agree

                by Adam Nemeth,

                Details are secondary in design, or rather, they're the non-details of later phases of design.

                Why is it that my designs held up in two Fortune 500 companies and a startup, without modifications? No, I wasn't ignorant: I read every single line of code which went to production, and some of it is still there. Still, I only did high-level design. How is that possible? What's different about what I know about design and how others do it?

                Of course, junior devs always keep coming to me saying that this or that doesn't work or isn't performant (or they just silently ignore the design, so that I have to raise the question with them in peer review), and I could always come up with a design-conformant, performant solution. It took time for them to learn this; I hope they're on track by now. We kept to the standard response times from psychology whenever possible (sometimes it isn't possible, due to the task at hand).

                But those are junior devs. Of course, I also see old devs stuck in practice-oriented development, where every hack is welcome. I can't do anything with them; I usually try to build a team of juniors, or implement alone, or work with very skilled people with whom there is mutual trust.

                I don't think my designs will hold up in real life just because they're mine. I think that designs which worked from every viewpoint on paper, were tested as UX mockups (sometimes in multiple iterations), were tested as frontends with mocked-up backends, had all the known psychological and coding rules applied (where they fit the context), and were talked through and through with peers - those will work. It's so rare that they don't, and when they fail the cause is usually a minor mistake somewhere, a one-liner.

                I think you should add complexity as thin layers on top of the problem at hand. I wish I had the opportunity to show that on open source projects, so anyone else could check.

                It's important to note that there are no side effects; harmony is not a side effect here. It's built into every little detail: every line of code, every pixel of the visuals, how the form elements are related, how the data structure or application structure relates to the problem at hand.

                When I say that CD doesn't fit as a primary method, I say it as it is: it doesn't mean CD doesn't have value, only that it shouldn't be taken as the primary method. When your problem is that it takes ages to deploy the already thought-out, designed software, and that's a problem for the customer, then CD is fine. Personally, in the places where I do my development, obtaining production environment resources is the last thing you want to do, as they cost money while idle, and they need proper numbers to function well.

                I have seen most projects fail because nobody had a single clue about what they were doing - hence no "educated guess". I have also seen projects fail when good ideas were taken out of context (over-engineering, applying a rule where it doesn't make sense, etc.). I think the old Fremen rule holds: "you know something only when you know its limits".

                In neither of the two interviews do you mention design or modeling (David Farley does, however, once in the other one - to my surprise, as I didn't remember it), and the word "design" isn't used in the context of software design in the sample provided by Pearson.

                For me, Agile, when I speak about it, is based on what's written in the Agile Manifesto and the XP-based methodologies surrounding it (including TDD, Scrum customs, etc.). But the main basis is the Agile Manifesto - and when designing something to save your life, processes and tools are more important than (project-participant) individuals, comprehensive documentation is at least as important as working software, contract negotiation is superior to customer collaboration, and following a plan is sometimes more important than responding to every change of the wind.

                Where do my arguments come from

                The statement that "you shouldn't build a deployment system around something which doesn't even exist yet" comes as an answer to your previous statement:


                Jez Humble: Continuous delivery means that your software is production-ready from day one of your project

                Interview and Book Review: Continuous Delivery, InfoQ

                The notion of missing manual checks comes from the sample chapter available at InformIT:

                [..] he found that the system had actually stopped working three weeks earlier. [..] 80 developers, who usually only ran the tests rather than the application itself, did not see the problem for three weeks.

                We fixed the bug and introduced a couple of simple, automated smoke tests that proved that the application ran and could perform its most fundamental function as part of our continuous integration process.

                Continuous Delivery: Sample Chapter 5: Anatomy of a Deployment Pipeline, from InformIT
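
                (To be fair, the fix described there is small. A smoke test of that kind can be as little as the following sketch - the URL and the check are my assumptions for illustration, not the chapter's code.)

                # Minimal smoke test: prove the application starts and can perform
                # its most fundamental function. URL and expected content are hypothetical.
                import sys
                import urllib.request

                APP_URL = "http://localhost:8080/"

                def smoke_test() -> bool:
                    try:
                        with urllib.request.urlopen(APP_URL, timeout=10) as resp:
                            status = resp.status
                            body = resp.read().decode("utf-8", errors="replace")
                    except OSError as exc:  # connection refused, timeout, DNS failure
                        print(f"SMOKE TEST FAILED: application unreachable ({exc})")
                        return False
                    # The application must respond successfully and render its main page.
                    ok = status == 200 and "<title>" in body.lower()
                    print("SMOKE TEST PASSED" if ok else "SMOKE TEST FAILED: bad response")
                    return ok

                if __name__ == "__main__":
                    sys.exit(0 if smoke_test() else 1)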

                The Agile vs. iterative point comes from a lot of places here. Seemingly we have different definitions of what Agile is, and especially of what it isn't.

                The notion that pharmaceutics is non-iterative comes from simply looking at slide 7 of the presentation you mentioned earlier and seeing a nice waterfall image with "phases". It was your own link. (Sorry for the screenshot; PDFs are not #linkable.)

                The preference for Cartesian deduction over trying things out in real life comes from both David Farley's comments and yours, here and in the previous interview, about "you can't be sure until it's deployed", and also from the following storm of comments. "Using your users as guinea pigs" comes as a reaction to the previous interview and to Abby Fichtner's thoughts on what software development should be like ten years from now, as well as Mary Poppendieck's works. The disagreement here is exactly Cartesian: Descartes is sure that God exists purely based on deduction. (Note the words which introduce cogito ergo sum: "Then without doubt I exist also if he deceives me, and let him deceive me as much as he will, he can never cause me to be nothing so long as I think that I am something" - emphasis added.)

                As to the failure of a deduction, I can only answer with logic: a false assumption implies anything. The only thing we can do is avoid false assumptions, and make sure everything we build on holds - whether it's user requirements or methodology. Therefore, I'm still against using CD, as I claim it's a false assumption that CD solves other problems as side effects.

              • Re: I still can't agree

                by Adam Nemeth,

                When I read the other interview, I was banging my head against the wall. Then I read the sample chapter at InformIT, and I was banging my head even harder. Then this came.

                This wouldn't be a problem if the authors didn't believe that it "comes close to being a silver bullet" (see the other interview's comments), if it hadn't won an award as this year's best IT book, and if this interview weren't the second on InfoQ featuring this book.

                Hence, this book has "credibility" - something I feel is harmful.

                Why is it harmful?

                In recent years, I find it harder and harder to find people who can actually do harmonious design and implementation. Stupid hacks are more and more welcome, or else it's nice code which doesn't solve the problem. Either way is problematic.

                And these guys wave exactly such books: they wave Mary Poppendieck's writings, or they quote Bob Martin out of context (or just use that horrific "keep it simple, stupid" rule, which famous designers and famous scientists have discussed as well). If only some people could see what kind of dirt I have to argue through while people are holding these books and telling me those sayings, they would surely ask for a penalty each time they're mentioned.

                Especially when you have a better version of what matters to you: harmony with the world, with the problems at hand. And you just can't tell people, as a lot of such dirt is pouring in. And year after year it becomes harder and harder to actually bring something beautiful to life, and there are fewer and fewer developers on Earth capable of making something beautiful, something which pleases its users as well as its creators.

                As for ThoughtWorks, it was a nightmare. We were playing against each other. I asked for less rigid adherence to agile, so that maybe some meeting notes would be made; they asked for more, and eliminated digital administration altogether. For certain architectural reasons there was a plugin layer, which they killed off, and that killed off a transition a month later, making everyone's life much, much harder. Everything they did ended in disaster. And they were celebrated by the management, those... I won't say the words.


                Make a tree good and its fruit will be good, or make a tree bad and its fruit will be bad, for a tree is recognized by its fruit

                Jesus according to the Gospel of Matthew, chapter 12, verse 33

                I saw what TW's tree has brought, and I believe I foresee what this book will bring to us. And I'm deeply horrified by that future.

              • Re: I still can't agree

                by Jez Humble,

                Good grief. Since you seem to have totally misunderstood many of the things I've written, I am going to clarify some of them. But it doesn't bode well for you that you pick these big arguments before trying to understand what you are arguing against.

                Continuous delivery means that your software is production-ready from day one of your project


                By this I mean the part of the project where you start writing code. Of course I'm not suggesting deploying something before you've got a reasonably well worked out vision of where you want to go. That would be retarded. Here's a tip I have found useful over the years: when reading somebody's work, try and interpret what they say on the assumption they are at least as smart as you. Thus if the interpretation you come up with makes them totally boneheaded, it's probably your poor interpretation.

                The notion of missing manual checks comes from the sample chapter


                The part you've quoted is describing an anti-pattern. We immediately follow it by saying

                unit tests only test a developer's perspective of the solution to a problem. They have only a limited ability to prove that the application does what it is supposed to from a user's perspective. If we want to be sure that the application provides to its users the value that we hope it will, we will need another form of test. Our developers could have achieved this by running the application more frequently themselves and interacting with it.


                I think you would agree with this, no?

                For somebody so interested in deductive reasoning and rationalism, I am utterly baffled by your insistence on misinterpreting what Dave and I say and turning it into the opposite. Please find somebody to fight who actually holds the position you are arguing against.

                Thanks for the discussion - I am going to sign off this thread now, as I can see that I have fallen into a rabbit hole: xkcd.com/386/

                Jez.

              • Re: I still can't agree

                by Jez Humble,

                I think I understand why you are so cross


                When I say that CD doesn't fit as a primary method, I say it as it is: it doesn't mean CD doesn't have value, only that it shouldn't be taken as the primary method.



                In neither of the two interviews do you mention design or modeling (David Farley does, however, once in the other one - to my surprise, as I didn't remember it), and the word "design" isn't used in the context of software design in the sample provided by Pearson.


                Correct. That's because design and modeling are out of scope for the book. As we say in the preface (and in several other places), we're only considering the part of the value stream from check-in to release.

                That's not to say that design and modeling aren't important - of course they are - but there are plenty of good books written about that topic, and almost none about build, deployment, testing, database management, infrastructure and so forth.

                So inasmuch as you are cross that we haven't considered these things - well, of course we haven't. That doesn't mean we don't think they are important.

                So you're right. CD isn't the whole answer. Of course not. It was never intended to be.

              • Re: I still can't agree (scope)

                by Adam Nemeth,

                OK, sorry - I hope that makes it clear. Also, I reviewed the book excerpt and the interview with several "agilist" and "anti-agilist" developers and architects, and all the "agilists" said "yeah, they're right, we shouldn't do UML, we should get feedback solely from code", and all the "anti-agilists" were like: "oh great, another lean fad, can't these guys just shut up and let us work?"

                Of course, as a deployment methodology, CD is a pretty good summary of best practices and patterns. It's not always applicable (again, in my practice, production environments have certain costs which you don't want to incur until you're preparing to launch to the public), yet in general terms it's well written.

                My problem was always with what's out of scope - for example, your first sentence says "day one of the project". The preface was not available as an excerpt, I guess.

                And again, although it may look like it, I'm not cross at you personally: I'm cross at the situation, where every time I have to defend that single day per sprint of modeling, those 1-2 hours per task. I'm not a fan of waterfall, yet thinking things through does make sense, and it's getting harder and harder to enforce.

                And I have my antipatterns, like "it's not the silver bullet but close", or "it'll solve everything else as a side effect" (TDD guys claim this sometimes; it makes most of their functions public as a side effect, but won't clean up APIs that weren't thought through well), because as a profession we're still searching for the "Holy Way", the system which is applicable most of the time.

                I think that in searching for the Holy Way we were closer twenty years ago than we are today, in a certain sense: in the sense that we concentrated on user problems, not delivery; in the sense that we looked at the system as an exact consequence of the problems at hand, rather than as some untouchable, inconceivable, independent organism which escapes all the rules we try to hold it to.

                So, scope is important I guess.

                Sorry for being harsh.

              • Great points

                by Craig Smitham,

                Jez - I appreciated your observation that architects need to code so they can get feedback on their designs. It takes vulnerability and integrity to take responsibility for your own designs, and there's no better way to lead than to show how it's done.
