Interview and Book Review: Continuous Delivery

Continuous delivery means that a software product is production-ready from day one of the project, even if all of its features are not yet implemented, and that the product can be released to users on demand. In their book Continuous Delivery, authors Jez Humble and David Farley discuss the concept, which involves implementing best practices across all three areas of an organization - process, people, and tools - to automate the various components of the software development process.

They talk about the problems of delivering software in organizations where the software is deployed manually, where deployment to the production environment happens only after development is complete (rather than more frequently), and where environments are configured by hand. To address these problems, the authors explain best practices in the areas of configuration management and continuous integration, which include using a version control system to manage not only the application code but also other artifacts such as the software configuration and environment details.

The deployment pipeline discussion in the book focuses on the practices of the build and deployment phases of a project and on the metrics used to get feedback on the software delivery process. In the book, Jez and David also cover testing strategies and how to test non-functional requirements.

The book was also just named a winner of the 2011 Jolt Awards for Books. InfoQ spoke with both authors about the continuous delivery concept and how it can be used to make the software product delivery process more efficient and effective through automation.

InfoQ: Can you define the "Continuous Delivery" concept and explain how it differs from other practices like Continuous Integration or traditional QA (testing) tasks?

Jez Humble: Continuous delivery means that your software is production-ready from day one of your project (even when it's not "feature complete") and that you can release to users on demand at the push of a button. There are several practices and patterns that enable it, but in particular excellent configuration management, continuous integration, and comprehensive automated testing at all levels form the foundation. The key pattern is the deployment pipeline, which is effectively the extension of continuous integration out to production, whereby every check-in produces a release candidate which is assessed for its fitness to be released to production through a series of automated and then manual tests. In order to be able to perform these validations against every build, your regression tests - both at the unit and acceptance level - must be automated. Humans then perform tasks such as exploratory testing, usability testing, and showcases as later validations against builds that have passed the automated tests. Builds can be deployed automatically on demand to testing, staging and production environments by the people authorized to do so.
Through these practices, teams can get fast feedback on whether the software being delivered is useful (as required by methodologies such as the Lean Startup), reduce the risk of release, and achieve a much more predictable, reliable process for software delivery.
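
To make the deployment pipeline pattern concrete, here is a minimal sketch in Python of the flow Jez describes: every check-in produces a release candidate that is promoted through successive validation stages, and a failure at any stage stops it from going further. The stage names and stand-in checks are hypothetical illustrations, not code from the book.

```python
# Minimal deployment-pipeline sketch. Every check-in yields a release
# candidate; stages run in order and a failure stops promotion.
# All names and checks here are invented for illustration.
from typing import Callable, List

def commit_stage(revision: str) -> bool:
    # Compile and run unit tests: the fastest feedback loop.
    print(f"[commit] building and unit-testing {revision}")
    return True  # stand-in for a real build and test run

def acceptance_stage(revision: str) -> bool:
    # Deploy to a test environment and run automated acceptance tests.
    print(f"[acceptance] running automated acceptance tests for {revision}")
    return True

def manual_stage(revision: str) -> bool:
    # Exploratory and usability testing, triggered on demand by humans.
    print(f"[uat] candidate {revision} ready for exploratory testing")
    return True

PIPELINE: List[Callable[[str], bool]] = [commit_stage, acceptance_stage, manual_stage]

def assess(revision: str) -> bool:
    """Promote a release candidate through each stage; stop on first failure."""
    for stage in PIPELINE:
        if not stage(revision):
            print(f"{revision} rejected at {stage.__name__}")
            return False
    print(f"{revision} is fit to release on demand")
    return True

if __name__ == "__main__":
    assess("rev-42")
```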

David Farley: Continuous delivery is fundamentally about establishing the shortest possible feedback loops for every aspect of development. We want early feedback on which requirements are effective when our software is delivered to its users, so we want to deliver software early and often. We want feedback if a change breaks an existing feature, so we want to have great automated tests to highlight regression problems. We want feedback that our software can be successfully deployed to production, so we need to exercise the deployment mechanisms before we get to that point.

Once this fundamental need is understood, all of the technologies, techniques and practices are really about delivering this feedback as clearly and quickly as possible.

InfoQ: What type of organizational culture or level of process maturity is the best environment to use the Continuous Delivery practices discussed in the book?

Jez Humble: The backbone of continuous delivery is a culture in which everybody involved in delivery - developers, testers, infrastructure, DBAs, managers, customers - collaborates throughout the lifecycle of the product. It also relies on some level of maturity in automation of the delivery process: build, test, deployment, infrastructure and database management. But there's no real limit in terms of applicability. These techniques are valuable whether you're working on embedded systems, products, or web-based systems. However in return for rapid, reliable software releases you do incur a cost, so continuous delivery should only be used for strategic software projects where fast feedback and predictable, low-risk releases are valuable to the business.

Dave Farley: When I was a consultant, I always thought that establishing Continuous Integration in an organisation that wasn't used to it was like a beach-head for agile practices. It is the first and most obviously valuable practice, and it drives good behaviours everywhere else. Continuous Delivery (CD) takes that a step further. For genuine CD to work you need high levels of discipline and an across-the-board focus on quality, delivery and, to some extent, efficiency. This is a big challenge for many organisations, because it can often mean significant changes to the way in which organisations do things in a broader sense than only the development team: it affects how requirements are prioritised, how they feed into the development process, how testing is performed, and how releases are done. CD has a significant, and in my opinion positive, effect on every role associated with software production from inception onwards. Any organisation can dip in and gain some of these benefits, but it does require significant commitment to get all of the benefits. While I strongly believe that there is a significant pay-back from these techniques over a period of time, there is certainly an investment required to achieve that pay-back. My gut feel is that CD is appropriate for any software that will have changes made to it after a period of 3-6 months. If the software is done and dusted and will never change again, then maybe not.

InfoQ: Can the Continuous Delivery practices discussed in the book also be used in software development environments that are using traditional waterfall development methodologies?

Jez Humble: Yes, absolutely. The techniques we discuss are engineering practices that are mostly independent of your project management process. Automation and collaboration are just as important if you're using waterfall, and they make a huge difference to the quality of your software and the predictability of your delivery process.

Dave Farley: I agree with Jez, a waterfall project that uses the techniques of CD will be better than a waterfall project that doesn't. However, there is a sometimes hard-to-estimate cost to CD that can make scheduling tricky. Part of the process is to stay on top of failures. The process depends on a timely and effective view of the state of your software - it needs to stay working all of the time to achieve that. If your automated tests highlight a problem in your code then you need to react to it and fix it. Equally, if your automated testing highlights a problem with the feedback cycle - tests running too slowly is the classic problem - you need to react to that too. So any process that employs CD needs the flexibility to react to problems and fix them as soon as they arise.

Inevitably I am going to say that although I believe that CD is an effective practice for any approach to software development, it is most naturally aligned with agile development practices and that these practices maximise its impact.

InfoQ: The software quality testing and validation process usually includes automated testing (via code scans), manual functional testing and manual code reviews. What is the best combination of these different quality validation efforts in order to get the best out of each?

Jez Humble: Brian Marick created a test quadrant which classifies the various forms of tests that are essential in software development - and you need all of them. Developers should be using test-driven development to make sure the code they are delivering is of high quality. Developers and testers should work together to create suites of automated functional acceptance tests to ensure the software fulfills its requirements as part of the development process. Testers should perform exploratory testing and usability testing. Teams should also test cross-functional requirements such as security, availability, and performance from early on in the project, automating as much as possible.
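
As a hedged illustration of the difference between the unit and acceptance levels Jez mentions, here is a small pytest-style sketch; the pricing rule and all names are invented for the example.

```python
# Hypothetical contrast between a unit test and an automated acceptance
# test (run with pytest). The discount rule is invented for illustration.

def discount(price: float, is_member: bool) -> float:
    """Code under test: members get 10% off."""
    return price * 0.9 if is_member else price

def test_member_discount_unit():
    # Unit level: a fast, isolated check of a single rule.
    assert discount(100.0, is_member=True) == 90.0

def test_member_checkout_acceptance():
    # Acceptance level: exercises a whole user-visible behaviour.
    # In a real pipeline this would drive the deployed application
    # (e.g. over HTTP or through a UI driver) rather than call code directly.
    basket = [discount(p, is_member=True) for p in (10.0, 20.0)]
    assert sum(basket) == 27.0
```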

One of my favourite software development quotes is from W Edwards Deming, who says "Cease dependence on mass inspection to achieve quality. Improve the process and build quality into the product in the first place." That means that testing is not a phase to be performed after development is complete. It needs to be done all the time throughout the delivery process by everybody, using techniques such as behaviour-driven development and acceptance test driven development. It also means that quality is not the responsibility of testers - it's the responsibility of the whole team. Testers are essential to creating high-quality software, but their job is to make the quality of the system transparent, not to be responsible for that quality.

Dave Farley: Jez has nailed it, quality is everyone's responsibility. One of the significant benefits of CD is that it gives everyone associated with a software project better visibility into the state of that project, so anyone can react to something that doesn't look quite right. People are useless at complex but repetitive tasks. Using any form of human-based testing for regression is wrong. Computers are wonderful at that, so all regression testing - functional, performance, whatever, all repeated actions really - should be automated.

People are wonderful at pattern matching. People should be exploring the system, highlighting things that "just don't seem right"; their role is to worry about usability and to pick up on the subtler cues that suggest where there may be a problem. In my projects I use this distinction in all aspects of quality review: we automate everything that is repetitive, be that tests, build or deployment tasks or anything else, and we encourage everyone to use their instincts, intuition and reasoning to highlight problems from whatever source - build problems, release problems, performance problems, code quality problems - anything. An important aspect of CD is a focus on continuous improvement.

InfoQ: What are some version control and release management best practices especially for geographically distributed teams?

Jez Humble: Firstly, everything that is required to build, test and deploy your system should be in version control. It should be possible to plug in a new workstation, and check out everything you need to build, test and deploy your system to any environment you have access to from version control. Deploying software to testing environments should also be a push-button process. This is even more important with distributed teams, since otherwise you waste cycles trying to work out why something works on their machine but not on the machine of somebody sat somewhere else in the world.
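
As a sketch of what that push-button process might look like, here is a hypothetical Python deployment script driven entirely by version-controlled configuration. The file layout, config keys, and commands are assumptions for illustration, not a prescription from the book.

```python
# Push-button deploy sketch: everything needed to deploy lives in version
# control, so any environment can be targeted from a fresh checkout.
# Paths, config keys and commands below are hypothetical.
import json
import subprocess
import sys

def deploy(environment: str) -> None:
    # Environment details are versioned alongside the application code.
    with open(f"config/{environment}.json") as f:
        cfg = json.load(f)
    # Copy the built artifact to the target host and restart the service.
    subprocess.run(
        ["scp", cfg["artifact"], f"{cfg['host']}:{cfg['install_dir']}"],
        check=True,
    )
    subprocess.run(["ssh", cfg["host"], cfg["restart_command"]], check=True)

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "staging")
```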

It's also essential that everybody checks in every day to trunk, again to make sure that everybody is working from a single source of truth as to the state of the system. Distributed version control systems such as Git and Mercurial can make this much easier when working in distributed environments since if your connectivity is bad, you're not reliant on waiting for a centralized version control system on a different continent to share your work. Instead you can check in locally multiple times per day, and push to the designated central repository out of band on a less regular basis (but still at least once per day).

Dave Farley: In CD every commit is considered to be a release candidate. This means that it must work with everything else and be deployable and ready for use; we never commit changes that we intend to fix later. That means that in order to evaluate any commit it is essential that we evaluate it alongside every other change. In the ideal world every commit is made to HEAD, a single canonical representation of the state of the application, so that we can get the fastest feedback possible if our change causes problems. Technologies like distributed VCS are great, I am a big fan, but they have some common patterns of use that are antithetical to CD. My rule of thumb is that if you are not merging changes into HEAD at least once a day you aren't doing Continuous Integration, and so will pay a significant cost when using a CD process.
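
Dave's once-a-day rule of thumb can even be checked mechanically. Below is a hedged sketch (the trunk name and 24-hour threshold are assumptions) that uses git's merge-base to flag a branch that has drifted from trunk for more than a day.

```python
# Sketch of a branch-freshness check in the spirit of Dave's rule of
# thumb: fail if this branch last shared history with trunk over a day
# ago. The trunk name and threshold are assumptions.
import subprocess
import time

def hours_since_merge_base(trunk: str = "origin/main") -> float:
    # Find the most recent common ancestor of this branch and trunk...
    base = subprocess.check_output(
        ["git", "merge-base", "HEAD", trunk], text=True
    ).strip()
    # ...and read its committer timestamp.
    ts = int(subprocess.check_output(
        ["git", "show", "-s", "--format=%ct", base], text=True
    ).strip())
    return (time.time() - ts) / 3600

if __name__ == "__main__":
    age = hours_since_merge_base()
    if age > 24:
        raise SystemExit(f"Branch has diverged from trunk for {age:.0f}h; integrate now.")
    print(f"Branch is {age:.0f}h from trunk; within the daily-integration rule.")
```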

InfoQ: How can development teams integrate architecture practices into the software development process without impacting project deadlines too much, while at the same time creating software components and services that are useful for the specific project but also reusable across projects (in the long term) in an organization?

Jez Humble: Wow, that's a big question! Books have been written on that topic. I don't think I can really do it justice in a short space. All I'd say is that you have to focus on creating a useful product first. Re-usability is important, but it is subordinate to creating working software. Frameworks and components are best created by harvesting them from similarities in existing systems. However, how you structure your software matters. The SOLID principles as expressed by Bob Martin - using techniques such as encapsulation and low coupling to create modular software - are essential in order to create maintainable systems.

Dave Farley: This is a question close to my heart, but one that I can't do justice to here because it is such a big topic. First, some simple pragmatic things: in my current project we have many tests that assert a level of architectural conformance (e.g. if you try to access a database directly from one of our web servers the build will fail). The tests are much more sophisticated than that, but it gives you a sense of the direction.
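
To give a sense of what such a conformance test could look like, here is a hedged sketch that fails the build if web-layer code imports the database layer directly. The module layout and forbidden imports are invented for the example, not LMAX's actual rules.

```python
# Hypothetical architectural conformance test (run with pytest): the
# build fails if any web-layer module imports the database layer.
# Directory names and forbidden modules are invented for illustration.
import ast
import pathlib

FORBIDDEN = {"app.db", "sqlalchemy"}  # assumed database-layer modules

def imports_of(path: pathlib.Path) -> set:
    # Statically collect every module imported by a source file.
    tree = ast.parse(path.read_text())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found

def test_web_layer_does_not_touch_database():
    for source in pathlib.Path("app/web").rglob("*.py"):
        bad = imports_of(source) & FORBIDDEN
        assert not bad, f"{source} accesses the database directly: {bad}"
```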

More generally I think that good design is orthogonal to development process. I very strongly believe that good design is an iterative process and that effective software evolves over time. This means that I think that agile, iterative development processes make it easier to create good designs, but you can write bad software as well as good using any development process. The only silver bullet is talented people.

Having said that, I think that there are some common features of good design. A continual focus on quality and efficiency - good software does things simply; if it looks complex it is probably wrong.

Modelling - good software has a shape and collections of abstractions that allow simplified descriptions of what is going on.

Automated tests - you can write bad software that is fully tested, and you can write good software that has no automated tests, but that is really hard. I think that it is hard enough to write good software already, so don't make it harder by ignoring the tests.

InfoQ: Another architecture-related question. When is the right time to review and validate the software built to ensure that it's following the architectural and design standards?

Jez Humble: All the time. Pair programming - particularly having senior developers always pair with junior ones in order to teach them - and regular code reviews by senior developers are great ways to make sure that any problems are found early. However I don't recommend gating the delivery process so that each check-in must be reviewed before it can be merged. Instead, senior developers should monitor check-ins through feeds and then help people refactor through pairing when necessary.

Dave Farley: All the time. Every commit should assert some levels of architectural conformance. It is broader than that though. I like to maintain what I call a white-board model of the software that I work on. This is a diagram that shows the important abstractions. The whiteboard bit is significant in that it should be enough of a shared understanding of the system at a high-enough level of abstraction so that any reasonably senior member of the team can reproduce it on a whiteboard from memory - it should not be too detailed. New requirements should be evaluated against this whiteboard model to see how they fit and to see if they imply a change to the model. If the model needs changing we assemble the team and discuss the changes together.

InfoQ: Most organizations need to comply with some type of regulatory or internal audit requirements. What are your thoughts on the best ways to incorporate security policies and compliance aspects into the software development process?

Jez Humble: There are two important areas where regulation applies. First, to the delivery process itself. You need to make sure that errors or back-doors don't slip into your systems, and that your software delivery process is auditable. The best solution to these problems is comprehensive configuration management; automation of the build, test, deploy and release process; and close collaboration between everyone involved in delivery. Pair programming and regular reviews are a great way to provide checks and balances on what gets introduced into the code base. Second, you need to make sure your software conforms to requirements such as security, and the ability to audit the data in the system. These are best enforced through automated validations - tests - that are run against each release candidate using the deployment pipeline. The deployment pipeline is what provides end-to-end traceability of every bit of information through your delivery process, and ensures that the necessary validations have been run against every release candidate.
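
As one hedged example of such an automated validation, a pipeline test might check that a freshly deployed release candidate sets the security headers a policy requires. The staging URL and header list below are assumptions for illustration.

```python
# Sketch of a security validation run against each release candidate.
# The endpoint and required headers are hypothetical policy choices.
import urllib.request

REQUIRED_HEADERS = {"Strict-Transport-Security", "X-Content-Type-Options"}

def test_candidate_sets_security_headers():
    # In a real pipeline this URL would point at the candidate that was
    # just deployed to a production-like staging environment.
    response = urllib.request.urlopen("https://staging.example.com/health")
    missing = REQUIRED_HEADERS - set(response.headers.keys())
    assert not missing, f"candidate is missing security headers: {missing}"
```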

I have an article coming out in the Cutter IT Journal later this month that discusses these issues in some detail.

Dave Farley: I am currently working in the finance industry and so I am no stranger to regulatory compliance issues. My recent experience has been that there is no better way than a process like Continuous Delivery to achieve this.

It can be complex to explain to people like auditors or compliance experts, but only because it is new to them. A full-blown CD system offers soup-to-nuts traceability. In my organization we have a complete automated audit trail as a free by-product of our techniques. Our requirements system records when a requirement was created, when development was started and completed, and who was involved at each step. This is tied in to our version control system, which shows which changes were made related to that requirement. Our deployment systems are tied to the version control system via release numbers and audit who agreed to the release of a particular version of the software into a particular environment. We can answer almost any question about the life-cycle of any feature and show a complete audit trail of the change. As I said, this facility came as a side benefit; we did no extra work to get this level of traceability - it comes free as a virtue of comprehensive version control.

The auditors that I have dealt with love this stuff.
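
As a toy illustration of the end-to-end traceability Dave describes, one can picture the audit trail as a join across the requirements tracker, version control, and the deployment system. The record format here is invented; a real implementation would query those tools directly.

```python
# Toy traceability query: follow a requirement to the commits and the
# audited deployments that contain them. Data layout is hypothetical.

AUDIT_LOG = [
    {"kind": "commit", "id": "abc123", "requirement": "REQ-42", "author": "dev1"},
    {"kind": "deploy", "release": "1.4.0", "commits": ["abc123"],
     "env": "production", "approved_by": "ops-lead"},
]

def history_of(requirement: str):
    """Return every commit and deployment associated with a requirement."""
    commits = [e["id"] for e in AUDIT_LOG
               if e["kind"] == "commit" and e["requirement"] == requirement]
    deploys = [e for e in AUDIT_LOG
               if e["kind"] == "deploy" and set(e["commits"]) & set(commits)]
    return commits, deploys

print(history_of("REQ-42"))  # lists the commit and the audited release
```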

InfoQ: There is more attention recently being given to the collaboration between developers (Dev) and operations (Ops) teams in integrating the development and testing tasks with the deployment and software delivery tasks. Can you talk about some best practices and guidelines in this area?

Jez Humble: Again, there's more detail in my forthcoming Cutter IT Journal article, but essentially you need to make sure that developers, testers, and operations collaborate throughout the delivery process. That means all of them are present for project inceptions, showcases, and retrospectives. Unfortunately, in most organizations developers are measured according to throughput, and operations are measured according to the stability of production systems. That leads to huge dysfunction. Developers should be accountable for the stability of the code they create - one way to encourage this is by having them wear pagers, so that when things go wrong in production they have to help fix the problems. Operations have to be accountable for throughput - which means making sure that developers can self-service production-like environments for testing purposes, to ensure that software can be released automatically without causing problems. Ultimately it's in everybody's interest.

Dave Farley: Delivery should be everyone's responsibility. The whole team should be focused on successfully delivering good quality software to its users. Providing good visibility into the lifecycle of releases is an important part of that. In addition making sure that the people that cause problems are responsible for fixing them is vital to the establishment of effective feedback loops.

InfoQ: Do the continuous delivery processes and practices differ in any way when the software product is deployed to a cloud instead of being hosted internally? What are the pros and cons of using a cloud solution when it comes to Continuous Delivery?

Jez Humble: Continuous delivery is really orthogonal to the issue of what platform you deploy your system on. The techniques are equally applicable whether you're releasing an embedded system, a cloud-based solution, or something that sits on a mainframe. Which platform you choose will depend on the cross-functional requirements of your software.

Dave Farley: No, there is no real difference. The virtualization of the host system, inherent in cloud computing, can make it a little easier, from a technical perspective, to tear down and re-commission a known-good environment into which the software will be deployed, but that is the only difference that I see; the process and techniques are the same.

InfoQ: To conclude the interview, how do you rate the significance of the elements of a successful continuous delivery environment in terms of Processes, People, and Tools? Is one element more important than the others?

Jez Humble: No, you need them all in order to achieve success. I think though that the most important thing to focus on is people. Most problems in IT boil down to people problems. Continuous delivery fundamentally requires discipline from the people developing software, and the ability of teams to adjust their environment in order to improve. Without that, you can't repeatably deliver valuable, high quality software. Tools and process are important, but they are enablers, not pre-requisites. One of my favourite papers on continuous integration is "CI on a dollar a day" by James Shore in which he shows how to do CI using an old developer workstation, a rubber chicken, and a bell. CI is a practice, not a tool. Running Jenkins on your feature branches does not mean you are doing CI.

Unfortunately most of the time people like to start by talking about the tools, because it's nice and concrete, and because it's easier to stick in a tool than it is to change people's behaviour. Tools are important enablers - I would recommend in practice that you use a CI tool - but they're not the deciding factors in whether you will be successful.

Dave Farley: Fundamentally I agree with Jez, all are important to get right. If pushed I would say that tools are the least important aspect, but only because if you have good people and effective processes then they will create the tools that they need (it's not actually all that hard). The things that are important about CD are not related to a particular product, tool or technology, but in general it is the effective use of technology to support the process that is an enabling factor.

Dave also said:

Fundamentally I think that software development, at its best, should be based on a process of exploration and measurement, a fairly scientific approach to the subject I guess. I think that CD is effective because it facilitates an explorative approach by providing real, valuable measurements of the output of the process and feeding those results back into the process.

InfoQ: Thank you both for your time with this interview.

About the Book Authors

Jez Humble is a principal consultant with ThoughtWorks and co-author of Continuous Delivery (Addison Wesley, 2010). He has been fascinated by computers and electronics since getting his first ZX Spectrum aged 11, and spent several years hacking on Acorn machines in 6502 and ARM assembler and BASIC until he was old enough to get a proper job. He got into IT in 2000, just in time for the dot com bust. Since then he has worked as a developer, system administrator, trainer, consultant, manager, and speaker. He has worked with a variety of platforms and technologies, consulting for non-profits, telecoms, financial services and on-line retail companies. Since 2004 he has worked for ThoughtWorks and ThoughtWorks Studios in Beijing, Bangalore, London and San Francisco. He holds a BA in Physics and Philosophy from Oxford University and an MMus in Ethnomusicology from the School of Oriental and African Studies, University of London. He is presently living in San Francisco with his wife and daughter.

Dave Farley has been having fun with computers for nearly 30 years. Over that period he has worked on most types of software, from firmware, through tinkering with operating systems and device drivers, to writing games, and commercial applications of all shapes and sizes. He started working in large scale distributed systems about 20 years ago, doing research into the development of loosely coupled, message-based systems - a forerunner of SOA. He has a wide range of experience leading the development of complex software in teams, both large and small, in the UK and USA. Dave was an early adopter of agile development techniques, employing iterative development, continuous integration and significant levels of automated testing on commercial projects from the early 1990s. He honed his approach to agile development in his four and a half year stint at ThoughtWorks where he was a technical principal working on some of their biggest and most challenging projects. Dave is currently working for the London Multi-Asset Exchange (LMAX), an organization that is building one of the highest performance financial exchanges in the world, where they rely upon all of the major techniques described in this book.

Community comments

  • First impression: great book!

    by Stephan Oudmaijer,

    I am reading the book right now to help implement Continuous Delivery at my current client and to give me some new insights. It covers a lot of subjects, from continuous integration, version control and branching strategies through to Continuous Delivery. A great addition and reference for anyone interested in Continuous Delivery and CI.

  • Why is this important?

    by Adam Nemeth,

    I don't understand why it would be needed to have software which is "ready with 0 features" from day one.

    A user needs software that does what he minimally needs to solve his problem as fast as he can; software which does nothing, or falls below this minimal bar, is of no interest in the production environment.

    Also, for feedback, why would the actual software be the only feedback mechanism? I know this is un-agile, but with pure thinking, and writing non-code documents like e-mails (and sometimes, yes, with drawings), you could spot errors. Faster, as paper doesn't have to run and doesn't care about whether it's a public or private method (no runnable modeling), unlike Java (and Ruby still cares _something_ about runnability, except when you prototype, but you don't do that in the production env, do you?)

    I do understand why it is needed that software which reliably does what it was asked for doesn't wait more than a day, or that a critical fix doesn't wait more than the few minutes needed to check that it doesn't break anything else; this is totally fine.

    But should CD really be the thing which drives us? Can't I work on code for a few days before releasing it into production? Can't I check it in somewhere? Can't I show half-done code on a separate instance while important fixes are still deployed to production?

    Also, while for important fixes it's pretty fine that they should go out quickly, for actual requirement changes a day or even perhaps a week is totally fine for the actual users.

  • Re: Why is this important?

    by Dave Farley,

    > "I don't understand why would it be needed to have a software which is "ready with 0 features" from day one."

    The idea is that you don't store up trouble and that you minimize the degree to which you rely on guesswork. We are not suggesting that you deploy a system with "0 features" into production, however we do suggest that it is much easier to create a completely automated deployment system incrementally.

    If, instead, you wait until you have a functionally complete system before beginning work on deployment you will have already wasted lots of time deploying the system to test environments manually. If you make the results of the first story that you create deployable you can begin simply. You will have a system that is useful for automated and manual testing. You will have rehearsed the deployment many times, and so made it effective and efficient, by the time you release for real. Importantly you will not end up with a big, complex, hard-to-estimate cost of getting the application deployable towards the end of the project.

    > "Why would the actual software be the only feedback mechanism?"

    Finished software isn't the only feedback mechanism. Tests, of all kinds, provide feedback at every stage of a deployment pipeline. However, there is one key aspect of a system where you can only get real feedback from completed software: "Does the software meet the needs of its users?". Before the software is in the hands of its users we can guess at an outcome, we can make educated guesses based on our experience, but we can't know. I believe that the process that we describe in our book is very much built upon a more scientific approach to software development than is common in most development organisations. We should make progress on the basis of measurement and evaluation of facts rather than guesses. The only reasonable measurement of the effectiveness of any given feature is whether real users use it and find it effective.

    > "Can't I work on a code for a few days before releasing it into production?"

    Yes you can. Continuous Delivery does not mandate that every commit is released immediately into production (that is commonly referred to as 'Continuous Deployment'). I would say that all 'Continuous Deployment' systems operate a process of 'Continuous Delivery', but the reverse is not the case. My own organisation operates a two-weekly release cycle based on a Continuous Delivery process. What 'Continuous Delivery' (CD) says is that every commit should be deployable. This is one of those disciplines that helps us to be more effective. If every time we commit we work on the assumption that this change will end up in production, we will concentrate on leaving things in a sensible state. We will try hard not to commit changes that break other things on the assumption that we will fix them later, and we will not leave build changes, deployment scripts and testing until the end, because they may cost more than we expected. Staying on top of these issues is a bit like tidying up while you are cooking: you can leave all the washing up and tidying until the end, but the kitchen will look like a disaster area by the time you are ready to eat ;-)

    In order to facilitate this approach you need to commit frequently and avoid branches as much as possible; without that you again store up the potential for trouble later. CD is the extrapolation of CI. By definition you are not doing CI if you are not committing at least once a day. I would never argue that there is any silver bullet in software development, but CI & CD come close. The reason is that, again, you are acting on facts. If I commit a change and it passes all of the tests I KNOW that it works with the changes that everyone else has committed. Until I commit that change I can only guess that it may work. The longer I leave between commits the greater the risk that someone else's work may invalidate the assumptions inherent in my work. I am an experienced software developer, I have been doing this stuff for decades, and if I am honest I think that I am fairly good at it, but I make dumb assumptions every day. Software development is a complex activity and it is very easy to make subtle, and sometimes not so subtle, mistakes. I use tests to evaluate my decisions and to confirm my assumptions. I had written a lot of code before I learned how to use CI; I would never go back, because I know that my code is better with automated tests and that I am less exposed to other people's assumptions if I commit regularly.

  • Re: Why is this important?

    by Adam Nemeth,

    It seems that we both want to maximize something, but the environments are vastly different, and I also think the approaches themselves differ.

    > "we do suggest that it is much easier to create a completely automated deployment system incrementally"

    > "If, instead, you wait until you have a functionally complete system before beginning work on deployment you will have already wasted lots of time deploying the system to test environments manually"

    In large-scale systems, creating a production environment for a non-existent product is complete guesswork.

    A production environment for me involves a lot of components which none of the test environments have: CDNs, secure payment contracts, elastic clusters, and so on.

    CDN and payment providers require you to fill in contracts with data you have no clue about on day zero, and they also charge you a lot of money for availability. If you want to avoid guesswork, you had better have the system in a near-ready shape before signing those contracts.

    Also, it takes weeks, and usually involves 2-3 teams, to get this right.

    You ought to prepare yourself for certain features of the production environment, yet it seems to me that in your practice, the production env is not that much different from the test environments.

    My problem, however, is this: deal with the non-technical problem your software is to solve first. Every time I have seen this done otherwise, the project utterly failed to solve anything well for ordinary humans.

    In large-scale systems, you avoid testing on your full user-base

    > "Before the software is in the hands of its users we can guess at an outcome, we can make educated guesses based on our experience, but we can't know"

    Perhaps it's just me, but I'm working with the assumption that when I hit deploy on the production environment, I send something out to the hundreds of thousands of people who're using my employer's products, and I have no way to undo it until at least a few thousand have seen it. And I just don't want thousands of people being angry at me and unsatisfied. I don't want to waste a minute of their time; that's 100,000 minutes wasted.

    That's why I never test on users. Yes, I call some of them in, say you're part of a test, now how do you like this, and please tell no one. But that's not the production environment.

    Why you shouldn't test your ideas on fully deployable code

    Technical parts (like, will this work with a CDN?) are always only one side of the coin. They're important, yet with a few constraints (like, use only static files) you can mostly avoid them.

    With most products nowadays, the problem is not whether they pass their unit tests, mostly because unit tests are in place. However, a unit test doesn't show anything about feasibility as a product solving human needs.

    Technical feasibility and human feasibility are entirely different areas. If you keep your constraints (technical feasibility), then you can concentrate on the problem at hand.

    Personally, I don't even deploy anything anywhere to test this. I literally pass over my laptop with my working environment open, and even that's at the end of the spectrum, just before commit, as it involves code.

    An idea can and should be tested with much less investment.

    Let's say a story takes a week to implement to be deployable on production. You actually have to go into your room, type it in and test it.

    But what happens if there's something wrong conceptually?

    In my experience, the answer is that usually it gets deployed, as they don't want the programmer to hide for another week, and it's something, it's finished, and everyone is emotionally attached to it. Besides, a programmer costs around 500 EUR a day in Western European countries; they wouldn't like to acknowledge throwing the price of a car out of the window.

    So, it gets deployed to 200,000 people, and they see it's no good. Change request!

    With 200,000 user-minutes and 2,500 EUR already paid, you start making modifications. That also takes 2-3 days: 1,000-1,500 EUR.

    (If anybody tells you code is free, tell them to get a better job ASAP.)

    I can't believe people can't test on paper. I can't believe that a quick JavaScript-only demo with no actual backend doesn't help test users find out it won't work conceptually. I don't believe you actually need a backend and CDN compliance to find that out.

    Yes, of course, you can't be sure before deployment, but is it worth the pain of users? If we did the same with medicines, would you approve it? A human is not a rat, after all; we can't know... we can't know how people react en masse...

    People rarely turn to their computers just to waste time (or rather, only a few selected applications have the luxury to help them in this.) People use applications to solve problems mostly. They don't die with a bad program, they just can't solve their problems with it.

    What does it have to do with development?

    > "you need to commit frequently and avoid branches as much as possible, without that you again store up the potential for trouble later [...] I would never argue that there is any silver-bullet in software development but CI & CD comes close. The reason is that, again, you are acting on facts. If I commit a change and it passes all of the tests I KNOW that it works[..]."


    For me, software development is the development of understanding. You shouldn't deploy a single line of code which you don't understand. Only once you understand something are you able to write it down clearly, or even decide whether it's clean or not.

    Unit tests don't always help you with understanding: it's still your understanding of the code, shown to no other human, perhaps shown to your programming pair, but nearly never to users or customers.

    What you show to customers and users are mockups, half-done versions, branches(!) for different alternatives, perhaps diagrams or descriptions. Why don't they have a place in our practice? Why should we approach everything with a full shot? Why don't we meet our users more often? It's their problem after all.

    That's what I can't understand. While I totally appreciate the urge to do everything right from day one, I'm asking whether, by concentrating mainly on the technical side of this, you are able to deliver software which is worth much, and which costs just the price it's worth.

    Because development is not about bringing software into production. Development is about making a computer understand a human problem with information handling.

  • Re: Why is this important?

    by Eric Smith,

    Adam --

    Of course there is a lot more to developing software than deploying it (though without deployment it is somewhat pointless). I think the theme of the book is getting your software to your customers as quickly and easily as possible, while having confidence about its quality. There's still the hard work of figuring out the right thing and crafting it well.

    I'm about halfway through the book, and it has been great. I've lived through lots of mistakes and problems (including a big one just yesterday) from which I would have been spared had I read it a couple of years ago and implemented more of the suggested practices. I'd love to persuade my whole team to read it.

  • Re: Why is this important?

    by Adam Nemeth,

    Hi Eric,

    You're quite fortunate if your customers are also your users. Most of the time these are entirely separate groups, usually not speaking that much with each other, let alone with the developers.

    I must confess I only read this interview and the sample chapter. Generally speaking, there are some good thoughts in it, yet sometimes the conclusion is just scary.

    One such conclusion was that you should have a production environment from day one, to which I say no: on day one you should concentrate on the purpose of the software at hand, which has nothing to do with deployment. Feasibility is enough through most of the first sprints.

    Another such conclusion was that you should push features out to the end users to get scientific results on how bad they actually are. Not only is this a lot of wasted development and QA effort, but it wastes the time and patience of your end users as well. You should use logical induction and probability theory instead. I did a calculation above of how much money this loses on a typically sized international web project.

    If you read the sample chapter, you see the following:

    > "He stepped back through earlier and earlier versions, until he found that the system had actually stopped working three weeks earlier [...] 80 developers, who usually only ran the tests rather than the application itself, did not see the problem for three weeks"

    What was the conclusion?

    > "We fixed the bug and introduced a couple of simple, automated smoke tests that proved that the application ran"

    It translates for me this way: the developers didn't bother to check the application at hand, and we thought that was OK.

    I have a moral problem here: if an engineer lets a product out of his (or her) hands without ever having started it, without ever having tried to use it as a user, are you sure that the solution is to add a few more tests? Isn't there some... deeper problem in this situation?

    I remember when someone on my team did this. I sat down with him and said: "Listen, you are human. The users are also humans, even if you don't meet them, even if they're perhaps way less clever than you think you are, and it's also true they mostly have no interest in or understanding of these fancy tech-savvy things we're fascinated by. They spend their day with the software we're creating. They do so because that's their job, and they have to support their families. Now we want these people to have a good day, or at least an easy 8 hours. The boss will be impatient anyway, and most of the troubles aren't caused by us. But when there's trouble, it's our software that should help them. Could you provide something which truly helps them without ever looking at it with your own eyes? It's fair that you don't want to check it at every modification, that's what our testers are for, but could you please, before calling a feature done, just log in and pretend you're the user, walk through the path once, as, after all, you're the author of this module and not the tester? Thank you very much."

    I also remember - sorry for the profane example, but it's the simplest - we had a men's room design repeated throughout one of my client's office buildings. Measurement-wise, it had everything: the water was flowing, the obligatory chinaware was everywhere, yet you still had a sense that it wasn't thought out.

    Then I realized: the interior designer must have been a woman. She didn't understand the politeness conventions regarding the usage of a shared men's room. She made things a bit higher than in the women's room (that's measurable), and those were perhaps OK, but it seems she didn't manage to show it to an actual user - a man - before completion. Then it was too late, and so it stayed.

    I'm pretty sure you meet such things every day: why a crossing was built that way, which idiot put this lamppost here, and so on. For each, I could show you a corresponding software solution where such errors weren't recognized until the software went out, yet weren't deemed problematic enough to need attention. It just leaves a bad taste in the mouths of the people asked to use your software day by day. That's it. You messed up each workday a little.

    This disconnectedness between the developer and the user - no matter how creative people can be in writing automated tests - just scares me. But what scares me more is that they want to solve it with more automation. You can't automate the user as well; he'll still be human!

    And then there's calling this "close to the silver bullet" - telling me that this program is surely good. Maybe we can measure every part of humanity by numbers now; it's just that I've missed the test technology where you put in ASP.NET code and get UX results back, I don't know. Maybe the authors are working on sensor systems deployed to volcanoes, never touched by humans for 1000 years. Or kernel modules. Anything.

    And this continues:

    > "In iterative processes, acceptance testing is always followed by some manual testing in the form of exploratory testing, usability testing, and showcases. Before this point, developers may have demonstrated features of the application to analysts and testers, but neither of these roles will have wasted time on a build that is not known to have passed automated acceptance testing"

    If manual testing comes only after technical acceptance testing, you're lost: you've invested thousands of euros into something which was never seen by humans. You don't want to throw this out anymore, trust me. The hardest thing is to get clinging development teams and customers off dead horses. You could name a few for sure.

    So my question is: is this the silver bullet? Is this something we should organize our lives around? Should we automate everything? What about humans - is an automated world surely better for a human, where the errors go through unnoticed by human eyes? Is the tester responsible for this? The customer? Or is it the one holding the engineering degree and getting 500 EUR each day? What are we developing here actually; what are the problems we ought to solve each day?

    Sorry for getting too philosophical...
