
Measuring Agile in the Enterprise: 5 Success Factors for Large-Scale Agile Adoption

Duration: 01:26:42

Summary

In this presentation, filmed during Agile 2008, Michael Mah analyzes the development process at five companies: two Agile (one of them BMC) and three traditional. He measures development productivity and effectiveness and compares the results with industry averages. He also presents the factors that contributed to the success of BMC's Agile adoption.

Bio

Michael Mah is director of the Benchmarking Practice and an author with the Cutter Consortium, a Boston-based industry think tank. With over 20 years of experience, Michael has written extensively and consulted for the world's leading software organizations while collecting data on thousands of projects worldwide.

About the conference

Agile 2008 is an international industry conference presenting the latest techniques, technologies, attitudes, and first-hand experience for successful Agile software development, from both a management and a development perspective.

Recorded at:

Oct 09, 2008


Community comments

  • Conclusion?

    by Olivier Gourment,

    The recording stops at about 50 minutes in. Am I the only one having this problem?

    One comment about the first part, where Michael presents data (productivity, quality, time to market) gathered from two Agile projects and one waterfall + outsourced project:

    SLOC, as a measure of productivity, should in my view be deprecated. You may be able to compare SLOC from one iteration, maybe one project, to the next, with the same team and the same technologies, but going outside those boundaries is probably a stretch. I am really not sure what comparing teams with different experience and seniority levels and different coding conventions, let alone different companies, languages, and technologies, will give you. Think about Ruby vs. Cobol; think about junior developers programming by copy-paste vs. senior developers actually lowering the number of lines of code while introducing MORE features; think about code with unit tests vs. no unit tests (and unit tests in production code, which I view as a good practice, by the way)...
    Also, funnily enough, I suspect that waterfall projects (and from my understanding BMC had a waterfall culture) will probably appear to have better productivity, because more code is written that is never deployed or ends up unused. Not using code reviews/pair programming and unit tests will also tend to produce code that is never refactored.
    A slightly better measure might be counting diffs, i.e. lines added/changed/removed in source control over the length of the project (see the sketch below).
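
    A minimal sketch of that diff-counting idea in Python, assuming a local Git repository; the tool choice and the churn function are my own illustration, not something from the talk:

        # Sum lines added/removed across a project's history using
        # "git log --numstat", which prints "<added>\t<removed>\t<file>"
        # for every file touched by every commit.
        import subprocess

        def churn(repo_path="."):
            out = subprocess.run(
                ["git", "-C", repo_path, "log", "--numstat", "--pretty=format:"],
                capture_output=True, text=True, check=True,
            ).stdout
            added = removed = 0
            for line in out.splitlines():
                parts = line.split("\t")
                # Binary files report "-" instead of counts; skip them.
                if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                    added += int(parts[0])
                    removed += int(parts[1])
            return added, removed

        added, removed = churn()
        print(f"lines added: {added}, lines removed: {removed}")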

    Measuring the number of defects is highly random as well... Think about testers paid by the bug vs. testers who are part of the team and tell the developer where the bug is instead of entering it in the defect tracker.

    I am hoping the part I am missing gets more interesting.

  • Re: Conclusion?

    by Diana Baciu,

    Hi Olivier

    I am able to watch the presentation through to the end in both IE and Firefox.

    Diana

  • Re: Conclusion?

    by Mike Bria,

    Re:
    <<
    Measuring the number of defects is highly random as well... Think about testers paid by the bug vs. testers who are part of the team and tell the developer where the bug is instead of entering it in the defect tracker.
    >>

    First, I would never suggest a "tester gets paid per bug" approach; that's a recipe for disaster.

    Beyond that...
    I suggest that orgs who want to track their bugs via a tracker tool establish a standard for the phase at which a bug does or does not make it into the tracker. Specifically, my rule is: no tracker entries until the story is ruled DONE-DONE, i.e. after the iteration in which the story was developed. Any bug found against a DONE-DONE story, i.e. post-iteration, gets entered into the tracker.

    Further, if they distinguish between internal releases and real-deal external customer releases (i.e., X iterations before a "release"), as most agile orgs do (some, but few, using Kanban or something similar, actually release every iteration), then internal tester-found bugs get a different categorization code than post-release customer-found bugs.

    Generally, from a macro-process point of view, finding a bug in the iteration is "way to go!", and recording it is just muda. Finding a bug post-iteration but pre-release is "not ideal, but still a good thing"; finding a bug post-release is "shame on us".

    To measure progress, the ultimate measure is a decrease in post-release bugs (many orgs might do well enough just to stop there). A secondary measure is seeing the detection of pre-release bugs spread more evenly throughout the release, as opposed to being weighted at the tail end. A decrease in the overall pre-release count is also good stuff, of course, so long as the post-release count is also declining (see the sketch below).
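
    A rough sketch of those two measures in Python; the Bug record, the phase labels, and the "last third of the release" cutoff are my own illustration of the idea, not anything prescribed above:

        # Classify bugs by the phase in which they were found and compute:
        # 1) the post-release count (the ultimate measure), and
        # 2) how tail-heavy pre-release detection is within the release.
        from dataclasses import dataclass

        @dataclass
        class Bug:
            phase: str       # "pre-release" or "post-release"
            iteration: int   # iteration of the release in which it was found

        def release_metrics(bugs, n_iterations):
            post = sum(1 for b in bugs if b.phase == "post-release")
            pre = [b.iteration for b in bugs if b.phase == "pre-release"]
            # Share of pre-release bugs found in the last third of the release;
            # a shrinking share means detection is spreading out more evenly.
            tail = sum(1 for i in pre if i > n_iterations * 2 / 3)
            return {
                "post_release": post,
                "pre_release": len(pre),
                "tail_share": tail / len(pre) if pre else 0.0,
            }

        bugs = [Bug("pre-release", 2), Bug("pre-release", 6), Bug("post-release", 6)]
        print(release_metrics(bugs, n_iterations=6))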

    Basically, good agile is about finding and fixing bugs faster; in the ideal case, we find them as soon as they're introduced. At the very least, we get better at never letting them out the door.

    **Caveat about "tracking" bugs found for in-progress (not DONE-DONE) stories: teams may want to keep track of these informally (via stickies or marks on the story's index card, for example) if they are trying to get a sense of how their TDD efforts are panning out, i.e. they want to see that fewer bugs are even being checked into CI in the first place. I think this is often okay, so long as it's kept very lightweight and done informally, for and by the team itself. Not something long-term either, just during adoption phases.

  • Re: Conclusion!

    by Olivier Gourment,

    Thank you Mike.

    I guess my original point was that Michael did not explain (*in the first 50 minutes*) why he considered the measurements valid for comparing productivity between different companies. But, since I was able to listen through the rest this time (thanks, Diana), I found that the SLIM model is explained in books and papers sold in the Resources section. Michael also quickly touched on what the Productivity Index is, and, in BMC's case, mentioned that peer reviews help avoid copy-pasted code.

    Anyway, there is no doubt from the rest of the presentation that the Agile teams presented here are case-in-point references demonstrating that Agile can work, even on very large projects. The five success factors are presented at the end, with commentary from BMC.

    Thank you Michael for sharing this with us.
