Presentation: Measuring Agile in the Enterprise: 5 Success Factors for Large-Scale Agile Adoption

In this presentation, filmed during Agile 2008, Michael Mah analyzes the development process at 5 companies: 2 using Agile (one of them BMC) and 3 using classic processes. He measures their development productivity and effectiveness, compares the results with industry averages, and presents the factors that contributed to the success of BMC's Agile adoption.

Watch: Measuring Agile in the Enterprise: 5 Success Factors for Large-Scale Agile Adoption (1h 26min)

Michael compares 5 companies: 2 implementing Agile (one XP, one Scrum) and 3 implementing classic processes. He uses specific metrics to measure the productivity and success of their software development and compares the results against industry averages.

Michael's measurements show that the companies doing Agile performed better, reaching the market faster and with lower defect rates. One of the companies is BMC, and their success is summed up in the following "secret sauce", as Michael calls it:

  1. Buy-In
    • VP-Level (or higher) Senior Executive Sponsorship
    • Scrum Master Training
    • Core Group Energized and Passionate
  2. Staying “Releasable”
    • Nightly Builds/Test
    • 2-week Iteration Demos
    • Frequent, Rigorous Peer Code Review
  3. Dusk-to-Dawn Teamwork
    • Communication Techniques for Information Flow
    • Wikis, Video-conferencing, Periodic On-Site Meetings
    • Co-Located Release Planning
    • Scrum of Scrum Meetings (US Time)
  4. Backlogs
    • One Master Backlog AND Multiple Backlog Management
    • One Setup for User Stories Across Teams
    • Added “Requirements Architect” to Interface Product Mgt with R&D
  5. “Holding Back the Waterfall”
    • Test Driven Development
    • Retrospective Meetings to Not Regress into old Waterfall Habits
    • Outside Source to Audit the Process

During the second part of the presentation, Michael invites Walter Bodwell, who was Senior Director of Engineering at BMC Software when the analyzed project was developed, to discuss the above-mentioned factors that contributed to BMC's successful implementation of Agile. Walter explains the ingredients of the "secret sauce".

Community comments

  • Conclusion?

    by Olivier Gourment,

    The recording stops at about 50 minutes. Am I the only one having this problem?

    One comment about the first part, where Michael presents data (productivity, quality, time to market) gathered from two Agile projects and one waterfall + outsourced project:

    SLOC, as a measure of productivity, should in my view be deprecated. One may be able to compare SLOC from one iteration, or maybe one project, to the next, with the same team and same technologies, but going outside those boundaries is probably a stretch. I am really not sure what you gain by comparing against other teams with different experience and seniority levels and other coding conventions, let alone other companies with other languages and technologies. Think about Ruby vs. Cobol; think about junior developers programming by copy-paste vs. senior developers actually lowering the number of lines of code while introducing MORE features; think about code with unit tests vs. no unit tests (and unit tests in production code - I view this as a good practice, by the way)...
    Also, funnily enough, I suspect that waterfall projects (and from my understanding BMC had a waterfall culture) will probably seem to have better productivity, because more code is written that is never deployed or ends up not being used. Not using code reviews/pair programming and unit tests will also tend to produce code that is not refactored.
    A slightly better measure might be counting the diffs, i.e. lines added/changed/removed in source control over the length of the project.
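
    As a rough illustration, here is a minimal sketch of what such a diff count could look like, assuming git as the version control system and that the script runs inside the repository being measured (the function name and the choice to skip binary files are my own, not anything from the talk):

        # Tally lines added/removed over the whole history via `git log --numstat`.
        import subprocess

        def churn_totals(repo_path="."):
            """Return (lines_added, lines_removed) across the full git history."""
            out = subprocess.run(
                ["git", "log", "--numstat", "--pretty=format:"],
                cwd=repo_path, capture_output=True, text=True, check=True,
            ).stdout
            added = removed = 0
            for line in out.splitlines():
                parts = line.split("\t")
                if len(parts) != 3 or parts[0] == "-":
                    continue  # skip blank separator lines and binary files
                added += int(parts[0])
                removed += int(parts[1])
            return added, removed

        if __name__ == "__main__":
            a, r = churn_totals()
            print(f"added: {a}, removed: {r}, net: {a - r}")

    Per-iteration or per-developer breakdowns would just add date-range or --author arguments to the same git log call.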

    Measuring the number of defects is highly random as well... Think about testers paid by the bug vs. testers-as-part-of-the-team-who-tell-the-developer-where-the-bug-is-instead-of-entering-them-in-the-defect-tracker.

    I am hoping the part I am missing gets more interesting.

  • Re: Conclusion?

    by Diana Baciu,

    Hi Olivier,

    I am able to watch the presentation to the end in both IE and Firefox.

    Diana

  • Re: Conclusion?

    by Mike Bria,

    Re:
    <<
    Measuring number of defects is highly random as well... Think about testers paid by the bug vs. testers-as-part-of-the-team-who-tell-the-developer-where-the-bug-is-instead-of-entering-them-in-the-defect-tracker.
    >>

    First, I would never suggest a "tester gets paid per bug" approach; that's a recipe for disaster.

    Beyond that...
    I suggest that orgs who want to track their bugs via a tracker tool establish a standard as to the phase at which a bug does or does not make it into the tracker. Specifically, my rule is "no bug tracker entries until the story is ruled DONE-DONE, i.e. after the iteration in which the story is developed". Any bug found related to a DONE-DONE story, i.e. post-iteration, gets entered into the tracker.

    Further, if they distinguish between an internal release and a real-deal external customer release (i.e., X number of iterations before a "release"), as most agile orgs do (some, though few, using Kanban or something similar, actually release every iteration), the internal tester-found bugs get a different categorization code than the post-release customer-found bugs.

    Generally, from a macro-process POV, finding a bug in the iteration is "way to go!", and recording it is just Muda. Finding bugs post-iteration but pre-release is "not ideal, but still a good thing"; finding a bug post-release is "shame on us".

    To measure progress, the ultimate measure is a decrease in post-release bugs (many orgs might do well enough just to stop here); a secondary measure is to see the detection of pre-release bugs become spread more evenly throughout the release, as opposed to being weighted at the tail end; a decrease in the overall pre-release count is also good stuff, of course, so long as the post-release count is also declining.
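
    To make the bookkeeping concrete, here is a minimal sketch in Python; the Phase names, the Bug record, and the sample numbers are illustrative assumptions of mine, not any real tracker's schema:

        # Phase-based bug bookkeeping: tag each tracked defect with the phase
        # in which it was found, then watch the post-release counts per release.
        from collections import Counter
        from dataclasses import dataclass
        from enum import Enum

        class Phase(Enum):
            PRE_RELEASE = "pre-release"    # internal tester-found, post-iteration
            POST_RELEASE = "post-release"  # customer-found: "shame on us"

        @dataclass
        class Bug:
            release: str  # e.g. "3.0"
            phase: Phase

        def post_release_trend(bugs, release_order):
            """Post-release bug count per release; the ultimate measure of
            progress is that this sequence declines release over release."""
            counts = Counter(b.release for b in bugs if b.phase is Phase.POST_RELEASE)
            return [counts.get(r, 0) for r in release_order]

        # In-iteration bugs never enter the tracker under the DONE-DONE rule,
        # so only post-iteration finds appear here.
        bugs = [Bug("3.0", Phase.POST_RELEASE), Bug("3.0", Phase.POST_RELEASE),
                Bug("3.0", Phase.PRE_RELEASE), Bug("3.1", Phase.POST_RELEASE)]
        print(post_release_trend(bugs, ["3.0", "3.1"]))  # [2, 1] - declining, good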

    Basically, good agile is about finding and fixing the bugs faster - in the ideal case, we find them as soon as they're introduced. At the very least we do better at not ever letting them out the door.

    **Caveat about "tracking" bugs found for in-progress (not DONE-DONE) stories: teams may want to keep track of these informally (via stickies or marks on the story's index card, for example) if they are trying to get a sense of how their TDD efforts are panning out, i.e. they want to see that fewer bugs are even being checked into CI in the first place. I think this is often okay, so long as it's kept very lightweight and done informally, for and by the team itself. Not something long-term either, just during adoption phases.

  • Re: Conclusion!

    by Olivier Gourment,

    Thank you Mike.

    I guess my original point was that Michael did not explain (*in the first 50 minutes*) why he considered the measurements valid in the context of comparing productivity between different companies. But (since I was able to listen through the rest this time - thanks, Diana) I found that the SLIM model is explained in books and papers sold in the Resources section. Michael also quickly touched upon what the Productivity Index is and, in BMC's case, mentioned that peer reviews help avoid copy-pasted code.

    Anyway, there is no doubt from the rest of the presentation that the Agile teams presented here are case-in-point references demonstrating that Agile can work, even on very large projects. The 5 success factors are presented at the end and commented on by BMC.

    Thank you Michael for sharing this with us.
