Evaluating Agile and Scrum with Other Software Methodologies

In general, selecting a software development methodology has more in common with joining a cult than with making a technical decision. Many companies do not even attempt to evaluate candidate methods, but merely adopt the most popular one, which today means one of the many faces of agile. This article uses several standard metrics, including function points, defect removal efficiency (DRE), Cost of Quality (COQ), and Total Cost of Ownership (TCO), to compare a sample of contemporary software development methods.

There are about 55 named software development methods in use, and an even larger number of hybrids. Some of the development methods include the traditional waterfall approach, various flavors of agile, the Rational Unified Process (RUP), the Team Software Process (TSP), V-Model development, Microsoft Solutions Framework, the Structured Analysis and Design Technique (SADT), Evolutionary Development (EVO), Extreme Programming (XP), PRINCE2, Merise, model-based development, and many more.

The data itself comes from studies with a number of clients who collectively use a wide variety of software methods. The predictions use the author’s proprietary Software Risk Master™ tool which can model all 55 software development methodologies.

Introduction

The existence of more than 55 software development methods, each with loyal adherents, is a strong message that none of the 55 is capable of handling all sizes and kinds of software applications.

Some methods work best for small applications and small teams; others work well for large systems and large teams; some work well for complex embedded applications; some work well for high-speed web development; some work well for high-security military applications. How is it possible to select the best methodology for specific projects? Is one methodology enough, or should companies utilize several based on the kinds of projects they need to develop?

Unfortunately, due to the lack of quantified data comparing methodologies, selecting a software development method is more like joining a cult than making a technical decision. Many companies do not even attempt to evaluate alternative methods, but merely adopt the most popular method of the day, whether or not it is suitable for the kinds of software they build.

When software methodologies are evaluated, the results bring to mind the ancient Buddhist parable of the blind men and the elephant: depending on whether one measures speed, quality, or total cost of ownership, a different method ranks highest.

(In the original parable the blind man who touched the trunk thought an elephant was like a snake. The blind man who touched the side thought an elephant was like a wall. The blind man who touched the tusk thought an elephant was like a spear; the blind man who touched the tail thought an elephant was like a rope.)

Combinations of Factors that Affect Software Projects

An ideal solution would be to evaluate a variety of methods across a variety of sizes and types of software. However that is difficult because of combinatorial complexity. Let us consider the major factors that are known to have an impact on software project results:

Factor                                                Number of Possibilities
----------------------------------------------------  -----------------------
Methodologies                                                              55
Programming languages                                                      50
Nature, class, and type of application                                     15
Capability Maturity Model levels                                            5
Team experience (low, average, high)                                        3
Size plateau of application (small, medium, large)                          3
Application complexity (low, average, high)                                 3
----------------------------------------------------  -----------------------
Combinations of factors                                             5,568,750

Since the number of combinations is far too large to consider every one, this article will make simplifying assumptions in order to focus primarily on the methodologies, and not on all of the other factors.
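
As a quick arithmetic check, multiplying the possibilities in the table above reproduces the combination count. The minimal Python sketch below uses only the factor counts from the table:

    # Multiply the number of possibilities for each factor in the table above.
    factor_possibilities = [55, 50, 15, 5, 3, 3, 3]

    combinations = 1
    for count in factor_possibilities:
        combinations *= count

    print(f"{combinations:,}")  # 5,568,750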

In this article the basic assumptions will be these:

Application size           1000 function points
Programming languages      C and C++
Logical code statements    75,000
Requirements creep         Omitted
Deferred features          Omitted
Reusable features          Omitted
Team experience            Average
Complexity                 Average
Cost per staff month       $7,500
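
The assumptions above imply a fixed "backfiring" ratio of 75 logical code statements per function point for C and C++ (75,000 statements for 1000 function points). The sketch below applies that conversion; treating the ratio as a constant across application sizes is a simplifying assumption for illustration, not a claim from the article:

    # Convert size in function points to logical C/C++ code statements,
    # using the ratio implied by the assumptions above (an illustrative
    # simplification; real conversion ratios vary by language and project).
    STATEMENTS_PER_FUNCTION_POINT = 75  # 75,000 statements / 1000 function points

    def size_in_statements(function_points: int) -> int:
        return function_points * STATEMENTS_PER_FUNCTION_POINT

    print(size_in_statements(1000))  # 75000 logical code statements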

By holding size, languages, complexity, and team experience at constant levels it is easier to examine the impacts of the methodologies themselves. There are unfortunately too many methodologies to consider all of them, so a subset of 10 methods will be shown, all of which are fairly widely used in the United States.

(Note that the actual applications being compared ranged from about 800 function points in size to 1,300. The author has a proprietary method for mathematically adjusting application sizes to a fixed size in order to facilitate side-by-side comparisons.)
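
Since the adjustment method itself is proprietary, the sketch below is only a naive stand-in that scales measured effort linearly to the fixed 1000 function point size. The real adjustment is certainly more sophisticated (software effort does not scale linearly with size), so treat this purely as an illustration of the idea:

    # Naive linear normalization of measured effort to a fixed size.
    # The author's actual adjustment method is proprietary; this linear
    # version is only an illustrative stand-in.
    def normalize_effort(effort_months: float, measured_fp: float,
                         target_fp: float = 1000.0) -> float:
        return effort_months * (target_fp / measured_fp)

    # Hypothetical example: a measured 1,300 function point project that
    # took 110 staff months scales to about 84.6 months at 1000 FP.
    print(round(normalize_effort(110.0, 1300.0), 1))  # 84.6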

Methodologies in Alphabetic Order

  1. Agile with scrum
  2. CMMI 1 with waterfall
  3. CMMI 3 with iterative
  4. CMMI 5 with spiral
  5. Extreme programming (XP)
  6. Object-oriented development
  7. Pair programming with iterative
  8. Proofs of correctness with waterfall
  9. Rational unified process (RUP)
  10. Team Software Process (TSP)

Since not every reader may be familiar with every method, here are short descriptions of the ones in the article:

Agile with scrum: The term “agile” is ambiguous and there are many flavors of agile. For this article the term is used for projects that more or less follow the 2001 Agile Manifesto, have embedded users to provide requirements, use user stories, divide projects into discrete sprints that are developed individually, and use the scrum concept and daily status meetings. Minimizing paperwork and accelerating development speed are top goals of agile.

CMMI 1 with waterfall: The Capability Maturity Model Integration™ (CMMI) of the Software Engineering Institute is a well-known method for evaluating the sophistication of software development. CMMI 1 is the bottom “initial” level of the five CMMI levels and implies fairly chaotic development. The term “waterfall” refers to the traditional practice of sequential development, starting with requirements and not beginning the next step until the current step is finished.

CMMI 3 with iterative: The third level of the CMMI is called “defined” and refers to a reasonably smooth and well understood set of development steps. The term “iterative” is older than “agile” but has a similar meaning of dividing applications into separate pieces that can be constructed individually.

CMMI 5 with spiral: The 5th level of the CMMI is the top and is called “optimizing.” Groups that achieve this level are quite sophisticated and seldom make serious mistakes. The spiral model of software development was pioneered by Dr. Barry Boehm. It features ideas that also occur in agile, such as individual segments that can be developed separately. The spiral segments are often larger than agile segments, and are preceded by prototypes.

Extreme Programming (XP): This method falls under the umbrella of agile development but has some unique features. The most notable is delaying coding until test cases have been developed first. The XP method also uses reviews or inspections. Pair programming is sometimes used with XP but not always, so it is treated here as a special case. Quality of the final software is a major goal of the XP method.

Object-oriented development (OO): The OO method is one of the oldest in this article and has had many successes. It has also led to the creation of special languages such as Objective-C. In this article OO analysis and design with use cases are used; the C++ language is also an OO language. OO analysis and design are somewhat different from conventional methods, so a learning curve is needed.

Pair programming: The concept of pair programming is often part of the agile approach, but is not limited to it. In this article pair programming is used with iterative development. The basic idea of pair programming is that two people take turns coding. While one is coding the other is watching and making suggestions. Sometimes the pair share only one computer or workstation between them.

Proofs of correctness: The concept behind proofs of correctness is applying formal mathematical proofs to the algorithms that will be included in a software application. The algorithms must be expressed formally so that they can be proved, and the person performing the proof must have enough mathematical skill to handle rather complex equations and algorithms.

Rational Unified Process (RUP): The RUP methodology was originated by the Rational Corporation which was acquired by IBM in 2003 so it is now an IBM methodology. The RUP method includes aspects of both iterative and object-oriented development. Since RUP is now owned by IBM there are numerous tools that support the method. Use Cases and visual representations are standard for RUP applications, but the author’s clients usually include other methods as well such as decision tables.

Team Software Process (TSP): The TSP method was developed by the late Watts Humphrey, who was IBM’s director of programming and later created the assessment method used by the Software Engineering Institute (SEI) Capability Maturity Model. TSP is very focused on software quality: all bugs are recorded, inspections are used, and high quality is the main goal on the grounds that bugs slow down development. The TSP method has some unusual aspects, such as self-governing teams and a coach who serves in the role of manager. TSP is now endorsed and supported by the SEI.

Three Kinds of Methodology Evaluation and 10 Metrics

Even with the number of methods limited to 10 there are still a great many results that need to be evaluated. However, from working with hundreds of clients, the topics that have the greatest importance to development managers and higher executives are these:

Speed-related metrics

  1. Development schedules
  2. Development staffing
  3. Development effort
  4. Development costs

Quality-related metrics

  1. Defect potentials
  2. Defect removal efficiency (DRE)
  3. Delivered defects
  4. High-severity defects

Economic-related metrics

  1. Total Cost of Ownership (TCO)
  2. Cost of Quality (COQ)

Even with only 10 methodologies and 10 topics to display, that is still quite a significant amount of information.

This article will attempt to compare methodologies in three major categories:

  1. Speed: Development schedules, effort, and costs
  2. Quality: Software quality in terms of delivered defects
  3. Economics: Total Cost of Ownership (TCO) and Cost of Quality (COQ)

Note that the technique used in this article of holding application size constant at 1000 function points means that the data cannot be safely used to determine the best methods for large systems of 10,000 function points or massive systems of 100,000 function points. However applications in the 1000 function point size range are very common, and are large enough to show comparative results in a fairly useful way.

Some of the data in this article was prepared using the author’s Software Risk Master ™ (SRM) tool, which is designed to perform side-by-side comparisons of any development methodology, any CMMI level, any complexity level, and any level of team experience. Some of the tables are based on SRM outputs, although derived from earlier measured applications.

Speed: Comparing Methodologies for Development Schedules and Costs

The first comparison of methodologies concerns initial development speeds, costs, and short-term issues. Among the author’s clients the most frequent request when estimating software projects is to predict the development schedule. Because schedules are viewed as critical to a majority of software managers and executives, table 1 is sorted by the speed of development.

Table 1: Software Schedules, Staff, Effort, Productivity

    Methodologies         Schedule    Staff    Effort      FP per    Development
                          (Months)             (Months)    Month     Cost
    ------------------    --------    -----    --------    ------    -----------
     1  Extreme (XP)         11.78        7          84     11.89       $630,860
     2  Agile/scrum          11.82        7          84     11.85       $633,043
     3  TSP                  12.02        7          86     11.64       $644,070
     4  CMMI 5/spiral        12.45        7          83     12.05       $622,257
     5  OO                   12.78        8         107      9.31       $805,156
     6  RUP                  13.11        8         101      9.58       $756,157
     7  Pair/iterative       13.15       12         155      9.21     $1,160,492
     8  CMMI 3/iterative     13.34        8         107      9.37       $800,113
     9  Proofs/waterfall     13.71       12         161      6.21     $1,207,500
    10  CMMI 1/waterfall     15.85       10         158      6.51     $1,188,870
    ------------------    --------    -----    --------    ------    -----------
        Average              13.00      8.6       112.6      9.762      $844,852

As can be seen, the software development methods that yield the shortest schedules for applications of 1000 function points are XP and Agile, with TSP coming in third.
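
The columns of Table 1 are linked by simple identities: effort is roughly staff times schedule, productivity is the 1000 function point size divided by effort, and development cost is effort times the assumed $7,500 per staff month. The sketch below recomputes the derived columns; because the published effort figures are rounded to whole staff months, the recomputed values differ slightly from the table:

    # Recompute Table 1's derived columns from staff months of effort,
    # using the article's assumptions: 1000 function points and $7,500
    # per staff month.
    SIZE_FP = 1000
    COST_PER_STAFF_MONTH = 7500

    def derived_columns(effort_months: float) -> tuple[float, float]:
        fp_per_month = SIZE_FP / effort_months
        development_cost = effort_months * COST_PER_STAFF_MONTH
        return fp_per_month, development_cost

    # Worked example: XP, listed at 84 staff months of effort.
    fp_per_month, cost = derived_columns(84)
    print(f"{fp_per_month:.2f} FP per month, ${cost:,.0f}")
    # ~11.90 FP/month and $630,000, versus 11.89 and $630,860 in Table 1;
    # the gap comes from rounding effort to whole months.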

Quality: Comparing Defect Potentials, Defect Removal, and Delivered Defects

The next topic of interest when comparing methodologies is that of quality. The article considers four aspects of software quality: defect potentials, defect removal efficiency, delivered defects, and high-severity defects.

The phrase “defect potential” refers to the sum of defects found in requirements, design, source code, documents, and “bad fixes.” A bad fix is a new defect accidentally injected during an attempt to repair a previous defect. (About 7% of attempts to fix bugs include new bugs.)

The phrase “defect removal efficiency” refers to the combined efficiency levels of inspections, static analysis, and testing. In this article six kinds of testing were included: 1) unit test; 2) function test; 3) regression test; 4) performance test; 5) system test; 6) acceptance test.

(There are about 40 total kinds of testing, but the specialized forms of testing are outside the scope of this article.)
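
Putting the two definitions together, delivered defects are simply the defect potential multiplied by the fraction of defects that removal activities miss. A minimal sketch, using the TSP row of Table 2 below as a worked check:

    # Delivered defects = defect potential * (1 - defect removal efficiency).
    def delivered_defects(defect_potential: int, removal_efficiency: float) -> int:
        return round(defect_potential * (1.0 - removal_efficiency))

    # Worked check against Table 2: TSP, with a defect potential of 2,700
    # and a DRE of 96.79%.
    print(delivered_defects(2700, 0.9679))  # 87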

When quality is evaluated, readers can see why the parable of the blind men and the elephant was cited earlier:

Table 2: Software Defect Potentials, Removal, and Delivery

    Methodologies         Defect       Defect     Defects      High-Severity
                          Potential    Removal    Delivered    Defects
    ------------------    ---------    -------    ---------    -------------
     1  TSP                   2,700     96.79%           87               16
     2  CMMI 5/spiral         3,000     95.95%          122               22
     3  RUP                   3,900     95.07%          192               36
     4  Extreme (XP)          4,500     93.36%          299               55
     5  OO                    4,950     93.74%          310               57
     6  Pair/iterative        4,700     92.93%          332               61
     7  Proofs/waterfall      4,650     92.21%          362               67
     8  Agile/scrum           4,800     92.30%          370               68
     9  CMMI 3/iterative      4,500     91.18%          397               73
    10  CMMI 1/waterfall      6,000     78.76%        1,274              236
    ------------------    ---------    -------    ---------    -------------
        Average               4,370     92.23%          374               69

When the focus of the evaluation turns to quality rather than speed, TSP, CMMI 5, and RUP are on top, followed by XP. Agile is not strong on quality, so it ranks only 8th out of 10. Agile's lack of quality measurement and its failure to use inspections will also have an impact in the next comparison.

Economics: Total Cost of Ownership (TCO) and Cost of Quality (COQ)

Some of the newer methods such as Agile and XP have not been in use long enough to show really long-range findings over 10 years or more. In this article TCO is limited to only five years of usage, because there is almost no data older than that for Agile.

The figures for TCO include development, five years of enhancement, five years of maintenance or defect repairs, and five years of customer support. While the Software Risk Master tool predicts those values separately, in this article they are all combined together into a single figure.

The figures for COQ consolidate all direct costs for finding and fixing bugs from the start of requirements through five years of customer usage.

Table 3: Total Cost of Ownership (TCO) and Cost of Quality (COQ)

    Methodologies          TCO           COQ           COQ Percent of TCO
    ------------------     ----------    ----------    ------------------
     1  TSP                $1,026,660      $298,699                29.09%
     2  CMMI 5/spiral      $1,034,300      $377,880                36.53%
     3  Extreme (XP)       $1,318,539      $627,106                47.56%
     4  RUP                $1,360,857      $506,199                37.20%
     5  Agile/scrum        $1,467,957      $774,142                52.74%
     6  OO                 $1,617,839      $735,388                45.45%
     7  CMMI 3/iterative   $1,748,043      $925,929                52.97%
     8  Pair/iterative     $2,107,861      $756,467                35.89%
     9  Proofs/waterfall   $2,216,167      $863,929                38.98%
    10  CMMI 1/waterfall   $3,944,159    $2,804,224                71.10%
    ------------------     ----------    ----------    ------------------
        Average            $1,784,238      $866,996                44.75%

Because applications developed using the TSP, CMMI 5, and RUP methodologies are deployed with low numbers of defects, it is fairly easy to enhance, maintain, and support them. Therefore the five-year total cost of ownership clearly favors the quality-related methods over the speed-related methods.

Agile is not bad, but with a COQ above 50% of TCO, Agile needs to take quality more seriously up front.

The COQ percentages reveal a chronic problem for software applications: software has so many bugs that finding and fixing them is the major cost of both development and total cost of ownership.
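
For reference, the percentage column of Table 3 is simply COQ divided by TCO, i.e. the share of five-year ownership cost that goes to finding and fixing bugs. A minimal sketch, using the TSP row as a worked check:

    # Cost of Quality as a percentage of five-year Total Cost of Ownership.
    def coq_percent_of_tco(tco_dollars: float, coq_dollars: float) -> float:
        return 100.0 * coq_dollars / tco_dollars

    # Worked check against Table 3: TSP.
    print(f"{coq_percent_of_tco(1026660, 298699):.2f}%")  # 29.09%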

The Methods that Achieve Top Rankings in all Categories

To continue with the metaphor of the blind men and the elephant, here are the top methods in each of the 10 categories:

Table 4: Top Methods in 10 Categories

     1. Development schedules               Extreme programming (XP)
     2. Development staffing                Agile/scrum (tied)
     3. Development effort                  CMMI 5/spiral
     4. Development costs                   CMMI 5/spiral
     5. Defect potentials                   TSP
     6. Defect removal efficiency (DRE)     TSP
     7. Delivered defects                   TSP
     8. High-severity defects               TSP
     9. Total Cost of Ownership (TCO)       TSP
    10. Cost of Quality (COQ)               TSP

The phrase “be careful of what you wish for because you might get it” seems to be appropriate for these methodology comparisons. Methods such as Agile that focus on speed are very quick. Methods such as TSP, RUP, and CMMI 5 that focus on quality have very few defects.

Why some Methods Compare Poorly for Speed, Quality, and Economics

As can be seen, the various methodologies fluctuate in their effectiveness across the speed, quality, and economic dimensions. However, three methodologies were near the bottom in all three evaluations: the CMMI 1/waterfall combination, which was in last place, the proofs of correctness method, and the pair programming method. It is useful to explain the probable reasons for the low placements of these three.

Waterfall and CMMI 1

It is no secret that for more than 50 years roughly 35% of software projects have been cancelled due to poor quality or cost overruns. Most of these used waterfall development and either were at CMMI level 1 or did not use the CMMI at all.

For waterfall projects at the 1000 function point size used in this example, about 25.71% of time and effort is devoted to finding and fixing bugs, and about 50% of projects run late or exceed their budgets. These are not very large applications, but with waterfall they are often troublesome.

It should be mentioned that the primary motivation of most of the newer methods is to overcome the historical problems associated with waterfall development.

There have been a few waterfall successes, but they tend to be projects carried out by expert teams.

Pair Programming

Unfortunately, pair programming is an expensive mistake. The idea of two people taking turns coding while one watches is appealing in theory but weak in practice. The evidence offered for pair programming is flawed: there are assertions that pairs create software with fewer bugs than individual programmers, but in those comparisons the individuals were using basic waterfall methods. Capable individual programmers who use static analysis and participate in formal code inspections of their work produce better code for about half the cost of pair programming.

Further, there are some 90 different software occupations. Why double up programmers and nobody else? If pair programming worked as asserted, then architects, business analysts, testers, quality assurance staff, and everyone else might be doubled as well. Why not use two project managers instead of one?

The usage of pair programming is symptomatic of very poor measurement practices, and also of a failure to understand the distribution of talent across large populations. If a company were building a large system with 500 programmers, it would not be possible to bring in or hire 500 more to pair up with them.

Proofs of Correctness

The idea of proofs of correctness is an academic construct and is more theoretical than real. To prove the algorithms in a software application, they must be expressed formally, and the personnel doing the proofs need considerable mathematical sophistication. Even then, errors occur in many proofs.

In the 1000 function point sample used in this article there were about 690 specific requirements that needed to be proved. This is why even small applications that use proofs take a long time: the proofs themselves are time consuming.

It would be essentially impossible to use proofs of correctness on an application of 10,000 function points: there would be about 7,407 specific algorithms to prove, which might take several years, during which the requirements would have changed so much that the earlier proofs might no longer apply.

Matching Software Methodologies with Projects

Since no method is top-ranked in every category, readers may well ask how to select methods that match the needs of their projects.

For smaller applications of 1000 function points or less, where speed of delivery is the most critical parameter, XP, Agile, and TSP are all very good choices.

For complex applications that might need FDA approval, operate weapons systems, or control sensitive financial data, high quality levels are mandatory. In this class TSP, CMMI 5, and RUP are the top choices, with XP as another possibility. Agile has been used for such applications, but needs to be bent and twisted so much that it is no longer very agile. Agile is not strong on quality.

For applications that might last for more than 10 years, or which require very frequent enhancements and therefore need well-designed interior structures, TSP would be the top choice, with CMMI 5, RUP, and XP as other possibilities. Agile has not shown much success with long-term maintenance and enhancement.

Summary and Conclusions

As this article is written the software industry has about 55 different development methodologies. This is too large a number to compare in a short article.

For the 10 methods compared here, most have had some successes and most have had a few failures too.

Overall the Agile family and the methods that emphasize speed have achieved their goal, and they are fairly quick.

The methods that emphasize quality such as TSP, RUP, and CMMI 5 have also achieved their goals, and deliver very few defects.

No single method appears to be a universal panacea that can be successful on every size and kind of software application.

This article attempts to show the methods that give the best fit to three important factors:

  1. speed;
  2. quality;
  3. long-range economic value.

About the Author

Capers Jones is currently vice president and chief technology officer of Namcook Analytics LLC, a company that designs leading-edge risk, cost, and quality estimation and measurement tools. He was president of Capers Jones & Associates LLC until 2012, a software researcher and manager at IBM for 12 years, and Assistant Director of Programming at the ITT Corporation, where he started their software measurement program. Capers is a well-known author and international public speaker. Some of his books are The Economics of Software Quality (with Olivier Bonsignour), Software Engineering Best Practices, Applied Software Measurement, and Estimating Software Costs. He is currently working on his 15th book, The Technical and Social History of Software Engineering, to be published in the autumn of 2013.

Capers and his colleagues have collected historical data that is used for judging the effectiveness of software process improvement methods and for calibrating software estimation accuracy; it is cited in software litigation in cases where quality, productivity, and schedules are part of the proceedings. He has also worked as an expert witness in 15 lawsuits involving breach of contract and software taxation issues.

Community comments

  • TSP promo

    by James Michelsen,

    I have never heard about this methodology before.

    Sounds like a promotion of TSP, doesn't it?

  • Pair Programming . . .

    by eswar vandanapu,

    Reading the pair programming topic in this article gives me the impression that pair programming is simply bad. There are several reasons why pair programming in software development can't simply be compared with other professions. For a difficult surgery there are often 2 doctors in the OR, and yet we don't see artists pairing up to create paired art.

    Pair programming has some key ideas that are good in practice, even though pairing 100% of the time is for sure a waste of time and money. While working on large projects with more than 300 people I have employed what I call selective pairing, where you pair people for a purpose. Mostly I let engineers decide when they want to pair up, and it mostly worked out very well. And sometimes I pair up people to work together for a few hours on building up systems, or when doing a critical piece of functionality, and it indeed reduced the number of possible bugs in that area.

    In short I believe selective pairing with a purpose is a very good practice.

  • Agile misunderstanding

    by Carlos Granitto,

    "Minimizing paperwork and accelerating development speeds are top goals of agile."

    This statement is completely inexact. A statement of this kind shows a lack of knowledge about the matter. If I were new to Agile I might believe it, but a couple of months of working with agile would be enough to learn something quite different.

    I hope the report doesn't compare the other methods in the same way.

  • Re: TSP promo

    by Antonio Nascimento,

    For sure!!!

  • XP without pair programming is not XP.

    by David Hillier,

    The author mentions that pair programming is optional for XP. I disagree, and this would not be advocated in any XP book. The practices all support each other.
    This makes me wonder how accurately any of the methodologies were applied.
    For example, if the XP developers applied TDD - why were there any bugs? If they had applied it perfectly there would be none after delivery. I know it isn't always possible. How did the author account for these inconsistencies?
    As an experienced developer and team lead, I know how difficult it is to consistently apply any method.

  • Re: TSP promo

    by Ben Linders,

    For those who haven't heard of the TSP before: the Team Software Process (TSP) was developed by Watts Humphrey in the late 1990s, and is further developed and supported by the Software Engineering Institute (SEI).

    See: en.wikipedia.org/wiki/Team_Software_Process and www.sei.cmu.edu/tsp/

  • Re: XP without pair programming is not XP.

    by James Michelsen,

    Absolutely agree with David.
    And I also have a feeling all those numbers are pulled out of thin air.

    Definitely hidden TSP promotion...

    Just curious why lots of companies practice Scrum, XP, and pair programming and consider RUP ineffective :).
    Definitely something wrong with this world :)

  • Unknown data fed through a proprietary tool...

    by David Allsopp,

    "The data itself comes from studies with a number of clients [...] The predictions use the author’s proprietary Software Risk Master™ tool..."

    Unless the author gives details of the data (how many studies? how were they conducted? what data were gathered and how?) and the tool used to analyse them (what algorithms, what assumptions?), we can draw no conclusions whatsoever from the output of the tool as reported in this article.

    Articles that analyze data to produce predictions should be scientific, and therefore should follow the basic scientific principle that the experiment or analysis must be reproducible.

    Furthermore, the categories of methodologies make little sense - many of them are orthogonal aspects (OO development, proof of correctness, pair programming). Why are specific CMMI levels matched with specific development lifecycles? The descriptions of the methodologies do not inspire confidence; TDD is not unique to XP, the "top goals" of agile are highly debatable, etc.

  • Re: TSP promo

    by Roberto Simoni,

    ... and let me say that if it is "developed by IBM" I would like to have IBM (at least in Italy) use it, instead of that "six waterfall pack" they bring me every time!

  • Beyond bizarre

    by Tim Ottinger,

    This is beyond comparing apples to oranges. It compares apple cores, apple skins, fish, and water.

    I'm okay with comparing scrum, XP, and waterfall/spiral.

    But you can't compare OO because it's not a methodology. It's a design philosophy used in scrum, xp, waterfall and spiral projects. Same with pair programming. Pair programming is a technique, often used in OO projects under any of the four models.

    I can't speak to TSP.

    Also, pair programming has been shown NOT to be twice as expensive, and is generally cheaper when you count defect reduction -- which brings us to another odd fish: all the OO, agile, pair-programming teams I know also use static analysis tools AND TDD to reduce bugs.

    So it's like saying that animals which breathe air and ingest vitamins orally work better than those which aspirate and respirate, and that those which actually contain DNA don't work as well as those which grow armpit hair. It's not a meaningful division, and so I can't believe anything else in the report.

  • surprised to see such bad content at infoQ

    by Pimentel Pedro,

    "Agile focus on speed"

    "pair programming is an expensive mistake. The idea of letting two people take turns programming while one watched is a theoretical idea but weak in practice"

    I cried!

  • Re: surprised to see such bad content at infoQ

    by Capers Jones,

    I have data on around 15,000 projects. Pair programming needs to be compared against:

    Single programmers using static analysis and inspections
    Single programmers using the same methods as the pair

    Also the pairs themselves need to be evaluated:

    Two top performers
    Two average performers
    Two marginal performers
    Unequal pairs with one better than the other

    And compared against

    Individual top performers
    Individual average performers
    Individual marginal performers

    Most of the literature on pairs does not compare the results against single programmers who use static analysis and inspections.

    Present your data instead of your tears and I'll be glad to consider it.

  • Re: Beyond bizarre

    by Capers Jones,

    I've been commissioned by clients to compare OO programming languages and various OO design methods since the 1990s. If clients say that they have used OO languages and methods, I can measure the productivity and quality of the delivered software. I don't understand why you can't.

    Your statement about pairs being generally cheaper when you count defect reduction is just an assertion.

    Out of curiosity there are a total of 116 occupation groups found in large software organizations. Do you suggest:

    Paired architects
    Paired project managers
    Paired business analysts
    Paired testers
    Paired quality assurance analysts
    Paired technical writers
    Paired data base analysts
    Paired configuration control specialists
    Paired integration specialists

    Do you suggest that large applications with 1000 development personnel such as big operating systems and defense systems would be better with 2000 personnel?

    In general pairs are more expensive than individuals. If the pair does not use inspections and static analysis and the individual does, then quality will be better for the individual. A top individual is better than an average pair. However a top pair is better than an average individual.

    Regards,
    Capers Jones

  • Re: TSP promo

    by Anthony DaSilva Jr,

    Yes, it does. I like Capers, but he (like everybody) has personal biases and underlying assumptions that may or may not be known to him. The numbers look impressive and the logic seems impeccable, but always be leery of what experts advise you. Look at the track record of financial industry experts at giving "advice". But of course, this is different :)

  • Re: Beyond bizarre

    by Anthony DaSilva Jr,

    Hi Capers,

    It's an impressive fact that you have data on 15,000 projects, and an impressive fact that you have been commissioned since the 1990s by lots of important clients to calculate metrics based on proprietary methods, which are in turn based on at least some unexposed and possibly unknown assumptions. Note that I'm not saying you're "wrong", but you may want to consider scaling back the condescending "I'm the papally infallible expert" tone?

    I thought your article was very thoughtful and interesting, but I wouldn't make any decisions based on it alone. Thanks for writing and sharing it.

    Just my 2 cents

  • Lean-Agile misunderstood

    by Johnny FromCanada,

    The author does not seem to have a firm grasp of Lean-Agile. For example, comparing projects (regardless of how many) is useless, as is measuring/comparing individual performance, unless the contexts are all of a similar simple/complicated nature, but not complex/creative. In the latter, a well-intentioned science experiment can easily be obscured by mountains of data, a false sense of precision/accuracy, compounded errors, and confounded variates.

    "Methodologies" can reasonably be compared based on the number of constraints they impose on the system (a point made quite elegantly in www.infoq.com/minibooks/kanban-scrum-minibook). As such, they fit on a spectrum ranging from highly constraining (say RUP) to highly unconstraining (say Kanban/Lean). The appropriate choice depends on how volatile your problem space is; how malleable your solution space is; and the degree to which your business sponsors are willing to tolerate variations in time, scope, cost, quality, and risk.

  • Seems to be biased...

    by Vinod Vijay,

    Hi Capers,

    With due respect to your experience analyzing this data and all that you have mentioned in answers to some of the comments...
    As a researcher in Lean/Agile and Six Sigma and as a software professional, this article didn't make any 'real' sense to me.

    1. You have used a software tool to estimate the better methodology with a set of constraints, and some of the key constraints are omitted.
    These constraints (the omitted ones) are normally bound to happen in the practical world.
    It is not a surprise that you prefer other methodologies over agile, as agile is better equipped to handle the omitted scenarios (e.g. requirements creep, deferred features, etc.).

    2. Pair programming - it's not a waste. It should be based on the pull of the program/project.
    If an architect has some serious concerns or the project manager has some bottlenecks, don't we have other experts of the same nature jumping in?
    If you believe in employee self-empowerment, then this is something you need to accept with a pinch of salt and try to practice.
    P.S.: Nowadays we have companies with co-CEO models.
    I am not advocating that we need double the number of people, but we need leeway for experts to interact on project delivery on an ad hoc basis.

    3. In the book on TSP, Watts Humphrey talks about a confined set of priorities for each set of individuals involved in software teams.
    He also introduced another strategy, the PSP (Personal Software Process).
    In both of these there are implicit interactions between team members, and the project delivery is taken as a team deliverable.

    I am neither advocating that Agile is the panacea for all issues in software development nor that all other methodologies are crap.

    But teams should be diligent enough to tailor the processes to their requirements.

    The article gives a strong feeling that you have had some real bad experiences with Agile or non-TSP processes.

  • Re: TSP promo

    by Girish Seshagiri,

    Here is a real TSP promo. My company is among the earliest adopters of the TSP. For the past 10 years, we have been delivering substantially defect-free software within 10% of committed schedule and 4% of committed cost. We offer performance guarantees, including the least time spent by customers in acceptance testing and a lifetime warranty against defects found in production use. We recently modernized one of the largest databases in government. We delivered more than 600,000 lines of code four weeks ahead of schedule. The customer found zero cybersecurity vulnerabilities in the code in two independent penetration tests. We had zero voluntary staff turnover during the project, which lasted more than two years.

  • Nice work but flawed

    by Eric Weimer,

    For example, the sentence "The existence of more than 55 software development methods, each with loyal adherents, is a strong message that none of the 55 is capable of handling all sizes and kinds of software applications."

    There are many reasons why 55 methods may have existed in your study. For example, every programmer has preferences, and they often prefer what they last used.

    And selection of a method is often the result of many factors. It is hard to see how this conclusion follows.

    Other issues:

  • Scrum is a PM method; XP is a development method. Are we comparing apples and oranges?

  • Scrum assumes the use of an industry-best engineering method, but your article does not mention which method is used with Scrum. There is a big difference between XP and old-style coding, for example.

  • Scrum experts I know concur the majority of companies doing Scrum do not adhere to it (some call this "lip-service" Scrum). How do you distinguish between these?

  • On a personal note (I am a long-time consultant), I have found CMMI to be more of an obstacle. For example, the traceability it demands prevents developers from fixing obvious mistakes they happen to come across without a lot of process, and the result tends to be that they ignore them, which increases technical debt.

  • When writing an article full of conclusions, it is best to make all the facts and data available. Otherwise the conclusions are suspect.

  • Despite the constructive criticism, I appreciate the time you invested.
