Probabilistic Project Sizing Using Randomized Branch Sampling (RBS)

In order to forecast the time and budget needed to deliver a new software product, we need to be able to quantify “what” we are building, since the resources required are related to “how much” software is built. That quantification is referred to as “sizing”. Software sizing is different from delivery time estimation: sizing estimates the probable size of a piece of software, while delivery time estimation forecasts the time needed to build it. The relationship between the size of a piece of software and the time needed to deliver it is referred to as productivity. Since estimates of themes and epics will be more uncertain than estimates of the more specific, smaller user stories, sizing requires that all user stories in the product backlog be analyzed and estimated (Cohn, 2005). Then we sum all story sizes and arrive at the total size of the project.

Analyzing all the stories in a project requires significant time, and a great part of this effort can easily turn out to be pure waste: priorities of features change, and some features will never be developed at all. The question many of us in software development try to answer is: how can we estimate the size of a project without prior identification and analysis of every single user story? The answer is needed for portfolio-related decisions, quotations on prospective projects, etc.

If you don't want to analyze all user stories in your project in order to estimate its size, then Randomized Branch Sampling (RBS) is an approach you can use.

How big is our project?

First we have to decide in what units to measure project size. In the past, the “amount of product” produced by a software development project was perceived as the number of source lines of code written. Later on, Function Point Analysis (FPA), which measures the size of the software deliverable from a user’s perspective, was introduced. FPA has even been codified in several ISO standards.

However, FPA is not used in Agile projects, where T-shirt sizing and story points are the favorite ways to estimate how big a user story is. The primary advantage of using T-shirt sizes is the ease of becoming accustomed to sizing. Their primary disadvantage is that they are not additive: we cannot say the size of a product is 3 mediums (M), 4 larges (L) and two smalls (S) (Cohn, Estimating with Tee Shirt Sizes, 2013).

The most popular sizing measure is the number of story points. Story points are helpful because they allow team members who perform at different speeds to communicate and estimate collaboratively (Cohn, Don’t Equate Story Points to Hours, 2014). They are usually represented with Fibonacci numbers or an exponential scale. Story points are not about how complex a product feature is or how hard it is for the development team to deliver it. Story points are about the effort required to develop a user story (Cohn, It’s Effort, Not Complexity, 2010). Effort is defined as the person-days or hours required to develop a feature. Story points should be an estimate of how long it will take to develop a user story; story points represent time (Cohn, Story Points Are Still About Effort, 2014). But software sizing is different from software effort estimation. That makes it difficult to use story points for sizing a product unless we change the definition of a story point and equate it with complexity.

One dimension of complexity is the number of scenarios per story. A scenario is an acceptance test customers can understand, written in their ordinary business language (North, 2009). It is a formal test conducted to determine whether or not the system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. A user story can have one or more scenarios. We can size a project as the total number of scenarios to be developed. If we slice stories down to needing only a single acceptance test, then the number of user stories will equal the number of scenarios (Killick, 2014).

No matter which of function points, T-shirt sizes, story points or scenarios we decide to use, sizing requires that all user stories in the product backlog are analyzed and estimated. Then we sum up all story sizes and arrive at the total size of the project. This practice is hard to follow because it is time consuming and requires quite significant analysis. Also, new features are added, priorities of features can change, and there are many cases when a feature will not be developed at all. As a result, a great part of this effort will probably be pure waste.

How can we estimate the total number of story points for a project without prior identification, analysis and sizing of every single user story? If we look for an analogy, we can change the question into: how can we estimate the number of fruit on a tree without counting all of the fruit one by one? It turns out there is such a technique, called Randomized Branch Sampling, first proposed by Jessen (Jessen, 1955).

Randomized Branch Sampling (RBS)

RBS was designed to efficiently estimate the total number of fruit found in the canopy of a tree while only having to count the fruit on selected branches. With RBS, branches are selected from the tree by creating a pathway which starts at the base of the trunk and travels upwards. In order to apply RBS to sizing a software project, we will represent a product backlog as a branching system as follows:

The product backlog is the work that needs to be accomplished to deliver a product with specified features and functions. The features and functions are called requirements and are presented and managed using vehicles such as epics and user stories. Epics are the highest-level requirements artifact. Epics are not implemented directly but are broken into user stories, which are the work items used for planning purposes. Epics are not directly testable. Instead, they are tested by the acceptance tests, or scenarios, associated with the user stories that implement them. Even for a quality-related requirement such as “the system should scale horizontally” we need to have a user story. Each one of the user stories should represent independent customer value and could be delivered in any order, following the INVEST mnemonic.

Fig. 1

In Fig. 1 we have a fictitious product backlog that splits into three epics: A, B and C. Epic A splits into user stories 1 and 2, epic B splits into user stories 3 and 4, and epic C splits into user stories 5, 6 and 7.

In RBS a Horvitz-Thompson estimator is used to derive an unbiased estimate of the total size of the product backlog by dividing the size (xi) of a selected user story i by the unconditional probability (Qi) with which that particular user story was selected.

The unconditional selection probability (Qi) is obtained by multiplying the conditional selection probability of the user story by the conditional selection probabilities of the product backlog and of the epic the story belongs to. Since there is only one product backlog, its conditional selection probability is 1.
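Written out (with notation chosen here purely for illustration), a single sampled path through the backlog yields the estimate

$$\hat{X} = \frac{x_i}{Q_i}, \qquad Q_i = q_{\text{backlog}} \times q_{\text{epic}} \times q_{\text{story}} = 1 \times q_{\text{epic}} \times q_{\text{story}}$$

and when several paths are sampled, the individual estimates are averaged.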

There are three alternative ways of determining the conditional selection probabilities:

  • PE (probabilities equal). In this scheme we have to know the total number of user stories in the product backlog, which is not very practical, especially for large projects. Since in Fig. 1 there are seven user stories in the backlog, each will have an unconditional selection probability Q of 1/7.
  • PPN (probability proportional to number). In this scheme we have to know the total number of epics, but there is no need to know the total number of user stories in the product backlog. We break down into user stories only the epics that are randomly selected. We use the total number of epics to calculate the conditional selection probabilities at epic level, and the total number of user stories per selected epic to calculate the conditional selection probabilities at story level. In Fig. 1, since there are 3 epics in total in the product backlog, each epic’s selection probability is 1/3. Epic A splits into two user stories, hence they both have a selection probability of 1/2. Epic B splits into two user stories, hence they both have a selection probability of 1/2. Epic C splits into three user stories, hence they all have a selection probability of 1/3. Using formula (2) we obtain the unconditional selection probability Q for story 1, 1/3 × 1/2 = 1/6; for user story 4, 1/3 × 1/2 = 1/6; for user story 7, 1/3 × 1/3 = 1/9, etc., as visible in Table 1. Here is the formula when using story points:
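In the illustrative notation introduced above, with E the total number of epics, m_j the number of user stories in the randomly selected epic j, and x_i the story-point size of the randomly selected user story i, the PPN estimate of the total story points is

$$\hat{X} = \frac{x_i}{Q_i} = \frac{x_i}{\tfrac{1}{E}\cdot\tfrac{1}{m_j}} = E \cdot m_j \cdot x_i$$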

The same formula can be used by replacing story points with the number of scenarios or the number of tasks.

  • PPRS (probabilities proportional to relative size). In this scheme we estimate the relative size of each epic in story points and use it for establishing the selection probabilities at epic level. In this way a big epic will have a greater chance of selection than a small epic. Then we use the total number of user stories per selected epic to calculate the conditional selection probabilities at story level. In Table 1, in the row “Epic's relative size”, we have the relative sizes for the epics from Fig. 1. Their total size is 410, hence epic A has a conditional selection probability of 100/410. Story 1 has a conditional selection probability of 1/2 because epic A splits into two user stories. Hence the unconditional selection probability Q for story 1 is 10/41 × 1/2, as visible in Table 1. Here is the formula when using story points:
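In the same illustrative notation, with S the total relative size of all epics, s_j the relative size of the selected epic j, and m_j its number of user stories, the PPRS estimate of the total story points is

$$\hat{X} = \frac{x_i}{Q_i} = \frac{x_i}{\tfrac{s_j}{S}\cdot\tfrac{1}{m_j}} = \frac{S}{s_j}\cdot m_j \cdot x_i$$

For story 1 in Fig. 1, for example, this gives (410/100) × 2 × x_1, which is simply the story's size divided by its Q of 10/41 × 1/2.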


The same formula can be used by replacing story points with the number of scenarios or the number of tasks.

Let’s apply RBS to the fictitious backlog from Fig. 1 using all three methods - PE, PPN and PPRS. Assume we have estimated each of the stories in story points, number of scenarios and number of tasks as presented in Table 1. We can see that our fictitious product backlog has a size of 28 story points.

Table 1

Table 1 shows that our single estimates vary widely depending on the particular user story selected. But the numbers also show that RBS is unbiased. By unbiasedness we mean that the average of the estimates over all possible samples is identical to the actual project size. The results show that the sizes estimated using PE, PPN and PPRS all equal the real size of 28 story points. Note that the mean is calculated using a weighted average.
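To make the weighted-average check concrete, here is a minimal Python sketch that enumerates every possible RBS path through a backlog shaped like the one in Fig. 1 and verifies that the Horvitz-Thompson estimates, weighted by their selection probabilities, average back to the true total. The per-story sizes and epic relative sizes below are illustrative stand-ins, not the exact values from Table 1.

```python
# Illustrative backlog shaped like Fig. 1: epics A, B, C with 2, 2 and 3 stories.
# Story sizes (story points) and epic relative sizes are made-up stand-ins,
# not the exact numbers from Table 1.
epics = {
    "A": {"relative_size": 100, "stories": [3, 5]},
    "B": {"relative_size": 160, "stories": [2, 8]},
    "C": {"relative_size": 150, "stories": [1, 4, 5]},
}

true_total = sum(sum(e["stories"]) for e in epics.values())

def weighted_mean_estimate(scheme):
    """Average the Horvitz-Thompson estimate x/Q over every possible sample
    path, weighting each path by its unconditional selection probability Q."""
    total_relative = sum(e["relative_size"] for e in epics.values())
    n_stories = sum(len(e["stories"]) for e in epics.values())
    mean = 0.0
    for e in epics.values():
        for x in e["stories"]:
            if scheme == "PE":      # every story equally likely
                q = 1 / n_stories
            elif scheme == "PPN":   # every epic equally likely, then a story in it
                q = (1 / len(epics)) * (1 / len(e["stories"]))
            else:                   # PPRS: epics chosen proportionally to size
                q = (e["relative_size"] / total_relative) * (1 / len(e["stories"]))
            mean += q * (x / q)     # weight each single-path estimate by Q
    return mean

for scheme in ("PE", "PPN", "PPRS"):
    print(f"{scheme}: weighted mean = {weighted_mean_estimate(scheme):.1f},"
          f" true total = {true_total}")
```

Whatever sizes we plug in, the weighted mean equals the true total, which is exactly what unbiasedness means here.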

Applying RBS to a real project

Let’s check if RBS would have predicted the actual project size, using data from a past project. When the project finished, we had delivered 12 epics, 87 user stories and a total of 143 scenarios.

Here is the PPRS-based algorithm we will follow (a small code sketch of the procedure is given after the list):

  1. Divide the project scope into epics.
  2. Analyze and give each of the epics a relative size. Calculate the conditional selection probability for each epic.
  3. Randomly sample one of the epics.
  4. Break down the selected epic into user stories. Write down the number of stories.
  5. Randomly sample one of the stories of the epic.
  6. Establish the scenarios for that story. Write the number of scenarios down.
  7. Using formula (4) estimate the total number of scenarios for the project.
  8. Repeat steps 3-7 several times.
  9. Plot the distribution of the size of the project.
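Below is a minimal Python sketch of this procedure, assuming a hypothetical set of epics with relative sizes. The `break_down_selected_epic` and `count_scenarios_of_sampled_story` stubs only stand in for the manual analysis of steps 4-6.

```python
import random

# Steps 1-2 (hypothetical): epics with their relative sizes in story points.
epics = {"Login": 40, "Reporting": 120, "Billing": 80, "Admin": 60}
total_relative_size = sum(epics.values())

def break_down_selected_epic(epic_name):
    """Stand-in for step 4: analyze the selected epic, write down its user
    stories and return how many there are. Here we just invent a count."""
    return random.randint(3, 10)

def count_scenarios_of_sampled_story(epic_name, story_index):
    """Stand-in for step 6: establish and count the scenarios of the story."""
    return random.randint(1, 4)

def single_estimate():
    # Step 3: sample an epic with probability proportional to its relative size.
    epic = random.choices(list(epics), weights=list(epics.values()))[0]
    q_epic = epics[epic] / total_relative_size
    # Steps 4-5: break the epic into stories and sample one of them uniformly.
    n_stories = break_down_selected_epic(epic)
    story = random.randrange(n_stories)
    q_story = 1 / n_stories
    # Step 6: count the scenarios of the sampled story.
    x = count_scenarios_of_sampled_story(epic, story)
    # Step 7: Horvitz-Thompson estimate of the total number of scenarios.
    return x / (q_epic * q_story)

# Step 8: repeat the sampling several times; step 9: look at the distribution.
estimates = [single_estimate() for _ in range(7)]
print(sorted(round(e) for e in estimates))  # plot these as a histogram
```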

Table 2 shows the results. We calculated 7 estimates of the project size using formula (4). We broke down those 7 epics into 54 user stories, but analyzed and established scenarios for only 7 user stories.

Table 2

The estimated project size is not a single number. It is a distribution, as plotted in the histogram below. The statistics are next to it.

The estimated mean project size is 144 scenarios. It is very close to the actual size of 143 scenarios.

Is RBS applicable to software development?

The number of fruit found in the canopy of a tree has an inherent, objective value, independent of interpretation. Counting all the fruit should always produce the same number, plus or minus some counting error. When we apply RBS to estimate the number of fruit, the fruit are already there.

The size of a software project has no objective value. It is intangible - a proxy for all the capabilities (features and functions) the final product is required to offer when delivered to the customer. But requirements on a software project change over time, and for a good reason: we are learning as we work on the project. We will discover new requirements and decide others are no longer valid (Anderson, 2011). Hence at the beginning of a project, when we apply RBS to forecast its size, the user stories are not available yet. The full set of user stories will only exist when we finish the project. That leads to the question: from what universe of user stories do we sample when using RBS?

The assumption behind using RBS for software development is that project size depends on the context - the customer, the people developing the product, and the methodology they use for managing the requirements, breaking down the product into stories and sizing a story. It doesn’t matter what the methodology is - Planning Poker (Cohn, 2005), Product Sashimi (Rainsberger, 2012), Behavior Driven Development (North, 2006), Feature Driven Development (Coad, 1999), etc. What is important is that the methodology is cohesive, explicit and consistently applied during project execution when we slice the requirements into user stories.

I teamed up with Ajay Reddy and the CodeGenesys/ScrumDo.com team (Reddy, 2015) to test the correlation between project sizes estimated using RBS and the actual story points estimated in thirteen randomly selected projects from the pool of real ScrumDo projects that met the following criteria:

  • Epic-Story-Task breakdowns
  • Successful release history
  • Stable teams (systems)
  • Have an active ScrumDo coach or scrum master
  • Commercial projects
  • Have a minimum size of 12 epics/features.

As seen on the scatterplot below, we found a very strong correlation between the project sizes estimated using RBS and the actual number of story points estimated for these real ScrumDo projects.

Conclusion

RBS is a forecasting technique for sizing software projects without prior identification, analysis and sizing of every single user story. Project size may be measured in story points, scenarios, number of tasks or function points.

By running RBS on past data from actual projects, we found that RBS would have estimated the same size without all the usual effort.

Hence RBS helps us reduce uncertainty regarding “how much” software needs to be developed when we have to make portfolio-related decisions, provide quotations on prospective projects, etc.

About the Author

Dimitar Bakardzhiev is the Managing Director of Taller Technologies Bulgaria and an expert in driving successful and cost-effective technology development. As a LKU Accredited Kanban Trainer (AKT) Dimitar puts lean principles to work every day when managing complex software projects. Dimitar has been one of the evangelists of Kanban in Bulgaria and has published David Anderson’s Kanban book as well as books by Goldratt and Deming in the local language.

 

References

Cohn, M. (2010, June 21). It’s Effort, Not Complexity

Cohn, M. (2013, April 2). Estimating with Tee Shirt Sizes

Cohn, M. (2014, September 16). Don’t Equate Story Points to Hours

Cohn, M. (2014, September 2). Story Points Are Still About Effort

Good, N. M., et al. (2001). Estimating Tree Component Biomass Using Variable Probability Sampling Methods. Journal of Agricultural, Biological, and Environmental Statistics, 258-267.

Jessen, R. (1955). Determining the Fruit Count on a Tree by Randomized Branch Sampling. Biometrics 11, 99–109.

Killick, N. (2014, July 16). My Slicing Heuristic Concept Explained

North, D. (2009). What’s in a Story?

Gregoire, T. G., & Valentine, H. T. (1995). Sampling Methods to Estimate Foliage and Other Characteristics of Individual Trees. Ecology, 76(4), 1181-1194.

Anderson, D. J. (2011). Understanding the Process of Knowledge Discovery.

Cockburn, A. (2013). How “Learn Early, Learn Often” Takes Us Beyond Risk Reduction.

North, D. (2006). “Introducing BDD”

Coad, P., et al. (1999). Java Modeling in Color with UML.

Rainsberger, J. B. (2012). Product Sashimi.

Cohn, M. (2005). Agile Estimating and Planning.

Hubbard, D. W. (2014). How to Measure Anything.

Reddy, A. (2015). Improving Sizing.
