Towards Agile CMMI Level 3: Requirement Development and Verification

This article shows how to do requirements development in agile environments, covering the concepts and offering examples of how an agile team could undergo a CMMI for Development SCAMPI appraisal targeting maturity level 3 for the requirements development and verification process areas.

The first section describes and analyses the CMMI for Development areas that are covered by agile practices.

The second section provides examples of agile CMMI practices using a proven set of good practices. These examples come from actual practice and can be used to deploy the CMMI areas with agile.

1.1 Introduction to Agile CMMI

1.1.1 Brief Introduction to CMMI

Wikipedia defines CMMI as “a process-improvement training and appraisal program and service… (that) can be used to guide process improvement across a project, division or an entire organization.”

The CMMI model is organized into process areas. Each process area covers a set of goals that are reached by performing certain practices. The CMMI for Development model does not intend to judge how optimal these practices are; teams are expected to keep evolving them, so from the CMMI for Development point of view, having a non-optimal practice is better than having none.

Some CMMI for Development areas are configuration management, verification, and validation. Each area has an acronym – e.g. configuration management is known as CM, verification as VER, etc.

Fig. 1: CMMI for Development classification. CMMI can be considered an aggregate of areas composed of goals, which can be classified into generic goals (common to all areas) and specific goals (unique to a single area). Each generic or specific goal is associated with practices, which are respectively generic practices (GP) and specific practices (SP).

There are two kinds of goals. Below is a list of the generic goals and their related generic practices:

  • GG 1 Achieve Specific Goals
    • GP 1.1 Perform Specific Practices
  • GG 2 Institutionalize a Managed Process
    • GP 2.1 Establish an Organizational Policy
    • GP 2.2 Plan the Process
    • GP 2.3 Provide Resources
    • GP 2.4 Assign Responsibility
    • GP 2.5 Train People
    • GP 2.6 Control Work Products
    • GP 2.7 Identify and Involve Relevant Stakeholders
    • GP 2.8 Monitor and Control the Process
    • GP 2.9 Objectively Evaluate Adherence
    • GP 2.10 Review Status with Higher Level Management
  • GG 3 Institutionalize a Defined Process
    • GP 3.1 Establish a Defined Process
    • GP 3.2 Collect Process Related Experiences

Usually, generic goals and practices are provided and empowered by the organization. Individual teams have more autonomy to decide how they want to reach specific goals. We’ll focus on reaching the specific goals for requirement development in this article and we'll show how to accomplish that using agile practices.

One thing to consider when using the CMMI for Development is the definition of a work product. A work product is simply the result of a practice. Performing a task without producing some kind of value is a waste of time (and therefore of money) so all practices have at least one work product.

One of the main reasons CMMI for Development is often perceived as bureaucratic is that heavy processes produce work products that are mostly documents, and lots of them, which can be hard to maintain. Our experience in an ALM@ team taught us that documentation must be developed and released on demand only. We also learned to automatically generate as much of the required documentation as possible. This may sound like magic, but modern ALM tools built around an ALM foundation let teams focus on engineering tasks rather than bureaucratic ones.

An ALM foundation is a centralized platform that lets an entire organizational unit (e.g. a company, a department, or a business unit) share work products and process knowledge, and that automatically performs the most time-consuming processes, such as automated testing, building, and delivery (and eventually deployment). If you want to further explore the ALM foundation concept, check out this whitepaper from David Chappell on adopting a common ALM foundation.


Fig. 2: Example document generated from work items stored in Microsoft Team Foundation Server.

In our ALM@ team, the ALM foundation is mainly powered by Microsoft Team Foundation Server. On this platform, artefacts (which can include diagrams) are the elements that are kept and maintained. To release a document, team members use a simple template and download the selected artefacts with a third-party tool such as TeamSpec or Team4Word.
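As a rough illustration of the same on-demand idea (not the TeamSpec or Team4Word tooling mentioned above), the following Python sketch pulls work items over the TFS/Azure DevOps REST API and renders them into a Word document with python-docx. The collection URL, project name, query, and token are placeholders, and error handling is kept to a minimum:

# Minimal sketch: export selected work items into a Word document on demand.
# The collection URL, project, PAT, and WIQL query are placeholders.
import requests
from requests.auth import HTTPBasicAuth
from docx import Document  # pip install python-docx

COLLECTION = "https://tfs.example.com/DefaultCollection"
PROJECT = "ALM"
AUTH = HTTPBasicAuth("", "<personal-access-token>")

def query_work_item_ids(wiql):
    """Run a WIQL query and return the matching work item ids."""
    url = f"{COLLECTION}/{PROJECT}/_apis/wit/wiql?api-version=5.1"
    response = requests.post(url, json={"query": wiql}, auth=AUTH)
    response.raise_for_status()
    return [item["id"] for item in response.json()["workItems"]]

def fetch_work_items(ids):
    """Fetch title and description for the given work item ids."""
    url = (f"{COLLECTION}/_apis/wit/workitems?ids={','.join(map(str, ids))}"
           "&fields=System.Title,System.Description&api-version=5.1")
    response = requests.get(url, auth=AUTH)
    response.raise_for_status()
    return response.json()["value"]

def export_to_docx(work_items, path):
    """Render the work items into a simple Word document."""
    document = Document()
    document.add_heading("Requirements catalogue", level=1)
    for item in work_items:
        fields = item["fields"]
        document.add_heading(f"{item['id']} - {fields['System.Title']}", level=2)
        # System.Description is stored as HTML in TFS; a real exporter would convert it.
        document.add_paragraph(fields.get("System.Description", ""))
    document.save(path)

if __name__ == "__main__":
    ids = query_work_item_ids(
        "SELECT [System.Id] FROM WorkItems "
        "WHERE [System.WorkItemType] = 'User Story' AND [System.State] <> 'Removed'")
    export_to_docx(fetch_work_items(ids), "requirements.docx")

The point is not the specific libraries but the shape of the practice: the work items stay the single source of truth, and the document is a disposable view generated only when somebody asks for it.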

While documents can be useful, they are only one of the possible formats for a work product.

1.2 Dismantling a Myth: CMMI and Agile Are Not Opposites

One goal of this article is to demonstrate that CMMI and agile are indeed compatible.

Since we like the early '90s game called Street Fighter, we'll use it to illustrate our points graphically. Many view agile vs. CMMI for Development in this way:

Fig. 3: Chun-Li on the right represents an agile process – sweet, light, and volatile. Honda illustrates how CMMI for Development is often perceived: heavy, slow, and full of fat. (© CAPCOM Co. Ltd.)

After we started implementing CMMI and trying to fit it into our agile teams, we realized that our reality was something like this:

Fig. 4: In real life, agile and CMMI can be seen as similar street fighters, with different suits and hair. (© CAPCOM Co. Ltd.)

A great article, "CMMI or Agile: Why Not Embrace Both!", published in 2008 by Hillel Glazer et al., explores this topic. In our previous article, we discussed the challenges facing an agile team adopting a CMMI model once a big organization decides to move toward CMMI. Basically, we find no contradiction between agile and CMMI. In fact, in Schneider Electric's ALM@ team, we think they complement each other very well.

1.3 Good-Practices Catalogue

In recent years, at Schneider Electric's ALM@ team, we have been training teams and helping them increase their CMMI maturity level. Our experience is described in our InfoQ article "Spreading CMMI Practices among Agile Teams in Big Organizations". To summarize that article, the most important lesson we learned is that practices must fit into teams rather than teams into practices.


Fig. 5: An example of a good-practices catalogue for requirements development.

If you are responsible for achieving CMMI specific goals with a team, you should analyse how team members perform their daily tasks. Teams are there to do their work and add value to the business, so they already execute many good practices; add these to your organizational good-practices catalogue. When a team approaches CMMI with the good-practices catalogue, it will typically find itself in several of the following cases (a toy sketch of such a catalogue in code follows the list):

  • They realize there are some good practices already in use by the team.
  • They realize some of the team's practices could help achieve a CMMI goal, so the good-practices catalogue can be enriched from team experience.
  • The team may be missing practices needed to achieve a CMMI goal. In this case, team members should choose an existing practice, or propose a new one to help reach the CMMI goal.
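As a toy illustration only (the practice and goal names below are invented examples, not the ALM@ catalogue), a good-practices catalogue can be kept as simple structured data so that gaps against CMMI specific practices are easy to spot:

# Toy sketch of a good-practices catalogue: map CMMI specific practices to the
# team practices that cover them, then list the gaps. All names are illustrative.
CATALOGUE = {
    "RD SP 1.1 Elicit needs": ["Portfolio management"],
    "RD SP 1.2 Transform stakeholder needs": ["Backlog development"],
    "RD SP 3.5 Validate requirements": ["Requirement grooming"],
    "VER SP 1.2 Establish the verification environment": [],  # gap: nothing covers it yet
}

def coverage_gaps(catalogue):
    """Return the CMMI specific practices that no team practice covers yet."""
    return [practice for practice, team_practices in catalogue.items() if not team_practices]

if __name__ == "__main__":
    for gap in coverage_gaps(CATALOGUE):
        print(f"Missing good practice for: {gap}")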

2 Requirements Development and Verification

Let’s explore the areas that we will cover in this article.

  • The purpose of requirements development (RD) is to elicit, analyse, and establish customer, product, and product-component requirements.
  • The purpose of verification (VER) is to ensure that selected work products meet their specified requirements.

A well-known statement summarizes the difference between VER and validation (VAL): verification demonstrates that you are building the product right, while validation demonstrates that you are building the right product. RD, VER, and VAL are highly coupled (as are most CMMI areas to a certain degree). We’ll cover VAL in future articles.

Let’s look at software development as we would at any other kind of factory. A sausage factory takes in commodities like meat and spices, as well as cardboard, plastic, chemical compounds, and so on. The factory processes those commodities to deliver packed boxes of sausages ready to be shipped to grocery stores. A sausage factory starts with meat and adds value to it.

In software factories, a team essentially takes in requirements and is expected to process them and deliver a final product that adds value on top of those requirements. In RD, we try to transform customer input into something that a team can use to develop software that will add value.

Some good practices of agile RD are:

  • Agile portfolio management or program portfolio management (PPM) can be defined, according to the Scaled Agile Framework, as “the highest-level fiduciary (investment and return) and content authority”. In other words, with a portfolio vision, management can decide where to invest and create high-level artefacts, which are known as epics (business or architectural epics).
  • Backlog grooming is an action regularly performed on backlogs to scrap, renew, split, and generally modify backlog composition. With time, some backlog entries can lose their business value as new backlog entries become more important. Backlog entries can be modified in many different ways: they can be split into different backlog entries, they can be reworded, or they can even disappear once overtaken by events. Each team performs its own backlog grooming. The key point is that the backlog is a live entity that teams should not be afraid to modify.
  • Product backlog development is the process by which a team receives high-level artefacts and converts them into user stories, scenarios, and so on. This planning game, which is a fundamental part of agile environments, produces a set of artefacts that a development team can consume.

The main goal of VER is to ensure that the final product is delivered as the team understood it. VER is widely equated with software testing, such as unit testing, integration testing, and so on. Although software testing is a strong part of software-verification processes, VER should apply to all product work items. For example, when we review a delivered document (which is done in most organizations), we perform VER in much the same way as a peer review of code. Thus, when a product-backlog item (e.g. a user story) is derived from a portfolio element, that product-backlog item should also be verified.

3 Real Examples of Agile CMMI Practices

3.1 Good Practice: Portfolio Management

Portfolio entries may be split into several portfolio levels as you advance in describing functionality. We’ll see that there can be many breakdown levels, depending on factors such as solution complexity, team size, and capability. MSDN has a nice article ("Agile Portfolio Management: Using TFS to support backlogs across multiple teams") on how to manage an agile portfolio. Although the article is specific to Team Foundation Server technology, you can extrapolate its conclusions and practices to any other ALM platform, such as SourceForge or CollabNet.

Portfolio management involves two portfolio levels: initiatives and features.


Fig. 6: Initiatives breakdown classified by business value.

Initiatives and features are often written in customer domain language. An unwritten rule says that the deeper you go in a backlog, the more specific the language becomes. An initiative summarizes what is widely known as an elevator pitch: an overview that describes what a solution should do to satisfy the stakeholders. When initiatives are laid out on a timeline, they can be used as a major source for a solution road map.


Fig. 7: An Initiative example from previous backlog.

Initiatives are split into features, which are written in customer domain language. In small teams, features may be the only portfolio artefact. Bigger teams will use initiatives to wrap a set of features so that dedicated teams can execute them.

Portfolio management can help define major deliveries and can be used when planning projects with practices from CMMI areas like project planning (PP) and project monitoring and control (PMC).

It’s important that the final customers agree with the portfolio entries. Ideally, portfolio initiatives and features should come from customers, with the approval of the product owner, in a fluid and continuous way. A possible way to do that is to generate a sheet that customers can approve through a conventional email:


Fig. 8: ALM platform artefacts exported to a traditional Excel catalogue spreadsheet.

Delivering documents only on demand is the best way to prevent documentation from heavily impacting daily tasks, as only those documents that are requested add value to the final product. Thus, teams should be ready to create documents easily and quickly with little impact on their work. Most ALM platforms allow the export of artefacts into document templates.
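As a minimal, hypothetical sketch of such an export (the entries are hard-coded placeholders rather than the result of a real ALM query), the following Python snippet writes portfolio entries into a CSV sheet that could be attached to the approval email:

# Minimal sketch: write portfolio entries to a CSV sheet for customer sign-off.
# In practice the entries would come from the ALM platform (e.g. over its REST
# API); here they are hard-coded placeholders.
import csv

portfolio_entries = [
    {"id": 101, "type": "Initiative", "title": "Holding enhanced visibility", "business_value": "High"},
    {"id": 102, "type": "Feature", "title": "Filter holdings by user permissions", "business_value": "Medium"},
]

with open("portfolio_catalogue.csv", "w", newline="", encoding="utf-8") as sheet:
    writer = csv.DictWriter(sheet, fieldnames=["id", "type", "title", "business_value"])
    writer.writeheader()
    writer.writerows(portfolio_entries)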

With this good practice, we can consider RD partially covered. It's also a foundation for the next practice: backlog development.

3.2 Good Practice: Backlog Development

Portfolio management is good but not sufficient. It is trendy to make it the end point of RD, but that is not agile; that’s hacking. We need to develop a backlog from this portfolio using more agile language.

From features, we can derive user stories. In ALM@, in order to support both formal processes (for example, RUP or PMP) and agile processes (like Scrum or Kanban), we call them business cases. (Don’t confuse these with marketing's business cases; the name is just a fusion of business stories and use cases.)


Fig. 9: User stories from features.

User stories are usually written with the pattern "I, as <Role>, want to <Trigger Action>, so <Value Added>". You can get further details about writing user stories in an exceptional article from Ronica Roth, "Write a Great User Story".

Fig. 10: Example of a user story.

A goal of the CMMI for Development model is that requirements be validated. As an extension of the user story, our TFS template includes a checklist that can be used to determine whether the user story is ready to be brought into development. This covers verification of requirements by peer review. Usually, agile teams set the acceptance criteria and the definition of done at this point, but in our chosen good-practices catalogue we separate these into different artefacts, as in the following good practices.

3.3 Good Practice: Requirement Grooming (Backlog-Items Maturity)

Requirement grooming is a practice widely used in the agile community. We’ll focus here on the verification of requirements. Grooming ensures that backlog items are consistent, are not duplicated, and do not become obsolete. In non-agile environments this is usually done with a checklist, and there is no reason not to use a checklist with product-backlog items as well, to remind us what a good user story should be. Since the same artefact is checked in formal lifecycles, why not use the same checklist in an agile practice?

Fig. 11: Verification checklist.
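As a generic illustration of the idea (the checklist questions and story fields below are invented stand-ins, not the checklist from Fig. 11), such a checklist can even be kept as data next to the backlog and evaluated automatically during grooming:

# Generic sketch of a grooming checklist evaluated against a user story.
# The checklist questions and the story fields are illustrative placeholders.
USER_STORY_CHECKLIST = [
    ("Has a role, an action, and an added value",
     lambda story: all(key in story for key in ("role", "action", "value"))),
    ("Has acceptance criteria",
     lambda story: bool(story.get("acceptance_criteria"))),
    ("Is small enough for one sprint",
     lambda story: story.get("estimate_points", 99) <= 8),
]

def groom(story):
    """Return the checklist items the story still fails."""
    return [question for question, check in USER_STORY_CHECKLIST if not check(story)]

story = {"role": "ALMC User", "action": "filter Holding combos",
         "value": "see only permitted holdings",
         "acceptance_criteria": [], "estimate_points": 5}
print(groom(story))  # -> ['Has acceptance criteria']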

Verifying a user story involves a peer reviewing its requirements. TFS allows us to automate assignment when a user story's state changes: when a user story has a reviewer set and its state changes to Review Pending, the artefact is automatically assigned to that reviewer. Most modern ALM platforms allow this kind of automation:


Fig. 12: Provider and reviewer in a user story.
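TFS handles this automation itself, as described above. Purely to illustrate what such a rule does, here is a hedged Python sketch of a web service that receives a work-item-updated service-hook event and assigns the reviewer when the state becomes Review Pending; the custom Reviewer field name, the URLs, and the token are assumptions made for this example, not part of a real template:

# Illustrative sketch: a service-hook receiver that auto-assigns a reviewer
# when a user story moves to "Review Pending". The "Custom.Reviewer" field,
# the URLs, and the token are assumptions made for this example.
import requests
from requests.auth import HTTPBasicAuth
from flask import Flask, request

app = Flask(__name__)
COLLECTION = "https://tfs.example.com/DefaultCollection"
AUTH = HTTPBasicAuth("", "<personal-access-token>")

@app.route("/workitem-updated", methods=["POST"])
def workitem_updated():
    event = request.get_json()
    changed_fields = event["resource"]["fields"]
    # Only react when the state transitions to "Review Pending".
    if changed_fields.get("System.State", {}).get("newValue") != "Review Pending":
        return "ignored", 200
    work_item_id = event["resource"]["workItemId"]
    reviewer = event["resource"]["revision"]["fields"].get("Custom.Reviewer")
    if reviewer:
        patch = [{"op": "add", "path": "/fields/System.AssignedTo", "value": reviewer}]
        url = f"{COLLECTION}/_apis/wit/workitems/{work_item_id}?api-version=5.1"
        requests.patch(url, json=patch, auth=AUTH,
                       headers={"Content-Type": "application/json-patch+json"}).raise_for_status()
    return "assigned", 200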

This verification practice is part of the requirements management (REQM) and project planning (PP) areas, which we will explore in detail in future articles. What’s important at this point is that requirements are going to be verified as part of our grooming good practice.

3.4 Good Practice: Scenario Definition and Implementation

Many agile teams stop developing their backlog when they reach the user-story level and then start implementing, which requires defining acceptance conditions and definitions of done. However, there is also an early-testing approach called BDD, short for behaviour-driven development. It is a powerful technique that fits very well into CMMI for Development. Describing BDD in detail is out of the scope of this article, but let’s explore how it can be used to define components and interfaces. Let’s take a look at a sample scenario written in Gherkin (the language used by tools such as Cucumber or SpecFlow).

Feature: HoldingEnhancedVisibility
  I, as ALMC User, want to filter Holding combos, so I could select only Holdings I am allowed to see

  @ScenarioTests
  @EnhancedVisibilityTests
  @XX4147CRBCShowHoldingInformation
  Scenario: An administrator wants to display all available holdings
    Given an admin user who will display holdings
    And a Regular User so at least one holding were created
    When admin displays Holding
    Then all available holding in ALMC are returned

  @ScenarioTests
  @EnhancedVisibilityTests
  @XX4147CRBCShowHoldingInformation
  Scenario: A Holding Manager can display only his Holding
    Given a Regular User to create a Holding Structure
    And a 'HoldingManager' for this Regular User
    When manager displays Holding
    Then only one holding is displayed
    And holding for user and holding for manager is the same

Developers who are familiar with BDD and with a proper architecture definition (SOA/DDD) can map elements of the scenarios onto the architecture, for example identifying "When" actions with services and entities. In other words, there is no need to define complex diagrams that are difficult to maintain. There is also a way to define UI actions (e.g. "When the actor clicks on the button").

In this case, “Then” can be used as an acceptance condition and definition of done.
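Teams on a .NET/TFS stack would typically bind these steps with SpecFlow; purely as an equivalent illustration, here is how a few of the first scenario's steps could be bound with Python's behave library. The holding_service and the context.services helper are invented for the sketch and stand in for whatever services the architecture defines:

# Illustrative behave step definitions for the scenario above. The services
# referenced through context.services are hypothetical stand-ins.
from behave import given, when, then

@given("an admin user who will display holdings")
def given_admin_user(context):
    context.user = context.services.create_admin_user()

@when("admin displays Holding")
def when_admin_displays_holdings(context):
    # The "When" action maps directly onto a service call in the architecture.
    context.result = context.services.holding_service.list_holdings(context.user)

@then("all available holding in ALMC are returned")
def then_all_holdings_returned(context):
    # The "Then" step doubles as the acceptance condition and definition of done.
    assert set(context.result) == set(context.services.holding_service.all_holdings())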

Let’s take another example:

  @ScenarioTests
  @EnhancedVisibilityTests
  @XX3216CRBCShowDependencyTasks
  Scenario: An administrator user displays combo to create a dependency among two tasks
    Given a registered user with administration permissions
    And a Task with description "AdminTaskDependency" which is unique
    And a set of Tasks non associated
      | Description             |
      | AdminNotDependency3216A |
      | AdminNotDependency3216B |
    And a set of Tasks already associated
      | Description          |
      | AdminDependency3216C |
      | AdminDependency3216D |
    When combo is displayed for associating with pattern 'a' from previously created task
    Then all non associated task will be displayed
      | Description             |
      | AdminNotDependency3216A |
      | AdminNotDependency3216B |
    And Master Task wont be among them
    And Task with taskid to be associated with wont be among them

BDD is easy to learn and is readily adopted by teams who welcome the technique. After some practice, and with a proper architecture, parameters (the quoted values in the example above) are interpreted as interface parameters. BDD is thus an easy way to define components, interfaces, and parameters; in other words, a way to define product requirements.
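For instance, sketched again with behave rather than a SpecFlow binding (and with an invented task_service call), the quoted pattern in the "When" step above is captured as an argument of the step definition and, by extension, becomes a parameter of the interface under test:

# Illustrative parameterized step: the quoted value in the Gherkin step is
# captured as a function argument, mirroring an interface parameter.
# task_service and combo_for_association are hypothetical.
from behave import when

@when("combo is displayed for associating with pattern '{pattern}' from previously created task")
def when_combo_displayed(context, pattern):
    context.combo = context.services.task_service.combo_for_association(
        task_id=context.master_task_id, pattern=pattern)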

When BDD scenarios are inserted in an ALM platform and passed through backlog-item maturity, tests become bound to requirements.

Here is how the backlog appears:


Finally, a continuous-integration platform such as Jenkins or Team Foundation Server can execute those tests on each check-in to source control. This would require defining a configuration-control environment.
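As a minimal sketch of such a gate (a real pipeline would normally use the CI platform's own test task rather than a hand-rolled script), the build could simply run the BDD suite and fail when any scenario fails:

# Minimal CI-gate sketch: run the BDD scenarios and propagate the result so
# that the build fails when any scenario fails. Assumes behave is installed
# on the build agent and that feature files live in the default location.
import subprocess
import sys

result = subprocess.run(["behave", "--junit", "--junit-directory", "test-reports"])
sys.exit(result.returncode)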

3.5 Good Practices Summary

By following these four good practices, which are quite common in agile environments, only a little extra effort is needed to cover most specific practices of the process areas.

By following the previously described good practices, the following areas and goals are achieved:

  • Green -> Fully implemented
  • Yellow -> Partially implemented (in this case because we will revisit code peer review with CM and PI)
  • Black -> Not implemented here because it will be covered with CM and PI

RD Specific Practices by Goal

  • SG 1 Develop Customer Requirements
    • SP 1.1 Elicit needs
      • Portfolio management
    • SP 1.2 Transform Stakeholder Needs into Customer Requirements
      • Backlog development
      • Scenario definition and implementation
  • SG 2 Develop Product Requirements
    • SP 2.1 Establish Product and Product-Component Requirements
      • Backlog development
      • Scenario definition and implementation
    • SP 2.2 Allocate Product-Component Requirements
      • Scenario definition and implementation
    • SP 2.3 Identify Interface Requirements
      • Scenario definition and implementation
  • SG 3 Analyse and Validate Requirements
    • SP 3.1 Establish Operational Concepts and Scenarios
      • Scenario definition and implementation
    • SP 3.2 Establish a Definition of Required Functionality and Quality Attributes
      • Scenario definition and implementation
    • SP 3.3 Analyse Requirements
      • Backlog development
      • Scenario definition and implementation
    • SP 3.4 Analyse Requirements to Achieve Balance
      • Backlog development
    • SP 3.5 Validate Requirements
      • Requirement-maturity review

VER Specific Practices by Goal

  • SG 1 Prepare for Verification
    • SP 1.1 Select Work Products for Verification
      • Scenario definition
      • Backlog-items maturity
      • Portfolio management
    • SP 1.2 Establish the Verification Environment
      • Not with this set of good practices
    • SP 1.3 Establish Verification Procedures and Criteria
      • Not with this set of good practices
  • SG 2 Perform Peer Reviews
    • SP 2.1 Prepare for Peer Reviews
      • Reviewing requirements through backlog-items maturity
    • SP 2.2 Conduct Peer Reviews
      • Reviewing requirements through backlog-items maturity
    • SP 2.3 Analyze Peer Review Data
      • Reviewing requirements through backlog-items maturity
  • SG 3 Verify Selected Work Products
    • SP 3.1 Perform Verification
      • Backlog-items maturity
      • Scenario definition and implementation
    • SP 3.2 Analyze Verification Results
      • Not implemented in this set of good practices

About the Authors

Nicolás Afonso Alonso is currently a software-development technical leader and CMMI evangelist at Schneider Electric. He worked for several years in the space industry as a QA team leader on integration projects. Since joining Schneider Electric, he has focused mostly on process engineering for software development, leading Team Foundation Server activities and ALM foundation planning.

Victor Jara is an infrastructure-operations engineer at Schneider Electric. As part of his duties, he implements specific solutions according to technical requirements to carry out configuration management and product-lifecycle management.
