Coverity 2012: How to Get a Low Defect Density

This article contains the testimonies of several project leaders detailing the process used to achieve a low Coverity Scan defect density.

The recently released Coverity Scan Report 2012 contains the results of scanning the top 118 participating open source projects, totaling 68 million lines of code, a significant increase over last year’s 37M LoC. Coverity performs static code analysis, scanning for medium- and high-impact defects such as Memory Corruption, Uninitialized Variables, and Error Handling Issues, and computes a Defect Density (defects per 1,000 lines of code) for each project as well as an average across all projects.
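
To make these categories concrete, here is a small, purely illustrative C fragment, not taken from any of the scanned projects and using made-up names, showing the kind of code an Uninitialized Variables or Error Handling checker would flag:

    #include <stdio.h>

    /* Illustrative only: the kinds of defects static analyzers such as
       Coverity flag. read_version() and "app.conf" are hypothetical. */
    static int read_version(const char *path)
    {
        int version;                  /* declared but never initialized */
        FILE *fp = fopen(path, "r"); /* fopen() may return NULL ...     */
        fscanf(fp, "%d", &version);  /* ... Error Handling Issue: fp is
                                        dereferenced without a NULL check */
        fclose(fp);
        return version;              /* Uninitialized Variable: if fscanf()
                                        matched nothing, 'version' was
                                        never written                     */
    }

    int main(void)
    {
        printf("version: %d\n", read_version("app.conf"));
        return 0;
    }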

The Average Defect Density has fluctuated over the last five years between its lowest value, 0.25 in 2009, and its highest, 0.81 in 2010. Some of the fluctuation can be attributed to the introduction of more defect-finding algorithms over time, while “in 2010, we had a high number of projects join the Coverity Scan service. This meant we had a number of “first time scans” which contributed to the slight increase in the defect density number,” Zack Samocha, Project Director for Coverity Scan, told us.

In 2012, the Average Defect Density for open source projects was 0.69, considerably lower than the industry average of 1.5, according to Samocha. Since density is measured per 1,000 lines of code, that is the difference between roughly 150 and 69 outstanding defects in a hypothetical 100,000-line codebase:

The average defect density based on an initial scan with Coverity technology (before any defects are fixed) is 1.5. After the first year of deployment of Coverity technology, we typically see this number drop to 1.0 – even with new code constantly being added and more defects being detected, the developers are able to fix more defects. With the 2012 Coverity Scan Report, we have seen this number drop again, to 0.69 and 0.68, for open source and proprietary code respectively. This indicates that developers are continually improving code quality with development testing.

The report contains the testimonies of several project leads who have managed to achieve a very low defect density for their projects, explaining the processes they used to increase code quality. While their approaches differ, they all seem to employ code review or some form of unit testing besides Coverity’s scanner. Following are excerpts from these testimonies.

AMANDA – a backup solution for IT administrators

Codebase: 160,800 LoC

Defect density: Zero

Interviewees: Jean-Louis Martineau, Project Gatekeeper, and Paddy Sreenivasan, Developer

Q: Why do you think you are able to achieve the highest level of code quality?

A: The number one thing is to have a gatekeeper who enforces a process. All of the patches and changes need to go through this person. Given our project’s size, it is easily managed by one person. All of the changes are reviewed.

We have extensive tests. We run them regularly on standard distributions and make sure they pass before a patch is accepted. We maintain backward compatibility, ensure the main web pages are all updated when changes are made, and keep the documentation up to date.

Our developers are used to this process—it’s been around for some time and they just know to follow it. However, we do have some flexibility in our release schedules; we aren’t under pressure to hit a specific ship date, so that helps.

Q: What other quality testing techniques have you adopted outside of Coverity?

A: Manual code review. We also use buildbot [another open source tool] to run generic test cases.

Q: What are the top 3 things about your project that enable you to deliver high quality code?

A:
• Size of the codebase—it’s a manageable size to keep quality under control
• Stability and expertise of the developers—our gatekeeper has been doing this for more than 10 years
• Not having the added pressure of hitting a release date—we don’t have to trade-off schedule for quality

Mesa – an OpenGL implementation

Codebase: 1,038,075 LoC

Defect density: 0.5

Interviewee: Brian Paul, Originator of Mesa (and Engineer at VMware)

Q: Why do you think you are able to achieve a high level of code quality?

A: We are meticulous with peer review. Every change that is committed to the code, with few exceptions, is posted and reviewed by people on the mailing list. We have a sign-off process for code review, even before it gets checked in. We have 30-50 active developers on the project, with approximately 15 who I would classify as prolific.

Q: What happens if you have a developer working on the project who submits code that doesn’t meet quality expectations?

A: We haven’t really had cases of people persistently submitting bad code. I started the product 20 years ago and still post my code for review. No one is above the process. The developers involved in the project take pride in their work. It’s not just throwing code out to check a box. This code is used by millions of people and the developers take this responsibility very seriously.

Q: What other quality testing techniques have you adopted outside of Coverity?

A: We have an open source test suite called piglit which contains approximately 10,000 tests. We also use Valgrind for memory and concurrency issues. Since Mesa is cross platform, we use different compilers which find different, lower level bugs. We use Git as our bug tracking system and repository.
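
To illustrate the runtime angle that complements a static scan, here is a hedged, stand-alone C sketch, hypothetical rather than Mesa code, containing a heap buffer overrun of the sort Valgrind’s memcheck reports when the program is executed:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical fragment, not Mesa code: an 8-byte allocation is
       written with a 10-byte string (9 characters plus the NUL). */
    int main(void)
    {
        char *name = malloc(8);
        if (name == NULL)
            return 1;
        strcpy(name, "mesa_demo");   /* writes 10 bytes into 8 */
        free(name);
        return 0;
    }

Running the compiled binary under valgrind flags the strcpy as an invalid write two bytes past the end of the allocation, complete with a stack trace.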

Network Time Protocol (NTP) – an implementation of the NTP protocol used for synchronizing computer clocks over a network

Codebase: 286,295 LoC

Defect density: 0.32

Interviewee: Harlan, Project Manager

Q: What other quality testing techniques have you adopted outside of Coverity?

A: We use BitKeeper as our SCM system. In addition, unit testing is an increasingly significant part of our quality testing. We usually have Google Summer of Code students work on unit testing suites, but not all developers can be unit test engineers, so we are bound by the existing skillset in the project. One of our goals is to have a sufficiently robust set of these tests that they can serve as examples, so our traditional developers can make tests part of their patch submissions.
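
As a minimal sketch of such an example test, assuming plain assert-based checks rather than the project’s actual framework, with clamp_offset() as a made-up helper rather than real NTP code:

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical helper standing in for real project code. */
    static long clamp_offset(long offset, long limit)
    {
        if (offset > limit)  return limit;
        if (offset < -limit) return -limit;
        return offset;
    }

    /* A self-contained test a contributor could ship with a patch. */
    int main(void)
    {
        assert(clamp_offset(5, 10)   == 5);
        assert(clamp_offset(42, 10)  == 10);
        assert(clamp_offset(-42, 10) == -10);
        puts("clamp_offset: all tests passed");
        return 0;
    }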

Q: What are the top things about your project that enable you to deliver high quality code?

A:
• People are the most important element of the project: having the right skills is key. The people who are working on this project have decades of experience. They know what works.
• Change oversight: even though I have 20 years of experience, if I make a change to the code, there are 3 or 4 people who will run it before committing the code and ping me if they have questions.

XBMC – media player and entertainment hub

Codebase: 1,201,946 LoC

Defect density: 0.17

Interviewee: Kyle, Developer

Q: Why do you think you are able to achieve a high level of code quality?

A: To maintain high quality across the variety of platforms we support, we try to be flexible and test many use cases.

Q: Describe the developer community on your project.

A: The team is small and geographically dispersed. We have 20-30 developers with a core group of 5-10 developers who make regular submittals.

Q: What happens if you have a developer working on the project who submits code that doesn’t meet quality expectations?

A: This happens regularly. Code gets checked into the mainline and then we have to go back and fix issues through bug tracking. This is where Coverity has been very helpful. If a developer has repeated quality issues, they may be threatened with having their merge privileges revoked. There is strong community judgment and quality expectations are high. The developers who contribute higher-quality code have more rank, and their voices carry more weight in the community. Developers who have had quality problems in the past will typically take more time to ensure higher levels of quality for future commits.

Our developers take pride in their work. Developers are using this software for themselves on a daily basis, so they are motivated to write high-quality code. The community isn’t trying to just check the box.

Q: Do you have formal acceptance criteria for code?

A: We have a fairly loose process and no formal metrics or targets. We conduct manual code reviews and then, at integration into the main trunk, I run Coverity on the code. I try to fix the Coverity defects myself and then post a group of changes for peer review. Developers then comment on the change set.

Q: What other quality testing techniques have you adopted outside of Coverity?

A: We use GitHub and Trac for defect tracking. Right now we run the builds manually. We don’t have any unit testing targets. We have very good quality given the size of our codebase, but we do see the value in having more formal project management and processes to help maintain quality. Today we don’t have any defined development goals other than what individuals want to develop themselves. We need more defined and formalized timelines, goals and metrics.
