Bridging Internal and External Software Quality with Sonar and JaCoCo

Software quality is commonly divided into two worlds. On one side is the world of external quality, whose main objective is to make sure that the software behaves as expected. This group includes Integration Tests (IT), User Acceptance Tests (UAT), Non-Regression Tests, and Performance Tests. The main steps in this testing process consist of interacting with the software, observing its behavior, and making sure it works according to its functional specifications and later does not regress from those specifications. These interactions may happen manually or through one of the numerous tools on the market. This is commonly described as a "Black Box" approach and serves the purpose of building the "right software". The return on external quality investment is immediate.

On the other side is the world of internal quality, whose objective is to measure how well the software has been built. This means internal inspection of the source code with static and dynamic code analysis tools and Unit Tests (UT) in order to review how the software performs against a set of pre-defined technical requirements. This is a "White Box" approach aimed at making sure that we are building the "software right". You can use a software quality analysis tool like Sonar to measure internal quality, i.e. to measure technical debt according to the Seven Deadly Sins of the Developer. Each violation of one of these sins generates technical debt that brings risk to the software and/or makes it more difficult to maintain over time. The real return on investment of assessing internal quality comes in the medium to long term.

Continuous Delivery is about putting the release schedule in the hands of the business, which means making sure your software is always production-ready throughout its entire lifecycle, so that any build could potentially be released to users at the touch of a button using a fully automated process.

To fully embrace the Continuous Delivery approach, both external and internal quality must be continuously assessed and therefore monitored in a fully automated manner. This approach also includes answering questions like the following:

  • How well is my application covered by tests?
  • Knowing how much coverage is provided by Integration Tests and Unit Tests would help developers evaluate much more accurately the risk associated with a change to be made:
    • I can sleep well; this line is covered by Integration and Unit tests.
    • I should be careful when changing this line of code. Though I will know immediately if I have broken its contract (UT), I cannot be sure that there will not be any regressions in the application (because of missing ITs), or vice versa.
    • I am playing Russian Roulette if I change this line of code.

While there are tools in each of these categories to perform this monitoring, there is currently no single tool able to monitor both types of software quality. Building a bridge between the internal and external quality worlds would provide even more insight into an application and help answer the above questions. Let’s look at these questions in more detail.

Is the newly developed source code covered by associated Integration Tests?

It is important to know how much coverage you have at a given point in time, but it is even more important to make sure that when you change or add lines of code, you cover them with appropriate tests. This is what ensures that you are working with a long-term strategy for software quality. So what you will want to review at the end of each sprint is whether you have appropriate coverage (by both Unit Tests and Integration Tests) of that code.

Is the code deployed to production really used?

According to frequently cited statistics, 64% of features in production are never or rarely used. Is that the case with my software? And do I have any code that is not associated with any requirement?

What am I really going to impact when changing this line of code?

It is often difficult to know which other components you are going to impact when making certain changes in the code. It would really help to have some kind of cartography showing the impact your changes will have on the overall software.

Thinking through and answering these questions will help to:

  • Reduce existing source code that’s not mapped to any requirements or functionality and focus on what’s really being used in production.
  • Have the big picture when assessing the risks associated with source code changes.
  • Have a comprehensive functional cartography of the software showing what is being used, at what frequency, and how well it is tested, so that you can deliver the highest business value with minimum technical debt.

A very interesting first step towards this kind of software quality assessment is the recently released JaCoCo extension to the Sonar tool. In order to assess code coverage by integration / functional / acceptance / user interface (UI) tests (let's call them all integration tests), you must go through two steps:

  1. Tests must be executed, and you do not really want to be intrusive in this process, as it can be done through one of many tools - namely Maven plugins (maven-surefire-plugin, maven-osgi-test-plugin, maven-failsafe-plugin), an Ant script, GreenPepper, or Selenium.
  2. Then you need to instrument what you are going to measure, and depending on the application package format (JAR, EAR, WAR) this can be really tricky, even a nightmare, with coverage engines that use source code instrumentation (Clover) or off-line byte-code instrumentation (Cobertura or Emma). A typical Java application includes several Java libraries, and statically instrumenting each of them is often too complex to automate.

A better approach for this use case is "on-the-fly" byte-code instrumentation, which is what JaCoCo does. Coverage information has to be collected at runtime, so JaCoCo creates instrumented versions of the original class definitions. The instrumentation occurs on the fly during class loading, using so-called Java agents. This approach also makes JaCoCo one of the better code coverage engines currently on the market.
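To make this concrete, here is a minimal sketch, with hypothetical paths, of attaching the JaCoCo agent to any JVM process (for example, an application server exercised by Selenium tests); destfile and append are standard JaCoCo agent options:

java -javaagent:/path/to/jacocoagent.jar=destfile=target/jacoco-it.exec,append=true -jar my-application.jar

The agent records coverage while the application runs and writes the dump file when the JVM shuts down, without any change to the application's build or packaging.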

Implementing this extension is done in two steps:

  1. Launch the integration tests after having configured the JaCoCo agent to run with them and to dump a JaCoCo result file at the end of the execution.
  2. Configure and launch Sonar to reuse the JaCoCo result file. The Sonar plugin will extract only the required code coverage information.

Here is an example with a Maven project containing three modules: A, B, and C. Module C is only used to execute integration tests with the Maven Failsafe Plugin, and we would like to get the coverage of modules A and B by the integration tests contained in module C. A command line argument must be added to the configuration of the Maven Failsafe plugin in the pom.xml file of module C. Listing 1 below shows this javaagent command line argument.

Listing 1. Java agent configuration for JaCoCo

<argLine>-javaagent:${jacoco.agent.path}=destfile=${jacoco.file.path}</argLine>
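For context, the following is a minimal sketch, not taken from any real project, of how that argLine could be wired into the Failsafe plugin inside the run-its profile referenced in Listing 2 below; the plugin version and execution details will vary with your build:

<profiles>
  <profile>
    <id>run-its</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <configuration>
            <!-- Attach the JaCoCo agent to the forked integration-test JVM -->
            <argLine>-javaagent:${jacoco.agent.path}=destfile=${jacoco.file.path}</argLine>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>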

Now, the integration tests in module C must be launched with the command shown in Listing 2.

Listing 2. Maven command to run integration tests with JaCoCo agent

mvn -Djacoco.agent.path="PATH_TO_AGENT" -Djacoco.file.path="PATH_TO_DUMP" -Prun-its clean install
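As an illustration only, with a hypothetical agent location and dump file, the same command could look like this:

mvn -Djacoco.agent.path="/opt/jacoco/jacocoagent.jar" -Djacoco.file.path="target/jacoco-it.exec" -Prun-its clean install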

After running the integration tests, you can perform the Sonar analysis on the overall project. Note that you can still use code coverage tools like Clover, Cobertura or Emma to assess code coverage by unit tests, and use the JaCoCo extension only for integration tests. The Maven command used to run the Sonar analysis is in Listing 3 below.

Listing 3. Maven command to run Sonar code analysis

mvn -Dsonar.jacoco.itReportPath="PATH_TO_DUMP" sonar:sonar
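Continuing the hypothetical example above, where the agent dumped its results to target/jacoco-it.exec, the analysis command would become:

mvn -Dsonar.jacoco.itReportPath="target/jacoco-it.exec" sonar:sonar

In a multi-module build, an absolute path to the dump file is often the safest choice, so that every module resolves the same file.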

Figure 1 below shows the Integration Test code coverage metric in Sonar.

Figure 1. Code Coverage by the Integration Tests

Of course, it's also possible to drill down to the source code in order to see which lines of code are covered or not covered by integration tests. This feature is illustrated in Figure 2.

Figure 2. Integration Test Code Coverage at the Source Code Level

To get code coverage by unit tests across all Maven modules, the configuration steps are quite similar. Here are the steps to enable unit test code coverage:

  • Add the argLine property to your Maven Surefire Plugin configuration (a minimal sketch of the full plugin configuration follows this list):
    <argLine>-javaagent:${jacoco.agent.path}=destfile=${jacoco.file.path}</argLine>
  • Build the Maven project using the command:
    mvn clean install -Djacoco.agent.path="PATH_TO_AGENT" -Djacoco.file.path="PATH_TO_DUMP"
  • Run the Sonar code analysis using the command:
    mvn -Dsonar.jacoco.reportPath="PATH_TO_DUMP" sonar:sonar
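As a rough sketch of that first step (an assumed structure to adapt to your own build), the Surefire configuration mirrors the Failsafe configuration shown earlier:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Attach the JaCoCo agent to the forked unit-test JVM -->
    <argLine>-javaagent:${jacoco.agent.path}=destfile=${jacoco.file.path}</argLine>
  </configuration>
</plugin>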

What are the next steps?

The next item on the roadmap is Manual Code Review. Developers will be able to interact with the Sonar GUI to start a discussion thread, manually create a new violation, assign a violation to a developer, or flag a violation as a "false positive".

Conclusion

By using the JaCoCo extension in Sonar, it is possible to assess code coverage by integration tests. While some people work to ensure traceability between all kinds of technical documentation, the Sonar team has chosen to invest in ensuring continuous and automated traceability between executable specifications and source code.

About the Author

Olivier Gaudin is the co-founder and Director of SonarSource, the company that develops and promotes the open source platform Sonar. He has more than 14 years of experience in IT, managing both development teams and application support teams. In 2007, Olivier started contributing to Sonar and decided with Simon Brandhof and Freddy Mallet to launch SonarSource.
