
Are Unit Tests Part of Your Team’s Performance Reviews?


Key Takeaways

  • How to make sure developers are buying into modern development practices
  • What separates good unit tests from bad ones
  • The benefits of unit testing
  • How to measure unit testing, both quantitative and qualitative
  • What a good performance review looks like

Performance evaluations benefit both employee and employer. It is a time to provide feedback, recognize quality performance and set expectations for future job performance. It is also a time to have candid conversations about performance that is lacking and how performance can be improved. Ongoing performance discussions can assist in avoiding serious problems in the future.

But, what should a performance review for software developers and your development organization look like?

Sure, you’ve implemented sprints and even brought in a training program for unit testing. But, unless you replace your entire R&D staff with the top Agile gurus (and there’s just not enough talent to go around), you need to make sure that your developers buy into modern development practices, such as unit testing.

How do you do it? By measuring performance in frequent, quarterly performance reviews. By including unit testing in your performance reviews and promoting developers who unit test, management and your R&D team ensure expectations are clear and aligned.

The annual performance review has rightly received a lot of criticism and companies are doing away with it. Once a year is just not enough time to learn, change, and improve. An annual performance review is not agile!

But no matter how often you are reviewing your team’s work, make sure you are measuring that your developers are doing professional programming – including unit testing.

Unit testing serves several important business objectives: it improves quality, makes legacy code testable, keeps developers current with modern methodologies, and yes, good unit testing even increases developer motivation.

Writing good unit tests that won't break on every single code change is not difficult and can be achieved easily by following a few simple practices:

Unit Tests Should be Atomic

A unit test should not depend on environment settings, on other tests, or on the order in which tests are executed. Running the same unit test 1,000 times should return the same result every time.

Using global state such as static variables, external data (e.g. the registry or a database) or environment settings may cause "leaks" between tests. Since the order of the test run should not affect the results, either properly initialize and clean up any global state between test runs or avoid global state entirely.

Know What You're Testing

There is nothing wrong with testing every aspect of a specific scenario/object. The problem is that developers tend to gather several such tests into a single method, creating very complex and fragile "unit tests".

One trick I have found useful is to include the scenario under test and the expected result in the test method name. When a developer has trouble naming a test, it usually means she is not sure what is being tested.

Testing only one thing has the additional benefit of a more readable test. When a simple test fails, it is easier to find the cause and fix it than with a long, complex test.

Mind the Test Scope

There is a direct correlation between the scope of a test and its fragility. Most tests have an object under test that calls or responds to other objects. The inner workings of these external dependencies are not important to the test, so they can be faked. Using isolation (aka mocking) means the test does not have to initialize and configure objects that participate in the test but are not the actual object being tested; when one of those objects changes, the test is unaffected.
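A minimal sketch of this isolation, using Python's built-in `unittest.mock`. The `charge_order` function and its `payment_gateway` dependency are hypothetical; the point is that the test never touches a real gateway (network, credentials, retries), only a fake that returns a canned response:

```python
import unittest
from unittest.mock import Mock

def charge_order(order_total, payment_gateway):
    """Hypothetical object under test: charges via an injected gateway."""
    receipt = payment_gateway.charge(order_total)
    return receipt["status"] == "ok"

class ChargeOrderTests(unittest.TestCase):
    def test_charge_order_with_successful_gateway_returns_true(self):
        # The real gateway's inner workings are irrelevant here; fake it.
        gateway = Mock()
        gateway.charge.return_value = {"status": "ok"}

        self.assertTrue(charge_order(49.99, gateway))
        gateway.charge.assert_called_once_with(49.99)
```

Because the gateway is injected and faked, changes to the real gateway implementation cannot break this test, only changes to the contract `charge_order` actually relies on.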

Avoid Over Specification

It is very tempting to create a well-defined, controlled and strict test that observes the exact process flow, setting up every single object and asserting on every aspect of the code under test. The problem is that doing so "locks" the scenario, preventing it from changing in ways that do not affect the end result.
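The contrast can be shown in a few lines. In this hypothetical `build_report` example, an over-specified test would pin down exactly which calls the formatter receives and in what order; the looser test below asserts only the observable result, so an internal refactor (batching, caching, reordering) that produces the same output does not break it:

```python
import unittest
from unittest.mock import Mock

def build_report(items, formatter):
    """Hypothetical function under test: formats each item, joins the lines."""
    return "\n".join(formatter.format(item) for item in items)

class BuildReportTests(unittest.TestCase):
    def test_build_report_joins_formatted_items(self):
        formatter = Mock()
        formatter.format.side_effect = lambda item: f"* {item}"

        # Over-specified (fragile): asserting the exact internal call sequence,
        # e.g. formatter.format.assert_has_calls([...]), locks the process flow.
        # Better: assert only the end result the caller depends on.
        self.assertEqual(build_report(["a", "b"], formatter), "* a\n* b")
```

The test still fails if the output is wrong, which is what matters, but it no longer fails when only the internal flow changes.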

Following these practices results in maintainable code, and maintainable code is changeable code. Beyond finishing tasks on time, you can measure how maintainable the code is, and the best way to do that is by measuring unit tests.

By measuring how maintainable your code is, and crediting each developer's unit-testing contribution toward this important quality KPI, you make it a breeze for the next developer to get up to speed.

Are you reviewing how your team contributes to quality?

We all know by now that unit testing increases software quality. With unit testing, technical debt is reduced and code is released with fewer bugs. Developers are more confident in their code, as tests help them know what works and what doesn't. Tests also help developers go back and fix legacy code.

If you don’t measure it, it won’t improve. We have seen many teams that don’t write unit tests because they are not reviewed on it and promotions don’t depend on it. Without performance reviews including unit tests, development quality suffers and your code will be harder to maintain. This makes it harder for developers to move from project to project.

Good performance reviews have quantitative and qualitative metrics. So, is that being reflected in your employee incentive programs and performance reviews? After all, if they don’t know you recognize and reward unit testing, they will cut corners after long days of demanding deadlines.

By including both numerical and qualitative metrics that are appropriate to each developer's role (some people focus more on bug busting, others are wading through legacy code, while others are developing fresh new features using simple frameworks), you can encourage good software development practices. This helps your team and ensures you can release high-quality software, faster.

How should unit testing be formally included in your reviews?

Some metrics to consider include:



  • Number of new/changed tests
  • Test coverage
  • New code vs. Legacy Code
  • Bugs reported by QA
  • Customer complaints
  • Defect rates
  • Openness to unit testing
  • Code complexity
  • Continual learning
  • Mentorship
  • Quality of work

Now, performance reviews have their critics. Agile development is not merely a set of practices but a mindset, and when it is reduced to a checklist rather than a workflow, its failure rate increases. It is also possible to unit test without adding business value, for example by testing irrelevant code, which raises code coverage without adding value.

And the best developers want to do something because it’s good practice, not because the pointy-haired boss wants to submit a report with a metric. Performance reviews that only focus on the metric, and not the goals, are common in management but not very effective.

Therefore, a good performance review should focus on the business goals and allow open conversation and communication.

Luckily, even large companies are now throwing out the annual performance review in favor of more frequent free-form feedback. Frequent feedback, with a good mix of qualitative and quantitative goals, is good practice.

Frequent reviews also help bring your organization closer to the agile ideal. It's no secret that the Agile Manifesto values "responding to change over following a plan" and that Scrum project management prioritizes fast, frequent feedback.

Free-form feedback also encourages focus on business goals rather than vanity metrics.

But no matter how frequent your reviews are – weekly, monthly, quarterly, or annually – if they are connected to bonuses and promotions, unit testing should be part of the mix.

Here is an example of what one performance review might look like:

Business Goal: Write lasting, robust code

Susan Bar consistently champions unit testing within the team. Her code is well covered, particularly in the complex areas that our unit testing dashboard flags. She caught more than 10% more critical issues that would otherwise have reached QA and delayed the release. She also tests existing and legacy code, which helps other developers write code that lasts. This helped lower the time needed to add four more features.

Recognizing unit testing as an important KPI in performance reviews signals to your employees that you value code quality and modern software methodologies. The result of including unit testing in performance reviews is better quality, more engaged employees, and better code with fewer defects, letting you accomplish the agile ideal: better software, faster.

Unit testing can't simply be a line on the performance review. It is something you need to model throughout your development organization: from providing the best tooling (tools like 'Typemock Suggest' suggest unit tests) to a workflow like Scrum and development practices such as software craftsmanship and Extreme Programming that make unit testing a natural part of the daily grind, well integrated with the entire engineering organization, including DevOps and QA.

This way, your performance review will be natural and your team won’t be thinking about their review but rather becoming better programmers.

About the Author

Eli Lopian has managed teams both at large public companies and as founder and CEO of Typemock, the Unit Testing Company, where he has received and conducted many performance reviews.
