Ways to Make Code Reviews More Effective


Key takeaways

  • There are many, many things a reviewer can look for when performing a code review. A team should identify what's important to their project, and consistently apply those checks during a review.
  • Human reviewers are very valuable, so automate everything you can, e.g. formatting, style, checking for common bugs, identifying common security issues, and running automated tests.
  • When it comes to performance, understanding the system's performance requirements is key to knowing which potential issues to look for during a code review.
  • Some simple checks by a human code reviewer can dramatically improve an application's security.
  • A code review should be a collaborative discussion, not a battle. Deciding upfront what is important and what's not can reduce conflict and manage expectations.

 

It is widely known that performing code reviews in a team helps to increase code quality, share knowledge and responsibility, and ultimately helps build better software and a better team. If you take only a few seconds to search for information about code reviews, you'll see a lot of articles about the value code reviews bring. There are many ways to do them, too: pull requests on GitHub, or code review in a tool like Upsource from JetBrains. However, even when the process is clear and the tooling is right, the big question remains: what is it we should be looking for?

The reason there's no definitive article on what to look for is probably that there are a lot of different things to consider. And, like any other set of requirements, individual teams will have different priorities for each aspect.

This article aims to outline some of the things a reviewer could be looking out for. Deciding on the priority of each aspect depends on each project’s specifics.

Before we continue, let's think about the problems people tend to comment on during a code review. It's easy to argue about formatting, style and naming, and to point out missing tests. These are valid things to check if you want consistent, maintainable code. However, discussing them during a code review isn't the best use of anyone's time, as many of these checks can (and should) be automated.

What can humans spot in a code review that we can’t delegate to a tool?

It turns out there's a surprisingly large number of things. In the rest of this article we'll cover a wide range of things a reviewer can check, and drill a little deeper into two specific areas: performance and security.

Design

  • How does the new code fit with the overall architecture?
  • Does the code follow SOLID principles, Domain-Driven Design and/or other design paradigms the team favors?
  • What design patterns are used in the new code? Are these appropriate?
  • If the codebase has a mix of standards or design styles, does this new code follow the current practices? Is the code migrating in the correct direction, or does it follow the example of older code that is due to be phased out?
  • Is the code in the right place? For example, if the code is related to Orders, is it in the Order Service?
  • Could the new code have reused something in the existing code? Does the new code provide something we can reuse in the existing code? Does the new code introduce duplication? If so, should it be refactored to a more reusable pattern, or is this acceptable at this stage?
  • Is the code over-engineered? Does it build for reusability that isn’t required now? How does the team balance considerations of reusability with YAGNI?

Readability & Maintainability

  • Do the names (of fields, variables, parameters, methods and classes) actually reflect the things they represent?
  • Can I understand what the code does by reading it?
  • Can I understand what the tests do?
  • Do the tests cover a good subset of cases? Do they cover happy paths and exceptional cases? Are there cases that haven’t been considered?
  • Are the exception error messages understandable?
  • Are confusing sections of code documented, commented, or covered by understandable tests (according to team preference)?

Functionality

  • Does the code actually do what it was supposed to do? If there are automated tests to ensure correctness of the code, do the tests really test that the code meets the agreed requirements?
  • Does the code look like it contains subtle bugs, such as using the wrong variable for a check, or accidentally using an and instead of an or?
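
As a small, invented illustration of how subtle the second kind of bug can be (the shipping rule and class here are hypothetical):

    // Hypothetical shipping rule, for illustration only: an order may ship
    // only when it is paid AND its address has been verified.
    class OrderRules {

        // Subtle bug: || instead of && - a quick read, and the happy-path
        // tests, can easily miss that an unpaid order with a verified
        // address would slip through.
        static boolean canShipBuggy(boolean paid, boolean addressVerified) {
            return paid || addressVerified;
        }

        // What the requirement actually says.
        static boolean canShip(boolean paid, boolean addressVerified) {
            return paid && addressVerified;
        }
    }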

Have you thought about…?

  • Are there regulatory requirements that need to be met?
  • Does the author need to create public documentation, or change existing help files?
  • Have user-facing messages been checked for correctness?
  • Are there obvious errors that will stop this working in production? Is the code going to accidentally point at the test database, or is there a hardcoded stub that should be swapped out for a real service?
  • What are your performance requirements, and have you considered the security implications? These are both big, key topics, so let's look at each of them in more detail.

Performance

Let’s drill more deeply into the topic of performance, as this is an area that can really benefit from code reviews.

As with all architecture/design areas, the non-functional requirements for the performance of a system should have been set upfront. Whether you’re working on a low-latency trading system which has to respond in nanoseconds or you’re writing a phone app to manage a “To Do” list, you should have some idea of what’s considered “too slow.”

Before deciding whether we need to review code with performance in mind, we should ask ourselves a few questions about our requirements. Although some applications really do need to consider how every millisecond is spent, for most there's limited value in spending hours agonizing over optimizations that will save only a few CPU cycles. But there are things a reviewer can check to ensure that the code doesn't suffer from common, avoidable performance pitfalls.

Does this piece of functionality have hard performance requirements?

Does the piece of code under review fall under an area that had a previously published SLA? Or do the requirements state the performance characteristics required?

If code is a response to an issue along the lines of “the login screen is too slow to load,” the original developer needs to have found out what would be a suitable loading time – otherwise how can the reviewer or the author be confident that the speed has been sufficiently improved?

If there are hard requirements, is there a test that proves it meets those? Any performance-critical system should have automated performance tests which ensure that published SLAs (e.g. “all order requests serviced in less than 10 ms”) are met. Without these, you’re relying on your users to inform you of failures to meet your SLAs. Not only is this a poor user experience, but it could lead to avoidable fines and fees.
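
As a rough sketch of what such a check might look like (a minimal example assuming JUnit 5 and a hypothetical OrderService; a real performance test would use a proper benchmarking harness, production-like data and many samples rather than a single timed call):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class OrderServiceLatencyTest {

        // Trivial stand-in for the real service, just so the sketch compiles.
        static class OrderService {
            void placeOrder(String id) { /* real work would happen here */ }
        }

        private final OrderService orderService = new OrderService();

        @Test
        void ordersAreServicedWithinTenMilliseconds() {
            // Warm up so that JIT compilation doesn't skew the measurement.
            for (int i = 0; i < 10_000; i++) {
                orderService.placeOrder("warm-up");
            }

            long start = System.nanoTime();
            orderService.placeOrder("measured-order");
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

            // The published SLA from the example: serviced in less than 10 ms.
            assertTrue(elapsedMillis < 10,
                    "Order took " + elapsedMillis + " ms; the SLA is 10 ms");
        }
    }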

Has the fix/new functionality negatively impacted the results of any existing performance tests?

If you are regularly running performance tests (or if you have a suite that can be run on demand), check that new code in performance-critical areas hasn’t introduced a decrease in system performance. This might be an automated process, but since running performance tests as part of a CI environment is much less common than running unit tests, it is worth specifically mentioning it as a possible step during review.

Calls outside of the service/application are expensive

Any use of systems outside your own application that require a network hop are generally going to cost you much more than a poorly optimized method. Consider:

  • Calls to the database: The worst offenders might be hiding behind abstractions like ORMs. But in a code review you should be able to catch common causes of performance problems, like individual calls to a database inside a loop – for example, loading a list of IDs, then querying the database for each individual item that corresponds to an ID (a sketch of this pattern follows this list).
  • Unnecessary network calls: Like databases, remote services can sometimes be overused, with multiple remote calls being made where a single one might suffice, or where batching or caching might prevent expensive network calls. Again, like databases, be aware that sometimes an abstraction can hide that a method is calling a remote API.
  • Mobile / wearable apps calling the back end too much: This is basically the same as “unnecessary network calls,” but with the added problem that on mobile devices, not only will unnecessary calls to the back-end make your performance worse, but they will also drain the battery faster and possibly even cost the user money.
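
To make the first of these points concrete, here is a minimal JDBC sketch of the "query inside a loop" pattern and one way to collapse it into a single round trip (the orders table and its columns are invented for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    class OrderLookup {

        // Anti-pattern: one round trip to the database per ID.
        List<String> loadNamesOneByOne(Connection connection, List<Long> ids) throws SQLException {
            List<String> names = new ArrayList<>();
            for (Long id : ids) {
                try (PreparedStatement statement =
                             connection.prepareStatement("SELECT name FROM orders WHERE id = ?")) {
                    statement.setLong(1, id);
                    try (ResultSet resultSet = statement.executeQuery()) {
                        if (resultSet.next()) {
                            names.add(resultSet.getString("name"));
                        }
                    }
                }
            }
            return names;
        }

        // Better: a single query that fetches all the items in one round trip.
        List<String> loadNamesInOneQuery(Connection connection, List<Long> ids) throws SQLException {
            String placeholders = ids.stream().map(id -> "?").collect(Collectors.joining(", "));
            String sql = "SELECT name FROM orders WHERE id IN (" + placeholders + ")";
            List<String> names = new ArrayList<>();
            try (PreparedStatement statement = connection.prepareStatement(sql)) {
                for (int i = 0; i < ids.size(); i++) {
                    statement.setLong(i + 1, ids.get(i));
                }
                try (ResultSet resultSet = statement.executeQuery()) {
                    while (resultSet.next()) {
                        names.add(resultSet.getString("name"));
                    }
                }
            }
            return names;
        }
    }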

Using resources efficiently and effectively

Does the code use locks to access shared resources? Could this result in poor performance or deadlocks?

Locks are a performance killer and very hard to reason about in a multithreaded environment. Consider patterns like having only a single thread that writes/changes values while all other threads are free to read, or using lock-free algorithms.

  • Could there be a memory leak? In Java, some common causes can be: mutable static fields, using ThreadLocal, and using a ClassLoader.
  • Could memory use grow infinitely? This is not the same as a memory leak – a memory leak is where unused objects cannot be collected by the garbage collector. But any language, even a non-garbage-collected one, can create data structures that grow indefinitely. If, as a reviewer, you see new values constantly being added to a list or map, question if and when the list or map is discarded or trimmed.
  • Does the code close connections/streams? It’s easy to forget to close connections or file/network streams. When you’re reviewing someone else’s code, if a file, network or database connection is in use, make sure it is correctly closed (see the try-with-resources sketch after this list).
  • Are resource pools correctly configured? The optimal configuration for an environment is going to depend on a number of factors, so it’s unlikely that as a reviewer you’ll know immediately if, for example, a database connection pool is correctly sized. But there are a few things you can tell at a glance, for example whether the pool is too small (e.g. sized at one) or too big (millions of threads). If in doubt, the defaults are usually a good start. Code that deviates from default settings should prove the value with some sort of test or calculation.
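
For the point about closing connections and streams, Java's try-with-resources makes the correct pattern easy to spot in a review – a minimal sketch, with an invented file name:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    class ResourceHandling {

        // The reader is closed automatically when the try block exits,
        // even if an exception is thrown - nothing for the reviewer to chase.
        String readFirstLine() throws IOException {
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("orders.csv"))) {
                return reader.readLine();
            }
        }
    }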

Warning signs a reviewer can easily spot

Some types of code immediately suggest a potential performance problem. This will depend upon the language and libraries used.

  • Reflection: Reflection in Java is slower than doing things without reflection. If you’re reviewing code that contains reflection, question whether this is absolutely required.
  • Timeouts: When you’re reviewing code, you might not know what the correct timeout for an operation is, but you should be thinking “what’s the impact on the rest of the system while this timeout is ticking down?”. As the reviewer you should consider the worst case – is the application blocking while a 5-minute timeout is ticking down? What’s the worst that would happen if this was set to one second? If the author of the code can’t justify the length of a timeout, and you, the reviewer, don’t know the pros and cons of a selected value, then it’s a good time to get someone involved who does understand the implications.
  • Parallelism: Does the code use multiple threads to perform a simple operation? Does this add more time and complexity rather than improving performance? With modern Java, this might be more subtle than creating new threads explicitly: does the code use Java 8’s shiny new parallel streams but not benefit from the parallelism? For example, using a parallel stream on a small number of elements, or on a stream which does very simple operations, might be slower than performing the operations on a sequential stream.
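
The parallel streams point is easy to illustrate – a minimal sketch of the kind of code that should make a reviewer pause and ask whether the parallelism pays for itself:

    import java.util.List;

    class ParallelStreamExample {

        // Questionable: a parallel stream over a handful of elements doing trivial work.
        // The overhead of splitting the work and coordinating threads can easily
        // outweigh any benefit, so a sequential stream (or a plain loop) may be faster.
        int sumOfSquares(List<Integer> smallList) {
            return smallList.parallelStream()
                    .mapToInt(value -> value * value)
                    .sum();
        }
    }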

Correctness

These things are not necessarily going to impact the performance of your system, but since they’re largely related to running in a multithreaded environment, they are related to the topic.

  • Is the code using the correct data structure for a multithreaded environment?
  • Is the code susceptible to race conditions? It’s surprisingly easy to write code that causes subtle race conditions when used in a multithreaded environment. As a reviewer, look out for get and set combinations that are not atomic (a sketch follows this list).
  • Is the code using locks correctly? Related to race conditions, as a reviewer you should be checking that the code being reviewed is not allowing multiple threads to modify values in a way that could clash. The code might need synchronization, locks, or atomic variables to control changes to blocks of code.
  • Is the performance test for the code valuable? It’s easy to write poor micro-benchmarks, for example. Or if the test uses data that’s not at all like production data, it might be giving misleading results.
  • Caching: While caching might be a way to prevent making too many external requests, it comes with its own challenges. If the code under review uses caching, you should look for some common problems, for example, incorrect invalidation of cache items.
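
To illustrate the non-atomic "get and set" point above, here is a minimal sketch of a counter that loses updates under concurrency, and one common fix:

    import java.util.concurrent.atomic.AtomicInteger;

    class Counters {

        private int unsafeCount = 0;
        private final AtomicInteger safeCount = new AtomicInteger();

        // Race condition: the read-modify-write is three separate steps, so two
        // threads can both read the same value and one increment is lost.
        void incrementUnsafe() {
            unsafeCount = unsafeCount + 1;
        }

        // One common fix: an atomic variable makes the read-modify-write a single
        // atomic operation (synchronization or locks are alternatives).
        void incrementSafe() {
            safeCount.incrementAndGet();
        }
    }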

Code-level optimizations

For most organizations that aren’t building a low-latency application, these optimizations probably fall under the category of premature optimization, so understand first whether this level of performance is needed.

  • Does the code use synchronization/locks when they’re not required? If the code is always run on a single thread, locks are unnecessary overhead.
  • Is the code using locks or synchronization when it could use atomic variables instead?
  • Is the code using a thread-safe data structure where it’s not required? For example, can Vector be replaced with ArrayList?
  • Is the code using a data structure with poor performance for the common operations? For example, using a linked list but needing to regularly search for a single item in it.
  • Could the code benefit from lazy loading?
  • Can if statements or other logic be short-circuited by placing the fastest evaluation first?
  • Is there a lot of string formatting? Could this be more efficient?
  • Are the logging statements using string formatting? Are they either protected by an if to check logging level, or using a supplier which is evaluated lazily?
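
The last point is worth a concrete example, since it is such a common review comment – a minimal sketch using java.util.logging (the same idea applies to other logging frameworks):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    class LazyLogging {

        private static final Logger LOGGER = Logger.getLogger(LazyLogging.class.getName());

        void process(Object order) {
            // Eager: the message string is built even when FINE logging is off.
            LOGGER.fine("Processing order: " + order);

            // Guarded: the string is only built if the level is enabled.
            if (LOGGER.isLoggable(Level.FINE)) {
                LOGGER.fine("Processing order: " + order);
            }

            // Lazy: the supplier is only evaluated if the level is enabled.
            LOGGER.fine(() -> "Processing order: " + order);
        }
    }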

Simple code is performant code

There are some easy things for reviewers of Java code to look for, that will give the JVM a good chance of optimizing your code so that you don’t have to:

  • Small methods and classes
  • Simple logic – no deeply nested ifs or loops

The more readable the code is to a human, the more chance the JIT compiler has of understanding your code enough to optimize it. This should be easy to spot during code review – if the code looks understandable and clean, it also has a good chance of performing well.

Security

How much work you do building a secure, robust system is like anything else on your project – it depends upon the project itself, where it’s running, who’s using it, what data it has access to, etc. Now we’ll highlight some areas you might like to look at when reviewing code.

Automate as much as possible

A surprising number of security checks can be automated, and therefore should not need a human. Security tests don’t necessarily have to be full-blown penetration testing with the whole system up and running; some problems can be found at code-level.

Common problems like SQL Injection or Cross-site Scripting can be found via tools running in your Continuous Integration environment. You can also automate checking for known vulnerabilities in your dependencies via the OWASP Dependency Check tool.

Sometimes "It Depends"

While there are checks that you can feel comfortable answering with a “yes” or “no”, sometimes you want a tool to point out potential problems and then have a human make the decision as to whether this needs to be addressed or not. This is an area where Upsource can really shine. Upsource displays code inspections that a reviewer can use to decide if the code needs to be changed or is acceptable under the current situation.

Understand your Dependencies

One of the areas where security vulnerabilities can creep into your system or code base is via third party libraries. When reviewing code, at the very least you want to check if any new dependencies (e.g. third party libraries) have been introduced. If you aren’t already automating the check for vulnerabilities, you should check for known issues in newly-introduced libraries.

You should also try to minimize the number of versions of each library – not always possible if other dependencies are pulling in additional transitive dependencies. But one of the simplest ways to minimize your exposure to security problems in other people’s code (via libraries or services) is to:

  • Use as few sources as possible, and understand how trustworthy they are
  • Use the highest quality library you can
  • Track what you use and where, so if new vulnerabilities do become apparent, you can check your exposure.

Check if new paths & services need to be authenticated

Whether you’re working on a web application, or providing web services or some other API which requires authentication, when you add a new URI or service, you should ensure that this cannot be accessed without authentication (assuming authentication is a requirement of your system). You may simply need to check that the developer of the code wrote an appropriate test to show that authentication has been applied.
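
What such a test looks like depends entirely on your stack; as one hedged illustration, assuming a Spring Boot application secured with Spring Security (neither of which the article prescribes, and the endpoint path is invented), a test might simply assert that an unauthenticated request to the new endpoint is rejected:

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.web.servlet.MockMvc;

    @SpringBootTest
    @AutoConfigureMockMvc
    class NewEndpointAuthenticationTest {

        @Autowired
        private MockMvc mockMvc;

        @Test
        void newEndpointRejectsUnauthenticatedRequests() throws Exception {
            // Depending on the security configuration, the rejection might be a 401
            // (as asserted here, typical for HTTP Basic or token authentication)
            // or a redirect to a login page.
            mockMvc.perform(get("/api/orders"))
                    .andExpect(status().isUnauthorized());
        }
    }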

You should also consider that authentication isn’t just for human users with a username and password. Identity might need to be defined for other systems or automated processes accessing your application or services. This may impact your concept of “user” in your system.

Does your data need to be encrypted?

When you’re storing something on disk or sending it over the wire, you need to know whether that data should be encrypted. Obviously passwords should never be in plain text, but there are plenty of other cases where data needs to be encrypted. If the code under review is sending data over the wire, saving it somewhere, or otherwise letting it leave your system, and you don’t know whether that data should be encrypted, try to locate someone in your organization who can answer that question.

Are secrets being managed correctly?

Secrets are things like passwords (user passwords, or passwords to databases or other systems), encryption keys, tokens and so forth. These should never be stored in code, or in configuration files that get checked into the source control system. There are other ways of managing these secrets, for example via a secrets server. When reviewing code, make sure these things don’t accidentally sneak into your VCS.
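
As a small illustration of the difference a reviewer is looking for (the names here are invented, and a dedicated secrets server is often a better option than environment variables, but either keeps the secret out of the repository):

    class DatabaseConfig {

        // Don't: the password is now in source control forever, visible to
        // anyone with access to the repository or its history.
        private static final String HARDCODED_PASSWORD = "s3cr3t"; // flagged in review

        // Better: resolve the secret at runtime from the environment
        // (or, better still, from a dedicated secrets manager).
        static String databasePassword() {
            String password = System.getenv("ORDERS_DB_PASSWORD");
            if (password == null) {
                throw new IllegalStateException("ORDERS_DB_PASSWORD is not set");
            }
            return password;
        }
    }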

Should the code be logging/auditing behavior? Is it doing so correctly?

Logging and auditing requirements vary by project, with some systems requiring compliance with stricter rules for logging actions and events than others. If you do have guidelines on what needs logging, when and how, then as a code reviewer you should be checking the submitted code meets these requirements. If you do not have a firm set of rules, consider:

  • Is the code making any data changes (e.g. add/update/remove)? Should it make a note of the change that was made, by whom, and when?
  • Is this code on some performance-critical path? Should it be making a note of start-time and end-time in some sort of performance-monitoring system?
  • Is the logging level of any logged messages appropriate? A good rule of thumb is that “ERROR” is likely to cause an alert to go off somewhere – if you don’t need this message to wake someone up at 3am, consider downgrading it to “INFO” or “DEBUG”. Messages inside loops, or in other places likely to be output many times in a row, probably don’t need to spam your production log files, and are therefore likely to be “DEBUG” level.

Remember to involve experts

Security is a very big topic, big enough that your company may hire technical security experts. We can enlist the help of security experts if we have them, for example inviting them to the code review, or inviting them to pair with us while we review. Or if this isn’t an option, we can learn enough about the environment of our system to understand what sort of security requirements we have (internal-facing enterprise apps will have a different profile to customer-facing web applications, for example), so we can get a better understanding of what we should be looking for in a code review.

Conclusion

Code reviews are an excellent way to not only ensure quality and consistency in your code, but also to share knowledge within or between teams. There are many different aspects of the code and design to consider, even after you’ve automated the fundamental checks. Code review tools like Upsource can help identify some potential problems via inspections that highlight suspicious code and analysis that shows where issues have been fixed, or introduced, in a particular commit. Tools can also ease the process by providing a place to discuss the design and implementation within the code itself, and inviting discussion from reviewers, authors and other interested parties until agreement is reached.

In the end, it takes a team to decide which factors are important to them when it comes to code quality; it takes human expertise to decide which rules to apply to each code review; and everyone involved in the review should be developing and using interpersonal skills such as giving feedback and negotiating compromises in order to ultimately agree on when code is “good enough” to pass the review.

About the Author

Trisha Gee has developed Java applications for a range of industries, including finance, manufacturing, software and non-profit, for companies of all sizes. She has expertise in Java high-performance systems, is passionate about enabling developer productivity, and dabbles in Open Source development. Trisha is a leader of the Sevilla Java User Group and a Java Champion; she believes that community and sharing ideas help us learn from mistakes and build on successes. She’s a Developer Advocate for JetBrains, which means she gets to share all the interesting things she’s constantly discovering.
