Effective Test Automation Approaches for Modern CI/CD Pipelines

Key Takeaways

  • Shifting left is popular in domains such as security, but it is also essential for achieving better test automation for CI/CD pipelines
  • By shifting left, you can design for testability upfront and get testing experts involved earlier in your unit tests, leading to better results
  • Not all your tests should be automated for your CI/CD pipelines; instead focus on the tests that return the best value while minimizing your CI/CD runtimes
  • Other tests can then be run on a scheduled basis to avoid cluttering and slowing the main pipeline
  • Become familiar with the principles of good test design for writing more efficient and effective tests

The rise of CI/CD has had a massive impact on the software testing world. Developers expect pipelines to provide quick feedback on whether their software update has been successful, which has forced many testing teams to revisit their existing test automation approaches and find ways to speed up delivery without compromising on quality. These two goals often contradict each other in the testing world, as time is the biggest enemy in a tester’s quest to be as thorough as possible in achieving the desired testing coverage.

So, how do teams deal with this significant change and deliver high-quality automated tests while meeting the expectation that the CI pipeline returns feedback quickly? There are many ways of looking at this, but what is important to understand is that the solutions are less technical and more cultural - it is the approach to testing that needs to shift, rather than the testing frameworks needing big technical enhancements.

Shifting Left

Perhaps the most obvious thing to do is to shift left. The idea of "shifting left" (moving testing earlier in the development cycle - primarily to the design and unit testing level) is already common in the industry, pushed by many organizations and becoming increasingly commonplace. A strong focus on unit tests is a good way of testing code quickly and providing fast feedback. After all, unit tests execute in a fraction of the time (they can run as part of the build and don’t require any further integration with the rest of the system) and can provide good testing coverage when done right.

I’ve seen many testers shy away from the notion of unit testing because it means writing tests for a very small component of the code, and there is a danger that key things will be missed. This is often a fear born of a lack of visibility into the process, or a lack of understanding of unit tests, rather than a failure of unit tests themselves. A strong base of unit tests works because they execute quickly as the code builds in the CI pipeline. It makes sense to have as many as possible and to cover as many types of scenario as possible.

The biggest problem is that many teams don’t know how to get it right. Firstly, unit testing shouldn’t be treated as a checkbox activity, but rather approached with the analysis and commitment to test design that testers would ordinarily apply. This means that rather than leaving unit testing solely in the hands of the developers, you should get testers involved in the process. Even if a tester is not strong in coding, they can still help identify which parameters to test and where to assert, so the results set up the integrated functionality to be tested properly later.
 
Excluding your testing experts from the unit testing approach makes it possible for unit tests to miss key validation areas. This is often why you hear many testers give unit tests a bad rap. It’s not that unit testing is ineffectual, but simply that the tests often didn’t cover the right scenarios.

A second benefit of involving testers early is adding visibility to the unit testing effort. Teams can waste a great deal of time (and therefore money) duplicating effort when testers end up re-testing something that was already covered by automated tests. That’s not to say independent validation shouldn’t occur, but it shouldn’t be excessive if scenarios have already been covered. Instead, the tester can focus on providing better exploratory testing, and on directing their own automation efforts at integration testing the edge cases they might otherwise never have covered.

It’s all about design and preparation

Doing this effectively requires a fair amount of deliberate effort and design. It’s not just about focusing more on the unit tests and perhaps bringing in a person with strong test analysis skills to ensure test scenarios are suitably developed. It also requires user stories and requirements to be specific enough to allow for appropriate testing. User stories often end up high-level, focusing on detail at the user level rather than the technical level. How individual functions should behave and interact with their dependencies needs to be clear for good unit testing to take place.

Much of the criticism unit testing receives from the testing community concerns the poor integration coverage it offers. Just because a feature works in isolation doesn’t mean it will work in conjunction with its dependencies, which is often why testers find so many defects early in their testing effort. This doesn’t need to be the case: more detailed specifications lead to more accurate mocking, allowing unit tests to behave realistically and provide better results. There will always be "mocked" functionality that is not accurately known or designed, but with enough early thought the amount of rework is greatly reduced.

Design is not just about unit tests though. One of the biggest barriers to test automation executing directly in the pipeline is that the team dealing with the larger integrated system only starts much of its testing and automation effort once the code has been deployed into a bigger environment. This wastes critical time in the development process, as certain issues are only discovered later. With enough detail up front, testers can at least start writing the majority of their automated tests while the developers are still coding on their side.

This doesn’t mean that manual verification, exploratory testing, and actually using the software shouldn’t take place. Those are critical parts of any testing process and are important steps to ensuring software behaves as desired. These approaches are also effective at finding faults with the proposed design. However, automating the integration tests allows the process to be streamlined. These tests can then be included in the initial pipelines thereby improving the overall quality of the delivered product by providing quicker feedback to the development team of failures without the testing team even needing to get involved.

So what actually needs to be tested then?

I’ve spoken a lot about specific approaches to design and shifting left to achieve the best testing results. But you still can’t automate everything you test, because it's simply not feasible and adds too much to the execution time of the CI/CD pipelines. Knowing which scenarios should be unit tested and which should be integration tested for automation purposes is crucial, while avoiding unnecessary duplication of the testing effort.

Before I dive into these different tests, it’s worth noting that while the aim is to remove duplication, there is likely to always be a certain level of duplication that will be required across tests to achieve the right level of coverage. You want to try and reduce it as much as possible, but erring on the side of duplication is safer if you can’t figure out a better way to achieve the test coverage you need.  

Areas to be unit tested

When it comes to building your pipeline, your unit tests and scans should typically fall into the CI portion of your pipeline, as they can all be evaluated as the code is being built.

Entry and exit points: All code receives input and then provides an output. Essentially, what you are looking to unit test is every input a piece of code can receive, ensuring it sends out the correct output for each. By catching everything that flows through each piece of code in the system, you greatly reduce the number of failures likely to occur when the pieces are integrated as a whole.
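
As a minimal sketch of what an entry-and-exit-point unit test can look like, the pytest example below exercises a hypothetical calculate_discount function (the function and its 10% rule are assumptions purely for illustration): the inputs are the only things arranged, and the returned output is the only thing asserted.

```python
# A minimal pytest sketch; calculate_discount is a hypothetical function
# used purely for illustration.
import pytest


def calculate_discount(order_total: float, is_member: bool) -> float:
    """Example rule: members get 10% off orders of 100 or more."""
    if order_total >= 100 and is_member:
        return round(order_total * 0.9, 2)
    return order_total


def test_member_discount_applied_to_large_order():
    # Input flowing in: a qualifying total and a membership flag.
    # Output flowing out: the discounted amount.
    assert calculate_discount(200.0, is_member=True) == 180.0


def test_no_discount_for_non_member():
    assert calculate_discount(200.0, is_member=False) == 200.0
```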

Isolated functionality: While most code will operate on an integrated level, there are many functions that will handle all computation internally. These can be unit-tested exclusively and teams should aim to hit 100% unit test coverage on these pieces of code. I have mostly come across isolated functions when working in microservice architectures where authentication or calculator functions have no dependencies. This means that they can be unit tested with no need for additional integration.

Boundary value validations: Code responds to valid or invalid arguments in the same way regardless of whether the input comes from a UI, an integrated API, or directly through the code. There is no need for testers to go through exhaustive scenarios at those levels when much of this can be covered in unit tests.
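
A sketch of how this can look in practice, using pytest parametrization to walk a hypothetical validate_age rule across its boundaries; the function and its 18-to-65 rule are assumptions for illustration.

```python
# Boundary value checks expressed as a parametrized pytest test.
import pytest


def validate_age(age: int) -> bool:
    """Example rule: ages 18 through 65 inclusive are accepted."""
    return 18 <= age <= 65


@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),  # just below the lower boundary
        (18, True),   # lower boundary
        (65, True),   # upper boundary
        (66, False),  # just above the upper boundary
    ],
)
def test_validate_age_boundaries(age, expected):
    assert validate_age(age) == expected
```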

Clear data permutations: When the data inputs and outputs are clear, it makes that code or component an ideal candidate for a unit test. If you’re dealing with complex data permutations, then it is best to tackle these at an integration level. The reason for this is that complex data is often difficult to mock, slow to process, and will slow down your coding pipeline.

Security and performance: While the majority of load, performance, and security testing happens at an integration level, these qualities can also be tested at a unit level. Each piece of code should be able to handle invalid authentication, redirection, or SQL/code injection attempts, and execute efficiently. Unit tests can be created to validate this. After all, a system’s security and performance are only as strong as its weakest part, so ensuring there are no weak parts is a good place to start.
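
As an illustrative sketch, the unit test below checks that a hypothetical input validator rejects injection-style strings outright; the sanitize_username function and its allow-list rule are assumptions, but the pattern applies to any code that accepts external input.

```python
# A unit-level security check against injection-style input.
import re

import pytest


def sanitize_username(value: str) -> str:
    """Example rule: allow only letters, digits, and underscores."""
    if not re.fullmatch(r"[A-Za-z0-9_]{1,30}", value):
        raise ValueError("invalid username")
    return value


@pytest.mark.parametrize("malicious", [
    "admin'; DROP TABLE users;--",   # SQL injection attempt
    "<script>alert(1)</script>",     # script injection attempt
])
def test_sanitize_username_rejects_injection(malicious):
    with pytest.raises(ValueError):
        sanitize_username(malicious)
```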

Areas for integration automation

These are tests that typically run after your code has been deployed into a bigger environment - though it doesn’t have to be a permanent environment, and something utilizing containers works equally well. I’ve seen many teams still try to test everything in this phase, and this can lead to a very long portion of your pipeline execution, which is not great if you’re looking to deploy into production regularly each day.

So, the important thing is to only test those areas that your unit tests cannot cover satisfactorily, while also focusing on functionality and performance in your overall test design. Some design principles that I give later in this article will help with this.

Positive integration scenarios: We still need to automate integration points to ensure they work correctly. However, the trick is not to focus too much on exhaustive error validation, as error cases are often triggered by specific inputs and outputs that can be unit tested. Rather, focus on ensuring successful integration takes place.

Test backend over frontend: Where possible, focus your automation effort on backend components rather than frontend components. While the user might interact with the frontend more often, it is typically not where most of the functional complexity lies, and backend testing is a lot faster and therefore better suited to test automation execution.
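
A hedged sketch of a positive backend integration test is shown below; the requests-based call, the /orders endpoint, and the API_BASE_URL variable are assumptions standing in for whatever service your pipeline deploys.

```python
# A positive integration check against a freshly deployed backend service.
# The endpoint and payload shape are hypothetical.
import os

import requests

BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8080")


def test_create_order_returns_created_order():
    payload = {"sku": "ABC-123", "quantity": 2}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    # Focus on the successful integration path; exhaustive error cases
    # are assumed to be covered by unit tests on the individual components.
    assert response.status_code == 201
    body = response.json()
    assert body["sku"] == "ABC-123"
    assert body["quantity"] == 2
```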

Security: A common mistake is for teams to rely on security scans for the majority of their security testing and not automate the other critical penetration tests performed on the software. While some penetration tests can’t be executed effectively in a pipeline, many can, and these should be automated and run regularly given their importance, especially for any functionality covering access, payment, or data privacy. These are areas that can’t be compromised and must be covered.

Are there automated tests that shouldn’t be included in the CI/CD pipelines?

When it comes to automation, it’s not just about understanding what to automate, but also what not to automate - and recognizing that even tests that are automated shouldn’t always land in your CI/CD pipelines. While the goal is always to shift left as much as possible and avoid these areas, for some architectures that isn’t always possible, and some additional level of validation may be required to achieve the needed test coverage.

This doesn’t mean these tests shouldn’t be automated or placed in pipelines, just that they should be separated from your CI/CD processes and executed on a daily basis as part of a scheduled run rather than as part of your code delivery.

End-to-end tests with high data requirements: Anything that requires complex data scenarios to test should be reserved for execution in a proper test environment outside of a pipeline. While these tests can be automated, they are often too complex or specific for regular execution in a pipeline, plus will take a long time to execute and validate, making them not ideal for pipelines.

Visual regression: Outside of functional testing, it is important to regularly perform visual regression testing against any site UI to ensure it looks consistent across a variety of devices, browsers, and resolutions. This is an important aspect of testing that often gets overlooked. However, as it doesn’t deal with actual functional behavior, it is usually best executed outside of your core CI/CD pipelines, though it remains a requirement before major releases or UI updates.

Mutation testing: Mutation testing is a fantastic way to check the coverage of your unit testing efforts: by making small changes to decisions in your code, it reveals what your unit tests miss. However, the process is quite lengthy and is best done as part of a review process rather than forming part of your pipelines.

Load and stress testing: While it is important to test the performance of different parts of the code, you don’t want to put a system under any form of load or stress in a pipeline. To do this testing well, you need a dedicated environment and specific conditions that will stress the limits of your application under test - not the sort of thing you want to do as part of your pipelines.

Designing effective tests

So, it's clear that we need a shift-left approach that relies heavily on unit tests with high coverage, plus a good range of tests covering the other areas to get the quality needed. That still seems like a lot, and there is always the risk that the pipelines take considerable time to execute, especially at the CD level where the more time-intensive integration tests run after the code is deployed.

How you design your tests also helps make this effective. Automating unnecessary tests is a big waste of time, but so are inefficiently written tests. The biggest problem here is that testers often don’t fully understand the efficiency of their test automation, focusing on getting tests to execute rather than finding the most processor- and memory-efficient way of doing it.

The secret to making all tests work is simplicity. Automated tests shouldn’t be complicated: perform an action, and get a response. It is important to stick to that when designing your tests. The following attributes will help you design tests that are both simple and performant.

1. Naming your tests

You might not think naming tests is important, but it matters when it comes to the maintainability of the tests. While test names have nothing to do with the test functionality or speed of execution, they do help others know what a test does. So when failures occur or something needs to be fixed, the maintenance process is a lot quicker - and that is important when wading through the many thousands of tests your pipeline is likely to have.

Tests are useful for more than just making sure that your code works; they also provide documentation. Just by looking at the suite of unit tests, you should be able to infer the behavior of your code. Additionally, when tests fail, you can see exactly which scenarios did not meet your expectations.

The name of your test should consist of three parts:

  • The name of the method being tested
  • The scenario under which it’s being tested
  • The behavior expected when the scenario is invoked

By using these naming conventions, you ensure that it's easy to identify what any test or code is supposed to do while also speeding up your ability to debug your code.
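
As an illustration, the hypothetical pytest names below follow that three-part convention (method, scenario, expected behavior); the functions they refer to are assumptions, and only the naming pattern matters here, so the bodies are left empty.

```python
# Hypothetical test names following the method_scenario_expectedBehavior pattern.

def test_calculate_discount_non_member_returns_full_price():
    ...


def test_calculate_discount_member_over_threshold_applies_ten_percent():
    ...


def test_validate_age_below_minimum_returns_false():
    ...
```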

2. Arranging your tests

Readability is one of the most important aspects of writing a test. While it may be possible to combine some steps and reduce the size of your test, the primary goal is to make the test as readable as possible. A common pattern to writing simple, functional tests is "Arrange, Act, Assert". As the name implies, it consists of three main actions:

  • Arrange your objects, by creating and setting them up in a way that readies your code for the intended test
  • Act on an object
  • Assert that something is as expected

By clearly separating each of these actions within the test, you highlight:

  • The dependencies required to call your code/test
  • How your code is being called, and
  • What you are trying to assert.

This makes tests easy to write, understand and maintain while also improving their overall performance as they perform simple operations each time.
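
A minimal sketch of the Arrange, Act, Assert pattern described above, using a hypothetical ShoppingCart class so the example stays self-contained.

```python
# Arrange-Act-Assert with each step clearly separated.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)


def test_total_two_items_returns_sum_of_prices():
    # Arrange: create the object and put it into the state the test needs.
    cart = ShoppingCart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)

    # Act: call the behavior under test.
    result = cart.total()

    # Assert: verify the expected outcome.
    assert result == 12.5
```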

3. Write minimally passing tests

Too often the people writing automated tests are trying to utilize complex coding techniques that can cater to multiple different behaviors, but in the testing world, all it does is introduce complexity. Tests that include more information than is required to pass the test have a higher chance of introducing errors and can make the intent of the test less clear. For example, setting extra properties on models or using non-zero values when they are not required, only detracts from what you are trying to prove. When writing tests, you want to focus on the behavior. To do this, the input that you use should be as simple as possible.
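
The sketch below illustrates the idea with a hypothetical create_user function: the invalid-email test supplies only the email, leaving the optional fields at their defaults rather than padding the call with values that have nothing to do with the behavior being proven.

```python
# Minimal input: only the field the test cares about gets a meaningful value.
import pytest


def create_user(email: str, nickname: str = "", age: int = 0) -> dict:
    """Example implementation that rejects emails without an '@'."""
    if "@" not in email:
        raise ValueError("invalid email")
    return {"email": email, "nickname": nickname, "age": age}


def test_create_user_invalid_email_raises_error():
    # No nickname or age: extra, non-zero values would only distract
    # from the behavior being proven.
    with pytest.raises(ValueError):
        create_user("not-an-email")
```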

4. Avoid logic in tests

When you introduce logic into your test suite, the chance of introducing a bug through human error or false results increases dramatically. The last place that you want to find a bug is within your test suite because you should have a high level of confidence that your tests work. Otherwise, you will not trust them and they do not provide any value.

When writing your tests, avoid manual string concatenation and logical conditions such as if, while, for, or switch; keeping these constructs out of your tests keeps unnecessary logic out as well. Similarly, any form of calculation should be avoided - your test should rely on an easily identifiable input and a clear output. Otherwise, the test can easily become flaky, and it adds to maintenance because when the code logic changes, the test logic will also need to change.

Another important thing to remember is that pipeline tests should execute quickly, and logic tends to cost extra processing time. It might seem insignificant at first, but across several hundred tests, this can add up.
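
One common way to keep this kind of logic out of tests, sketched below with a hypothetical string helper, is to push the variations into pytest parametrization so each case remains a simple input/output check.

```python
# Replacing a test-side loop with parametrization; to_upper_snake is
# a hypothetical helper used for illustration.
import pytest


def to_upper_snake(value: str) -> str:
    """Example implementation under test."""
    return value.strip().replace(" ", "_").upper()


# Instead of a for-loop with branching inside a single test,
# each case becomes its own simple input/output check.
@pytest.mark.parametrize(
    "raw, expected",
    [
        ("order id", "ORDER_ID"),
        (" created at ", "CREATED_AT"),
        ("total", "TOTAL"),
    ],
)
def test_to_upper_snake_converts_to_upper_snake_case(raw, expected):
    assert to_upper_snake(raw) == expected
```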

5. Use mocks and stubs wherever possible

A lot of testers might frown on this, as using lots of mocks and stubs can be seen as avoiding the true integrated behavior of an application. That true integrated behavior is what end-to-end testing - which you still want to automate - is for, but it is not ideal for pipeline execution. Relying on real dependencies not only slows down pipeline execution, it also creates flakiness in your test results when external functions are not operational or are out of sync with your changes.

The best way to make your test results more reliable, while taking greater control of your testing effort and improving coverage, is to build mocking into your test framework and rely on stubs to return complex data patterns rather than calling an external function to do it for you.
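
A minimal sketch of this using Python's unittest.mock; the OrderService and its rates_client collaborator are hypothetical, standing in for any external dependency you would rather not call in a pipeline.

```python
# Stubbing an external dependency so the test is fast and deterministic.
from unittest.mock import Mock


class OrderService:
    def __init__(self, rates_client):
        self.rates_client = rates_client

    def price_in_eur(self, usd_amount: float) -> float:
        rate = self.rates_client.get_rate("USD", "EUR")
        return round(usd_amount * rate, 2)


def test_price_in_eur_uses_current_rate():
    # Stub out the external rates service instead of calling it for real.
    rates_client = Mock()
    rates_client.get_rate.return_value = 0.9

    service = OrderService(rates_client)

    assert service.price_in_eur(100.0) == 90.0
    rates_client.get_rate.assert_called_once_with("USD", "EUR")
```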

6. Prefer helper methods to Setup and Teardown

In unit testing frameworks, a Setup function is called before each and every unit test within your test suite. Each test will generally have different requirements to get up and running. Unfortunately, Setup forces you to use the exact same requirements for every test. While some may see this as a useful tool, it generally leads to bloated and hard-to-read tests. If you require a similar object or state across tests, rather use a helper method than leverage Setup and Teardown attributes; a sketch of this follows the list below.

This will help by introducing:

  • Less confusion when reading the tests, since all of the code is visible from within each test.
  • Less chance of setting up too much or too little for the given test.
  • Less chance of sharing state between tests which would otherwise create unwanted dependencies between them.
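
A small sketch of the helper-method approach, using a hypothetical Account class; each test builds exactly the state it needs through the helper, so the setup stays visible inside the test and nothing is shared.

```python
# A helper (builder) method instead of a shared Setup fixture.

class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount


def make_account(owner: str = "test-user", balance: float = 0.0) -> Account:
    """Helper: each test asks for exactly the state it needs."""
    return Account(owner, balance)


def test_deposit_increases_balance():
    account = make_account(balance=10.0)   # setup is visible inside the test
    account.deposit(5.0)
    assert account.balance == 15.0


def test_new_account_starts_with_zero_balance():
    account = make_account()               # no state shared between tests
    assert account.balance == 0.0
```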

7. Avoid multiple asserts

When you introduce multiple assertions into a test case, there is no guarantee that all of them will be executed. In most frameworks, the test stops at the first failing assertion, leaving the remaining assertions unexecuted, and they are effectively reported as failing even if they are not. The result is that the location of the failure is unclear, which wastes debugging time.

When writing your tests, try to only include one assert per test. This helps to ensure that it is easy to pinpoint exactly what failed and why. Teams can easily make the mistake of trying to write as few tests as possible that achieve high coverage, but in the end, all it does is make future maintenance a nightmare.

This ties into removing test duplication as well. You don’t want to repeat tests throughout the pipeline execution, and making what each test covers more visible helps the team achieve that objective.
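
As a sketch of what splitting a multi-assert test can look like, the example below checks a hypothetical date parser with one focused assertion per test, so any failure points directly at the expectation that broke.

```python
# One assertion per test; parse_iso_date is a hypothetical helper.
from datetime import date


def parse_iso_date(value: str) -> date:
    """Example implementation under test."""
    return date.fromisoformat(value)


# Rather than asserting year, month, and day in one test (where the first
# failure hides the others), each expectation gets its own test.

def test_parse_iso_date_returns_correct_year():
    assert parse_iso_date("2023-07-15").year == 2023


def test_parse_iso_date_returns_correct_month():
    assert parse_iso_date("2023-07-15").month == 7


def test_parse_iso_date_returns_correct_day():
    assert parse_iso_date("2023-07-15").day == 15
```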

8. Treat your tests like production code

While test code may not be executed in a production setting, it should be treated the same as any other piece of code. That means it should be updated and maintained on a regular basis. Don’t write tests and assume that everything is done: you will need to put in the work to keep your tests functional and healthy, while also keeping all libraries and dependencies up to date. You don’t want technical debt in your code - don’t have it in your tests either.

9. Make test automation a habit

Okay, so this last one is less an actual design principle and more a tip on good test writing. As with all things coding-related, knowing the theory is not enough: it takes practice to get good and build a habit, so these testing practices will take time to get right and feel natural. The skill of writing a proper test is incredibly undervalued and adds a lot of value to the quality of the code, so the extra effort required is certainly worth it.

Conclusion - it’s all about good test design

As you can see, test automation across your full stack can still work within your pipeline and provide a high level of regression coverage without breaking or slowing down your pipeline unnecessarily. What it does require is good test design, so the unit and automated tests need to be well-written to deliver the most value.

A good DevOps testing strategy requires a solid base of unit tests to provide most of the coverage, with mocking helping to drive the rest of the automation effort, leaving only a few end-to-end automated tests to ensure everything works together and to give your team confidence that the pipeline tests will deliver on their quality needs.
