
Practical Tips for Automated Acceptance Tests


Testing techniques like Equivalence Partitioning, Boundary Value Analysis, and Risk-Based Testing can help you decide what to test and when to automate a test. When you are developing a new product, it might be better to initially go low on automation, argues Adrian Bolboacă. When you are testing an established product, he suggests writing more automated tests for areas where bugs have appeared.

Adrian Bolboacă, Organizational and Technical Coach and Trainer at Mozaic Works, spoke about different types of automated tests at the European Testing Conference 2017. InfoQ is covering this conference with Q&As, summaries, and articles.

In his blog post Automated Tests Purposes, Bolboacă defines the purpose of acceptance tests:

These are usually written by testers and they validate a certain feature of the system. Strangely enough, most people refer only to acceptance tests when they talk about automated tests.

We can have acceptance tests at module level, across modules, at system level, across systems, etc. If you have an acceptance test that runs across modules or across systems (touching more than one module), then it is an integrated test. For example, any test that accesses a behavior through a GUI on top of another module is an integrated test. These tests are brittle because they depend on the GUI, and they are not advisable.

These tests are interesting first of all to the product people and the sponsors, secondly to the testers, and only thirdly to the technical team.

InfoQ spoke with Adrian Bolboacă about different types of tests, writing sufficient and good acceptance tests, criteria to decide to automate a test, and how to apply test automation to create executable specifications.

InfoQ: What should a good test look like?

Adrian Bolboacă: A test should be very clear. From its name to its contents, everyone should understand why we need it, how it can help us, and what its purpose is. For this reason it should be short and ideally should not contain technical words. For example, instead of using a variable named "exception", it is better to use a variable named "error".
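As a minimal sketch of this idea (the Invoice class and its validation rule are invented for illustration, not taken from the interview), a test can speak the domain language even in its variable names:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class InvoiceValidationTest {

    // The variable is named "error", not "exception", so a product
    // person reading the test can still follow what is being checked.
    @Test
    void rejects_an_invoice_with_a_negative_total() {
        Invoice invoice = new Invoice(-10);

        String error = invoice.validate();

        assertEquals("total must be positive", error);
    }
}

// Minimal production code so the sketch compiles on its own.
class Invoice {
    private final int total;

    Invoice(int total) { this.total = total; }

    String validate() { return total > 0 ? null : "total must be positive"; }
}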

Having many small, isolated tests helps us understand where a problem occurs. But for that we need to write small, atomic tests that each focus on just one behavior. If you test more than one behavior in a test, then it is not a unit test. It can be an integration test, an acceptance test, an integrated test, an end-to-end test, or any other type of test. And of course, a good unit test should have a minimum of one, and a maximum of one, verification.
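A sketch of what this can look like (ShoppingCart and its behaviors are assumed names): instead of one test that exercises several rules at once, each behavior gets its own test with a single verification:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    // One behavior, one verification: a failing test name alone
    // points at the broken rule.
    @Test
    void a_new_cart_is_empty() {
        assertEquals(0, new ShoppingCart().itemCount());
    }

    @Test
    void adding_an_item_increases_the_item_count() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book");
        assertEquals(1, cart.itemCount());
    }
}

// Minimal production code so the sketch compiles on its own.
class ShoppingCart {
    private final java.util.List<String> items = new java.util.ArrayList<>();

    void add(String item) { items.add(item); }

    int itemCount() { return items.size(); }
}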

InfoQ: What are the differences between integrated tests and acceptance tests or end-to-end tests?

Bolboacă: I will start with the last one. End-to-end tests are meant to check whether several modules of the system work well together. They shouldn’t focus on small behaviors inside the modules. They are technical tests that help us understand if we have the right setup, security settings, database connections, links to web services, etc. The audience for end-to-end tests is the technical team.

Acceptance tests are focused on features, and the main audience is the product people. They need to show that the features work well. Product people can use these tests to accept, or not accept, the features before deploying them to production. Acceptance tests can be written at module level, pass through several modules, or run at system level. It depends on what we want to accept and on our architecture. The bigger they are, the harder it is to maintain them; the cost of having acceptance tests increases with their size. I recommend having acceptance tests focused on modules, and just using some end-to-end tests to see if they work well together.

Integrated tests are tests that pass through more than one module and are used to check the small behaviors in several modules. They are the worst, because they change a lot, being dependent on every small detail in each module.

Let’s consider we have three modules, and each of them has some behaviors we need to check.

Module 1: 16 behaviors to check
Module 2: 21 behaviors to check
Module 3: 36 behaviors to check

If we wanted to cover all the behaviors with integrated tests, we would need a test for every combination: always changing just one behavior and keeping the rest unchanged. So a simple calculation gives us 16 * 21 * 36 = 12,096 integrated tests. These tests are also slow, because they use the GUI, real databases, and real systems.

My alternative approach is to isolate acceptance tests in each module, and then just write a couple of end-to-end tests to make sure the setup, the "gluing", is correct. A simple calculation gives us 16 + 21 + 36 = 73 isolated acceptance tests, plus 2-10 end-to-end tests. My advice: never use integrated tests!
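A sketch of that split, with invented module names (PricingModule and the discount rule are assumptions, not from the interview): each module’s behaviors are covered by acceptance tests against that module alone, while a handful of end-to-end tests only check the wiring:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// One of the 73 isolated acceptance tests: it exercises a single
// module's feature directly, with no GUI and no other modules involved.
class PricingModuleAcceptanceTest {

    @Test
    void a_repeat_customer_gets_a_five_percent_discount() {
        PricingModule pricing = new PricingModule();

        assertEquals(95, pricing.priceFor(100, true));
    }
}

// One of the 2-10 end-to-end tests: it only verifies that the modules
// are glued together, not every small behavior inside them.
class CheckoutEndToEndTest {

    @Test
    void an_order_flows_from_cart_through_pricing_to_invoicing() {
        // Sketch: boot the whole system here and drive one happy path
        // through all modules, asserting only that the flow completes.
    }
}

// Minimal production code so the sketch compiles on its own.
class PricingModule {
    int priceFor(int basePrice, boolean repeatCustomer) {
        return repeatCustomer ? basePrice * 95 / 100 : basePrice;
    }
}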

InfoQ: What advice do you have for writing sufficient and good acceptance tests?

Bolboacă: It is a good idea to start with an analysis of what it means to have 100% coverage of one specific feature. Use techniques like Equivalence Partitioning, Boundary Value Analysis, Positive Testing, and Negative Testing (see the sketch after these steps). It is important to focus at the beginning on Positive Testing: what does the user need to do here? Then focus on Negative Testing: what could go wrong?

The second step is to make a list of these tests.

The third step I would recommend is to do a risk analysis. Give a mark to each of the tests on two axes: risk and impact. The ones with high risk and high impact need to be automated. You can learn more about Risk-Based Testing; I find it very useful in this case.

Then the fourth step is to start automating the selected tests. Make sure they are small, clear, and focused on the business domain language. Imagine you were the user reading them: would you understand them?

And the fifth step would be a review with a colleague: a tester, analyst, programmer, product person, etc.

It is always a difficult balance to know when you have sufficient tests. We sometimes forget important tests, and sometimes we add redundant ones. This is why a review process brings balance. Always use the principle that two heads are better than one, and that the team is smarter than me.
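As a sketch of the first steps above, here is what an Equivalence Partitioning and Boundary Value Analysis list can turn into once automated. The age-based discount rule below is invented for illustration: the partitions are "under 18", "18-64", and "65 and over", and the boundary values 17/18 and 64/65 are where off-by-one mistakes usually hide.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AgeDiscountTest {

    // One row per partition edge: boundary values from each
    // equivalence class, positive and negative cases together.
    @ParameterizedTest
    @CsvSource({
            "17, true",   // just below the adult partition
            "18, false",  // first full-price age
            "64, false",  // last full-price age
            "65, true"    // first senior age
    })
    void discount_applies_only_outside_the_adult_partition(int age, boolean discounted) {
        assertEquals(discounted, AgeDiscount.applies(age));
    }
}

// Minimal production code so the sketch compiles on its own.
class AgeDiscount {
    static boolean applies(int age) {
        return age < 18 || age >= 65;
    }
}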

InfoQ: Which criteria can be used to decide to automate a test?

Bolboacă: As I said before, I would recommend Risk-Based Testing to decide what should be automated.
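A minimal sketch of what that selection can look like, assuming a simple 1-5 scale on both axes (the candidate tests and their scores below are invented): multiply risk by impact and automate from the top of the list down.

import java.util.Comparator;
import java.util.List;

// Each candidate test gets a mark on the two axes; the highest
// products are the first ones worth automating.
record TestCandidate(String name, int risk, int impact) {
    int score() { return risk * impact; }
}

class RiskBasedSelection {
    public static void main(String[] args) {
        List<TestCandidate> candidates = List.of(
                new TestCandidate("payment is captured exactly once", 5, 5),
                new TestCandidate("password reset link expires", 4, 4),
                new TestCandidate("invoice PDF uses the corporate font", 1, 2));

        candidates.stream()
                .sorted(Comparator.comparingInt(TestCandidate::score).reversed())
                .forEach(c -> System.out.println(c.score() + "  " + c.name()));
    }
}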

Also, there are many variables here:

Am I building a new product? Then maybe I need to automate less until I understand which tests I need and how they help me. But once I know, I need to write more tests for the past features and keep pace with the new ones. Within 2-5 months I should have a good idea of where the team makes the most mistakes, what the risks are, etc.

Do I have an established product? Then use the knowledge from the past: see where bugs have appeared and why that happened, and write more automated tests in that area.

Am I adding a new tool or framework? Write a few more automated tests, because it is clear that we will make mistakes in the beginning. People call these mistakes bugs :)

InfoQ: How do you apply test automation to create executable specifications?

Bolboacă: First of all, the tests should use domain-specific language, not technical language. The test name should be understandable to any product person or customer.

Secondly, we need to group the tests in such a way that they make sense for the different audiences. So we group unit tests together in one package, integration tests in another package, acceptance tests in another package, etc.
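A sketch of both points together (the package layout and the gold-customer feature are assumptions for illustration): the acceptance test reads as an executable specification, and its package tells each audience where to look.

// One possible layout, so each audience can run "their" suite:
//
//   src/test/java/com/example/unit/          fast feedback for programmers
//   src/test/java/com/example/acceptance/    readable by product people
//   src/test/java/com/example/endtoend/      checks the modules' wiring
//
package com.example.acceptance;

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Name and body stay in the domain language, so the test doubles
// as an executable specification of the feature.
class GoldCustomerShippingSpecification {

    @Test
    void a_gold_customer_gets_free_shipping() {
        Order order = Order.placedByGoldCustomer();

        assertTrue(order.shippingIsFree());
    }
}

// Minimal stub so the sketch compiles on its own.
class Order {
    static Order placedByGoldCustomer() { return new Order(); }

    boolean shippingIsFree() { return true; }
}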
