
How Observability Impacts Testing: Q&A with Amy Phillips at QCon London


Observability gives you a picture of the system’s current health and can replace certain types of testing. For low-risk application areas you can rely on observability instead of testing, provided you have continuous delivery that provides fast feedback and allows you to release changes quickly.

Amy Phillips, engineering manager at Moo, spoke about testing observability at QCon London 2018. InfoQ is covering this conference with Q&As, presentations, summaries, and articles.

InfoQ interviewed Phillips about testing self-healing systems, observability, and what the future will bring for testing.

InfoQ: How has Continuous Delivery (CD) impacted testing?

Amy Phillips: Continuous delivery emphasises fast feedback and has led to greater use of automated testing in release pipelines. Although it isn’t the only way to bring about change, it has proved to be a powerful trigger to encourage teams to design effective release processes, part of which involves considering which tests need to run, and when it makes sense to run them.

Teams using continuous delivery generally don’t do less testing but they are usually more intentional about testing locally, and testing in production. The use of feature toggles can also help move testing out of the automated release pipeline and into production.
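A feature toggle can be sketched in a few lines. This is a minimal illustration, not any particular toggle library, and the toggle and user names are hypothetical: the new code path ships dark with everyone else's release, and is switched on in production only for testers until the team is confident.

```python
# Hypothetical toggle registry: code ships with the toggle off,
# but named users (e.g. testers) can exercise it in production.
TOGGLES = {
    "new_checkout": {"enabled": False, "allow_users": {"qa-tester-1"}},
}

def is_enabled(toggle_name, user_id):
    """Return True if the toggle is on globally or for this specific user."""
    toggle = TOGGLES.get(toggle_name)
    if toggle is None:
        return False  # unknown toggles default to off
    return toggle["enabled"] or user_id in toggle["allow_users"]

def checkout(user_id):
    if is_enabled("new_checkout", user_id):
        return "new flow"  # released but dark: testable live, invisible to most users
    return "old flow"      # everyone else keeps the proven path
```

Because the decision happens at runtime rather than at release time, the testing of the new path moves out of the pipeline and into production, exactly as described above.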

One of the most interesting aspects of continuous delivery is its popularity with developers. The integral part that testing plays in making continuous delivery successful leads to truly cross-functional teams working together to build, test, and release software.

InfoQ: What are the challenges when testing self-healing systems?

Phillips: It isn’t so much that self-healing systems are more challenging to test than other systems. We still need to test that the application "works" but we also need to be careful not to assume that the infrastructure will always be exactly as it is when tested.

One of the difficulties when testing or debugging is intermittent issues: the ones that are known to happen but cannot be identified or traced. Self-healing systems could make this more of a problem, by resolving the underlying issue that triggered the defect without anyone needing to intervene.

Testers should have a good understanding of the application platform, and platform teams should have a good understanding of the testing being performed. By collaborating and making use of observability techniques, this challenge can be reduced or eliminated.

InfoQ: How can observability help to ensure that systems are working properly?

Phillips: Testing tries to gather information about the system but can often slip into being a "does it work" tick box task whereas observability gives you a picture of the system’s current health. Together they give a much richer view of what’s really going on.

I once tested a web system that held the details about suppliers along with their special purchase rates based on a number of factors. The system was built incrementally with testing throughout the process.

One day someone noticed that one of the suppliers had a grossly incorrect rate. After investigating the logs and retesting the calculation, we concluded that someone had accidentally edited the value, and we added more logging around this code, just in case.

Some weeks later, the same thing happened, but on a different supplier. This time we had enough logging to know it wasn’t a bad edit. In fact, the supplier hadn’t been edited for several months. Still, the value was wrong and we didn’t know why. More logging was added.

Eventually the issue occurred when we had enough logging in place. It turned out to be caused by a MySQL "oddity" that ended up with the wrong number being retrieved from the database.

All of our building and testing had focused on the question "Is it doing the right thing?" When the answer came back as "no", it took us a long time to debug. Focusing on observability in addition to testing could have helped in this situation.
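The kind of instrumentation described above, added incrementally until the fault was caught, might look like the following. This is a hedged sketch with illustrative names, not the actual Moo code: log the value at each boundary (read, parse, sanity-check) with enough context to reconstruct where a wrong number first appeared.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("supplier_rates")

def get_rate(db, supplier_id):
    """Fetch a supplier's purchase rate, logging each step for later debugging."""
    raw = db[supplier_id]  # stands in for the actual database read
    log.info("read supplier=%s raw_rate=%r", supplier_id, raw)
    rate = float(raw)
    log.info("parsed supplier=%s rate=%.4f", supplier_id, rate)
    if not 0 < rate < 1:  # illustrative sanity bound on a fractional rate
        log.warning("suspicious rate supplier=%s rate=%r", supplier_id, rate)
    return rate
```

With logs at both the read and the parse step, a value that is wrong only in the database retrieval (as the MySQL oddity was) becomes distinguishable from one corrupted by an edit or a calculation.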

InfoQ: Will observability replace testing?

Phillips: I think that observability is already replacing certain types of testing. Monitoring has long been considered a suitable alternative to testing everything. If you know that something has broken, and you have good release pipelines that allow you to release changes quickly, then low-risk application areas can be very well suited to relying on observability instead of testing.
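A minimal sketch of what "relying on observability instead of testing" can mean in practice: rather than asserting behaviour before release, a scheduled check probes live metrics and alerts when the system drifts out of bounds. The metric names and thresholds here are hypothetical, not from the talk.

```python
def check_health(metrics, max_error_rate=0.01, max_p95_ms=500):
    """Return a list of alert strings; an empty list means the check passed."""
    alerts = []
    if metrics["error_rate"] > max_error_rate:
        alerts.append(f"error rate {metrics['error_rate']:.2%} over budget")
    if metrics["p95_latency_ms"] > max_p95_ms:
        alerts.append(f"p95 latency {metrics['p95_latency_ms']}ms over budget")
    return alerts
```

For a low-risk area, an alert from a check like this plus a pipeline that can ship a fix quickly can stand in for pre-release tests of that area.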

Building systems with both "Did we build the thing right?" and "How will we know when it isn't working?" in mind gives different perspectives on system health and could reduce the occurrence and impact of issues.

InfoQ: What do you expect that the future will bring for testing?

Phillips: Different systems bring different needs, but generally I think testing is moving to an even more collaborative place. We’re used to testers and developers working closely together, but now we’re starting to see the value in testers getting involved in Ops Engineering too.

Now, as at any time, the creative aspect of imagining risks and designing useful scenarios remains a key skill for anyone involved in testing. I hope that we'll see more teams moving away from treating testing as a tick-box activity that takes place once the system has been built.

Additional information on Amy Phillips' QCon London talk "Testing Observability" can be found on the conference website.
