
Is Your Application Ready?


Simple question, difficult answer.

We mostly ship software by date, and try to squeeze all development and testing efforts toward that deadline. We prioritize what we think is important, and once our application passes a certain quality level, we’re ready to go live. We tend to assume there will be at least one more version in our future, since we’re not always perfectly happy with what we release. But even when we do ship, can we tell how ready our application really is? We rely on our testers to tell us, but for the sake of the product and its quality we should involve everyone in this process.

In this article, I’m going to discuss different ways we can test aspects of modern-day applications on the road to releasing a working application. However, this is not your “regular” testing effort! Quality cannot be patched on at the end; our aim is to build in and ensure quality throughout the entire development process.

A brief history of testing

Things used to be easy, before we had testers as a functional group in our development projects. Developers made sure things worked. As projects grew bigger, applications became more complex and release deadlines tighter. Programmers, always in short supply and now more than ever, were pushed to develop more features in less time. Bugs filled the land in the great “Quality Depression.” We needed exterminators.

Testers would be those exterminators. By moving the responsibility down the line, testers became the gatekeepers who kept the bugs from leaving the building. Unfortunately, this idea didn’t work. Applications are too complex, with too many scenarios, for testers to cover completely. Even when automation was introduced, testing could require as much time as the original development. And the bugs are still out there, in abundance.

During recent years, especially in parallel with the expansion of Agile methodologies, it has become clear that changes in the ecosystem make it impossible to leave testing to the end. In addition, the move from installed software to client-server to cloud and mobile applications has made the decision of whether an app is “market-ready” a lot more challenging.

Testing today is complex

We usually refer to testing as an important step in reaching shippable software. But if you think about it, testing is so pervasive in the development process that it’s definitely not a “period” or a “job”, but a set of skills spread across the product development organization.

Here’s a seemingly simple example of why testing is not just checking that something works as originally specified. Suppose you’ve got a requirement: three failed attempts to log into the application will lock the user out. That seems pretty straightforward, but once you start investigating, you have more questions: what IS a failure? What happens after lockout? These “invisible” requirements may not be specified, but they factor in once we understand the context.

Let’s take it further: suppose we’ve got all the requirements set in stone; how do we test all of them? You can’t test on the live system, so you want functionality tested on pre-production servers. You’ll need to run an integration test that sets up the cases, runs the steps in the scenario, validates things actually work, and cleans up after itself. We can also test these cases at the unit level, where we mock the database and environment calls.
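
To make that concrete, here is a minimal sketch of such a unit-level test in Python. The AuthService, its user store, and the lockout details are hypothetical stand-ins for whatever the real application uses; the point is that the database is replaced with a mock, so the “3 failures” rule can be verified in milliseconds.

    import unittest
    from unittest.mock import Mock


    class AuthService:
        """Hypothetical login service; the user store is injected so tests can fake it."""

        def __init__(self, user_store, max_failures=3):
            self.user_store = user_store
            self.max_failures = max_failures

        def login(self, username, password):
            user = self.user_store.find(username)
            if user is None or user["locked"]:
                return False
            if user["password"] != password:
                user["failures"] += 1
                if user["failures"] >= self.max_failures:
                    user["locked"] = True      # the "3 failures" rule from above
                self.user_store.save(user)
                return False
            user["failures"] = 0               # assumed: a successful login resets the counter
            self.user_store.save(user)
            return True


    class LockoutTest(unittest.TestCase):
        def test_three_failures_lock_the_account(self):
            user = {"name": "alice", "password": "secret",
                    "failures": 0, "locked": False}
            store = Mock()
            store.find.return_value = user     # no real database involved

            service = AuthService(store)
            for _ in range(3):
                self.assertFalse(service.login("alice", "wrong"))

            self.assertTrue(user["locked"])
            store.save.assert_called()          # the state change was persisted


    if __name__ == "__main__":
        unittest.main()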

That is only one simple requirement, and it is all done before the tester gets their hands on the application.

Feedback loops

The V model is considered obsolete in Agile times, but even today it carries a fundamental concept: for every step in the process there’s a testing point to verify the operation.

Today we call them “feedback loops”. They can be described by Deming’s Plan-Do-Check-Act cycle: plan a change, do it, check the result, and act on what you learned.

In Agile development we try to make these loops as short as possible. What goes wrong in waterfall projects is that the loops are huge, and the feedback cycle spans weeks, months or even years. Yet every developer who compiles their code every few minutes knows this: the shorter the loop, the better.

We have many options to invoke these loops in different situations. However, when we try to verify a feature works, we bump into reality.

Shorter feedback loops don’t “just happen”; we need to put them into practice and make them work. The frequent “compile now” habit is made possible because:

  • We have tools that make it possible. Compilers are efficient, giving a lot of feedback quickly.
  • We know feedback works, so we create a system where feedback is provided: we’ve programmed ourselves to press the “compile” button every few minutes. We even feel we’re drifting off-track when we don’t get that feedback in due time.

This only works when the code compiles relatively quickly. However efficient and lightweight the compiler is, if compilation takes a few hours, that’s not quick enough. We won’t do it as often, we won’t get the feedback, and we’re back to square one.

Quick feedback through Isolation

To get a shorter feedback loop, we need to cut corners. For example, to circumvent long compilation cycles, we can run incremental compilations: instead of waiting for the whole compilation to finish, we compile only the changed parts. This is risk management: we sacrifice feedback quality for feedback speed. There’s a chance the system will behave differently after a full compilation, but we assume there is not going to be much difference. Under this assumption, we shorten the feedback loop.

The idea of isolating parts of systems and processes to get quicker feedback is not new, and pops up a lot during development. We may not even use that term (like in the case of incremental compilation), but it’s being used in other scenarios, where we want to get quicker verification feedback.

Verification and isolation

Building the right application: The biggest waste is building the wrong product. Product development starts with identifying the right requirements, often before we even have a live prototype. Product people use all kinds of tools, including mockups, drawings and then prototypes, to collect requirements and feedback, verify development is on the right track, or pivot (in lean-startup lingo): change direction, collect feedback again and finally get back on track.

Note that this verification is different from the old V model requirement verification: we’re verifying that we’re building what the customer needs, rather than verifying that the specified requirements were built correctly. In the beginning, we don’t need a live application. Later on, this data collection doesn’t stop: usage data continues to be collected on live systems for ongoing development and improvement.

Functional unit tests: This is the quintessential example of using isolation, since a lot of functionality cannot be tested quickly without it. Whether you use mocking frameworks, dependency injection containers, or write plain TDD and abstract the dependencies yourself, it comes down to isolating what you test from its environment. The idea behind unit tests is to get quick feedback, so it makes sense to have tools for achieving that.
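
As a small illustration of that isolation, here is a sketch in Python. ReportService and FakeMailer are invented names; the example assumes the dependency is passed in rather than created internally, so a hand-rolled test double can stand in for the real mail gateway.

    class ReportService:
        """Hypothetical unit under test; its collaborator is injected."""

        def __init__(self, mailer):
            self.mailer = mailer                 # dependency comes from outside

        def send_daily_report(self, rows):
            body = "\n".join(rows) if rows else "No activity today."
            self.mailer.send(to="ops@example.com", subject="Daily report", body=body)


    class FakeMailer:
        """Hand-rolled test double: records calls instead of sending email."""

        def __init__(self):
            self.sent = []

        def send(self, to, subject, body):
            self.sent.append((to, subject, body))


    def test_empty_report_still_goes_out():
        mailer = FakeMailer()
        ReportService(mailer).send_daily_report([])
        assert mailer.sent and "No activity" in mailer.sent[0][2]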

If you look beyond the process of unit testing and what it tries to achieve, you’ll find that building isolation foundations into the software (for example, by following the SOLID principles) has other benefits. We usually look at maintainability of an application as the capability of the development team to add features, fix bugs or change the design. Without the quick feedback mechanism of tests, these processes become risky and time consuming.

Functional integration testing: Unit tests are similar to incremental builds: quick feedback in return for accepting the risk of not testing complete workflows in the system. Integration tests are more like full builds in that sense: we get higher-quality feedback at the cost of being slower. Setup and test runs can take time, but eventually they give us a better view of how the system performs.

We sometimes try to trim these times by isolating certain aspects of the system in order to speed up feedback. So we might fake browser operation and run workflows beneath the UI, including the database code. Or we might run tests through the GUI but not hit the database, playing with the balance between feedback quality and speed.
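
For example, one way to run a workflow beneath the UI while still exercising real database code is sketched below; the schema and the register_user step are invented for illustration, with SQLite’s in-memory mode standing in for the production database.

    import sqlite3


    def register_user(conn, name):
        # The step a UI form would normally trigger.
        conn.execute("INSERT INTO users(name) VALUES (?)", (name,))
        conn.commit()


    def test_registration_workflow_without_the_browser():
        conn = sqlite3.connect(":memory:")       # fast, isolated database
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

        register_user(conn, "alice")

        count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        assert count == 1
        conn.close()                              # clean up after the test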

Communication testing: Although it’s a special case of integration testing, communication testing deserves special mention, and a personal story: a decade ago, my team was developing a software interface for a nice chunk of hardware. Communication was based on TCP and UDP. The problem was that the hardware wasn’t built yet. Sure, we could have waited for it to arrive, but we wanted to make progress. We decided on message content and structure, communication linking and resuming, and error handling, and built a network simulator. At the time it wasn’t automated; it was an application that showed what information was received and could send back messages to our software on request. Later we added some automation for different scenarios, like handshaking.
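
In today’s terms, a toy version of such a simulator might look like the sketch below; the HELLO/ACK messages and the port are invented for illustration, and a real simulator would of course speak the actual protocol of the device.

    import socket
    import threading
    import time


    def run_simulator(host="127.0.0.1", port=9009):
        """Pretends to be the hardware: acknowledges a handshake, rejects anything else."""
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen(1)
        conn, _ = server.accept()
        request = conn.recv(1024)
        conn.sendall(b"ACK" if request == b"HELLO" else b"ERR")
        conn.close()
        server.close()


    if __name__ == "__main__":
        threading.Thread(target=run_simulator, daemon=True).start()
        time.sleep(0.5)                      # give the simulator a moment to start listening

        client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client.connect(("127.0.0.1", 9009))
        client.sendall(b"HELLO")
        print(client.recv(1024))             # b'ACK' -- feedback without the real device
        client.close()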

Having a simulator didn’t just speed up development and integration: it gave us feedback on how we developed the communication component, told us when something was broken, and also served as a reference tool.

Non-functional requirements: Once upon a time, we only had to worry about quality. However, we’ve come a long way, and we’ve got more things to worry about today, including:

  • Extensibility: Some frameworks are designed to be extended and customized. We usually supply users with APIs that allow them to extend the functionality in ways we had not anticipated. When we want to make sure our system is extensible, we build simulators (and sometimes real components) that exercise the extension points and verify they work.
  • Security: Sometimes we have hard security requirements for our applications, but more often than not security is an afterthought, and the amount of hardening to be done is determined solely by the individual developer. We can do threat assessment by using static code analysis tools or by having outside professionals survey the application. Unfortunately, full security scans can only be performed against production-like environments. Unless such environments exist in a staging setting, this means last-minute testing against live servers.
  • Scalability and performance: Especially with server applications, we need to assess how capable our application is, not only in responding to a large number of requests, but also in how it performs when the load grows further. We use tools to stress test the system, which give us the feedback we need to see how our system performs. Again, we’d rather know in advance, so we perform these tests on an isolated system rather than on the live servers, where they would affect actual users (see the sketch after this list).
  • Availability and reliability: System performance is important, but so is availability to a large user base. In addition, we’d like to assess the application’s ability to withstand and recover from shutdowns. We use stress tools on isolated servers to get feedback on how our application behaves in these cases.
  • Portability: In the mobile world, we’d like to test our application on multiple devices. Each can have a different operating system, memory, resolution and capabilities. Testing on multiple devices becomes more challenging with every new device that arrives on the market. While we’re still struggling with how to do this effectively in the rather short time we have, we’re going in the direction of emulators: software emulators replace the physical devices and help us make sure our application works across them.
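
As promised in the scalability bullet, here is a back-of-the-envelope load probe. The staging URL, the request count and the thread pool size are placeholders; a real project would reach for a dedicated load-testing tool and run it against an isolated, production-like environment.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    STAGING_URL = "http://staging.example.com/health"   # placeholder; never point this at production


    def timed_request(_):
        start = time.perf_counter()
        with urlopen(STAGING_URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start


    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=50) as pool:
            latencies = sorted(pool.map(timed_request, range(500)))   # 500 requests, 50 in flight
        print("median: %.3fs  p95: %.3fs" % (
            latencies[len(latencies) // 2],
            latencies[int(len(latencies) * 0.95)]))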

User experience: Ultimately, there’s no substitute for human users. This is where testers come in. Testers provide the voice of the customer, answering questions like: Is this helpful? Can I achieve my goals? This is where we test the real system.

When the rest of the testing is automated, manual exploratory testing is the last part of the puzzle. When the testers approve, it’s time to go live.

Summary

Product development is complex and risky. We want to make sure we’re building the right features, make sure they function correctly, and prepare our application for both disaster and success. With the understanding that early feedback works, more and more aspects can be tested before going live to answer our question: yes, it’s ready.

About the Author

Gil Zilberfeld has been in software since childhood. With twenty years of developing commercial software, he has vast experience in software methodology and practices. Gil is the Product Manager at Typemock, working as part of an agile team in an agile company, creating tools for agile developers. He promotes unit testing and other design practices, down-to-earth agile methods, and some incredibly cool tools. In addition to his monthly online webinars, Gil also speaks at international conferences about unit testing, TDD, and agile practices and communication. And in his spare time he shoots zombies, for fun. Gil blogs on different agile topics, including processes, communication and unit testing.
