Leading a Culture of Effective Testing
I vividly remember one of the first software systems I was responsible for developing and operating. I was fortunate to be given responsibility for everything from design to support. Over the span of several years, one particular subsystem grew popular, and a steady stream of requests to add capabilities made it increasingly complex. With every change, I meticulously verified that existing functionality was not lost. The manual verification was time-consuming and became exponentially more complex with each added feature.
Like many, I had heard of automated testing but never had the time to learn the techniques. At some point, in my spare time, I happened to pick up a few books and began studying them. Eager to apply this to something real, I realized that automated testing was a perfect fit for the pain I had with the subsystem that was growing exponentially difficult to change. One afternoon, I found myself on a road trip with several hours to spare. I had no Internet connection, but I had all the tools I needed to start writing tests. Several days later I had automated the majority of the minutiae I had spent hours manually checking. The time savings were staggering: several hours of testing turned into several seconds.
Truly though, the time savings were secondary. By taking all of those scenarios out of my mind, I experienced an indelible transformation: a restored sense of confidence in what I was creating, much like the confidence of developing a new system that has yet to accumulate any complexity. I used to worry excessively about releasing updates. I checked, double-checked and triple-checked my work. But no matter how much I checked, I still worried. Being responsible for releasing and supporting the application, I innately resisted releases. With a suite of automated checks to lean on, the fear vanished.
If anything slipped through the cracks, I simply added tests to fix the problem and made sure it never returned. Confidence allowed me to take risks with the customer and quickly add significant new functionality. Confidence allowed me to focus on delivering value instead of obsessing over error-prone, manual verification of even the slightest changes.
In manual testing, each scenario must be carefully set up, executed and verified, all by hand. It's not uncommon for the scenarios to live only in someone's mind, and it's easy for them to be forgotten.
In automated testing, the scenario, execution and verification are all programmatically documented and automated. The only manual step is pushing a button to trigger the scenarios to be executed and verified. No longer do we have to rely on memory, or outdated documents, to track the myriad scenarios and manually run through them.
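To make this concrete, here is a minimal sketch of what "programmatically documented" scenarios look like, using Python's standard `unittest` module. The `apply_discount` function is hypothetical, a stand-in for any piece of application logic that used to be verified by hand; each test method captures one scenario's setup, execution and verification.

```python
import unittest

# Hypothetical function under test -- a stand-in for any application
# logic that was previously verified by hand.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTests(unittest.TestCase):
    # Each test documents one scenario: setup, execution, verification.
    def test_full_price_when_no_discount(self):
        self.assertEqual(apply_discount(50.00, 0), 50.00)

    def test_half_price_at_fifty_percent(self):
        self.assertEqual(apply_discount(50.00, 50), 25.00)

    def test_rejects_impossible_discount(self):
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Running this file is the "push of a button": every documented scenario is executed and verified in milliseconds, every time.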
Most of the software development profession is aware of the value of automated testing but all too often fails to reap the benefits. I was fortunate to find myself in the right situation to do so.
I'd never have taken the time to automate tests if it weren't my responsibility to test my work. It's as simple as that. By being a part of supporting the system I helped create, I had a vested interest in making sure it worked. I despised the 5 AM phone calls to fix a problem. I diligently worked to make sure problems never saw the light of day.
Developers know best how to validate what they've created. They know what parts they lack confidence in, where the complexity lies, and what parts are likely to have problems. No one else will have the encompassing perspective of the implementer. Developers are the experts of automating work for the rest of the organization and are therefore best suited to automate the verification of their own work!
Many developers are highly capable of supporting the systems they create and would rather not hand off that responsibility. Anyone who creates something has a strong desire to ensure its success. All we have to do is ask what each individual is comfortable contributing. Sharing the responsibility and allowing developers to contribute to supporting the systems they create is the first step towards reaping the benefits of effective testing.
Coalesce Disjoint Teams
If, however, organizations scatter the responsibilities of creating software across disjoint teams, testing is going to suffer. Unfortunately, this is the knee-jerk reaction to defects that end up in the hands of customers.
When testing is given to another team, it's almost always manual, and when it is automated it's usually an end-to-end test that is difficult to maintain. End-to-end tests require the entire system to be set up in an environment that mirrors real environments. They lack the ability to isolate individual parts of the system, and when something does go wrong it tends to affect many tests, making it hard to discern meaningful feedback from failures. Naturally, testers revert to manual tests instead of constantly updating brittle end-to-end test code that often breaks with every application update.
Isolated testers rarely have the insight or the capability to test at a lower level within the application, and especially not at the unit level, where the majority of effective verification is done! Unit tests are capable of isolating small parts of the system and verifying that each of these small parts works. At this level a deep understanding of the code and application is necessary. Only developers have this insight. If isolated testers do have this insight, there's no reason they shouldn't simply be part of the development team directly.
Additionally, when we hand off responsibility for testing, we're telling developers they aren't responsible for making sure things actually work. It sends a message of distrust. It's micromanagement at its worst. We'll find ourselves rushing the implementation of new features that queue up for verification. Days and weeks will pass before test teams get a chance to poke around and give critical feedback about problems. The implications of even small problems will blow up and slow down the implementation of new features. The delays will exponentially slow progress over the life of the system.
The natural progression of this separation is increasing layers of testers who check, double-check and triple-check that things are OK. I've seen organizations that have three or more separate phases of testing! The entire process grinds to a halt. All it takes is one competitor recognizing the value of directly sharing responsibility with developers, and every other organization will be significantly disadvantaged and eventually out-competed.
Eliminating handoff is absolutely essential to effective testing. One team should be responsible for testing, delivering and supporting systems. If we find ourselves with responsibilities scattered among teams, all we need to do is coalesce the expertise to form a single team. Testers can work right alongside developers. Or, individuals can take on the role of both developer and tester and help double-check each other's work to avoid bias. Instead of developers writing a few unit tests and handing off the rest to isolated test teams, everyone can decide together whether unit testing, automated end-to-end testing or manual exploratory testing best suits each situation.
It’s best to limit the new team's focus to releasing a single feature or set of related features before moving on to the next. Everyone can work together to put all the components in place to develop, test and release a single feature. Instead of batching unrelated changes and moving giant batches between development, testing and support teams over the course of months, we can instead have a single team take care of everything necessary to release a much smaller amount of work within the time frame of a week or two. And, this limited focus actually proves to be more productive for reasons beyond testing.
By working on smaller sets of changes, it’s also possible to get customer feedback much faster. Testing and staging environments can easily be provisioned to contain a smaller set of changes that make external customer reviews much more focused. There’ll be much less ambiguity about what the customer is testing. The feedback will come faster and can be incorporated without significant delay. This stems from the reality that every type of testing is expedited when we work on smaller sets of changes instead of giant batches. We get feedback quickly, we finish the small set of changes quickly, we release them and then we move on to the next small set.
Demonstrate the Value in Automated Testing
Once a team of individuals is given the responsibility to develop, test and support a system, we can either wait for them to conclude that manual verification isn't going to cut it, or we can avoid gambling and educate them. Making the leap to automated testing can take years without guidance, especially when individuals are overwhelmed with daily responsibilities. Even once individuals buy in, it can take years to become proficient. Most of this can be avoided by leveraging expertise to demonstrate the value.
After I became enthralled with automated testing and had experienced success after success, I wanted to get others on board. At first I gave demonstrations and discussed how to utilize automated testing tools. I consider this the first mistake I made: teaching how to test instead of helping others apply it directly to their actual projects. Adoption was fleeting. The takeaway is simple: teaching how is not enough. The only way to prove the value of testing is to help others use it to build confidence in the work they're actually responsible for.
The most effective way to do this is to create the opportunity to practice the techniques and immediately apply them to actual applications that are difficult and error-prone to test. Every developer has at least one system that gives them headaches. Making it real is the only way to experience the transformation in confidence that solidifies the practice.
Adoption is much more likely if individuals experience:
- Confidence from the automated testing of complex functionality.
- How test-driven development, writing the tests before the code, can actually expedite developing software.
- When testing is wasteful. There's a gamut of situations where automated testing proves counterproductive. Learning this trade-off would otherwise take years, and it's this wastefulness that drives many to abandon automated testing.
- How tools can significantly reduce the burden of automated testing.
- How testing supports easily changing software to add new features and to simplify existing features.
- How tests can be used to perform root cause analysis of existing issues, fix them, and prevent recurrence.
- How tests can help in areas that are otherwise virtually impossible to verify manually, like integrating with external, real-time systems.
These are just a few of the things that will prove automated testing to be of immense value in a developer's toolkit. Once they experience how much automated testing improves their work, the rest is downhill.
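The root-cause-analysis point above is worth illustrating. A minimal sketch, using Python's `unittest` and a hypothetical `parse_quantity` helper: the defect is first reproduced as a failing test, then fixed, and the test remains in the suite to guarantee the problem never returns.

```python
import unittest

# Hypothetical helper that once crashed on blank input -- the kind of
# production defect that triggers a 5 AM phone call.
def parse_quantity(text):
    # The fix: treat blank input as zero instead of letting int() raise.
    stripped = text.strip()
    return int(stripped) if stripped else 0

class ParseQuantityRegressionTests(unittest.TestCase):
    def test_blank_input_no_longer_crashes(self):
        # Reproduces the original defect: this test fails before the
        # fix and passes after, so the bug can never silently return.
        self.assertEqual(parse_quantity("   "), 0)

    def test_normal_input_still_works(self):
        self.assertEqual(parse_quantity(" 7 "), 7)

if __name__ == "__main__":
    unittest.main()
```

The failing test doubles as an executable record of the root cause, which is far more durable than a note in a ticket.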
After failed attempts to disseminate testing by demonstrating how to test, I at least wanted to get the people on my own team up to speed. One of the next mistakes I made was to demand tests be written for certain parts of our applications. This was a mistake for two reasons.
- I took on too much responsibility. I was the one who stayed up late if something went wrong. Because of this, the tests meant a lot to me. I chose tests based on the parts of the system that were most likely to fail. Unfortunately, that meaning was lost in the hand-off of a test case.
- Because the meaning was lost, others didn't see the value. As such, they rushed simply to complete the test cases. The quality of the tests suffered: they often failed to be comprehensive, were frequently sloppy and were occasionally inaccurate! I quickly learned that mandates may lead to more tests, but there's no guarantee that they'll be useful. Testing without purpose is a form of punishment! Tests that don't help actually hurt: they add to the code that has to be maintained. Furthermore, useless tests lead to confusion in the long term, as tests are highly regarded as sacred descriptions of how the system should work.
Sharing responsibility and explaining why specific test cases would be beneficial resulted in immensely valuable tests. The takeaway is yet again simple: avoid mandates.
Here are some other mandates to avoid:
- Mandating levels of test code coverage
- Test code coverage is a measure of the amount of code covered by test cases. 80% code coverage means 20% of our code is never executed when we run our suite of tests.
- Code coverage is a great tool to find untested areas of an application. But it doesn't guarantee any level of quality. One cannot measure the effectiveness of test cases through code coverage. Additionally, there are going to be areas of applications that aren't worth testing so it's definitely not wise to mandate 100% coverage.
- Mandates for code coverage lead to rushed, wasteful tests.
- Mandating that tests be written up front, before the code - a practice known as test-driven development (TDD).
- TDD is an invaluable practice, but it's not one that can be mandated. There are areas of every application that are speculative or simply don't benefit from up-front testing. It's better to demonstrate the value of TDD and let individuals use their own judgment in applying the technique.
- Mandating a number of tests.
- This is like mandating lines of code: it has no bearing on the quality of the tests.
- If a system has wasteful tests, it can actually be an improvement to get rid of them! Test code adds to the overall code that has to be maintained.
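For readers unfamiliar with the TDD rhythm mentioned above, here is a minimal red-green sketch in Python's `unittest`. The `slugify` helper is hypothetical; the point is the order of work, with the test written before the code it describes.

```python
import unittest

# Step 1 (red): the test is written first. It fails until the function
# below exists and behaves as the test describes.
class SlugifyTests(unittest.TestCase):
    def test_lowercases_and_joins_words_with_hyphens(self):
        self.assertEqual(slugify("Effective Testing"), "effective-testing")

# Step 2 (green): write just enough code to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3 (refactor): clean up the implementation, with the passing
# test acting as a safety net against regressions.
if __name__ == "__main__":
    unittest.main()
```

Note that nothing about this rhythm can be usefully mandated; it pays off only where the desired behavior is clear enough to describe in a test first.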
These mandates, imposed on individuals who are often already overwhelmed, will result in useless tests at best and will negatively impact system maintainability.
Once developers have embraced automated testing as a valuable practice, it's important to work to lower barriers. Otherwise, every obstacle becomes a reason to fall back on manual verification, or no verification at all. Anything that makes it difficult to automate a test is just one more reason not to write automated tests.
There are a plethora of tools that reduce barriers:
- Test runners
- Test runners make it easy to quickly execute tests. It's a good idea for the runner to be integrated into whatever development environment(s) developers use. As developers adopt practices like TDD, they'll want to work on tests as easily as they work on code, to make testing up front feasible.
- Measuring code coverage
- Code coverage gives developers a tool to find under-tested areas of their applications. It's a great tool to facilitate further testing.
- Continuous Integration (CI) servers
- In addition to individuals running the tests, CI servers can automatically run tests on every change to the system.
- It's easy to forget to run the entire suite of tests. Having a safety net significantly boosts the value the tests provide. Without it, I've seen time and time again that the tests become neglected. People don't even know they've broken tests, and over time the tests fall out of touch with the functionality of the software.
- Notifications are sent upon failure so problems can be fixed sooner rather than later.
- As developers begin to trust a CI server they'll see more value in the tests they create and they'll want to create them sooner to start reaping the added value.
I've seen too many organizations squabble over the costs of these and related tools. Ironically, many of the tools cost less per developer than a couple of hours of salary. What a waste to even debate the merits of a tool that can play such a pivotal role in establishing effective testing practices.
Put in place the practices and tools to reduce barriers now. The longer we wait, the less value we'll reap.
Another barrier to testing is the space necessary to learn and apply the techniques. Do we want developers to happen upon the subject of testing in their free time, or would we rather they have the time and support to do this, now, at work?
After years of studying and applying a variety of testing techniques, I've developed an innate ability to pick up and put down the "automated testing tool," instinctively knowing when it's adding value and when it's not.
Give individuals the space and support to curate automated testing skills:
- Avoid overloading schedules; learning is the first thing abandoned when people are rushed.
- Discontinue practices that are no longer necessary such as manually documenting test cases that have been automated. This will be an added incentive to automate.
- Sometimes customers and other stakeholders will perceive automated testing as a waste of time. Dispel this myth. Don't let this discourage anyone.
- Foster frequent reflection about what is and isn't working, encourage everyone to share their findings, champion the victories.
Focus on Outcomes
The most effective testing I've been involved with was for projects where the outcome of the software was the foundation of everyone's focus. Instead of being told what to do, for example what report to create, I knew what business outcome was desired and could help decide what reports were necessary.
I knew what parts of the system were most likely to make the business successful and I focused on these areas. I prioritized what got tested based on the desired outcome! Additionally, I was able to design tests in terms of the outcomes and thus was able to discuss test expectations directly with the customer and users.
This is the pinnacle of situations to reap the most from testing. It's definitely something to strive for. There's nothing as debilitating as carefully crafting code and tests for a system that adds no value to the business!
Imagine you have pain in your abdomen. Upon arriving at the hospital you describe your symptoms to management. Management calls in a doctor to make an incision over your appendix. Management then calls in another doctor, with expertise in removing appendices, to remove your appendix. Yet another doctor is brought in to stitch things up. You’re then shipped off to a recovery room, never to hear from the doctors again. Hours later, you still have pain. Upon further diagnosis, you discover the pain is coming from your gall bladder. The appendix was perfectly fine.
Thankfully we don’t live in that world. Doctors aren’t tasked with removing organs, although they clearly have the expertise to do so. They’re asked to diagnose symptoms and prescribe solutions after careful analysis. As they operate, they verify their diagnosis based on what they learn as they open the body. At any point they can make course corrections. When they’re done, they don’t hand the patient off without making sure their work is complete and the patient is safe to move on to recovery. Even in recovery they check in and monitor the situation, prepare instructions to help the patient recover, and schedule follow up visits. Sure, other people help monitor the patient’s health, but the doctor is ultimately responsible for the patient’s recovery.
If we can put our lives in the hands of doctors, why not trust developers with our software?
About the Author
Wes McClure is passionate about helping companies achieve remarkable results with technology and software. He’s had extensive experience developing software and working with teams to improve how software is developed to meet business objectives. Wes launched Full City Tech to leverage his expertise to help companies rapidly deliver high quality software to delight customers.