
Unlocking Continuous Testing: The Four Best Practices Necessary for Success


Key Takeaways

  • Organizations have shifted en masse toward new agile development methodologies, believing that doing so would lead to faster release cycles, better product quality and happier customers.
  • However, problems are afoot: according to Forrester, the percentage of organizations releasing software at least monthly declined from 36 percent in 2017 to 27 percent in 2018.
  • Testing can be one of the single biggest roadblocks standing between organizations and their agile objectives. 
  • While the majority of organizations have enthusiastically embraced agile planning and development, most still find themselves unable to effectively implement continuous testing throughout the software development lifecycle. 
  • There are four best practices to help overcome this: focus on test quality, keep your tests short and atomic, test across multiple platforms, and leverage parallelization.
     

Not long after bursting onto the scene as a revolutionary new approach to software delivery, agile development is already at an inflection point. It started with the best intentions, of course. As it became increasingly clear that the success or failure of a business in the digital age was dependent on the ability to rapidly deliver flawless web and mobile applications to customers, organizations began shifting en masse toward new agile development methodologies, believing that doing so would lead to faster release cycles, better product quality and happier customers. To great fanfare and considerable hype, the age of agile development had arrived. 

And yet, with the end of the decade now in plain sight, something unexpected has happened. Agile development has quietly stalled out. In fact, the pace at which organizations release software is actually on the decline. According to Forrester (Forrester: The Path To Autonomous Testing: Augment Human Testers First, Jan. 2019), the percentage of organizations releasing software at least monthly declined from 36 percent in 2017 to 27 percent in 2018.

In other words, for most organizations, the promise of agile development has failed to materialize. 

Testing as the Fulcrum Point 

While it wouldn’t be accurate to say that any one factor is 100 percent responsible for a trend as significant as the stalling out of agile development, it’s also by no means an over-generalization to say that testing is one of the single biggest roadblocks standing between organizations and their agile objectives. In fact, in a recent GitLab developer survey, testing was identified as the most common source of development delays, cited by more than half of respondents.

While the majority of organizations have enthusiastically embraced agile development, most still find themselves unable to effectively implement continuous testing throughout the software development lifecycle. 

And therein sits the roadblock. Organizations are now finding out that when it comes to agile development and continuous testing, you can’t have one without the other. If you’re unable to test your applications quickly, reliably, and at scale, then you’re inevitably left with a choice: slow down your delivery process on account of testing and risk missing your release date, or stick to your release date and risk delivering a poor quality application to your customers. Neither choice is a good one, and both defeat the purpose of implementing agile development in the first place. Agile development should be about not having to choose between speed and quality. (Accelerate by Nicole Forsgren, Jez Humble and Gene Kim is a great book for anyone who wants to dig deeper into this concept.)

Overcoming the continuous testing roadblock is thus the key to delivering quality apps at speed and fully realizing the promise of agile development. Here are four critical best practices organizations can implement to do just that. 

#1: Focus on Test Quality 

If you’re serious about succeeding with continuous testing, test quality is everything. That’s not to say there aren’t other important considerations (the most important of which are addressed later on in this article), but just about every other continuous testing best practice is only relevant to the extent that you’re starting with a foundation of high-quality tests. And the most direct and reliable determinant of test quality is pass rate. 

You should be writing, managing, and executing test suites so the overwhelming majority of your tests pass. Now, should every test pass? Absolutely not. No developer is perfect, and a small percentage of tests should fail. The entire reason we test applications is to discover bugs and fix them before they’re pushed into production and create a poor user experience, so a test that exposes a potential flaw in your application is a test that has served its purpose well. That said, there’s a considerable difference between tests failing on occasion, and tests failing with regularity. 

When tests fail on occasion, developers can safely assume the new code they’re testing has caused something in the application to break, and they can quickly take action to remedy the problem (assuming, of course, that they’re following the next best practice as well). But when tests are consistently failing, developers begin to rightfully question the results, uncertain as to whether the failure was caused by the newly introduced code, or whether it is instead reflective of a problem with the test script itself. When that happens, manual follow-up and exploration are required, and the agile development process into which you’ve invested so much time and energy screeches to a halt.

Having worked directly with many QA teams, including at large enterprises running millions of tests each year, I generally advise that organizations should aim to pass at least 90 percent of all tests they run. In my experience, that’s usually the breaking point at which the number of failed tests starts to exceed an organization’s ability to manually follow up on those failures. Thus, organizations that find themselves consistently below this threshold should place greater emphasis on designing and maintaining test suites in a manner that will lead to a better pass rate, and allow them to avoid scenarios where the number of failed tests exceeds their bandwidth to implement the appropriate manual follow-up. The best way to do that is through our next best practice. 
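The pass-rate guideline above can be sketched as a simple calculation. This is an illustrative snippet, not part of any specific tool; the function names and the 90 percent threshold are taken from the guidance in this article.

```python
# Sketch: compute a suite's pass rate and flag it against the 90 percent guideline.
# The result lists below are illustrative.

def pass_rate(results):
    """Return the fraction of passing tests, given a list of True/False outcomes."""
    if not results:
        return 0.0
    return sum(results) / len(results)

def needs_attention(results, threshold=0.90):
    """True when the suite's pass rate falls below the guideline threshold."""
    return pass_rate(results) < threshold

# 940 passes out of 1,000 runs -> 94 percent, above the guideline
suite = [True] * 940 + [False] * 60
print(f"pass rate: {pass_rate(suite):.0%}")  # -> pass rate: 94%
print(needs_attention(suite))                # -> False
```

A suite sitting at, say, 80 percent would trip the flag, signaling that failures likely exceed the team's capacity for manual follow-up.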

#2: Keep Your Tests Short and Atomic 

One of the most reliable predictors of test quality is test execution time. It makes sense: the longer and more complex a test becomes, the more opportunity there is for something to go wrong. The shorter a test is, the more likely it is to pass. In fact, according to a new industry benchmark report based on millions of actual end-user tests, tests that complete in two minutes or less are two times more likely to pass than those lasting longer than two minutes. 

Suites with shorter tests are not only more stable, but they execute faster as well. Remember, agile development is first and foremost about speed. The faster your tests execute, the faster you can get feedback to developers, and the more quickly you can deliver apps to production and get that new release into the hands of your customers. Now, it might seem obvious that shorter tests execute faster than longer tests, but most organizations focus on the number of tests within a test suite rather than the length of those tests, mistakenly assuming that a suite with just a few long tests will execute faster than a suite with many short tests. If you’re running tests in parallel (forthcoming best practice spoiler alert), however, the suite featuring many short tests will actually execute dramatically faster than the suite with just a few long tests. 

Take a sample scenario where you have one test suite featuring 18 long-flow, end-to-end tests, and a second suite featuring 180 atomic tests. In almost every instance, the suite featuring 180 atomic tests will execute significantly faster than the suite featuring just 18 tests. In fact, when we model this exact scenario for customers during live demos, the suite featuring 180 atomic tests typically executes 8 times faster than the suite with 18 long-flow tests.

The Power of Atomic Tests 

If you want to keep your tests short - and you should - the best way to do so is by keeping them atomic. An atomic test is one that is scripted to validate a single application feature, and nothing more. So, instead of a single test to validate that your home page loads, visitors can log in with their username and password, items can be added to a cart, and a transaction can be successfully processed, you would design four separate tests, each measuring just one of those aforementioned pieces of functionality. 
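The checkout example above can be sketched in code. The `Store` class below is a hypothetical stand-in for the application under test (real suites would drive a browser or device instead); the point is the shape of the tests: each one validates exactly one feature and builds its own state, so none depends on another.

```python
# Sketch: four atomic tests instead of one long end-to-end flow.
# Store is an illustrative stand-in for the application under test.

class Store:
    def __init__(self):
        self.users = {"alice": "s3cret"}
        self.cart = []

    def home_loads(self):
        return True  # stand-in for a real page-load check

    def login(self, user, password):
        return self.users.get(user) == password

    def add_to_cart(self, item):
        self.cart.append(item)
        return item in self.cart

    def checkout(self):
        return len(self.cart) > 0  # stand-in for "transaction succeeds"

# Each test exercises exactly one feature and sets up its own state:
def test_home_page():
    assert Store().home_loads()

def test_login():
    assert Store().login("alice", "s3cret")

def test_add_to_cart():
    assert Store().add_to_cart("book")

def test_checkout():
    store = Store()
    store.add_to_cart("book")  # setup, not the behavior under test
    assert store.checkout()
```

Because no test reaches into another's state, any one of them can fail, be rerun, or run in parallel without affecting the rest.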

Atomic tests are also considerably easier to debug if and when a test does fail. Being able to quickly get feedback to your developers is great. So is having complete confidence that a failed test signals a break in the application rather than a flawed test script. What is most important is the ability to quickly fix what’s broken, and atomic tests make it far easier for developers to do just that. 

For starters, because atomic tests are inherently short and thus execute more quickly than longer tests (see the sample scenario above), developers are usually receiving feedback on code they only recently wrote. Fixing code you wrote just a few minutes ago is considerably easier than fixing code you wrote hours or days ago. In addition, because atomic tests focus on just a single piece of application functionality, when one does fail, there’s generally no confusion about what’s gone wrong. After all, it can only be that one thing. Developers thus don’t have to spend precious time and energy trying to diagnose the root cause of the problem. They can immediately remedy the bug and quickly get back to the world of developing great software.

#3: Test Across Multiple Platforms

So, you’re scripting short, atomic tests, achieving a high pass rate, and quickly remedying bugs as soon as they’re identified. You’re home free, right? 

Not quite. 

Customers in today's digital world consume information and services across an ever-growing range of browsers, operating systems, and devices. To truly realize the promise of agile development, organizations must rapidly deliver high-quality applications that work as intended whenever, wherever, and however customers want to access them. 

If a customer wants to access your website from a PC using a slightly older version of Firefox, that website needs to look great and function perfectly. If a different customer wants to access your site from an iPad using the latest version of Chrome, the site needs to look great and function perfectly. And if a third customer wants to access your native mobile app from an Android phone, you guessed it, the app needs to look great and function perfectly.

The ability to quickly determine whether an application functions correctly across an ever-growing range of device, operating system and browser combinations is a critical component of effective continuous testing. This includes both mobile and web browsers and operating systems, as well as real devices. Once again, based on my experience working with hundreds of enterprise customers to help them execute millions of tests, I recommend striving to test across at least five platforms (defined as any combination of a desktop or mobile operating system and browser) with each test implementation. This will usually give you the breadth of coverage you need to confidently release your app on the platforms your customers are most likely using. 
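A five-platform matrix might be declared as data and iterated over, as in the illustrative sketch below. The field names loosely mirror common Selenium-style capabilities, but they are assumptions for illustration, not an exact vendor API; in practice `run_suite_on` would dispatch the suite to a remote browser grid or device cloud.

```python
# Sketch: an illustrative matrix of five OS/browser combinations.
# Field names are assumptions, not a specific vendor's capability schema.

PLATFORMS = [
    {"os": "Windows 10", "browser": "chrome",  "version": "latest"},
    {"os": "Windows 10", "browser": "firefox", "version": "latest-1"},
    {"os": "macOS 13",   "browser": "safari",  "version": "latest"},
    {"os": "macOS 13",   "browser": "chrome",  "version": "latest"},
    {"os": "Android 13", "browser": "chrome",  "version": "latest"},
]

def run_suite_on(platform):
    # Placeholder: a real implementation would start a remote session here.
    print(f"running suite on {platform['os']} / {platform['browser']}")

for platform in PLATFORMS:
    run_suite_on(platform)
```

Keeping the matrix as data makes it easy to add a platform (or an older browser version) without touching the tests themselves.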

You should also strive to incorporate real-device testing into your overall continuous testing strategy as well. Doing so usually requires an organization-wide shift to a "mobile-first" (or at least, "mobile equal") mindset, in which a proportional amount of time and resources are dedicated to ensuring that updates to mobile web and mobile native applications keep pace with updates to web applications.

#4: Leverage Parallelization  

Even if you’re brilliantly executing each of the preceding best practices, you simply cannot scale to meet the needs of your growing digital enterprise without parallel test execution. Without parallelization, test suites will eventually take too long to run, and your automated testing initiatives will invariably fail. 

To understand why parallelization is so important, consider the hypothetical example of a test suite with 200 (hopefully atomic!) tests, each of which takes 2 minutes to complete. If you can run those 200 tests in parallel, you can execute the entire suite in just two minutes, giving your developers immediate access to insight on the validity of at least 200 application functions. If you had to run those same 200 tests sequentially, however, it would take nearly 7 hours for you to get that same amount of feedback. That’s a long time for your developers to be waiting around. (And when your developers have to wait around, your customers inevitably do too.)
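The arithmetic behind that example is worth making explicit; the numbers below come straight from the scenario above.

```python
# Back-of-the-envelope timing for the 200-test example above.

tests = 200
minutes_per_test = 2

sequential = tests * minutes_per_test  # 400 minutes end to end
parallel = minutes_per_test            # all 200 at once: one test's duration

print(f"sequential: {sequential} min ({sequential / 60:.1f} h)")  # -> 6.7 h
print(f"fully parallel: {parallel} min")
```

400 minutes is six hours and forty minutes of waiting, versus two minutes when every test runs concurrently.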

The ability to execute tests in parallel is thus table stakes for effective continuous testing. The good news? If you’re following the best practices I’ve already outlined, you’re well on your way. Running tests that are atomic and autonomous (that is, that can execute completely independent of any other tests) is the most important step you can take to position yourself to effectively leverage parallelization. Beyond that, pay attention to designing your test environment in a manner such that you have enough available capacity (usually in the form of VMs) to run tests concurrently. Just as importantly, leverage that capacity to the fullest extent possible when executing your test suites.
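The mechanics can be sketched with Python's standard `concurrent.futures` module. This is a toy model, not a real grid: each "test" just sleeps to simulate execution time, and `max_workers` stands in for the pool of concurrent VMs or browser sessions mentioned above. Because the tests are atomic and autonomous, they can be handed to the pool with no ordering constraints.

```python
# Sketch: running independent (atomic, autonomous) tests concurrently.
# Each "test" sleeps to simulate work; real suites would drive remote sessions.

import time
from concurrent.futures import ThreadPoolExecutor

def run_test(test_id):
    time.sleep(0.05)  # simulated test execution time
    return (test_id, "passed")

test_ids = range(20)

start = time.perf_counter()
# max_workers models your available capacity (e.g., 20 concurrent VMs)
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_test, test_ids))
elapsed = time.perf_counter() - start

# With enough workers, wall-clock time approaches one test's duration,
# not the sum of all twenty.
print(f"ran {len(results)} tests in {elapsed:.2f}s")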

Success or Disillusionment?

Theodore Roosevelt is often credited with the saying that "nothing worth having comes easy." More than 100 years later, those words hold true for continuous testing. Continuous testing is not easy. But there is a roadmap for success, and it revolves around the key pillars outlined in this article. To recap:

  • Focus on test quality: design and maintain suites so that at least 90 percent of your tests pass
  • Keep your tests short and atomic: run suites with many small tests rather than just a few long ones
  • Ensure proper test coverage by expanding the number of platforms against which you test
  • Run tests in parallel so that scalability never becomes a roadblock

I’d like to close by revisiting the inflection point at which agile development teams now find themselves. There are two possible paths forward. One is marked by success, one by disillusionment, with little gray area in between. If you haven’t already, the time is now to commit to achieving continuous testing excellence, and finally turn the promise of agile development into a lasting reality.

By Lubos Parobek, VP of Products, Sauce Labs 

About the Author

Lubos Parobek leads product management and user interaction at Sauce Labs, provider of the world's most comprehensive and trusted continuous testing cloud for web and mobile applications. His previous experience includes product leadership positions at organizations including KACE, Sybase iAnywhere, AvantGo and 3Com. Parobek holds a Master of Business Administration from the University of California, Berkeley.
