
Test Automation: Prevention or Cure?


Key Takeaways

  • Automated UI testing isn't all it's cracked up to be
  • Exploratory testing still has many benefits over automation
  • Breaking work up into smaller batches helps you release faster
  • A shared team understanding of what agile means helps the team make better decisions
  • Giving teams time on the job to learn is a better way of fostering a continuous learning culture within an organisation

Introduction

A lot of teams tend to view test automation as a way of speeding up software delivery, as testing is often the perceived bottleneck within the team. But if they took a deeper look at their development practices as a whole, they might get better results.

Preventing bugs

Testing, and especially test automation at the UI level, tends to occur at the end of the software delivery pipeline, generally trying to catch bugs that could slip out into the live environment and adversely affect our end users (like a germ!). Testing in this case detects the symptoms of the bug, and the fix deployed by the developers is the cure. It's almost as if we are waiting for our systems to get sick and then trying to do something about it.

This approach can work well for teams. However, the current working environment pushes us to do more with fewer people, and faster than ever before, so it is not going to be sustainable in the long run. This is where the prevention rather than cure approach comes in.

By making adjustments to how we build our systems, we are able to detect issues before they occur, or better yet, make our systems less prone to developing bugs in the first place. This means we are preventing the bugs from happening, rather than trying to cure them at a later date. Prevention is better than cure, as the old saying goes.

Our test automation journey

I started out with the mobile team who built and managed our VOD (video on demand) product. At that time, all our testing was manually executed and we were on average only releasing two to three times per year on each platform. We knew we wanted to speed things up, and the most obvious-looking bottleneck to releasing was testing. Each regression test cycle would take nearly two weeks, and that was when no issues were found. If issues were found, then the development team would need to understand the issue, identify a fix, and then apply it. This could then result in invalidating any testing already carried out so the process would need to start again, leading to test cycles taking twice as long.

So we started to look at automating more of our UI testing. We wanted to start small to see if this would take us in the direction we wanted, and opted to automate only new functionality. Once proven, we would look to automate existing areas of the system or known problem areas.

We used the three amigos approach to understand as a team what we wanted to build and what the key acceptance criteria for the feature should be. This gave us a starting point for how to break up the feature and which user journeys to automate.

From there, we identified tools we could use to automate our testing (Calabash, and eventually Appium) and to run tests in realistic environments. For us, this meant real phones as opposed to emulators/simulators, which resulted in us building our own device testing farm to make better use of our mobile devices and to allow it to scale across the organisation.
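To illustrate what targeting real devices involves, the sketch below shows the kind of session configuration an Appium test against a physical Android handset might use, plus the sort of short smoke journeys we would script first. This is a hedged illustration, not our actual test code: every value is a placeholder, and the exact capability names depend on the Appium client and driver versions you use.

```python
# Sketch of the capabilities an Appium session might use to target a real
# Android handset rather than an emulator. All values are illustrative
# placeholders, not an actual configuration.
caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",
    "appium:udid": "device-serial-goes-here",   # serial of a real device
    "appium:app": "/path/to/app-under-test.apk",
}

# The key user journeys to script first: short, stable smoke scenarios
# rather than exhaustive coverage.
smoke_journeys = [
    "launch app and reach the home screen",
    "search for a programme and open its page",
    "start playback and confirm video is playing",
]

# With the Appium Python client installed and an Appium server running, the
# session would then be created along these lines (commented out here, as
# it needs a live server and device):
#   from appium import webdriver
#   driver = webdriver.Remote("http://localhost:4723", ...)
```

In a device farm, the `udid` (and the server URL) becomes the axis you scale over: the same test code runs against whichever free handset the farm allocates.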

More details can be found in a three-part series on my blog: Automating BBC iPlayer mobile testing part one: 3 amigos to identify use cases; part two: automation process; and part three: legacy vs new features.

The benefits that test automation brought us 

At first, the automation helped a lot, as we could now quickly and reliably run through simple scenarios and get the fast feedback we wanted. But as time went on, and after the first initial set of bugs was caught, it found fewer and fewer issues unless we explicitly encoded the automated test cases to look for them.

We also noticed issues were still getting through, because some scenarios we just couldn't automate; anything related to usability, for example, had to be tested manually. So we ended up with a hybrid solution: the automation would quickly run key scenarios, letting the team know they hadn't broken anything obvious, while exploratory testing covered any new functionality. That new functionality could in turn be automated if suitable, for instance where it was difficult to test, where we were prone to making mistakes while testing it, or where it simply took too long to do manually.

An unexpected benefit indirectly linked to our automation journey was that as we started to release faster, it created a stronger focus on what we were trying to achieve. It resulted in us breaking each new feature down into small chunks that could be worked on independently, and therefore automated. This allowed us to release those chunks into the live environment quicker and start gaining real feedback from real users. At first this wasn't apparent, as we were still trying to work out how we identified automation scenarios. It was only with hindsight that the team was able to see what they had inadvertently done. Simply put, we started to break down our work into small batches of end-user value.

Investigating our development lifestyle 

We started to realise that the automated UI testing wasn't really giving us the returns we wanted. Because of this, we started to look at other areas of our development process to see if we could make any improvements. But one of our problems as a team was that we were too close to the processes to see objectively what was and wasn't working. To overcome this, we brought in an agile coach to help our teams. In fact, we brought in two: one to help the team understand the processes they were using, and the other, an engineering coach, to help us better understand how we were actually building our systems.

The external viewpoint allowed the coaches to poke at parts of our system without the worry of offending anyone, and to ask simple questions that got us to see the reasons behind our ways of working and break us out of the "we've just always done it this way" cycle. For instance, our stand-up boards for managing our work had the usual columns of backlog, next, dev, waiting-for-test, in test and done, but we had never thought to ask why we had a next and a waiting-for-test column. Our coaches helped by questioning why we let work build up in these columns, and why development and test were seen as two distinct activities. The coaches' approach was not to simply change our processes, but to help us see what problems they were causing (unreleased value sitting in queues masked as next and waiting-for-test) and to get the team seeing work through to done by eliminating the dev and test columns, replacing them with a simple and self-explanatory in-progress column. You can find out more about the benefits of moving to this way of working in my In test column post.

What we learned

One of the biggest issues we found was that we had a lot of cargo-culting going on in our teams in terms of our agile development practices. Just because we had stand-ups, worked in small teams and released things at the end of a sprint didn't mean we were actually agile. It just meant we had a few ceremonies that made us look "agile". It turned out that not everyone was sure why we did what we did, or even what the supposed benefits were. One of the first things we did was to clarify what it meant to be agile: that it's more about sustainable software delivery based on objective feedback, as opposed to just trying to go as fast as possible, releasing whatever you can and hoping for the best. We did this through book clubs and facilitated team discussions to bring about a joint understanding within the team. This helped the team members get a better grasp of the principles behind agile practices and make better decisions about their way of working.

We also started to look at how we were actually building the system at the code level, and attempted to visualise how developers were committing code: how often, and how large each commit was. This wasn't an attempt to shame developers, but to help them understand how they, as a group, affected the code base, and to encourage more productive habits, such as small, focused, regular commits as opposed to large commits at the end of the day. If someone did make a large commit, that was OK too, but they should let the other developers know why, so they too could learn.
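A lightweight way to get that kind of visibility is to summarise commit sizes straight from the version-control log. The sketch below parses sample output shaped like `git log --numstat`; in practice you would feed it real repository history, and the 200-line "large commit" threshold is an arbitrary assumption for illustration.

```python
from collections import Counter

# Sample text in the shape of `git log --pretty=format:%h --numstat`:
# a commit hash line, followed by "added<TAB>deleted<TAB>path" lines.
sample_log = """\
a1b2c3d
12\t3\tsrc/player.py
5\t1\tsrc/ui.py
e4f5a6b
250\t40\tsrc/player.py
310\t22\tsrc/downloads.py
"""

def commit_sizes(log_text):
    """Return {commit_hash: total lines changed} from numstat-style text."""
    sizes = Counter()
    current = None
    for line in log_text.splitlines():
        parts = line.split("\t")
        if len(parts) == 1 and line.strip():
            current = line.strip()              # a commit-hash line
        elif len(parts) == 3 and current:
            added, deleted, _path = parts
            sizes[current] += int(added) + int(deleted)
    return dict(sizes)

for commit, churn in commit_sizes(sample_log).items():
    # Flag unusually large commits so the team can discuss them -- the aim
    # is a conversation starter, not a leaderboard.
    label = "large" if churn > 200 else "small"
    print(f"{commit}: {churn} lines changed ({label})")
```

Run over a few weeks of history, a summary like this makes commit habits visible as a group pattern rather than singling anyone out.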

One of our biggest changes was encouraging pair programming, so that no developer was ever working on a feature alone. This sped up code reviews, and people were also less likely to take shortcuts when being held accountable. It also helped improve the skills and knowledge of our more junior team members more quickly, making them productive sooner than traditional methods such as working on dummy projects reviewed by more senior members.

My advice for a more productive and healthier development lifestyle

Work as a team, and in that team identify what a productive, healthier development process looks like. One helpful method for starting these discussions is setting up a team video club. This allows the team to take some time from the day-to-day activities and spend it on learning about new tools or approaches to building software. At the end of each session, a team discussion is facilitated by the session leader (project manager, tech lead, or the person bringing the idea to the team) to explore how they could use the concepts to help drive the team to experiment with something new.

Then choose one concept to work on, with a clear idea of what the outcome should be. So, if we take better unit testing as an example: what does unit testing mean to the team? What would having better unit tests give the team? Once you have these answers, develop multiple ideas on how you could achieve this, so that you can choose the one which allows you to test the outcome quickly and objectively. You want to be very clear about whether the new process or technique actually helped you achieve your outcome within a set timeframe. If it did, great. If not, do you need to stop it? Do more? Tweak it? You also want to decide who is actually going to run the experiment, and how they will communicate the results back to the rest of the team.
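To make such an experiment concrete: suppose the outcome the team agrees on is "a failing unit test should point directly at the single behaviour that broke". You could compare a catch-all test with small, focused ones. The function and tests below are invented purely for illustration; they are not from the article's codebase.

```python
def parse_duration(text):
    """Parse a 'MM:SS' string into total seconds."""
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)

# A vague, catch-all test: if it fails, you still have to dig to find out
# which behaviour actually broke.
def test_parse_duration_everything():
    assert parse_duration("01:30") == 90
    assert parse_duration("00:00") == 0
    assert parse_duration("10:05") == 605

# Focused tests: each name states the one behaviour it checks, so a
# failure reads like a bug report.
def test_minutes_are_converted_to_seconds():
    assert parse_duration("02:00") == 120

def test_seconds_are_added_to_minutes():
    assert parse_duration("01:30") == 90

if __name__ == "__main__":
    # Runnable as-is; with pytest installed you would just run `pytest`.
    test_parse_duration_everything()
    test_minutes_are_converted_to_seconds()
    test_seconds_are_added_to_minutes()
    print("all checks passed")
```

The experiment's measure could then be as simple as: over the next month, did failures in the focused suite get diagnosed faster than failures in the catch-all one?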

Remember, if you want any new process or idea to take hold within a team, then they all need to have a stake in it; otherwise at the first sign of difficulty the idea will stop or limp along slowly, with only the people invested in it getting any benefit. 

About the Author

Jitesh Gosai has over 15 years of testing experience, working with a wide variety of companies, from mobile manufacturers to OS builders and app developers. He is currently the principal tester in the TV & Radio department at the BBC, working with the Mobile, TV and Web Platforms teams to help identify their test approaches and how the teams can move to DevOps and beyond.
