
Kyle McMeekin on Real World Testing Challenges


At the recent Agile 2016 conference, InfoQ spoke to Kyle McMeekin about the real world challenges around software testing in agile development, the push to have more test automation and how exploratory testing is different from and more effective than scripted manual testing. 

InfoQ: Kyle, please tell us a little bit about yourself and a little bit about QASymphony.

QASymphony is a software company helping teams create better quality software. We help teams, organizations, and companies that are taking an Agile stance or shifting towards Agile methodologies. We provide a software offering that helps QA departments achieve their Agile initiatives. Within QASymphony, I’m a senior product engineer; I’m responsible for essentially mastering the different product offerings that we have, providing product demonstrations, and working with teams and organizations to align their processes and their initiatives to fit in with our tools.

InfoQ: What are some of the real world challenges that you see out there in the marketplace?

I think the mentality around shifting towards Agile is that it can just be a blanket term. So I see often that teams are moving and they’re saying, “Yes, we are now Agile”, but what does that mean? I probably talk to hundreds of customers a month and I always hear a different definition of that. So some of the challenges I see are around providing teams with the ability and the tool stack to make their Agile initiative successful. Often, we’re working with organizations where their development teams are becoming more Agile. They’re adopting Agile ALMs like JIRA, Rally, VersionOne, and the QA individuals are kind of just on an island and wondering, “Well, what do we do to become more Agile?”

More often than not, there’s a big push to become more automated. These teams don’t want to be the bottleneck where they’re running through all their manual tests, and they’re looking for ways in which they can automate them. That’s a big initiative that I see in terms of teams looking to become more automated. And then on top of that, exploratory-based testing, that’s another thing that I hear about all the time. Exploratory-based testing means not having predefined steps that your testers or quality individuals run through. With a waterfall-based approach, you might be running through a lot of manual tests where you have a script and you’re saying, “Okay, I’m going to complete step one and I expect to see this”, and you’re just going through the motions of running tests.
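To make that contrast concrete, here is a minimal sketch in Python of what such a scripted manual test case might look like if written down as data. The structure and field names are illustrative assumptions only; they are not taken from the interview or from any QASymphony tool.

```python
# A minimal sketch of a scripted (waterfall-style) manual test case.
# The structure and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ScriptedStep:
    action: str           # what the tester is told to do
    expected_result: str  # what the tester is told they should see


@dataclass
class ScriptedTestCase:
    title: str
    steps: List[ScriptedStep] = field(default_factory=list)


checkout_test = ScriptedTestCase(
    title="Checkout with a saved credit card",
    steps=[
        ScriptedStep("Add any item to the cart", "Cart badge shows 1 item"),
        ScriptedStep("Open the cart and click 'Checkout'", "Billing page is displayed"),
        ScriptedStep("Submit the order with the saved card", "Order confirmation page appears"),
    ],
)

# A scripted run is "going through the motions": each prescribed step is
# executed and its expected result is checked, with no room to deviate.
for number, step in enumerate(checkout_test.steps, start=1):
    print(f"Step {number}: {step.action} -> expect: {step.expected_result}")
```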

With exploratory-based testing, it’s more where you’re doing chartered sessions, if you will. You’re going through and testing the application with the product knowledge that you possess. So it’s not going through and just wildly clicking around; there is rhyme and reason to it. You put on a persona: “I’m going to be an admin and I’m going to run through an end-to-end use case, let’s say, where I’m checking out items on Amazon, and I’m going to add them to my cart, enter my billing details …” but that’s not something that’s been prescribed to you. There are different ways to actually go about running through that use case.

So that’s something that I think can help empower testers to speed up, in terms of how much they can cover from a testing perspective during, say, a sprint. They’re not limited or handcuffed by the manual tests that have been brought forth. It’s been proven to find more bugs and, ultimately, to get those discovered during the testing phase instead of them slipping through into production.

InfoQ: If it’s not just randomly clicking around, what is it then? How do you set up an exploratory charter?

There’s typically a bunch of different fields or elements that you’d fill out prior to conducting an exploratory-based test. You would have an objective: what’s the purpose of what I’m hoping to test during this session? What role am I taking on? Are there any preconditions? And you would have a debrief at the end: how do you think the session went? What was one way that you went about testing it that perhaps a fellow colleague did not?
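The fields described here translate naturally into a small charter template. The sketch below is a hypothetical illustration of how such a charter might be captured as data; the ExploratoryCharter structure and its field names are assumptions made for this example and are not part of any specific tool.

```python
# A minimal sketch of an exploratory testing charter, based on the fields
# mentioned above (objective, role/persona, preconditions, debrief).
# Field names and structure are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ExploratoryCharter:
    objective: str                 # what you hope to learn or test in this session
    persona: str                   # the role you take on (e.g. admin, new customer)
    preconditions: List[str] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)  # observations made during the session
    debrief: Optional[str] = None  # filled in at the end: how it went, what was unique


checkout_session = ExploratoryCharter(
    objective="Explore the end-to-end checkout flow for surprises",
    persona="Admin user with an existing account and saved payment details",
    preconditions=["Test account exists", "Catalogue has in-stock items"],
)

# During the session the tester chooses the route; only observations are recorded.
checkout_session.notes.append("Cart total did not update after removing a discounted item")

# The debrief captures what was covered and what a colleague might have done differently.
checkout_session.debrief = "Covered cart, billing and confirmation; found one cart total bug"
print(checkout_session)
```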

So a way that I like to think about it is in terms of directions. I mean, if you are familiar with Waze or Google Maps, you might plug in the location where you’re starting, point A, and where you’re looking to get to, point B, but those applications offer different routes: “Well, hey, let’s avoid this area because of construction,” or “Let’s choose this way because it might be five miles longer but we’ll get to see a certain scenic route.”

So there are different ways you can ultimately get to your end destination, and that’s really the simplest way that I would think about exploratory-based testing. It’s not always going to be railroad tracks where you’re going from point A to point B and you’re not deviating from that path. There are a lot of different ways to test out how a user, for example, would use an application; it’s not always going to be this prescribed, straightforward way. I think it’s definitely important to have those details laid out, to have a plan of attack that you’re hoping to tackle during a session. And ultimately, that’s going to keep testers on their toes. It’s going to allow them to use their brains to think through different use cases, and it’s not something where they’re just going through and running tests based off of a prescribed script, like a manual test case.

InfoQ:  You made a statement that this has proven to find more bugs. Can you give us more background on that?

Our company, QASymphony, did a webinar on it and we pulled together some slide decks, which I don’t have in front of me, but there have also been some case studies that we’ve done with clients where we actually let them leverage an exploratory-based testing tool that we have. Just in terms of overall satisfaction, tester job satisfaction, and value add, they’re actually able to go out and collaborate amongst themselves and with others to demonstrate what they’ve done. The detail behind this research can be found there. It’s been pretty powerful. There have been a lot of different thought leaders; David Cummings is a big thought leader in this space, and he’s mentioned a lot around the value add that exploratory testing can provide.

InfoQ: Thank you very much for taking the time to talk to us.

Absolutely.

About the Interviewee:

Kyle McMeekin is a Senior Product Specialist at QASymphony, focusing his time on customer demonstrations of product offerings and technical support. He previously spent time as a tester at Cognizant Technology Solutions and moved to the Atlanta area after growing up near Washington, DC. He is an avid technology enthusiast as well as a die-hard Michigan Wolverines fan.

 
