Facilitating the Spread of Knowledge and Innovation in Professional Software Development


The Current and Future State of Testing: a Conversation with Lisa Crispin

Key Takeaways

  • To make an Agile transition successful, you have to get management involved to understand and support what Agile means for testing.
  • In Agile development, we still need specialists. The difference is we have specialists who have learned how to collaborate; they've learned how to work together and communicate well. So we can transfer the skills to each other. 
  • Automation is not always the best strategy to find a problem. It's crucial to analyze and experiment to find the minimum number of automated tests that are good enough; that frees up time to cover high-risk areas with techniques like exploratory testing.
  • Testers need to learn DevOps practices, DevOps culture, and continuous delivery and deployment.
  • Testers need to work on their communication skills. Software development is 20 percent coding skills and 80 percent communication and collaboration.

Agile has been a hot, but much-debated, topic in recent years. One question is how to face the challenge of testing. Agile testing tries to answer this question in many different ways, covering organization, people, process, tools, communication, and collaboration. Lisa Crispin and Janet Gregory have written books on Agile testing and created online courses. They are also building an agile testing community for people who want to contribute and learn. All of their efforts help spread Agile knowledge, techniques, and culture across the Agile world.

Xiaoqian (Christina) Geng: You and Janet Gregory are among the first evangelists of agile testing and wrote the well-known books Agile Testing: A Practical Guide for Testers and Agile Teams and More Agile Testing: Learning Journeys for the Whole Team. Agile testing goes hand in hand with agile development success and is supposed to be part of it. What do you see as the biggest challenges in moving toward a successful agile transition?

Lisa Crispin: I think one of the biggest challenges is that people don't understand agile very well, or they think that as soon as they start having two-week iterations and standups they'll go faster. We have to learn a lot of new skills, and a massive part of it is the whole team taking responsibility for quality and testing. People have to work together to build quality into the product, to build the testing infrastructure, to do both automated and manual testing, and to have the confidence to deliver those small increments of value very frequently. To do that, everybody on the team needs to get engaged in testing activities. If management doesn't support that and motivate people to care about testing and about learning to test - if all of the developers just code features without testing - it won't work.

It's constructive for developers if their professional development includes testing skills, and management needs to understand that and build it into developers' growth paths. So we felt a need to educate managers about testing, because they don't understand it very well. A lot of development managers and business managers say they want quality, but don't understand what that means or why they should make a significant investment. Quality pays off in the long run. If you focus on speed, you'll cut too many corners and go more and more slowly; you have to focus on quality to get speed in the long run.

Janet and I have a new book out. Because our earlier books are very big, we've written a little book called Agile Testing Condensed: A Brief Introduction, and we're aiming for only a hundred pages. We're hoping managers, among others, will read it to get an overview. We hope it appeals to managers and that they'll understand why they need to support their teams in getting everybody engaged in testing activities and building quality into the product, so they can make the transition to agile. We are self-publishing it because we don't want it to be expensive. We're going to have it on LeanPub so people can get an electronic copy, and we'll put it on Amazon in case people want a print copy. We want to spread that information, especially in this era when everybody wants to get to continuous delivery or continuous deployment.

We published a blog post a couple of weeks ago about how, if you focus on making the transition to continuous delivery and adopting a DevOps culture, you'll achieve an agile transformation too. When you have to figure out how to deliver twice a week, or once a week, or maybe more often, you'll realize you need automated tests to do that. Oh, we do need the unit tests; we do need practices like test-driven development and pairing - all these agile practices. We can't succeed with continuous delivery if we don't do these things. We should start with the goal and then do what we need to support that goal, whereas with agile the goal is kind of vague: 'we want to go faster.'

Geng: One of the reasons development teams want agile development is that people think it can help deliver features faster. Most of the time, testing becomes a bottleneck at the end of the development lifecycle if it has not been addressed correctly from the beginning. If testing is not on the developers' priority list, it becomes hard to get them to buy in to the idea that tests should be part of development work.

Crispin: Even if they agree with you, they will not do it, because it's a low priority; that's why I say their management has to be on board.

Take my last team: the director of development understood we needed exploratory testing as well as all the automation. We had many automated tests, but we still had problems when code got into production, because we needed exploratory testing. So he made exploratory testing competency part of the career ladder for developers at each level. Then he asked us testers to run workshops to teach developers exploratory testing. We also paired with them regularly, so they could see how we did testing, and we could help them with writing tests - for example, "what would happen if you did this," or "you should have a negative test for this." It helped them start thinking about it. We made them a little checklist of heuristics to go through, and we also used Elisabeth Hendrickson's test heuristics cheat sheet. We just gave them new tools for their toolbox. They liked it, because they found the bugs early and could see the benefit.

Geng: Someone said a 3-year-old boy can find bugs in a video game, and a manufacturing company CEO can find some bugs in a financial application, for example. Does that mean everybody can do testing? In other words, do you think developers can do all the work that testers are doing today?

Crispin: Well, they can undoubtedly develop competency and many testing skills. You know Alan Page and Brent Jensen have the AB Testing podcast, and over the past few years they came up with what they call "modern testing principles." Their vision is that testers become coaches, and they coach their team until the team doesn't need them anymore. That could work in some business domains where the risk is low. For things like financial services, where money is at stake, or anything critical to business or life, I think you still need the specialists, because it takes in-depth knowledge. I've been doing testing for 30 years; can I transfer all those skills to somebody? We need testing specialists, just like we can all learn a little bit about user experience design, but for most applications we want design experts on our team.

Johanna Rothman had a blog post a few weeks ago where she was saying that in the early days of agile, we said, "Well, we'll all be generalists; we'll all have all the skills." She said: "You know what's happened? We have specialists who have learned how to collaborate; they've learned how to work together and communicate well. So we can transfer the skills to each other. However, we still have the specialists, and they're not siloed off alone somewhere. We're bringing them all together, and that's the key." I agree with that. We need specialists who are trying to transfer all the skills they can. However, you can't be an expert in everything; I don't think everybody can know everything.

Geng: A large-scale distributed platform may have thousands of configurations and highly customizable deployments. A developer spends time understanding a component's dependencies, but connecting all the dots the way a customer does is demanding for a developer to learn; it needs in-depth analysis and extensive product understanding.

Crispin: The domain knowledge, I think, is essential. I've added a lot of value by really getting deep into the business domain. Developers have to focus on the small piece of code they're writing; they have to be focused. They don't have time to step back and learn the whole business. If I'm there, I can help them with that.

Geng: So, we shouldn't put that expectation on a developer?

Crispin: Right!                

Geng: Over time, test automation is prone to become inefficient and expensive. What's the best test automation strategy in Continuous Delivery?

Crispin: The book Accelerate, written by Nicole Forsgren, Jez Humble, and Gene Kim, is based on the first four years of State of DevOps survey results, and it's a scientific study. Dr. Forsgren is an expert at using this kind of data: asking the right questions and finding what correlates with being a high-performing team - teams that are successful at continuous delivery, whose customers are happy, and whose members enjoy their work. They found that one predictor of being a high-performing team is that developers do the test automation. They run those tests themselves on their local machines, and they deal with the test failures in the continuous integration and deployment pipeline.

Moreover, they work together with testers, so you need both. The developers have to be doing automation, but they also need testers to help with automation and to do the manual testing activities. That has been my experience: when testers and developers collaborate on automation, we get the best results. That was my experience over the past couple of decades, and now there's scientific data that backs it up, so I was very excited to read that. Here's something that's not just somebody's theory; it's backed by data. I hope it will help persuade more developers and their managers that developers do need to do the automation - not only the service-level or API tests but also the tests that go through the UI, which many developers want to avoid. Collaboration is essential, because testers are great at specifying test cases. We need to bring those skill sets together.

Geng: Over the years, automated tests become a huge testing asset, but keeping them all reliable and useful sometimes turns difficult.

Crispin: Yes, it's very tough; we struggle with that too. We need to make sure we have the right tests and enough coverage. We need to make sure we don't say, "Oh, that test fails all the time, I'm going to ignore that whole test suite" - that's very dangerous. If it's not an important test, don't have it. If it is a necessary test, make it reliable. We should have confidence in our tests. It's crucial to analyze and experiment to find the minimum number of automated tests that are good enough; that frees up our time to cover other high-risk areas with things like exploratory testing. Some things are better to test manually, and we can test those manually. It needs to be a combination. For continuous delivery, we obviously can't do all testing manually fast enough, but we can use things like release feature toggles and say, "OK, we're going to deploy it, keep it turned off, or turn it on just for us; we'll do the testing, and then we're happy to turn it on for everybody later." We have all this new technology that helps support that. It's inspiring. However, we still have to let the specialists help where they're needed and collaborate, so we have all the skills we need.
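As a toy illustration of the release-toggle pattern described here (deploy with the feature off, enable it just for the team's own testing, then release it to everybody), here is a minimal sketch. The `FeatureToggles` class and the flag names are hypothetical; real teams typically use a persistent flag service rather than an in-memory store.

```python
class FeatureToggles:
    """Keeps a feature dark in production until the team turns it on."""

    def __init__(self):
        # feature name -> set of user ids allowed to see it ("*" = everyone)
        self._flags = {}

    def enable_for(self, feature, user_id):
        """Turn a feature on for one user, e.g. an internal tester."""
        self._flags.setdefault(feature, set()).add(user_id)

    def enable_for_all(self, feature):
        """Release the feature to every user."""
        self._flags.setdefault(feature, set()).add("*")

    def is_enabled(self, feature, user_id):
        allowed = self._flags.get(feature, set())
        return "*" in allowed or user_id in allowed


toggles = FeatureToggles()
# Deployed with the feature off: nobody sees it yet.
assert not toggles.is_enabled("new-checkout", "customer-42")
# Turn it on just for an internal tester and do exploratory testing.
toggles.enable_for("new-checkout", "tester-1")
assert toggles.is_enabled("new-checkout", "tester-1")
assert not toggles.is_enabled("new-checkout", "customer-42")
# Once the team is happy, turn it on for everybody.
toggles.enable_for_all("new-checkout")
assert toggles.is_enabled("new-checkout", "customer-42")
```

The point of the pattern is that deployment and release become separate decisions, which is what makes manual and exploratory testing possible inside a continuous delivery pipeline.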

Geng: I see a new word, "DevTestOps". Can you tell us about it?

Crispin: At the start we used the term "DevTestOps", which others have used in the past. I like it because, for me, testing is the heart of DevOps. Take Jez Humble and David Farley's book Continuous Delivery, published in 2010. They asked me to be a technical reviewer for their manuscript before they finished and published the book, and I said, "My team does these practices, but I'm not an expert at those things."

And they said, "No, we want you to review it." So I read it, and it's really a book about testing; the whole book is about testing. That's the heart of continuous delivery, and Jez Humble is very supportive of my saying that. Some people love the word "DevTestOps"; some people say you're making silos, because testing is part of development. I can see that. Testers need to learn DevOps practices, DevOps culture, and continuous delivery and deployment. This is where software is going, so we have to keep evolving, and our skills are really needed. Testers are good at things like risk analysis. We're good at identifying patterns, so when monitoring production with a tool like Splunk, we can start to notice patterns other people might not notice, because that's our thought process. We can use observability to look at what's happening in production: how are people using our product? Where do we need to focus our testing? We have many tools at our disposal to decide, but testers need to get involved. I think many testers are scared that when they hear the word DevOps, they'll see no place for testing there. In continuous delivery, will we have time to do the manual testing? Some people aren't interested in becoming test automators but want to do testing; they think if it's only about automation, they don't want to get involved. We need to show them: "No, all these testing activities still have to happen. We're just using some technology to help us do it differently."

So that's the purpose of the site: to help people learn about it. We have links to all the articles, books, videos, and podcasts. We're also getting fantastic guest blog posts, and we would love more; we welcome anybody who wants to contribute. We're there to help raise awareness, educate, and get people excited about it.

Geng: Modern Testing Principles was created by Alan Page and Brent Jensen; they call it the evolution of the Agile Tester. With these seven principles of Modern Testing, testers can start moving from being the owners of quality to being the ambassadors of shippable quality, delivering value, and improving the quality culture of the team. What do you think about it?      

Crispin: In many business domains, I don't think that can happen. At least there needs to be a consultant on the team doing some specialized tests. Developers have a different perspective; they focus very narrowly on the small thing they're working on at a time. Somebody has to be looking at the big picture. We need testers on the team for that, but maybe not as many testers. On my last team, we had three testers for 30 developers. We couldn't possibly test all the output of all the developers, but we could help the developers learn to do exploratory testing at the story level. Then we testers would write exploratory charters at the feature level, and when enough stories were done to start testing a feature, we could pair up with a product owner, designer, or developer and do that exploratory testing at a higher level.

I think you can have fewer testers if the rest of the team steps up and if they're building in quality. If the developers are not doing test-driven development, or at least automating unit tests, then nothing is going to work; even full automation at other levels can't generally replace unit tests. Developers have to want to build in quality, and they do. I know every developer wants to deliver a quality product, but they may be pressured just to get the feature out the door, or even worse, told, "After the next release, we'll start hardening the feature and automating tests." It's like being too busy drowning to learn how to swim. We should be not just quality ambassadors but promoters: somebody to remind everybody and also help them achieve it, helping them build the skills to match that level of quality. But it takes a real commitment from the whole team. They have to want it.

Geng: What do you think is a healthy ratio between developer and tester/QA in a team?

Crispin: There's never been any set ratio that works for every team. I think the team needs to ask themselves, "What's the biggest problem we're having?" If the most significant problem is related to quality - maybe testing is a bottleneck - you need to look at how you do testing. You may bring more testers on board, or different types of testers. From 2003 to 2012 I worked on a team at a financial services company - risky software. It's about people's money; our software was for selling and managing retirement accounts provided by employers for their employees. We were a small team, but we were incredibly high performing. Writing the code wasn't too hard, but making sure it worked, and that every single edge case worked, was challenging. We're talking about money; you have to go down six decimal places. It has to be exact, because if you mess it up, it's tough to go back and undo it. The last team I worked on built a project-tracking tool; nobody is in big trouble if their project-tracking tool is down for a little while. It's not critical to the business. It'll annoy them; it causes pain, but it's not horrible. Having two or three testers for 30 developers was OK, mainly because the culture was very quality-focused. There was pair programming; they did test-driven development; they cared about exploratory testing, and everybody wanted to learn how. In that case, it was fine to have fewer testers. I know some teams where the developers are good at testing, and it works fine for them without having testers. So it just depends on the case.

Geng: How should you use data to measure product quality?

Crispin: Measuring quality is always a big challenge, and I'm always thinking about it. Anne-Marie Charrett has done some really interesting work in this area; different graphs paint a picture of quality in different areas. Now that we have these great analytics tools and all this data, one measure is whether people are using a particular feature. If they're not using it, is it because they didn't want it, or because it's hard to use, or hard to discover? We can use learning releases: when we start working on a new feature, do a very thin-slice minimum viable product (MVP) or learning release, get it out, watch users use it, then interview them to get their feedback. We found it very helpful just to have a feedback widget (I'm talking about web-based apps) where users can just click and give feedback. That feedback was really valuable to us. Another thing we track is "rage clicks" - when somebody gets mad, they just click, click, click ...

The tester on our team gathers metrics every week and puts them on a big monitor in the office, so we can see a mix of how many customer tickets came in, how many rage clicks there were, and the rate of usage of different features - a wide variety of metrics together. Another thing our team does is have frequent calls with customers, especially new customers. Yesterday, for example, they talked to a company that said they need our application on mobile devices; we don't support mobile devices yet, and we called quite a few people in to hear what the customer wants. If a product person comes to me and says, "I need this new feature," I ask, "How will we know if it's successful in production?" Let's think about how we're going to measure it right now. Many times they can't even answer that question.
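The "rage click" metric mentioned above is usually just a heuristic over the click-event stream. Here is a minimal sketch; the `(timestamp, element)` event format, the `detect_rage_clicks` helper, and the threshold of four clicks within two seconds are all illustrative assumptions, not any specific analytics product's API.

```python
def detect_rage_clicks(events, threshold=4, window=2.0):
    """Return the element ids that received `threshold` or more clicks
    within `window` seconds - a common heuristic for user frustration.
    `events` is a time-ordered list of (timestamp, element_id) pairs."""
    rage = set()
    recent = {}  # element_id -> timestamps of clicks inside the window
    for timestamp, element in events:
        clicks = recent.setdefault(element, [])
        clicks.append(timestamp)
        # Slide the window: drop clicks older than `window` seconds.
        recent[element] = [t for t in clicks if timestamp - t <= window]
        if len(recent[element]) >= threshold:
            rage.add(element)
    return rage


# Four rapid clicks on the save button, versus two calm clicks on a menu.
events = [
    (0.0, "save-button"), (0.3, "save-button"),
    (0.6, "save-button"), (0.9, "save-button"),
    (1.0, "menu"), (5.0, "menu"),
]
assert detect_rage_clicks(events) == {"save-button"}
```

Feeding a week of such events through a heuristic like this is one way a team could put a rage-click count on the office monitor alongside ticket counts and feature usage.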

Geng: Do you think technology like artificial intelligence (AI) and machine learning can help us in testing?         

Crispin: The testing platform mabl is using machine learning for visual checking. That is very nice because, after a certain number of runs - say, 10 runs - machine learning can identify the parts of the screen that are static and the parts that change all the time. A retail website may have dynamic elements like a pop-up banner or advertising, something different every day. The machine learning learns to ignore the part of the page that's always different and to recognize the part that always looks the same. Going forward, if something changes on a static part of the page, it can put out a warning: you may want to check into this. It gives users a baseline and tells them when something changes, so if the change wasn't expected, they can investigate it.
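The static-versus-dynamic distinction described here can be illustrated with a toy sketch: compare the same pixel position across several runs and keep only the positions that never changed. The grid-of-numbers representation and the `static_regions` helper are hypothetical simplifications; a real visual-checking tool works on full screenshots, with tolerance for rendering noise, rather than exact pixel equality.

```python
def static_regions(runs):
    """Given screenshots from several runs (each a 2D grid of pixel
    values), return a same-shaped grid of booleans: True where the
    pixel never changed across runs, i.e. the 'static' part of the
    page that future runs can be checked against."""
    first = runs[0]
    return [
        [all(run[r][c] == first[r][c] for run in runs)
         for c in range(len(first[0]))]
        for r in range(len(first))
    ]


# Three runs of a tiny 2x3 page: the left column is a static header,
# the right two columns are a rotating banner ad.
runs = [
    [[1, 7, 8], [1, 2, 3]],
    [[1, 9, 4], [1, 5, 6]],
    [[1, 0, 2], [1, 8, 9]],
]
mask = static_regions(runs)
assert mask == [[True, False, False], [True, False, False]]
```

Once the mask is learned, a change inside a `True` region on a later run is worth a warning, while changes in `False` regions are expected and ignored, which is the behavior described above.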

It's not foolproof yet, but I think it does help save time, and the same goes for page-load times: it tracks the average page-load time, and if it goes up by a certain percentage, that's a warning. However, machine learning is only as good as the data you train it on, so it can be dangerous; you can reach the wrong conclusions because it was trained on the wrong data. It's something we have to learn a lot more about before we use it and rely on it widely. We have to get better at knowing how to train it on the correct data and consider all the possibilities, because it can be dangerous - though with testing, it's not as bad as a self-driving car. Everybody wants AI to fix all our problems, but the main parts of our product today are just heuristics and some code. We're certainly moving toward tools that give you information on what real users actually did in the application. There's undoubtedly a capability to see the most heavily used parts of the user interface and ask: what's our test coverage like for those parts? Do we have tests that go to those pages and make assertions? There's even a possibility the tool could generate test cases for those highly used parts.

I think it's going to help with many things like that, but I think we need to be very aware we're using something that's not magic. It's a tool we can use. So, I'm excited about what it might make possible.    

Geng: There are some things researchers have been doing using AI to do development work. Can you say more about that?

Crispin: Developers tell me that AI is more likely to take a developer's job than a tester's job. When we use machine learning, for example, we use data: we have to test the data to know whether it's valid to train on, and we have to test the machine learning algorithms. The testing is still there. On the other hand, you can give a machine twenty thousand examples of ways to code something, and it can learn how to code; that's doable. However, could you give it human judgment?

Geng: Last question: what are your suggestions for new testers?

Crispin: I think it's important to know that whatever you discover - bugs, for example - is likely to be a communication problem. The business didn't express what they wanted very well, or the development team misunderstood what the business wanted, so it looks like a defect, but it's really a miscommunication. So I would tell testers to work on your communication skills. I've heard many times that software development is 20 percent coding skills and 80 percent communication and collaboration, so-called soft skills. Work on your soft skills, because they're the hardest ones to learn: communication, listening, observational skills, critical thinking skills - and there are many ways to do that. You also need to be aware of your unconscious biases. Testers are quite vulnerable to them; sometimes we test what we expect to see. We have to be able to notice the things we didn't expect - know the unknowns. These thinking skills are essential for testers. Don't worry so much about your technical skills; those are easier to learn.

Geng: Thank you. Is this your general suggestion to all testers?

Crispin: I think it is. We get involved with learning test automation, or coding, or other things that are great to learn. However, while I could teach anybody how to code, I can't necessarily teach somebody how to think critically. I can give them exercises to improve that and work on it, but it's not as black and white as teaching someone to code.

Geng: Thanks for your time and sharing your thoughts, Lisa. It's really a pleasure talking with you.

About the Interviewee

Lisa Crispin is the co-author, with Janet Gregory, of Agile Testing Condensed: A Brief Introduction, More Agile Testing: Learning Journeys for the Whole Team (2014), Agile Testing: A Practical Guide for Testers and Agile Teams (2009), the LiveLessons Agile Testing Essentials video course, and The Whole Team Approach to Agile Testing, a 3-day training course offered through the Agile Testing Fellowship. Lisa was voted by her peers as the Most Influential Agile Testing Professional Person at Agile Testing Days in 2012. She is a testing advocate working at mabl to explore leading practices in testing in the software community. Please visit Lisa Crispin's website and the Agile Testing website for more.

About the Interviewer

Christina Geng is Director of QA/SDET at Splunk HQ in San Francisco. She has been working in software testing for more than a decade. Christina is passionate about SDLC continuous improvement, testing technique and process evolution, risk control, and customer obsession.
