What do you do, Testing or Checking?
Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test. This definition, however, says nothing about sapience, and it is sapience that marks a subtle difference between testing and checking. Michael Bolton has expressed his opinion on this difference and on why the two should be distinguished.
According to Michael,
Checking is something that we do with the motivation of confirming existing beliefs. Checking is a process of confirmation, verification, and validation. When we already believe something to be true, we verify our belief by checking. We check when we've made a change to the code and we want to make sure that everything that worked before still works.
Testing is something that we do with the motivation of finding new information. Testing is a process of exploration, discovery, investigation, and learning. When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn't anticipated, we're testing.
According to Michael, checks can be delegated to a machine because they give a binary response of pass or fail. Tests, on the other hand, require sapience. They are an exploratory way of learning the system and answering the question, 'Is there a problem here?' This also brings about a difference between testers and checkers.
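In code terms, a check in the sense Michael describes can be as small as an assertion that a believed-true behavior still holds after a change. A minimal sketch (the discount function and its expected values are hypothetical examples, not from the article):

```python
# A "check": a machine-decidable assertion that confirms an existing
# belief, yielding only a binary pass/fail answer.

def apply_discount(price, percent):
    """Return price reduced by the given percentage (hypothetical code under test)."""
    return round(price * (1 - percent / 100), 2)

def check_discount_still_works():
    # We already believe this behavior is correct; the check merely
    # confirms that a later code change has not broken it.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(80.0, 25) == 60.0
    return "pass"  # a check can only answer pass or fail

print(check_discount_still_works())
```

Deciding which behaviors are worth asserting, and investigating why a check fails, is where the sapient activity Michael calls "checking" comes back into play.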
A person who needs a clear, complete, up-to-date, unambiguous specification to proceed is a checker, not a tester. A person who needs a test script to proceed is a checker, not a tester. A person who does nothing but compare a program against some reference is a checker, not a tester.
George Dinwiddie suggested that both checking and testing require sapience. He added that while Michael considers 'exercise and observe' to be a check, it is only a part of scripted testing. Once a test fails, it takes sapience on the tester's part to understand what really happened. This might include checking log files for information, calling someone to see whether other systems are working correctly, or doing any number of other explorations. This is no different from exploratory testing, except for the longer delays between activities.
Michael agreed with this to some extent, noting that a check in itself is relatively trivial, but that a lot of sapience comes into play before and after the check. That, he said, is the difference between a check and checking.
So the power play is over which we're going to value: the checks ("we have 50,000 automated tests") or the checking. Mere checks aren't important; but checking—the activity required to build, maintain, and analyze the checks—is. To paraphrase Eisenhower, with respect to checking, the checks are nothing; the checking is everything. Yet the checking isn't everything; neither is the testing.
Johanna Rothman expressed her opinion along similar lines when she described the skills needed to build a sapient approach to testing,
Agile projects require true generalists as testers: people who have requirements-, design-, and code-understanding skills. Without those skills, they can't think critically enough about the product under development, and they might not be able to create enough variety of tests. If they understand the requirements, design, and code, they can turn that understanding into crafty tests. Some of those tests will be exploratory. Even some of the exploratory tests will need to be automated in order to repeat them. And, I've seen great testers on agile projects who can quickly create automated tests to do some of their exploration.
Cem Kaner, however, objected to the idea of Agile testers being testing generalists. Since exploratory testing requires both testing and exploration, he argued, valid exploration demands that specific skills be applied. According to him,
Programmers understand many of a project's risks. They are probably better equipped to create thoughtful tests to explore those risks than non-programmers. Other people are more focused on the integration of the software within its environment. Similarly, we know specialists in performance evaluation and security evaluation. Some are programmers, some not. All of these folks, along with the system-level software validators, can test in a scripted way or an exploratory way.
According to George, both models have relative strengths and weaknesses. Both checks and tests qualify as testing under Cem Kaner's definition. The key lies not in the script but in how you go about the process.
Both are testing, in terms of finding new information, and in terms of requiring sapience. If you’re not thinking about it, you’re doing it wrong.
Recently, InfoQ also published a post about Reasons to Love Agile Testing.
What are other Agile teams doing?
Motivation for using the term "checking"?
I don't have an issue with using an "exploratory" testing approach, but it would be dangerous to suggest that it's not a best practice to turn exploratory test patterns into machine- and human-repeatable steps for the purpose of regression "testing". It may be that Mr. Bolton doesn't believe in documenting tests.
I am not sure what Mr. Bolton is trying to achieve other than a thinly veiled attempt to confuse potential customers into buying his "Rapid Software Testing" course. Scary.
Software Testing *is* "Testing"
In most cases, but certainly not all, I know the result of the test beforehand. That first test for the "happy case", I know it will fail. But in some, I need to test whether the existing code solves a new problem. I don't really know the truth, walking in, especially with complex or old pieces of code.
I think that writing tests for "checking" only applies when writing the tests last, and usually only on the more straightforward stories and tasks.