You Don’t Have Test Cases? Think Again

NOTICE:
We, at the QABlog, are always looking to share ideas and information from as many angles of the Testing Community as we can.  

Even if at times we do not subscribe to all of the points and interpretations, we believe there is value in listening to all sides of the dialogue and letting it help us think about where we stand on the different issues being reviewed by our community.

We invite anyone who wants to offer an opinion on this or any other topic to get in touch with us with their ideas and articles. As long as we believe an article is written in good faith and provides valid arguments for its views, we will be happy to publish it and share it with the world.

Let communication make us smarter, and productive arguments seed the ideas to grow the next generation of open-minded testing professionals!


*The following is a guest post by Robin F. Goldsmith, JD, Go Pro Management, Inc. The opinions stated in this post are his own. 


 

You Don’t Have Test Cases? Think Again

Recently I’ve become aware of folks from the Exploratory Testing community claiming they don’t have test cases. I’m not sure whether it’s ignorance or arrogance, or yet another example of their trying to gain acceptance of alternative facts that help aggrandize them. Regardless, a number of the supposed gurus’ followers have drunk the Kool-Aid, mindlessly mouthing this and other phrases as if they’d been delivered by a burning bush.

What Could They Be Thinking?

The notion of not having test cases seems to stem from two mistaken presumptions:

1. A test case must be written.
2. The writing must be in a certain format, specifically a script with a set of steps and lots of keystroke-level procedural detail describing how to execute each step.

Exploratory Testing originated by arguing that the more time one spends writing test cases, the less of limited test time is left for actually executing tests. That’s true. The conclusions Exploratory claims flow from it are not so true, because they rest on the false presumptions that the only alternative to Exploratory is such mind-numbing, tedious scripts and that Exploratory is the only alternative to such excessive busywork.

Exploratory’s solution is to go to the opposite extreme and not write down any test plans, designs, or cases to guide execution, thereby enabling the tester to spend all available time executing tests. Eliminating paperwork is understandably appealing to testers, who generally find executing tests more interesting and fun than documenting them, especially when extensive documentation seems to provide little actual value.

Since Exploratory tends not to write down anything prior to execution, and especially not such laborious test scripts, one can understand why many Exploratory testers probably sincerely believe they don’t have test cases. Moreover, somehow along the way, Exploratory gurus have managed to get many folks even beyond their immediate followers to buy into their claim that Exploratory tests also are better tests.

But, In Fact…

If you execute a test, you are executing, and therefore have, a test case, regardless of whether it is written and irrespective of its format. As my top-tip-of-the-year “What Is a Test Case?” article explains, at its essence a test case consists of inputs and/or conditions and expected results.

Inputs include data and actions. Conditions already exist and thus technically are not inputs, although some implicitly lump them with inputs; and often simulating/creating necessary conditions can be the most challenging part of executing a test case.
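To make that concrete, here is a minimal sketch of a test case reduced to those essential elements: a pre-existing condition, inputs (data plus an action), and an expected result decided before execution. The structure and names are purely illustrative, not drawn from any particular tool or methodology.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative only: the essential elements of a test case."""
    name: str
    conditions: dict   # state that must already exist (often the hardest part to set up)
    inputs: dict       # data and actions supplied during execution
    expected: object   # the expected result, decided before the test is run

# Hypothetical example for a withdrawal feature.
overdraft_case = TestCase(
    name="withdrawal exceeding balance is declined",
    conditions={"account_balance": 100.00},
    inputs={"action": "withdraw", "amount": 250.00},
    expected="TransactionDeclined",
)

def run(case, system_under_test):
    """Execute the case and compare the actual result to the expected one."""
    actual = system_under_test(case.conditions, case.inputs)
    return actual == case.expected
```

Whether such a case lives in a document, a tool, or only in the tester’s head, all three elements are still present whenever the test is executed.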

Exploratory folks often claim they don’t have expected results; but of course they’re being disingenuous. Expected results are essential to delivering value from testing, since they provide the basis for determining whether the actual results indicate that the product under test works appropriately.

Effective testing defines expected results independently of and preferably prior to obtaining actual results. Folks fool themselves when they attempt to figure out after-the-fact whether an actual result is correct—in other words, whether it’s what should have been expected. Seeing the actual result without an expected result to compare it to reduces test effectiveness by biasing one to believe the expected result must be whatever the actual result was.
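A short sketch of the difference, using a hypothetical compute_order_total function standing in for the system under test:

```python
def compute_order_total(items):
    """Stand-in for the system under test (hypothetical)."""
    return round(sum(items), 2)

# Effective: the expected result is worked out independently, before execution.
expected_total = 59.97                                     # computed by hand from the price list
actual_total = compute_order_total([19.99, 19.99, 19.99])
assert actual_total == expected_total                      # a real comparison that could fail

# Self-deceiving: the "expected" result is whatever the program happened to produce,
# so the comparison can never fail and the tester is biased toward accepting it.
actual_total = compute_order_total([19.99, 19.99, 19.99])
expected_total = actual_total
assert actual_total == expected_total                      # always passes; verifies nothing
```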

Exploratory gurus have further muddied the expected results Kool-Aid by trying to appropriate the long-standing term “testing,” claiming a false distinction whereby non-Exploratory folks engage in a lesser activity dubbed “checking.” According to this con, checking has expected results that can be compared mechanically to actual results. In contrast, relying on the Exploratory tester’s brilliance to guess expected results after-the-fact is supposedly a virtue that differentiates Exploratory as superior and true “testing.”

Better Tests?

Most tests’ actual and expected results can be compared precisely—what Exploratory calls “checking.” Despite Exploratory’s wishes, that doesn’t make the test any less of a test. Sometimes, though, comparison does involve judgment to weigh various forms of uncertainty. That makes it a harder test but not necessarily a better test. In fact, it will be a poorer test if the tester’s attitudes actually interfere with reliably determining whether actual results are what should have been expected.
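For illustration, here is a precise, mechanical comparison alongside one that requires a judgment call about what counts as close enough; the tolerance shown is an assumed acceptance criterion, not something prescribed here.

```python
import math

# Precise comparison: pass/fail is unambiguous.
assert sorted([3, 1, 2]) == [1, 2, 3]

# Comparison involving judgment: numeric output carries uncertainty,
# so the tester must decide what deviation is acceptable.
predicted, measured = 0.8412, 0.8397
assert math.isclose(predicted, measured, abs_tol=0.005)
```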

I fully recognize that Exploratory tests often find issues that traditional tests, especially heavily procedurally-scripted ones, miss. That means Exploratory, like any different technique, is likely to reveal some issues other techniques miss. Thus, well-designed non-Exploratory tests similarly may detect issues that Exploratory misses. What can’t be told from this single data point is whether Exploratory tests in fact are testing the most important things, how much of importance they’re missing, how much value actually is in the different issues Exploratory does detect, and how much better the non-Exploratory tests could have been. Above all, it does not necessarily mean Exploratory tests are better than any others.

In fact, one can argue Exploratory tests actually are inherently poorer because they are reactive. That is, in my experience Exploratory testing focuses almost entirely on executing programs, largely reacting to the program to see how it works and trying out things suggested by the operating program’s context. That means Exploratory tests come at the end, after the program has been developed, when detected defects are hardest and most expensive to fix.

Moreover, reacting to what has been built easily misses issues of what should have been built. That’s especially important because about two-thirds of errors are in the design, which Exploratory’s testing at the end cannot help detect in time to prevent their producing defects in the code. It’s certainly possible an Exploratory tester does get involved earlier. However, since the essence of Exploratory is dynamic execution, I think one would be hard-pressed to call static review of requirements and designs “Exploratory.” Nor would Exploratory testers seem to do it differently from other folks.

Furthermore, some Exploratory gurus assiduously disdain requirements; so they’re very unlikely to get involved with intermediate development deliverables prior to executable code. On the other hand, I do focus on up-front deliverables. In fact, one of the biggest-name Exploratory gurus once disrupted my “21 Ways to Test Requirements Adequacy” seminar by ranting about how bad requirements-based testing is. Clearly he didn’t understand the context.

Testing’s creativity, challenge, and value are in identifying an appropriate set of test cases that together must be demonstrated to give confidence something works. Part of that identification involves selecting suitable inputs and/or conditions, part of it involves correctly determining expected results, and part of it involves figuring out and then doing what is necessary to effectively and efficiently execute the tests.

Effective testers write things so they don’t forget and so they can share, reuse, and continually improve their tests based on additional information, including from using Exploratory tests as a supplementary rather than sole technique.

My Proactive Testing™ methodology economically enlists these and other powerful techniques to more reliably identify truly important tests that conventional and Exploratory testing commonly overlook. Moreover, Proactive Testing™ can prevent many issues, especially large showstoppers that Exploratory can’t address well, by detecting them in the design so they don’t occur in the code. And Proactive Testing™ captures content in low-overhead written formats that facilitate remembering, review, refinement, and reuse.


About the Author

Robin F. Goldsmith, JD helps organizations get the right results right. President of Go Pro Management, Inc., a Needham, MA consultancy he co-founded in 1982, he works directly with and trains professionals in requirements, software acquisition, project management, process improvement, metrics, ROI, quality, and testing.

Previously he was a developer, systems programmer/DBA/QA, and project leader with the City of Cleveland, leading financial institutions, and a “Big 4” consulting firm.

Author of the Proactive Testing™ risk-based methodology for delivering better software quicker and cheaper, numerous articles, the Artech House book Discovering REAL Business Requirements for Software Project Success, and the forthcoming book Cut Creep—Put Business Back in Business Analysis to Discover REAL Business Requirements for Agile, ATDD, and Other Project Success, he is a frequent featured speaker at leading professional conferences. He was formerly International Vice President of the Association for Systems Management and Executive Editor of the Journal of Systems Management. He was Founding Chairman of the New England Center for Organizational Effectiveness. He belongs to the Boston SPIN and served on the SEPG’95 Planning and Program Committees. He is past President and current Vice President of the Software Quality Group of New England (SQGNE).

Mr. Goldsmith chaired the attendance-record-setting BOSCON 2000 and 2001, ASQ Boston Section’s Annual Quality Conferences, and was a member of the working groups for the IEEE Software Test Documentation Std. 829-2008 and IEEE Std. 730-2014 Software Quality Assurance revisions, the latter of which was influenced by his Proactive Software Quality Assurance (SQA)™ methodology. He is a member of the Advisory Boards for the International Institute for Software Testing (IIST) and the International Institute for Software Process (IISP). He is a requirements and testing subject expert for TechTarget’s SearchSoftwareQuality.com and an International Institute of Business Analysis (IIBA) Business Analysis Body of Knowledge (BABOK v2) reviewer and subject expert.

He holds the following degrees: Kenyon College, A.B. with Honors in Psychology; Pennsylvania State University, M.S. in Psychology; Suffolk University, J.D.; Boston University, LL.M. in Tax Law. Mr. Goldsmith is a member of the Massachusetts Bar and licensed to practice law in Massachusetts.

www.gopromanagement.com
robin@gopromanagement.com

 

  • Steve Fenton

    Is there any chance this article could get an edit to reduce levels of ad hominem?

  • I’m not sure where to begin with the issues I have with this article. I started reading expecting an intriguing argument, moved to “I don’t think I agree with the claims,” and ended at “I don’t believe the author has any idea what he’s talking about.”
    So, first of all, why I don’t use “test cases” in my language: for me, it’s a matter of efficiency. When I say “test case,” my environment expects heavy documentation, of the sort I don’t find effective. When I use “test charter,” I convey the level of documentation I will be providing, so even if we accept the equivalence between a test case and a test charter (which is a bit more encompassing, in my view), I would still prefer the latter simply as a way to manage the expectations placed on me.

    My main problem with this article is that it attacks a straw man, spitting out many inaccuracies just to support its claim.
    1. Saying that “the gurus” claim that checking is inferior to “testing” shows a deep misunderstanding of both terms, since checking is a crucial part of testing. The separation is effective as a way to distinguish the parts of testing that can be automated and to stress the need for *human* skills in the rest of testing. Just because we can describe an algorithm for a certain dimension of our work does not make that dimension inferior.
    2. The assumption that ET has no “planning” phase is wrong. Sure, the planning does not always take the form of “action -> expected result,” but when doing ET testers choose charters, create checklists, investigate the design, etc. ET also uses retrospectives to plan further according to the findings.

    I could probably go on, but let’s leave it at that, as writing on a phone isn’t that comfortable.

  • Pingback: Testing Bits – 6/25/17 – 7/1/17 | Testing Curator Blog

  • Robin Goldsmith

    By the way, an article on “The Debate over Testing versus Checking” by someone totally unrelated to me was just posted at https://www.techwell.com/techwell-insights/2017/06/debate-over-testing-versus-checking.

  • Robin Goldsmith

    @Amit, thank you for your comments. You seem to be focusing on two things I didn’t say and therefore probably losing sight of what I did say, including some things you might find more acceptable on reconsideration.

    The thrust of the article is about test cases and that if you’re executing tests, you’ve got test cases whether or not you recognize it, whether or not they’re written, and whether or not they’re written in some particular format. The article acknowledges right up-front that many people believe, as apparently you and your organization do, that test cases must be in writing, and moreover written in a “heavy” manner, which I’d guess involves a lot of keystroke-level procedural detail.

    When one recognizes that the essential elements of a test case are inputs and/or conditions and expected results, other factors can be seen to be irrelevant, such as: written vs. not written, written in a specific format, including procedural detail, manual vs. automated, and whether the comparison of actual to expected results is straightforward, even mechanical, or more analytical.

    I agree that such high-overhead test cases are counterproductive; but I find that not writing test cases at all is not the only, and certainly not the best, alternative. Low-overhead test case formats overcome the weaknesses of not writing test cases at all, in ways I find provide net benefits.

    Test charters are an example of a low-overhead technique which can be helpful for identifying test cases, but test charters are not test cases. I did not say Exploratory doesn’t plan. I did say, though, that Exploratory is less effective than it could be because such planning seldom is done before the coding, which is when more effective approaches such as my Proactive Testing™ have the opportunity to help prevent many of the defects that have already been put in the code by the time Exploratory usually gets involved.

    I think it’s pretty clear that “checking” was introduced in order not only to distinguish it from what Exploratory does but to demean it relative to what Exploratory does.

  • Robin Goldsmith

    @Steve, I believe an ad hominem statement has to be directed toward a specific individual; but none is named in the article. If you think I’m referring to a specific person, then your mind has filled in the gaps. It’s often difficult to separate some of the gurus’ attention-craving from the content of their statements. You may find interesting my “Testing’s Donald Trumps” article at https://huddle.eurostarsoftwaretesting.com/unconventional-wisdom-v4-testings-donald-trumps/. Perhaps you could offer some examples of revised wording that conveys the cited issues in a manner you find more acceptable.

  • Steve Fenton

    It applies to both individuals and groups. I don’t know who you are referring to when you use the “them” groupings of “gurus,” or when you state that their belief is either ignorant or arrogant (perhaps it is neither; the difference of opinion may simply be semantic).

    Any argument based on the character of some “other party,” whether named or not, individual or group, is ad hominem. This dilutes the argument, which is a pity, as “If you execute a test, you are executing, and therefore have, a test case, regardless of whether it is written and irrespective of its format” is a statement that makes some sense.
