Do I need to write my test cases before I run them?

Joel: Let me come clean from the start: I work for PractiTest, and we make a test management tool/service. This means that I basically get paid by the fact that people write their test cases upfront and then track their runs.

But working for a company such as PractiTest, and before that for Mercury Interactive (if you recognize the name, consider yourself a living fossil), I have seen thousands of teams documenting their tests in more ways and styles than you can imagine.  Some good and many bad.

I’ve also had many chats with testing peers who chose to invest (or maybe waste) their time explaining why writing down test cases upfront is a waste of good testing effort, and why it is simply better to test first and then choose what to report or communicate afterwards.

Since I’ve had this conversation again lately, I thought it might be a good chance to recount why I think that the vast majority of the time (not every time) it is better, maybe even essential, to document your test cases upfront.

Rob: I’ll set the scene with a short story.

I was working for a fairly big corporation and we were running the final stages of regression testing on a major release. The Test Leads printed out EVERY SINGLE test case ever created for the product, all of which had already been run, and piled them high on a table in the middle of the office. Each tester was expected to run 10 test cases per day for a two-week period.

And here was the problem: not all test cases are created equal. Some were well written, some not; some were really long and complex, some very simple; some had been useful at a point in time, but the product had changed since then; some tested very little, some covered too many checks; some took 10 minutes to run, some three days.

Testers who were thinking ahead turned up at 4am to collect 100 really simple test cases. Those who didn’t think ahead turned up at 9am and were left with test cases that might take days to run. Because of the target of 10 per day, people didn’t run them properly, bugs escaped, and ticks were supplied as passes on test cases that had never actually been run. Test cases are important, and measures are important, but so too is all the stuff we’re going to talk about today – the judgment needed to understand the role and purpose of the test case.

Not all tests should be documented upfront

I can think of only a few limited circumstances when you might not want to document ANYTHING upfront:

  • If someone tells you that you have 5 minutes to test something – from the start you are not expected to do a thorough job, and you basically just fire without taking aim to see if you can bring something down at random.
  • If you are only playing around with a system in order to get your very first idea of what you are going to test later.
  • If you are running a blind usability test.

I am sure there are a couple of other situations when you may not want to document upfront what you are going to test.

There are many ways to document a test

  • When it comes to documenting a test, most people erroneously assume that you are supposed to write deeply scripted tests where the whole scenario is defined and the tester simply reads it and “dumbly” executes it.
  • This could not be further from the truth.
  • This is the core of the issue – what constitutes a documented test case? What level of detail? What format? And it is often what people spend the most time messing around with – rather than running the test and finding out what information it uncovers.

What other types of test documentation are there, other than the traditional steps with descriptions and expected results?

Steps with descriptions and expected results are not bad in themselves.  If you think about it, this is the most straightforward way of defining a testing flow (emphasis here on flow) and ensuring that even if you are not an expert on the application you will still be able to test it, based on what the test architect thought beforehand.  They are not needed in all cases, but many times this is exactly what you need, and even then you can choose how deep or shallow you want these steps to be.
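
To make that concrete, here is a single traditional scripted step – a made-up illustration, with the feature and wording invented rather than taken from any real test suite:

  Step 3: Log in with a valid username and an incorrect password.
  Expected result: An “Invalid credentials” error is shown, the user
  remains on the login page, and the account is not locked after a
  single failed attempt.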

But there are countless other ways of documenting:

  • Checklists
  • High level flows
  • Stories
  • Mind maps
  • Task cards

And the list goes on and on and on.
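
For contrast, the same coverage written as a checklist entry – again an invented example – could be as terse as:

  [ ] Login – wrong password: error shown, stay on page, no lockout on first failure

An experienced tester can unpack that one line into the full test on their own; a newcomer cannot. That trade-off is exactly what the next question is about.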

How do we decide what level to use?

Think about who will be running your tests, and then decide what support they need:

  • If you are giving the test to a dedicated tester who knows the system and has worked with it for a number of years, a checklist should be more than enough.
  • If this is an outsourced or offshore tester who might not be very familiar with your system, and you want to ensure the important points are covered, then go into more detail when writing the steps.
  • For developers I find that it is best not to go into many details, and to use checklists with somewhat fuller descriptions.
  • But it is important to find what works for you.

Sometimes it is even OK to keep more than one version of the same test – but only if the test will be run many times, and by many people.

What happens if my team works with Exploratory Testing – then we do not need to document upfront, right?

  • I love this misconception!  ET means that you explore the system and do the low-level planning of your actual test while running it. You still document it, and you make sure you have a good way to explain and even demonstrate what you tested to others when they ask you about it.
  • Don’t take my word for it – I am actually in touch with James Bach, and we talked about this in the last webinar we did together.
  • I loved it when we got to talk and he even said that every test is scripted and exploratory at the same time, explaining that it is actually a continuum where a test can be very scripted or not scripted at all, and also very exploratory or not very exploratory, with most tests falling somewhere in the middle.
  • Even when you run an ET session, you will want to understand what you are testing, research the historic issues and runs from previous tests, talk to developers and product people, etc. – all this to make sure you are testing the system thoroughly. You will also want to keep notes with points you want to make sure not to miss during your tests (see the sketch after this list).
  • And all this doesn’t even start to cover the heuristics you want to keep handy…
  • In short, if you are not planning and documenting at least a bit of your exploratory tests, then you might not be testing the way you want to test.
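
What might those notes look like? A lightweight session sheet – in the spirit of session-based test management, with every detail here invented purely for illustration – can be as short as:

  Charter: Explore invoice export around locale settings.
  Time box: 60 minutes
  Don’t miss: decimal separators, PDF vs CSV, last release’s rounding bug
  Notes: filled in while testing – what was tried, what looked odd,
         what to follow up on

That is upfront documentation too, just scaled to the way exploratory testing actually works.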

Close Out

Scripted tests are not the enemy, and scripting doesn’t mean that as a tester I will not research the system as I read and run the tests.  You will always look around your tests, follow your hunches, and look for bugs outside the Yellow Brick Road defined by your plans.

 
