We’ve all heard of the “Infinite Monkey Theorem”, whereby a monkey hitting keys on a keyboard (or typewriter) at random will eventually type out the complete works of Shakespeare.
But the problem is that it would take an infinite number of years, and, maybe more importantly, the work would be buried under so much junk and gibberish that it would be impossible to find.
What’s the status of your test cases?
Now take a look at your test repository. What do you see?
Is there anything in common with the work of the monkey above?
I am not implying your team is made up of chimps typing at random – although if it is, please take a picture of them and send it to me; I promise to publish it!
But something I see a lot in my work with PractiTest‘s customers is that people tend to concentrate on the quantity of their test cases and fail to put enough effort into the quality of these resources.
The result is a repository with far more test cases than it should actually have: many of them cover the same feature too many times, others describe functionality that was modified a number of releases back, and some have not been run in years because they are no longer really important.
I don’t think this comes from incompetence, but I do believe a big factor is that it is easier to create a new test than to find an existing case (or cases) and modify it accordingly.
Another cause is that it is a lot easier to measure the number of tests than to measure the quality of your testing coverage (and the quality of the individual test cases themselves).
Process and rules of thumb for writing test cases
A good way of preventing problems of this type is to have a process and some rules of thumb in place to help testers write better cases. Some useful rules of thumb:
- Set upper and lower limits on the number of steps per test case.
- Set a maximum number of test cases per feature or functional area.
- Work with modular test cases that you can use as building blocks for your complex test scenarios.
- Keep separate test cases for positive and negative scenarios.
- Use attachments and spreadsheets to decouple test data from the testing process.
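The last rule of thumb, decoupling test data from test logic, can be sketched as a small data-driven check. Everything here is illustrative: `validate_login` is a hypothetical stand-in for the system under test, and the CSV content would normally live in an external spreadsheet attached to the test case rather than in the script.

```python
import csv
import io

# Hypothetical test data. In practice this would be an external
# CSV/spreadsheet attachment, so testers can add rows (including
# negative scenarios) without touching the test logic.
TEST_DATA = """username,password,expected
alice,CorrectHorse1,accepted
bob,short,rejected
,AnyPassword1,rejected
"""

def validate_login(username: str, password: str) -> str:
    """Stand-in for the system under test (illustrative only)."""
    if not username:
        return "rejected"
    if len(password) < 8:
        return "rejected"
    return "accepted"

def run_data_driven_cases(data: str) -> list[bool]:
    """Run one check per data row; the logic stays fixed while the
    data file carries the scenarios."""
    results = []
    for row in csv.DictReader(io.StringIO(data)):
        outcome = validate_login(row["username"], row["password"])
        results.append(outcome == row["expected"])
    return results

print(run_data_driven_cases(TEST_DATA))  # one pass/fail flag per data row
```

The same pattern is built into most test frameworks (for example, parameterized tests), but the point is the separation: when the data changes, no test case needs rewriting.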
Regarding process, this is a little harder but it is also a lot more effective in the long run. Some examples might be:
- Before starting to write test cases, have a small team define the list of tests and their scope; only then fill out the steps.
- Break your test repository into areas, assign each area to a tester on your team, and make him/her responsible for all aspects of its maintenance.
- Hold peer sessions for test creation and review.
- Revisit and validate “old” test cases and tests that have not been run in the last 6 months.
- Review tests that have not found any issues in the last year.
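The last two process rules can be automated as a periodic report. This is only a sketch under assumed data: the `TESTS` records and their field names are hypothetical, standing in for whatever metadata export your test management tool provides.

```python
from datetime import datetime

# Hypothetical metadata export; real field names depend on your tool.
TESTS = [
    {"name": "login smoke", "last_run": "2024-01-10", "last_issue_found": "2023-11-02"},
    {"name": "legacy export", "last_run": "2022-05-01", "last_issue_found": "2021-03-15"},
]

def flag_stale_tests(tests, today, run_window_days=183, issue_window_days=365):
    """Flag tests not run in ~6 months, or that found no issue in a year,
    as candidates for review, update, or retirement."""
    stale = []
    for t in tests:
        last_run = datetime.strptime(t["last_run"], "%Y-%m-%d")
        last_issue = datetime.strptime(t["last_issue_found"], "%Y-%m-%d")
        if ((today - last_run).days > run_window_days
                or (today - last_issue).days > issue_window_days):
            stale.append(t["name"])
    return stale

print(flag_stale_tests(TESTS, datetime(2024, 3, 1)))  # names of review candidates
```

A report like this does not decide anything by itself; it just gives the owner of each area a short, concrete list to work through during the review sessions mentioned above.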
Create visibility and accountability into your test management system
A big factor that will help or hamper the way you maintain your test cases is how you manage and organize them. You can obviously do this in your file system, or use something like Dropbox to share these resources with your team.
But once your test suite grows or your team expands beyond a certain size, it makes more sense to adopt a professional test management system that will help you organize your test cases and give you good visibility and control over them.
I don’t want to make this a marketing post, but I do recommend you take a look at PractiTest and the way we provide excellent visibility into your test cases with the use of our exclusive Hierarchical Filtering system.
Beyond creating visibility, make sure there is accountability for each area of your testing repository. As I wrote above, it is important to assign testing areas and their cases to specific people on your team.
Give them the tasks (and the time) to update and maintain their tests, both during the preparation stages and during the regular testing process. You should not expect people to maintain their test cases during the frenzy and craziness of your regular testing cycle.
How do you do it?
Share them with us, so we can all learn what other ways there are to keep our test cases sane and effective.