I was listening to a report on the radio the other day on my way to work.
They talked about a new study showing that the most common reason behind doctors’ mistakes is their inability to consider alternative explanations once they have made their initial diagnosis of a patient, even when new information comes up pointing at other possible causes of the patient’s condition.
The report explained that this behaviour is related to the concept of anchoring, developed by the Nobel Prize laureate Daniel Kahneman together with Amos Tversky back in the 70s (and written about extensively by Dan Ariely in his “Irrational” book series).
In plain words, anchoring means that once you believe in something you will find it hard to move away from that idea, sometimes even after being presented with hard data showing that you may be wrong.
I read (and enjoyed) the “Irrational” book series by Ariely, but only when I heard about anchoring in the context of doctors’ malpractice did I realize that testers are not immune to this behaviour either.
Anchoring in the context of QA & Testing
We tend to make up our minds about someone based solely on our (erroneous?) first impressions.
You may decide a tester is smart only because she said something intelligent during the first 5 minutes of the work session you had with her.
Or vice versa, you may decide a tester is dumb because he was too quiet, or maybe because he made a “trivial” mistake during your first work session together (and you failed to notice he was timid and even somewhat nervous about the whole situation).
The same goes for applications or features we need to test.
You may think that a feature is “clean of bugs” because you hardly found any important issues in your first round of sanity testing.
Then you may let important bugs slip through because you “lowered your testing defences” and failed to check thoroughly during the following deep-dive testing round.
What about the bugs we find?
(This happened to me a number of times in the past)
You report a bug and for some reason you decide to categorize it as a Critical defect.
Then, even if people come to you with good and valid reasons to lower its priority, you find it hard to agree with them. You go on to provide your own reasons for keeping the issue as a Critical bug (even if in your heart you know their reasons are valid).
BTW, you can say the same about reporting a bug, and later on refusing to agree that the behaviour was not a defect in the first place…
Giving a “scientific” name to your irrational behaviour helps
After all these years I am happy that my sometimes-irrational behaviour got a scientific name; it definitely beats the feeling I had of sometimes being an egocentric fool. Now I understand that this is caused by an intrinsic flaw in my basic human design 🙂
Moreover, I believe that once we are aware of the cause it is easier to fight and even get over these behavioural issues.
Techniques to fight your anchoring
There are a number of things you can do if you suspect that you are suffering from anchoring in your QA tasks.
In order to keep it short I will only explain a handful of them:
Defocus – Defocusing basically means gaining perspective on what you are testing by looking at the whole picture and understanding the complete “why” of your application, instead of only concentrating on the details or the “what” of the specific feature you are about to test.
I’ve read many articles that explain defocusing, but I personally like this one by Anne-Marie Charrett (Maverick Tester) that explains why she is the Queen of Defocus.
Walking the dog – This is one of my favourite techniques.
As a tester, once you feel that you have run all your testing scenarios (both formal and informal), and that you are ready to provide your testing feedback to your team, WAIT.
Take some time to switch your context by “walking the dog around the block”, by going to drink coffee in the kitchen, by running some other testing task, or simply by doing anything that will clear your mind.
After this, get back and review all the testing tasks you ran, look for things that you may have missed or paths that you forgot to follow before.
Only once you are sure there are no new areas or paths to test should you go ahead and “mark” your tasks as done.
You’ll be surprised how many “small but important things” you might have forgotten to do in the first place.
Swap testing – This is a basic but useful technique.
Don’t trust only yourself to perform any task. Whenever possible, schedule time for someone else from your team to run some tests on the app you are testing.
It’s not that you want another person to run the same scenarios you just did.
What you are looking for are the inputs and new scenarios other testers may check that you did not even think of reviewing in the first place.
Another common technique closely related to this one is peer testing.
5 critical bugs – This is mostly a planning technique.
Even if you “think” that the feature you are about to test will be clean of bugs (maybe because it is an easy feature, or because you trust the specific developer, or because of any other esoteric reason), take the time to define the 5 most critical bugs that may be hiding in this feature. Then use these theoretical bugs to plan your tests around them.
BTW, it is even better if you can brainstorm about these bugs with other people in your team.
Fighting the up-hill battle against anchoring in testing 🙂
Once we understand that we may be “a little biased”, or that we have a tendency to look at the world in “a certain way”, it is easier to correct this behaviour.
Have you had experiences where you’ve been “anchored” in your testing?
How did this affect your work and what did you do to correct it?
Share these testing-war stories with us by adding them as comments, and help other testers fight against their natural anchoring behaviour.
PractiTest is an end-to-end test management tool that gives you control of the entire testing process - from manual testing to automated testing and CI.
Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.
With fast, professional, and methodological support, you can make the most of your time and release products quickly and successfully to meet your users’ needs.