Why can’t developers be good testers?

You can teach a dog many tricks, but you can't teach it how to fly; that is reserved for birds, planes and flying saucers…

I’ve been trying to explain to a couple of Agile teams why developers are usually not good testers, so after working hard to recall all the reasons I could think of (based on my experience so far), I decided to put together a short list and post it.

Don’t get me wrong, I think developers should take part in testing tasks, especially on Agile teams, but I am also aware of their limitations and the cognitive blind-spots that tend to harm their testing; and as has been said before, the first step to improving your weaknesses is understanding that you have them.

Why do developers (usually) suck at testing?

1. “Parental feelings” towards their code

Developers are emotionally linked to the stuff they write.  It may sound silly, but it is hard to be objective about the stuff you create.
For example, I know my kids are not perfect, and still I am sure I would have a hard time if someone came to me and started criticizing them in any way (after all, they are perfect, right? :) ).

2. Focus on the “Positive Paths”

Development work is based on taking positive scenarios and enabling them in the product.  Most of a developer's effort is concentrated on how to make things work right, effectively, efficiently, etc.  The mental switch required to move from a positive/building mind-set to a negative/what-can-go-wrong mind-set is not trivial and very hard to achieve in a short time.

3. Work based on the principle of simplifying complex scenarios

One of the basic things a tester does as part of his work is to look for complex scenarios (e.g. performing multiple actions simultaneously, or repeating an operation over and over again) in order to break the system and find the bugs.  So we basically take a simple thing and look for ways to complicate it (a small sketch of this idea follows at the end of this section).

On the other hand, our developer counterparts are trained to take a complex process or project and break it down into the smallest possible components that will allow them to create a solution (I still remember my shock in college the first time I understood that all a computer could really do was perform AND, OR, NOT, NAND, NOR, XOR and XNOR operations on zeros and ones).
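
To make the "complicate the simple" instinct concrete, here is a minimal, hypothetical sketch in Python; the Counter class and the numbers are invented purely for illustration. A single call to the operation works perfectly, but repeating it many times from several threads at once, exactly the kind of complication a tester reaches for, can expose a lost-update race.

    import threading

    # A deliberately naive counter, invented purely for this illustration.
    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            # Read-modify-write with no lock: fine for a single caller,
            # racy when several threads interleave between the two steps.
            current = self.value
            self.value = current + 1

    def hammer(counter, times=100_000):
        for _ in range(times):
            counter.increment()

    counter = Counter()
    threads = [threading.Thread(target=hammer, args=(counter,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # One call works perfectly; 400,000 interleaved calls may lose updates.
    print("expected 400000, got", counter.value)

The point is not this particular bug, but the reflex of taking a scenario that obviously works once and asking what happens when it is repeated, overlapped, or interrupted.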

4. Inability to catch small things in big pictures

I can’t really explain the reason behind this one, but I have seen it many times in my testing lifetime.
One of the side-effects of becoming a good tester is developing a sense that (almost unconsciously) detects what doesn’t fit in the picture.  The best way to describe it is the feeling you get when something is off but you just can’t put your finger on it; then, by applying some systematic processes, you are able to find the specific problem.

Where's Waldo?

I had a developer once tell me that good testers can “smell bugs”, and maybe he was not very far from the truth.

5. Lack of end-to-end & real-user perspective

Due to the nature of their tasks, most developers concentrate on a single component or feature of their product, while maintaining only a vague idea of how their users work with the end-to-end system.
Testers need to have a much broader perspective of our products; we are required to understand and test them as a whole, using techniques that allow us to simulate the way users will eventually work in the real world.

6. Less experience with common bugs & application pitfalls

Again, something that comes with time and experience is our knowledge of common bugs and application pitfalls.  Obviously, as a developer accumulates KLOCs on his keyboard he will also get to meet many bugs and pitfalls, but as testers we gain this experience faster and in a deeper sense.

An experienced tester sees a form, automatically starts thinking about the common bugs and failures he may find in it, and begins testing for them.
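
To illustrate, here is a hypothetical sketch of the inputs an experienced tester throws at a simple text field almost by reflex; validate_username() is invented here only so there is something to run the checks against.

    import re

    # validate_username() is a toy rule invented only to have something to
    # test: 3-20 alphanumeric characters.
    def validate_username(value):
        return bool(re.fullmatch(r"[A-Za-z0-9]{3,20}", value or ""))

    # Classic form-field pitfalls: empty input, whitespace only, boundary
    # lengths, absurdly long strings, special characters, markup injection.
    suspicious_inputs = [
        "",                            # empty
        "   ",                         # whitespace only
        "ab",                          # just below the minimum length
        "a" * 20,                      # exactly at the maximum length
        "a" * 21,                      # just above the maximum length
        "a" * 10_000,                  # absurdly long
        "O'Brien",                     # apostrophe, a classic escaping trap
        "<script>alert(1)</script>",   # markup/script injection attempt
        "名前",                         # non-ASCII characters
    ]

    for value in suspicious_inputs:
        verdict = "accepted" if validate_username(value) else "rejected"
        print(repr(value[:25]), "->", verdict)

None of this is exhaustive; the point is that the list comes to mind before any test plan is written.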

My bottom line

It’s not that they don’t want to do it; developers are simply not able to test in the same way we testers do.  This doesn’t mean they cannot help with testing, and in some specific areas they will be able to do it even better than we do, but before they start it may help if they can map their testing blind-spots in a way that will allow them to compensate for them.

Developer testing adds a lot of value to the overall testing process…  I am even thinking, as I write this, about a future post on the added value gained from pairing developers and testers.

In the meantime, if I forgot something or if anyone disagrees feel free to let me know :-)

Update (December 23rd, 2010)

I did not get to write the post about pairing developers and testers (yet!), but I did write a post about sessions teaching programmers how to overcome their testing limitations (listed above) and giving them some techniques to help them test better.   If you found this post valuable you may want to check that one as well.


  • http://blogs.msdn.com/anutthara Anu

    Good one – I can't tell you how many times I have been hit by 4 and 5 when my devs test their code!

  • joelmonte

    Agreed! It is even more annoying since they are not aware of it and believe otherwise.

  • http://www.practitest.com Yaniv

    cool!
    I'll surely remember this post next time you ask me to 'give a hand' with PractiTest's testing ;)

  • http://automation-beyond.com/ Albert Gareev

    Hi Joel,

    Thanks for the great compilation. What I especially like is how politely you approach this question.

    May I suggest adding another couple of reasons as well?

    * Complexity of modern software systems
    These days programmers simply do not know everything about how the code they write will be used – and, therefore, have to rely on their assumptions.
    Plus, complexity and a vast number of code modules create new, unintentional relationships within the code. “Quantitative change leads to qualitative change” – 3rd concept of Dialectics.

    * Involvement of 3rd party frameworks and libraries
    Each code unit is written with certain assumptions. This could be, for example, relying upon verification of boundary cases made in the calling functions, or an exit-code convention. While using 3rd party code modules, developers are naturally not completely aware of those assumptions, and this is how “bug shelters” appear.

    What do you think?

    Thank you,
    Albert Gareev

  • joelmonte

    Hi Albert,

    I always try to be polite, after all I still need to work with developers and one of them might read my blogs ;-)

    With regards to your suggestions, the first one (complexity) might be a relative of the end-to-end or real-user perspective, but I can definitely see why you would also treat it as a stand-alone reason.

    3rd party frameworks are also a big factor, but I think that, at least in the projects I worked on, this was a difficulty for testers as well as developers. I do agree that once an issue was detected it was easier for the testers to cope with it by learning the integration and its specific issues, so the problem itself can be more acute for developers.

    In any case this is an idea for a separate topic… so thanks!!

    Cheers,

    -joel

  • joelmonte

    HOLD YOUR HORSES!!!

    This was not meant (obviously!) for PractiTest developers who are *Always* willing to help and cooperate on testing tasks and do it perfectly…

    (did that sound sincere enough?)

  • http://www.developsense.com Michael Bolton

    Hi, Joel…

    I understand your points, but I'd suggest some reframes.

    I disagree that programmers (which is what you mean when you say “developers”) can't test. I think that some programmers can be very good at testing indeed, and in my experience, the better the programmers they are, the better the testers they are, and vice versa.

    The concerns that you raise have validity, but I think it might be more productive to suggest that programmers tend to test in very different ways from testers, and that no one (including testers) is perfect at testing, least of all at testing something that s/he created. In addition, rather than thinking in terms of abilities or inabilities, I'd suggest thinking in terms of different heuristics, different focuses, and different cognitive biases.

    —Michael B.

  • joelmonte

    Hi Michael,

    We agree that programmers or developers (I've seen both names used) should not test the stuff they created themselves.
    I also agree that *some* programmers are incredible testers; by the way, in many of those cases what I realized after getting to know these guys better was that they had actually worked in testing previously, and I am sure they were good and thorough testers back then too.

    You do bring forward a very valid point, with which I also agree, that in some areas, programmers can test even better than testers due to their knowledge and experience.

    Still, overall, if I focus on the bulk of the programmers I interact with day-to-day, and especially on those who for one reason or another perform testing tasks, my impression is that they “suffer from a number of illnesses” that make them less effective testers.

    And as I said, this doesn't mean they shouldn't work in testing tasks, but it does put part of the responsibility on us the testers or test managers to make them realize these blind-spots and help them compensate for them.

    Thanks for the feedback!!

    -joel

  • Sharath Chandrashekar

    Hi Joel, I completely agree with your points… Also, I guess I know you a little personally after your test management demo, so I understand how deeply you think about testing too :) which a developer can never do, since his primary objective is to build expertise in programming skills.

  • joelmonte

    Hi Sharath, this is my point exactly, although I think you should not use the generalization “never”, since there are some great developers out there who test better than most testers I know :-)

  • http://www.clearsightstudio.com Jamon Holmgren

    Good article, and one I will be sharing with my team. However, I disagree with point #2. As a programmer, one of the things I notice is that I'm always trying to think of ways that my code could fail–whether it's unexpected input, unexpected database results, wrong data types, etc–and then building in “sanity checks” that deal with it gracefully.

    “Parental feelings” are very real, however, and difficult to deal with in a team. Lack of real user experience is another one. And programmers tend to be able to look directly at the problem and break it down, including interfaces, which isn't always how a typical user works. A typical user seems to bumble their way through the interface, arriving at the destination almost accidentally–so you want them to have as easy an “accidental” path as possible.

  • http://www.practitest.com Joel Montvelisky

    Thanks for the feedback!

    I’ve learned that no two projects or companies are alike, and I believe you fall within the small number of developers who do test for more than their “sunny day” scenarios (and maybe a little to the right and left of them). Still, my experience is that you are part of a rare minority :)

    I was reading the other day a book called Agile Testing by Lisa Crispin and Janet Gregory, where they talk about the way most developers, even when they work with TDD, write at most scenarios around the boundary values and don't reach the more complex cases that are usually run by testers as part of their (exploratory) tests.

    In any case, thanks for the comment!

    -joel

  • JulienV

    Interesting points… I agree with some of them (yes, I'm a developer), especially if we are talking about making developers test their own code.
    But I've seen several cases where a tester with no background in programming doesn't think about test scenarios that seem logical to a programmer and can reveal serious errors in the code.

  • John Cadenas

    How about schedule? Time plays a major role in testing. It can make testing a very subjective activity. As a deadline approaches, test results become less relevant since they take second place to the release goal. As a result, test coverage is reduced, and failures are more easily ignored or become good candidates for post-release mitigation. But who is usually the ultimate authority in deciding whether to release buggy software? Developers. Remember, they don't just develop; they are usually rewarded for achieving their goals. Yes, product managers put down their electronic signature, but developers are usually the only ones with the political power to stop incomplete work or cheer a release. That's where I'd draw the invisible test (and quality) line.

  • joelmonte

    Hi John,

    I don't think all companies or teams work this way. I've seen some of them do it, but once we (the testing team) started working closer with the development and product teams it became possible to make more shared decisions.

    It may sound bad, but I think it is more a question of strength, of how strong the testing team is within the development organization. If a testing team works in a focused way and is able to make a difference within the team, then it will have more standing to take part in decisions. But, like respect, this is something that comes with a lot of hard work from us.

    -joel

  • Pingback: There is value in changing hats | QA Intelligence - a QABlog

  • Pingback: The Power of Empowerment | QA Intelligence - a QABlog

  • suhasini konduru

    Interesting. I agree with you.

  • Naresh

    Good compilation of reasons why developers find it difficult to find defects/bugs in their projects. Whatever methodology is employed should clearly compute the percentage of defects uncovered by developers and by testers. If you find that 50% of defects are found by testers and 20% by developers, these metrics and the analysis need to be discussed to understand the current scenario and how to improve. One side observation: your financial incentives depend on quality and quantity. Currently, developers get more and testers get less. Even the appraisal system needs to be linked to defects and quantity.
    Let's keep our code clean with an appropriate, cost-effective strategy.
    Naresh Maheshwari

  • Pingback: Teaching programmers to test | QA Intelligence - a QABlog