Manual and automated tests together are challenging

I have a simple question to ask those of you who have any kind of test automation in your teams:

“Do you have tests that you still run manually, even though
they are also running as part of your automated test sets?”

I am talking about cases where, even though you invested the time to automate something, you still choose to run it manually.

Notice that I am asking about any tests: the ones you run manually only once in a while, those you give to your junior testers to be 100% sure all is OK, in fact any tests you are still running both automatically and manually at the same time.

Surprisingly enough, even organizations with relatively mature automation processes still run a significant number of their automated scenarios as part of their manual tests on a regular basis, even when this doesn’t make any sense (at least on the theoretical level).

After realizing this was the case, I sat down with a number of QA Managers (many of them PractiTest users) and asked them about the reasons for this seemingly illogical behavior.

They provided a number of interesting reasons and I will go over some of them now:

We only run manually the tests that are really important

The answer I got most often was that some teams choose to run tests both automatically and manually only when they are “really important or critical”.

This may sound logical at first, but when you ask these same companies what their criteria are for selecting the tests that should be automated, most say they select cases based on how many times they will need to run them and on the criticality or importance of the business scenario.  In plain English, they automate the important test cases.


So if you choose to automate the test cases that are important, then why do you still run them manually under the same excuse of them being really important…?  Am I the only one confused here?


We don’t trust our automation 100%

The answer to the question I asked above (why run the important tests manually even though they are already automated?) comes in an even more interesting, and simpler, form: “We don’t really trust our test automation”.

So this basically means they are investing 10 or even 50 man-months of work, and in most cases thousands of dollars on software and hardware, in order to automate something, and then they don’t really trust the results?  Where is the logic in this?

OK, so I’ve worked enough with tools such as QTP and Selenium to know that writing good, robust automation is not trivial.  But if you are going to invest in automation you might as well do it seriously and write scripts that you can trust.  In the end it is a matter of deciding to invest in the platform and to take the work seriously in order to get results you can trust (and I don’t mean buying expensive tools; Selenium will work fine if you have a good infrastructure and write your scripts professionally).
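
To make this concrete, here is a minimal sketch (in Python with Selenium) of the kind of script that earns trust: explicit waits instead of fixed sleeps, one stable assertion, and proper cleanup. The URL, locators and credentials are hypothetical placeholders, not a real application.

    # A sketch of a robust login check: explicit waits, a stable assertion, cleanup.
    # The URL, locators and credentials are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def test_login_shows_dashboard():
        driver = webdriver.Firefox()
        try:
            driver.get("https://example.test/login")
            wait = WebDriverWait(driver, 15)

            # Explicit waits instead of time.sleep() keep the test stable on slow
            # environments and avoid the false failures that erode trust.
            wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys("qa_user")
            driver.find_element(By.ID, "password").send_keys("not-a-real-password")
            driver.find_element(By.ID, "login-button").click()

            # Assert on one meaningful, stable condition rather than fragile details.
            wait.until(EC.visibility_of_element_located((By.ID, "dashboard")))
        finally:
            driver.quit()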

The alternative is really simple: if you have automated tests you can’t trust because they constantly give you wrong results (false negatives, or even worse, false positives!), you will eventually stop using them and end up throwing all that work and money out the window…

 

We don’t know what is covered and what is not covered by the automated tests

This is another big reason why people waste time running manual tests that are already automated: they are simply not aware of which scenarios are included in their automation suite and which aren’t.  In this situation they decide, based on their best judgement, to assume that “nothing is automated” and run their manual test cases as if there were no automation.

If this is the case, then why do these companies have automation teams in the first place?
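
One pragmatic way to close this gap, sketched below in Python, is to have the automation suite publish a machine-readable manifest of the scenarios it covers, so the manual team can see at a glance what they no longer need to repeat. The “# covers:” tagging convention and directory layout here are assumptions for the sake of illustration, not a standard.

    # Sketch: generate a "what is automated" manifest from the automation suite.
    # Assumes (hypothetically) that each automated test carries a "# covers: <scenario>" comment.
    import csv
    import pathlib
    import re

    COVERS_TAG = re.compile(r"#\s*covers:\s*(.+)")

    def build_manifest(test_dir: str, out_file: str) -> None:
        rows = []
        for path in pathlib.Path(test_dir).rglob("test_*.py"):
            for line_no, line in enumerate(path.read_text().splitlines(), start=1):
                match = COVERS_TAG.search(line)
                if match:
                    rows.append({"scenario": match.group(1).strip(),
                                 "file": str(path), "line": line_no})
        with open(out_file, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=["scenario", "file", "line"])
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        # Hypothetical paths: the manifest is what the manual team reviews before a cycle.
        build_manifest("tests/automation", "automated_coverage.csv")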

 

The automated tests are the responsibility of another team

Now for the interesting question: how come a test team “doesn’t know” which scenarios are automated and which aren’t?  The most common answer is that the tests are being written by a completely different team, a team of automation engineers, that is completely separate from the one running the manual tests.

Having 2 test teams, one manual and one automated, is not a bad thing, and in many cases it will be the best approach to achieve effective and trustworthy automation.  The problem is that these teams can sometimes be completely disconnected, working on the same project without communicating and cooperating as they should.

I will talk about how to communicate and cooperate in a future post, but the point here is that when you have 2 teams (one automated and one manual) you need to make an extra effort to keep both teams coordinated, and as a minimum make sure each of them knows what the other is doing so they can plan accordingly.

 

We want to have all the results in a single place to give good reports

Finally, I wanted to mention a reason that was brought up by a number of test managers.  They raised it as a difficulty rather than a show stopper, but it came up often enough to be worth mentioning: they need to provide a unified testing report for their project, and to do this they either run part of their tests manually or create manual tests to reflect the results of their automation.

Again, this looks like a simple and “relatively cheap” way of coordinating the process and even producing a unified report.  But it is a repetitive manual job that still has to be done even after you already have an automation infrastructure, and slowly but surely (especially as more and more automation is added) it will run into coordination and maintenance issues that make it more expensive and, in some cases, render it misleading or even obsolete.
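
For what it’s worth, most test management tools expose an API for exactly this purpose, so the automation framework can push its results into the same place where the manual runs are recorded. The sketch below posts a result to a hypothetical REST endpoint; the URL, token and field names are placeholders and not any specific product’s API.

    # Sketch: report automated results into the same system that tracks manual runs.
    # The endpoint, token and field names are hypothetical placeholders.
    import requests

    API_URL = "https://test-management.example.test/api/test-runs"
    API_TOKEN = "replace-me"

    def report_result(test_name: str, passed: bool, duration_sec: float) -> None:
        payload = {
            "name": test_name,
            "status": "passed" if passed else "failed",
            "duration": duration_sec,
            "source": "automation",  # lets the report distinguish automated from manual runs
        }
        response = requests.post(
            API_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()

    # Example: called from a test runner hook after each automated test finishes.
    # report_result("login_smoke", passed=True, duration_sec=4.2)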

 

What’s your take?

I am actively looking for more issues, experiences or comments like the ones above that revolve around the challenges of combining manual and automated testing.  Do you have something you want to share?  Please add it as a comment or mail me directly at joel-at-practitest-com.

We’ve been working on a solution for these types of issues and so we are looking for all the inputs we can get in order to make sure it will provide an answer to as many of the existing challenges as possible.  I will be grateful for any help you can provide!


  • http://twitter.com/ezeetester Prasad

    Hey Joel, nice article.  Now there is another side to automating tests.  Outsourced projects look to automate many things, which unfortunately leads to more jobs in automation.  This is making life difficult for a manual tester when he wants to find a job.  When will people learn that a manual tester can be more effective than an automation testing effort when it comes to finding more defects?
    twitter: ezeetester
    http://ezeetester.wordpress.co

  • http://www.abakas.com Catherine Powell

    We do this, at a very high level. For example, we have upgrade tests that we do manually, and we have upgrade tests that we have automated. However, it diverges when we get into the details. Our automated upgrade tests verify things like:
    - ability to upgrade from build X to build Y (within a release)
    - ability to upgrade from release X to release Y
    - correct error message when attempting to downgrade or use a bad configuration
    - ability to resume an aborted or failed upgrade

    Our manual upgrade tests cover other things, mostly things that meet the criteria “this is hard or expensive to automate”:
    - hardware failures during upgrade
    - exploration of upgrade (e.g., measuring network traffic generated by the upgrade process)

    In the course of doing manual tests, we happen to accomplish some of the same things in an automated manner. For example, part of testing a hardware failure during upgrade is testing that upgrade still works after that hardware is recovered. The last half of that happens to overlap with an automated test, so technically we're manually doing an automated test. I think the different purposes and different setup make them both useful, though.

  • Issi Hazan-Fuchs

    Some reasons to run some of your automated tests manually:
    1) A human might observe more of the behaviour of the system under test than your automation suite, which is limited to the programmed checks.
    2) A manual test is a sanity test for the automation suite.
    3) It can be a starting point for more exploration of the system.

  • halperinko

    #1, #3 mean it's not the same test anymore, but a different, more extended one.
    #2 I would hope it would be vice versa, and again it's not the same test, so its results, if logged at all, will belong to a different test case.

  • halperinko

    Just a short take, as I wrote more in Hebrew in Tapuz forum.

    I haven't seen many such cases, but mostly running adjacent test cases which are not covered by automation at all, may these be extended exploratory tests or just items hard to automate.

    I would expect more false negatives than false positives in automation, and I believe it is much quicker just to tackle those which show up in an automation session, rather than redo the whole set.
    (Anyhow – these will have to be investigated manually to decide if it's a bug or…)

    Most test management systems can manage both types of tests, and there should be no difference, nor lack of ability to show the whole picture.

    It's the automation TC writer's responsibility to split any existing “manual” test into a TC covered by automation and another with any “leftovers”.
    If the automation is written from scratch, then again – you know what you would want to test but are unable to with automation.

    While the automation infrastructure is definitely built by a different group (developers, not testers), test case writers should be at least part of, if not the whole, manual team – these are testers who write, execute and investigate the results.
    If this is not possible – use keyword-driven testing or the like, which will enable non-devs to write test cases, and will ensure structured writing of the automation, which will eventually mean you are able to maintain it. 

    A point often forgotten is that automation must also support a Semi-Manual process, and ease the testing process in all positions within the company, using a single infrastructure.

  • joelmonte

    Hi Prasad,

    I think that there is a real issue in educating mostly inexperienced project managers and development managers that you cannot automate every test (a good example is what Catherine provides in her reply above).

    Still, I do believe that in many cases automation won't find more defects but it will find them quicker, like in the case of running a CI (continuous integration) framework or even a simple nightly build.  The main reason for this is really simple: you run these tests more times and at shorter intervals, so you can catch the bugs faster (and there are other, deeper reasons like repeatability, etc.).

    I tend to believe that a manual tester needs to understand the value he can gain from automation and leverage this value whenever possible. I also agree that it is difficult to find a job at a firm where they don't value your manual testing skills, but on the other hand I also think you wouldn't want to work in such a company anyway.

    The good news, at least from my experience, is that dev managers are getting to understand more and more the value of manual testers, and I believe that if you are good then your reputation will help you get good working opportunities.

  • joelmonte

    Thanks Catherine,
    This is a great point!

    A great example of a set of tests that are not a logical candidate for automation on the basis of ROI.

    I think the challenge in these cases will be trying to coordinate and even correlate both cases not only when reporting them but also when scheduling and executing them. 

    In the past I used an approach where we would automate the installation up to a certain point using a “simple script” and have it stop at the point where we wanted to run the more complex manual scenario.
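
    Something along these lines, as a rough Python sketch (the installer steps and helper scripts are made up for illustration):

        # Rough sketch of "automate up to a point, then hand over to the tester".
        # The step names and helper scripts are made up for illustration.
        import subprocess

        def run(step, command):
            print(f"[auto] {step}")
            subprocess.run(command, check=True)

        run("download latest build", ["./fetch_build.sh", "latest"])
        run("install prerequisites", ["./install_prereqs.sh"])
        run("unattended install", ["./installer.sh", "--unattended"])

        # Stop here and hand control over for the complex manual scenario.
        input("[manual] Environment is ready - run the manual scenario, then press Enter...")

        run("collect logs", ["./collect_logs.sh"])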

  • joelmonte

    I agree only partially with Kobi's comments on the points brought up by Issi.

    I think that sometimes you will want to combine human observations on top of automated tests; it is a great way of generating the inputs if you want to do Blink Testing (you can read more about this type of testing on my blog or by simply searching on Google), and so you can count it as the same test but with a different review of the results.

    I also think that manually sanitizing an automated test should be done only while developing the test, and in this phase you are testing the code behind your test as you would any other piece of code.  I don't believe that a good test (manual or automatic) should have the need to be sanitized…

    The third point, bringing the system to a “more advanced level” and then kicking off manual testing, is a great practice!  I like it because it is a way of reusing simulators and other scripts so that they not only generate testing data but also, in the process, verify some basic functionality of the system.

  • joelmonte

    I definitely agree on the need to extend automated tests by running more extensive manual tests, and also on the fact that for this ET is a great approach.

    On the other hand, I think you are letting false negatives “off the hook” too easily here.  I agree and understand that sometimes you will have false negatives, and obviously you will want to review each failure before automatically reporting the bug.  But in my experience false negatives can be a plague that ends up discrediting the complete automated testing effort altogether.  People will simply stop reviewing the tests when they fail, saying stuff like “ah, no need to check, these tests always fail!”  So I would simply be more careful when accepting them as is.

    The point I like the most of what you wrote is about the TC writer; I call her the Test Architect, and I think this is a role that is developing into a vital one in every team.  The Test Architect (TA) is quickly becoming central in her role of defining what needs to be automated and making sure that the scripts are written in ways that will complement the manual testing efforts, like in the case of the Semi-Manual (I call them semi-automated) testing efforts you name.

  • halperinko

    I guess my point on false negatives was not communicated well,
    It's not that I expect or agree to have many false negatives, but rather that I assume false positives are rather rare in comparison.
    Under that assumption, most failures will be things we wish or need to investigate, but on the other hand we gain much confidence in the rest of the tests, which did not fail.
    So for instance – it should be much easier to manually execute just the failed 30% rather than switch into a “whole manual” mode, where we need to execute 100% manually.

    I kind of dislike the term Test Architect, just for the reason that it might be used to create several tester castes – in my view all testers are “equal”: all should participate in reviews, write test procedures, write automatic test cases and execute them (manually & automatically).
    A tester who will not do one of these tasks will soon become a weaker tester.
    A team of “Test Architects” who just write tests and let others do the dirty work of executing them will soon lose their hands-on touch with the system.
    So if you call the whole testing team “Test Architects” – that's fine with me, but if you use the term to distinguish between tester levels – I don't really like that.

  • Sergey E. Yaroslavtsev

    There could be many reasons mentioned, these are on the top of my mind:
    - QC/QA guys should be able to support development and certification teams. A fully automated testing system IS NOT the best option when there is a need for fresh debugging data for a particular case;
    - QA/QC analysts need to know how to configure and tune both standard and non-standard (i.e. customer-unique or site-specific) testing environments, and how to create, collect, and document dumps, traces and/or sniffer reports for different purposes;
    - There is a need for regular hands-on training even for highly experienced QA/QC analysts. There is a need for teams able to handle urgent requests for changes in testing or technical specifications. At least one analyst should be able to implement and run the wanted tests properly without a painful delay;
    - Normally there is no human factor involved in fully automatic systems;

    I am keeping a couple of bikes in my garage, and that is not just for fun.

  • Pingback: Coordinating your automatic and manual test management | QA Intelligence - a QABlog

  • http://www.qainfotech.com/tools_automation_testing_services.html Lisa Davidson

    I agree.  Well, in my opinion, be it manual or automation, it requires a proper methodology and set of processes for proper execution.  As the latest trend, software test automation is being widely used; I would like to know, is automation the perfect solution?

  • Anonymous

    The perfect solution for what…?

    I think many people, especially tool vendors and technology enthusiasts, try to paint automation as magic or something that will make all your pains go away.  And guess what, it’s not!

    Automation is a great tool, and so is a chainsaw.  Does that mean that you would use a chainsaw to trim your nails?  Or to cut paper in the kitchen?  I guess not (or at least I’d hope not!)

    So it is with automation.  Many times automation will help you a lot, but many times it is simply not the right tool to use.  How do you use an automation tool to test usability?  Or, is it effective to write a test script in order to run a test that you only need to run once and that will take you 30 minutes to do?

    So there is no simple answer, but for sure we need to “educate” people in our industry that we don’t have a holy grail (at least not yet).

    My 2 cents.

  • Pingback: Why Run Tests Manually Even Though They Are Automated? - Testing Excellence