Coordinating your automated and manual test management


Two weeks ago I published a blog post titled Manual and automated tests together are challenging, which got a large number of replies and comments, both here on the blog and in some LinkedIn groups.

To summarize, the post talked about the fact that some QA teams with automation in place still run part of their automated test cases manually.

WHY do they do this?  Some of the reasons I got were that (1) they like running some of their tests manually as well, since they are “really important”, (2) automation cannot be trusted 100%, (3) there is no clear definition or knowledge of what exactly is automated, and (4) they run tests manually in order to update their test management system with the results (of their automation) and be able to generate complete and comprehensive reports for their management.


Some more reasons to manually run your automated test cases

So as I said, I got additional feedback from a number of people who read the post (thanks to everyone who got back to me!).  Some of the most interesting points raised were the following:

1. Even though the tests may look similar at first sight, they are testing different aspects of the same feature.  For example, an automated test may verify the installation procedure on all operating systems, while a “related manual test” verifies the effects of abnormal network traffic during the installation process.

2. There are some tests that are “semi-automated” rather than fully automated, like blink-tests, where you perform a number of operations while constantly taking screen captures of the application; the captures are then reviewed quickly by the tester in order to detect GUI issues or bugs.

3. You can start additional manual tests at the point where your automation concluded.  For example, you can use automation to check the installation of the system and to validate the procedures for adding, removing, and modifying data.  Once these tests are done, you can take the resulting system and continue running additional manual tests based on the existing data and system setup.
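As a rough illustration of the blink-test idea in point 2 above, a small harness can drive the operations and collect the screenshots for later review. Everything here is a hypothetical sketch: the `operations` callables and the `capture_screen` function are placeholders you would wire to your own UI driver and screenshot tool (e.g. a Selenium or pyautogui call).

```python
import os

def run_blink_test(operations, capture_screen, out_dir="blink_shots"):
    """Perform a sequence of UI operations, snapshotting the screen after
    each step so a tester can flip through the images afterwards."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, op in enumerate(operations):
        op()  # perform the UI action (placeholder callable)
        path = os.path.join(out_dir, f"step_{i:03d}.png")
        capture_screen(path)  # placeholder: plug in your screenshot tool
        paths.append(path)
    return paths  # the tester reviews these images in quick succession
```

The tester's quick pass over the resulting images is the "semi-automated" half of the test: the machine does the driving and capturing, the human does the noticing.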


Additional challenges when coordinating manual and automated testing

Part of the feedback I got on the post also came in the form of additional challenges that arise when coordinating the work of your manual and automated testing (not directly related to re-running your tests).  Here are some of the most interesting points:


Figuring out what needs to be moved from manual to automated testing

This is the first challenge and, I would say, one of the most significant decisions you need to make, since it will dictate the value you realize from your automation effort.  I am not sure there is a textbook answer to this question, but I believe it comes down to three central factors:

ROI – the return on the investment of automating your test or, in plain English, what you will gain from automating this specific script.  This is usually a matter of how long it will take you to automate your script vs. how many times you will run it automatically and how much time you save because of it.

Complexity of the test – this refers both to how hard it is to create the automated script and to how easy it is to run the test manually.  There are tests that are simply too hard to do manually (e.g. testing the API of your product) and others that are practically impossible to automate (e.g. the usability of a feature).  You need to choose correctly which tests are better suited to be automated and which to remain manual.

Stability of your AUT – maybe the biggest challenge of any automated test is coping with changes to your AUT (Application Under Test).  Some automation platforms handle this better than others, but no automated system will be able to “improvise and understand” the way human testers do.  Since this is the case, add this factor to the list of things that will help you choose what to automate, and try not to automate areas of the product that you know will go through radical changes in the near future.
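The ROI factor above amounts to a simple break-even calculation. Here is a minimal sketch of that arithmetic; all the hour figures are hypothetical, chosen only for illustration:

```python
def automation_payback(hours_to_automate, manual_run_hours,
                       automated_run_hours, runs_per_release, releases):
    """Net tester-hours saved by automating a test; a positive result
    means the automation effort pays for itself over the horizon."""
    total_runs = runs_per_release * releases
    manual_cost = manual_run_hours * total_runs
    automated_cost = hours_to_automate + automated_run_hours * total_runs
    return manual_cost - automated_cost

# Hypothetical example: 40h to script a test that takes 2h manually,
# runs in ~0.1h automated, executed 5 times per release over 6 releases.
print(automation_payback(40, 2, 0.1, 5, 6))  # roughly 17 hours saved
```

The same numbers over a single release come out negative, which is exactly the point of the factor: a script you will only run a handful of times may never repay the effort of writing it.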


Coordinating the tasks between the manual and the automation testing teams

These are also big challenges, arising from the intrinsic differences between a good manual test engineer and a good automation engineer.

Who does what?
– Do the automation engineers define what to test, or is this the call of the manual testers?
– Is scheduling tests a task of the manual testers or the automation engineers?
– What happens when an automated test fails: is the manual tester in charge of verifying whether there is a bug, or is this the job of the automation team?

All these questions, and many more like them, need to be defined up-front, and again there is no textbook answer.

Personally, I believe that a good automation engineer is closer to a developer than to a tester, and so I like placing more of the decision weight on my manual testers.  Basically, I ask my automation engineers to be “service providers” and to work with my manual testing engineers to supply them with the best possible automated answer to their testing scenarios.

I also believe that automation is part of the complete testing effort, and so the decision and responsibility for what to run and when to run it should be in the hands of the group in charge of the complete testing effort (this is usually the task of the manual testers too).


Lifting the taboo from test automation

I really liked this comment from Marco Venzelaaer on LinkedIn: “Some automation teams make test automation a ‘dark art’ which is closely guarded within the team…”

This point got me laughing, but also grabbing my head, as he managed to articulate something I had felt for a number of years.  Some automation engineers feel that by treating their work as something mysteriously hard and extremely complex they gain some sort of job security.  They are especially successful at seeding these feelings in manual testers, who don’t have any coding experience of their own and so believe what their automation colleagues are telling them.

Other than a false feeling of job security, these automation engineers also manage to generate a sort of taboo around their work that makes manual testers refrain from trying to understand it or have any influence on the process.  In the end this behavior is harmful to the coordination of efforts and has a deterrent effect on the overall achievements of the testing team.


So how do you coordinate the work of your automated and manual testing in your team?

I think that the key to maintaining a healthy manual-vs-automation relationship in testing is TEAMWORK & COMMUNICATION.

You need to make sure both teams, the manual testers and the automation engineers, are working from the same agenda, coordinating their tasks based on the same priorities, and understanding how each player helps the other fulfill the goals of the organization.  In simple terms: how they work together in order to make their testing process faster, broader, and eventually more effective.

Trivial things to take into account are:

1. Make sure they are coordinated, by having regular update meetings and by ensuring the active participation of both teams when planning the testing tasks for each project.

2. Have both teams work in an integrated environment where both manual and automated tests are managed and executed.  This will allow all your testers, and every other person in your company, to see the “full picture” and understand, on both a planning and an execution level, what is being covered, what has been tested, and what the results of these tests are.  After all, no one really cares whether a test was run manually or automatically; they care about the results of the test.

3. Have a process in place where automation is a tool to help your manual testing team.
The correct process is what will eventually make or break the relationship between manual and automated tests.  Both teams need to understand that the idea of automation is to free the time of manual testers to run the more complex tests, those that are hard or expensive to automate.  Once they understand that they complement each other rather than compete with one another, they will be able to focus on their shared challenges instead of their rivalries.

In short, you need to make sure your teams have both the agenda and the infrastructure required to coordinate their work.


In the end, it is the job of the manager and the organization to create the environment and the “team rules” that allow all members to feel they contribute their work and help drive the project forward TOGETHER.

About PractiTest

PractiTest is an end-to-end test management tool that gives you control of the entire testing process - from manual testing to automated testing and CI.

Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.

With fast professional and methodological support, you can make the most of your time and release products quickly and successfully to meet your user’s needs.


6 Responses to Coordinating your automated and manual test management

  1. Tarun K June 18, 2011 at 8:16 am #

I cannot agree more with “teams guarding test automation”, and the prime reason is: when markets are down, manual testers are laid off first because an automation tester can do both manual and automated testing (I know I am talking nonsense, but this is a very common perception)

  2. joelmonte June 22, 2011 at 3:04 am #

I have actually seen both things happen: manual testers being fired when “the cuts begin”, but also, in a number of companies, the automation testers being the first to be let go.

    I think that it will depend on the perceived value of the automation.  If the automation has perceived value then management may want to keep them and ask them to do *some* manual work.  But if they see automation as a waste they may let the automation engineers go and keep the testers who are really providing value to the organization.

So in short, I think that it boils down to value (and sometimes to the salary of the person who is being asked to leave…)

    My 2 cents

  3. halperinko July 7, 2011 at 3:45 pm #

    A key point in enabling the coordination of the “transfer process” between manual and automatic tests, is how to identify and handle the leftovers.
    Here the ALM tools can support the process, by laying down a sort of traceability feature between the two.
    This becomes even more tricky, as some tests require redesign in order to become a useful and best “value for money” regression test.


  5. Lisa Davidson September 5, 2011 at 11:34 am #

Thank you for sharing this post.  Nice and informative post.  Software test automation is such an interesting topic that I can’t stop reading the views and learning new techniques.  With the growing demand in software test automation, QA testing partners are in great demand too.


  1. Launching Test Automation Support | PractiTest Blog - June 27, 2011

    […] May and June published a number of posts in our QAblog, linked-in and other forums, asking testers what are their biggest challenges when coordinating […]
