Coordinating your automated and manual test management

Are you doubling down for no reason?

It came to my attention long ago that some QA teams with automation in place still run part of their automated test cases manually.

I wondered: WHY do they do this?

When I reached out to the testing community for reasons, I got the following answers:

1. QA teams like running some of their tests manually as well since they are “really important” – this had me slightly confused, as it seems really counter-productive, right? But the second answer I got illuminated some of the reasoning here.

2. QA teams don’t have 100% trust in their automated tests. OK, so with a lack of trust, doubling down means the manual tests are functioning as quality assurance for the automation itself. Still, this can’t be the way to go in my opinion. While it is not trivial to write good and robust automation, if you are going to invest in automation, you might as well do it seriously and write scripts that you can trust.
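To make the point concrete, here is a minimal sketch of what a “trustworthy” script can look like, assuming a web AUT driven with Selenium in Python (the URL and element IDs are hypothetical): explicit waits and outcome-based assertions instead of fixed sleeps and incidental checks.

```python
# A minimal sketch of a robust automated check, assuming a web AUT
# driven with Selenium. The URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_login_succeeds():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # hypothetical AUT
        # Explicit waits instead of fixed sleeps make the script
        # deterministic rather than timing-dependent.
        wait = WebDriverWait(driver, 10)
        wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys("demo")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        # Assert on an observable outcome, not on incidental page details.
        wait.until(EC.visibility_of_element_located((By.ID, "welcome-banner")))
    finally:
        driver.quit()
```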

3. There is no clear definition or knowledge of what exactly is automated. People then fall back on their best judgment, assume that nothing is automated, and run their manual test cases as if there were no automation – wasting time on tests that are already covered. If this is the case, then I wonder why have automation teams in the first place.
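One low-tech way to remove this ambiguity is to keep an explicit registry of which manual case IDs are covered by automation and diff the two lists before each cycle. A minimal sketch, with hypothetical case IDs:

```python
# A minimal sketch: report which manual test cases already have an
# automated counterpart. All case IDs here are hypothetical.
AUTOMATED_CASES = {"TC-101", "TC-102", "TC-205"}   # covered by automation
MANUAL_SUITE    = {"TC-101", "TC-102", "TC-205", "TC-301", "TC-302"}

def split_suite(manual, automated):
    """Return (cases safe to skip manually, cases still needing manual runs)."""
    return manual & automated, manual - automated

skip, run_manually = split_suite(MANUAL_SUITE, AUTOMATED_CASES)
print("Already automated, skip manual run:", sorted(skip))
print("Still manual:", sorted(run_manually))
```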

4. The automated tests are the responsibility of another team. Having two test teams, one manual and one automated, is not a bad thing, and in many cases it will be the best approach to achieve effective and trustworthy automation. The bad thing is that these teams can sometimes be completely disconnected, working on the same project without communicating and cooperating as they should.

5. Teams re-run tests manually in order to update their test management system with the results of their automation, so that they can then generate complete and comprehensive reports for their management.

This one really hit home for me, since this aspiration for process visibility is one of the leading beacons of our product development at PractiTest. No matter how many integrations or test management features we have added (you should check out our most recent Test Automation capabilities, by the way, while we’re on the topic), enabling cross-team communication has always been in focus. You can follow some of the links here to see what I mean.
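For illustration, reporting automated results into a test management tool usually boils down to one API call per run, which removes the need to re-run anything manually just to record it. The endpoint, payload, and token below are hypothetical placeholders, not PractiTest’s actual API; check your tool’s documentation for the real details:

```python
# A sketch of pushing automated results into a test management tool
# through a REST API. The endpoint, payload shape, and token are
# hypothetical; consult your tool's actual API documentation.
import requests

def report_result(run_id: str, status: str, api_token: str) -> None:
    response = requests.post(
        "https://testmanagement.example.com/api/runs",  # hypothetical endpoint
        json={"test_run": run_id, "status": status},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()

report_result("login-suite-nightly", "PASSED", "my-secret-token")
```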

6. Some automated and manual tests are “related”, not “duplicated”. For example, an automatic test may verify the installation procedure on all operating systems, while a “related manual test” verifies the effects of abnormal network traffic during the installation process. Essentially, they complement each other to provide coverage.

7. Some tests are “semi-automatic” rather than fully automatic, like blink-tests, where a script performs a number of operations and constantly takes screen-captures of the application, which are then reviewed quickly by the tester in order to detect GUI issues or bugs.
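Here is a rough sketch of that blink-test flow in Python with Selenium, assuming a hypothetical web application; the script only drives and captures, while the judgment stays with the human reviewer:

```python
# A sketch of the "semi-automatic" blink-test flow: the script drives the
# application and captures a screenshot after each step, and a human
# reviews the images afterwards. URLs and paths are hypothetical.
import time
from selenium import webdriver

PAGES = ["/", "/settings", "/reports", "/profile"]  # hypothetical pages

driver = webdriver.Chrome()
try:
    for index, page in enumerate(PAGES):
        driver.get(f"https://example.com{page}")
        time.sleep(1)  # let the page settle before the capture
        driver.save_screenshot(f"blink_{index:03d}_{page.strip('/') or 'home'}.png")
finally:
    driver.quit()
# The tester then flips through blink_*.png quickly to spot GUI glitches.
```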

8. Additional manual tests are added where the automation concludes. For example, you can use automation to check the installation of the system and to validate the procedures to add, remove, and modify data. Once these tests are done, you can take the resulting system and continue running additional manual tests based on the existing data and system setup.
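As a sketch of this hand-over, the automation below seeds a (hypothetical) staging environment through the product’s API, exercises the basic add/remove path once, and then leaves the populated system for manual testers:

```python
# A sketch of automation that stops at "environment ready": it installs
# seed data through the product's API and then hands the system over to
# manual testers. The endpoint and records are hypothetical.
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical environment

def seed_environment():
    for name in ("alpha", "beta", "gamma"):
        requests.post(f"{BASE_URL}/records", json={"name": name}, timeout=30).raise_for_status()
    # Exercise add/remove once so the basic CRUD path is verified.
    created = requests.post(f"{BASE_URL}/records", json={"name": "temp"}, timeout=30)
    created.raise_for_status()
    record_id = created.json()["id"]
    requests.delete(f"{BASE_URL}/records/{record_id}", timeout=30).raise_for_status()
    print(f"Environment seeded; manual testers can continue at {BASE_URL}")

seed_environment()
```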

While some of these reasons make more sense than others, they all still face the same challenge: properly coordinating both kinds of testing.

Challenges when coordinating manual and automatic testing

Part of the feedback I got concerned challenges that arise when coordinating the work of your manual and automatic testing (not directly related to re-running your tests). Here are some of the most interesting points:

Figuring out what needs to be moved from manual to automatic testing

This is the first challenge and, I guess, one of the most significant decisions you need to make, since it will dictate the value you realize from your automation effort. I am not sure there is a textbook answer to this question, but I believe it comes down to three central factors:

ROI – the return on the investment to automate your test or, in plain English, what you will gain from automating this specific script. This is usually a matter of how long it will take you to automate the script vs. how many times you will run it automatically and how much time you save because of it.
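As a back-of-the-envelope illustration (all numbers are made up):

```python
# A back-of-the-envelope ROI check for automating one test. All numbers
# are illustrative assumptions, not benchmarks.
def automation_payoff_hours(build_hours, maintain_hours_per_run,
                            manual_hours_per_run, runs_per_year):
    """Hours saved per year once the script exists (negative = not worth it)."""
    manual_cost = manual_hours_per_run * runs_per_year
    automation_cost = build_hours + maintain_hours_per_run * runs_per_year
    return manual_cost - automation_cost

# Example: 40 h to build, 0.25 h upkeep per run, 1 h manual, 100 runs/year:
print(automation_payoff_hours(40, 0.25, 1.0, 100))  # -> 35.0 hours saved
```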

The complexity of the test – this refers both to how hard it is to create the automatic script and to how easy it is to run the test manually. There are tests that are simply too hard to do manually (e.g. testing the API of your product) and others that are practically impossible to automate (e.g. the usability of a feature). You need to choose correctly which tests are better suited to be automatic and which to stay manual.

Stability of your AUT – maybe the biggest challenge of any automatic test is to cope with changes to your AUT (Application Under Test). Some automation platforms handle this better than others, but no automatic system will be able to “improvise and understand” the way human testers do. Since this is the case, you need to add this factor to the list of things that help you choose what to automate, and try not to automate areas of the product that you know will go through radical changes in the near future.
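One common way to soften this problem, sketched below, is the Page Object pattern: locators live in one class, so a UI change means one edit instead of touching every script (the locators here are hypothetical):

```python
# A sketch of the Page Object pattern for absorbing AUT changes:
# locators are centralized, so a UI change means editing one class
# instead of every script. Locators here are hypothetical.
from selenium.webdriver.common.by import By

class LoginPage:
    # If the AUT's login form changes, only these locators need updating.
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT   = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```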

 

Coordinating the tasks between the manual and the automation testing teams

These are also big challenges that arise from the intrinsic differences between a good manual test engineer and a good automation engineer.

Who does what?
– Do the automation engineers define what to test or is this the call of the manual testers?
– Is scheduling tests a task of the manual testers or the automation testers?
– When an automatic test fails, is the manual tester in charge of verifying whether there is a bug, or is this the job of the automation team?

All these questions, and many more like them, need to be defined up-front, and again there is no textbook answer.

Personally, I believe that a good automation engineer is closer to a developer than to a tester, and so I like placing more of the decision-making weight on my manual testers. Basically, I ask my automation engineers to be “service providers” who work with my manual testing engineers to supply them with the best possible automatic answer to their testing scenarios.

I also believe that automation is part of the complete testing effort, and so the decision and responsibility of what to run and when to run it should be in the hands of the group in charge of the complete testing effort (this is usually the manual testers too).

 

Lifting the taboo from test automation

I really liked this comment from Marco Venzelaaer on LinkedIn: “Some automation teams make test automation a ‘dark art’ which is closely guarded within the team…”

This point got me laughing but also grabbing my head, as he managed to articulate something I had felt for a number of years. Some automation engineers feel that by treating their work as something mysteriously hard and extremely complex they gain some sort of job security. They are especially successful at seeding these feelings in manual testers, who have no coding experience of their own and so believe what their automation colleagues tell them.

Other than a false feeling of job security, these automation engineers also manage to generate a sort of taboo around their work that makes manual testers refrain from trying to understand it or from having any influence on the process. In the end this behavior is harmful to the coordination of efforts and has a deterrent effect on the overall achievements of the testing team.

 

So how do you coordinate the work of your automated and manual testing in your team?

I think that the key to maintaining a healthy manual-vs-automation relationship in testing is TEAMWORK & COMMUNICATION.

You need to make sure both teams, the manual testers and the automation engineers, are working based on the same agenda, coordinating their tasks based on the same priorities, and understanding how each player helps the other fulfill the goals of the organization. In simple terms: how they work together in order to make their testing process faster, broader, and eventually more effective.

Some basic things to take into account are:

1. Make sure they are coordinated, by holding regular update meetings and ensuring the active participation of both teams when planning the testing tasks for each project.

2. Have both teams work in an integrated environment where both manual and automated tests are managed and executed. This will allow all your testers, and every other person in your company, to see the “full picture” and understand, on both a planning and an execution level, what is being covered, what has been tested, and what the results of these tests are. After all, no one really cares whether a test was run manually or automatically; they care about its results (see the short sketch after this list for the idea).

3. Have a process in place where automation is a tool to help your manual testing team.
The correct process is what will eventually make or break the relationship between manual and automatic tests. Both teams need to understand that the idea of automation is to free the time of manual testers to run the more complex tests, those that are hard or expensive to automate. Once they understand that they complement each other rather than compete with one another, they will be able to focus on their shared challenges instead of their rivalries.
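To illustrate the “full picture” idea from point 2, here is a tiny sketch that merges manual and automated results into one report; the data shapes and case IDs are made up:

```python
# A sketch of a combined view: merge manual and automated results into
# one report so readers see outcomes, not execution method. The data
# shapes and case IDs are hypothetical.
manual_results = {"TC-301": "PASSED", "TC-302": "FAILED"}
automated_results = {"TC-101": "PASSED", "TC-102": "PASSED", "TC-205": "FAILED"}

combined = (
    [{"case": c, "status": s, "source": "manual"} for c, s in manual_results.items()]
    + [{"case": c, "status": s, "source": "automated"} for c, s in automated_results.items()]
)
for row in sorted(combined, key=lambda r: r["case"]):
    print(f"{row['case']:8} {row['status']:7} ({row['source']})")
```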

In short, you need to make sure your teams have both the agenda and the infrastructure required to coordinate their work.

 

In the end, it is the work of the manager and the organization to create the environment and the “team rules” that allow all members to feel that they contribute their work and help drive the project forward TOGETHER.


If you’d like to read more about how to coordinate your automated and manual tests together, I suggest checking out the following webinar I gave on the subject. If you’d rather get advice in text form, then go to this resource.


 

About PractiTest

PractiTest is an end-to-end test management tool that gives you control of the entire testing process – from manual testing to automated testing and CI.

Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.

With fast professional and methodological support, you can make the most of your time and release products quickly and successfully to meet your users’ needs.


6 Responses to Coordinating your automated and manual test management

  1. Tarun K June 18, 2011 at 8:16 am #

    I cannot agree more with “teams guarding test automation”, and the prime reason is – when markets are down, manual testers are laid off first because an automation tester can do both manual and automated testing (I know I am talking nonsense, but this is a very common perception)

  2. joelmonte June 22, 2011 at 3:04 am #

    I have actually seen both things happen: manual testers being fired when “the cuts begin”, but in a number of companies I saw how the automation testers were the first to be let go.

    I think that it will depend on the perceived value of the automation. If the automation has perceived value, then management may want to keep them and ask them to do *some* manual work. But if they see automation as a waste, they may let the automation engineers go and keep the testers who are really providing value to the organization.

    So in short, I think that it boils down to value (and sometimes to the salary of the person who is being asked to leave…)

    My 2 cents

  3. halperinko July 7, 2011 at 3:45 pm #

    A key point in enabling the coordination of the “transfer process” between manual and automatic tests, is how to identify and handle the leftovers.
    Here the ALM tools can support the process, by laying down a sort of traceability feature between the two.
    This becomes even more tricky, as some tests require redesign in order to become a useful and cost-effective (“value for money”) regression test.


  4. Lisa Davidson September 5, 2011 at 11:34 am #

    Thank you for sharing this post. Nice and informative post. Software test automation is such an interesting topic that I can’t stop reading the views and learning new techniques. With growing demand in software test automation, QA testing partners are in great demand too.

  5. Amit August 23, 2020 at 10:03 am #

    I was going to respond and say that I like the analysis, but that the underlying assumptions are strange.
    Then I saw comments from 2011 on a post from 2020, and assumed it was just some reorganizing of the website.
    So, Joel, what are your thoughts on this? Has this post aged well?

Trackbacks/Pingbacks

  1. Launching Test Automation Support | PractiTest Blog - June 27, 2011

    […] May and June published a number of posts in our QAblog, linked-in and other forums, asking testers what are their biggest challenges when coordinating […]

