Measuring the execution performance of a testing team

There’s no replacement for good management. The leader of a team shouldn’t need artificial methods to know whether the group or one of its members is underperforming. Having said that, a responsible Test Team Manager should always track and analyze objective and empirical measurements of her team; these metrics will provide very little indication about individual team members, but they will give invaluable information about the team’s performance.

What you measure is what you get…

Should you keep track of the number of defects each team member reports? Do this only if you want to see large quantities of useless bugs in your tracking system.

How about counting the number of tests written by each tester? Measure this and you’ll get a lot of atomic test cases that could have been written as a single, more complex and intelligent testing scenario.

Can you measure the number of tests each engineer executes daily? By doing this you are telling your team that you don’t care about (important) bugs that are not directly related to their specific tests and may take a large amount of time to reproduce and understand.

The bottom line is: if you measure Quantity you are automatically compromising on Quality.

So what’s the alternative? How do you know if your team is doing a good job?
Let’s be sincere: if you are managing the team correctly and you understand what they are doing, you already know.
If you were just brought into the Company it will not be automatic, but after 3 to 4 weeks you’ll start getting a feeling for it, and within 2 to 3 months you had better know for sure.

Good managers don’t need “a tool”; they use the MBWA (management by walking around) approach. They are interested in what’s being done, ask intelligent and relevant questions, and most importantly they are constantly learning (learning the products, the methodologies and the team).

So, should you stop measuring…? Absolutely NOT!

As a manager, you and your team have an objective that needs to be tracked and measured.

Most of us (Test Team Managers) are trusted with the objective of providing correct and timely visibility to the team in order to develop and release a product that meets our customers’ expectations; we are also expected to do this as close as possible to our targets for schedule, features and cost. [Author’s comment: since this is not an article about the objective of the testing team, even if you don’t completely agree with this definition, try using it as an example for setting a set of metrics.]

There are two aspects to keep in mind about team performance metrics:
1. They should never be individual – a team is composed of members that complement each other and thus should not be measured independently.
2. They should focus on the empirical outputs of your objectives – this is the hardest part of the metrics work, since objectives are usually abstract while metrics are extremely concrete. Let’s focus on this point.

How to set concrete measurements for abstract objectives?

Let’s take the objective definition we used above, and break it down into smaller pieces, then we will think of a set of metrics for each piece.

“…providing correct and timely visibility to the team…”

We can think of many different metrics for this part; here are two examples:
1. “Correct visibility” can be measured by tracking the percentage of rejected defects we get back from development (vs. the total number of defects reported). By doing this we are seeking information about the quality of the information we provide.

2. To measure “timely visibility” we can estimate the time it took from the moment a bug was introduced into the product until the testing team detected the issue, and calculate the project average. Take into account that design bugs were introduced long before they were coded into the product. (A small sketch of both metrics appears after this list.)
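For illustration only, here is a minimal sketch (in Python) of how both visibility metrics could be computed. The record fields (rejected, introduced, detected) are hypothetical placeholders for data you would pull from your own defect tracker, not the API of any specific tool.

```python
from datetime import date

# Hypothetical defect records; field names are illustrative assumptions.
defects = [
    {"id": 101, "rejected": False, "introduced": date(2023, 1, 10), "detected": date(2023, 1, 20)},
    {"id": 102, "rejected": True,  "introduced": date(2023, 1, 12), "detected": date(2023, 2, 1)},
    {"id": 103, "rejected": False, "introduced": date(2023, 1, 5),  "detected": date(2023, 1, 30)},
]

# 1. "Correct visibility": share of reported defects that development rejected.
rejection_rate = sum(d["rejected"] for d in defects) / len(defects)

# 2. "Timely visibility": average days between the (estimated) introduction of
#    a bug and its detection by the testing team.
latencies = [(d["detected"] - d["introduced"]).days for d in defects]
avg_detection_latency = sum(latencies) / len(latencies)

print(f"Rejection rate: {rejection_rate:.0%}")
print(f"Average detection latency: {avg_detection_latency:.1f} days")
```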

“…as close as possible to our targets for schedule, features and cost.”
To measure these three targets we can use the trivial metric of planned vs. actual for timelines, features and cost.
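A minimal sketch of how such planned-vs-actual variances could be tracked; the numbers and field names below are illustrative assumptions, not real project data.

```python
# Hypothetical planned vs. actual figures for the three targets.
targets = {
    "schedule_weeks": {"planned": 12, "actual": 14},
    "features":       {"planned": 40, "actual": 36},
    "cost_usd":       {"planned": 100_000, "actual": 115_000},
}

for name, t in targets.items():
    # Relative variance: positive means over plan, negative means under plan.
    variance = (t["actual"] - t["planned"]) / t["planned"]
    print(f"{name}: planned {t['planned']}, actual {t['actual']}, variance {variance:+.0%}")
```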

“…to develop and release a product that meets our customers’ expectations…”
Leaving the best for last :o)
How do we know if a product meets our customers’ expectations? I usually go for the simple solution of counting the number of distinct issues reported from the field. Then I normalize this number by dividing it by the number of issues previously fixed in the version; this gives me a metric called the escaping defect rate.
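As a quick illustration, the escaping defect rate described above could be computed like this; the counts are hypothetical.

```python
# Hypothetical counts for one released version.
field_issues = 8        # distinct issues reported from the field
fixed_in_version = 200  # issues found and fixed before the release

# Escaping defect rate: field issues normalized by the issues fixed in the version.
escaping_defect_rate = field_issues / fixed_in_version
print(f"Escaping defect rate: {escaping_defect_rate:.1%}")  # 4.0% in this example
```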

This metric provides a way to look deeper into the performance improvement opportunities of our team.
If we examine the issues reported from the field, they can be categorized into two groups:
a. Issues we missed in our testing
b. Issues we found in our testing but chose not to fix
The first group will point at places where we need to improve our detection methods (a.k.a. our tests).
The second group will provide information about places where we need to improve our ability to judge what is important to our users.
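Here is a minimal sketch of how that split could be automated, assuming each field report can be matched against the defects already in your tracking system; the known_before_release flag is a hypothetical field used only for illustration.

```python
# Hypothetical field-reported issues, flagged by whether they were already known
# (i.e. found in testing but deliberately not fixed) before the release.
field_issues = [
    {"id": "F-1", "known_before_release": False},  # missed by our testing
    {"id": "F-2", "known_before_release": True},   # found, but chosen not to fix
    {"id": "F-3", "known_before_release": False},
]

missed_in_testing = [i for i in field_issues if not i["known_before_release"]]
found_but_not_fixed = [i for i in field_issues if i["known_before_release"]]

print(f"Missed in testing: {len(missed_in_testing)}")      # points at detection gaps
print(f"Found but not fixed: {len(found_but_not_fixed)}")   # points at triage judgement gaps
```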

What to do with our metrics?

Once you have your metrics you can do at least a couple of things with them:

1. Create benchmarks for ongoing improvement. Take the measurements from the latest project and make sure that each following project provides better results (see the sketch after this list).

2. A different approach is to look for Industry Averages for the metrics you are tracking and evaluate your team based on these numbers. Finding these averages is sometimes difficult (if not impossible!) but if they are known they can provide a great starting point.
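For example, a benchmark comparison could be as simple as the following sketch, where the baseline comes from the previous project (or from an industry average, when one exists); all values are illustrative assumptions.

```python
# Hypothetical baseline (previous project or industry average) vs. current project.
baseline = {"rejection_rate": 0.15, "avg_detection_days": 18.0, "escaping_defect_rate": 0.05}
current  = {"rejection_rate": 0.12, "avg_detection_days": 21.0, "escaping_defect_rate": 0.04}

for metric, target in baseline.items():
    # For all three example metrics, lower values are better.
    status = "improved" if current[metric] <= target else "regressed"
    print(f"{metric}: {current[metric]} vs benchmark {target} -> {status}")
```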

The unstated objective of every Manager should be to constantly improve the effectiveness and efficiency of his team. Using well-designed performance metrics you can point to the places where the team has both the need and the opportunity to improve itself.

The biggest obstacle to working with Team Productivity Metrics comes from the natural human fear of making our weaknesses public. There’s no magic pill for this issue, only the knowledge that you are doing something right for the team and the Company. If your metrics program is good and provides results, you will see how, in no time, other managers start bringing your idea into their own teams.

About PractiTest

PractiTest is an end-to-end test management tool that gives you control of the entire testing process - from manual testing to automated testing and CI.

Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.

With fast, professional and methodological support, you can make the most of your time and release products quickly and successfully to meet your users’ needs.
