Improving efficiency by keeping track of your waste

All development organizations have recurring events that waste their teams' time. When reviewing the subject of Defect Lifecycle Management, two of the most important undesired incidents are:
(1) Rejected Defects – defects that are reported by QA and rejected by the product or development teams.
(2) Reopened Defects – defects that are fixed or rejected by development and then reopened by QA.
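The two metrics can be computed as simple percentages of the total defect count. Here is a minimal sketch in Python; the `Defect` record, its status values, and the `was_reopened` flag are illustrative assumptions, not the schema of any particular tracking tool:

```python
# Sketch of the two waste metrics: percentage of rejected and reopened defects.
# Statuses and field names are hypothetical, not from a specific tool's API.
from dataclasses import dataclass

@dataclass
class Defect:
    status: str          # e.g. "fixed", "rejected", "open" (assumed values)
    was_reopened: bool   # True if QA reopened this defect at least once

def waste_rates(defects):
    """Return the rejected and reopened percentages for a list of defects."""
    total = len(defects)
    rejected = sum(1 for d in defects if d.status == "rejected")
    reopened = sum(1 for d in defects if d.was_reopened)
    return {
        "rejected_pct": 100.0 * rejected / total if total else 0.0,
        "reopened_pct": 100.0 * reopened / total if total else 0.0,
    }

defects = [
    Defect("fixed", False),
    Defect("rejected", False),
    Defect("fixed", True),
    Defect("fixed", False),
]
print(waste_rates(defects))  # {'rejected_pct': 25.0, 'reopened_pct': 25.0}
```

Tracking these per project over time is what makes the accumulated statistics described below possible.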

These incidents continually waste the time required to detect, report, review, analyze, assign, and fix a significant number of defects; and they are usually the result of human error and/or a lack of communication (and understanding) between people on different teams. Even if at first glance the number of such defects may appear small, when reviewed closely they add up to days and weeks of wasted time per project.

A couple of years back I worked at a company that implemented an effective way to fight this unnecessary waste. As part of our monthly QA reports, alongside all sorts of project and defect data, we started providing accumulated statistics for the percentage of rejected and reopened defects per project, together with a threshold and a traffic-light dashboard for each. A deviation of up to 10% above the threshold brought a yellow-light indicator; beyond that, projects turned red on these specific measurements.
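The traffic-light rule described above can be sketched as a small function. This is a guess at the mapping, assuming values at or below the threshold are green, deviations of up to 10 percentage points above it are yellow, and anything beyond is red; the threshold value in the example is made up:

```python
def traffic_light(value_pct, threshold_pct, yellow_band=10.0):
    """Map a metric (e.g. % rejected defects) to a dashboard color.

    Assumption: at or below the threshold is green, up to `yellow_band`
    percentage points above it is yellow, and anything beyond is red.
    """
    if value_pct <= threshold_pct:
        return "green"
    if value_pct - threshold_pct <= yellow_band:
        return "yellow"
    return "red"

# With a hypothetical threshold of 15% rejected defects:
print(traffic_light(12.0, 15.0))  # green
print(traffic_light(20.0, 15.0))  # yellow
print(traffic_light(30.0, 15.0))  # red
```

The appeal of a rule this simple is that every manager can predict their own color before the report goes out, which keeps the conversation about the underlying causes rather than the measurement itself.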

The reports were sent each week to every Development and QA Manager, and the dashboards were presented once a month to the VP of R&D during the periodic Directors' Meeting, where they quickly became a topic of choice, especially as our organization was looking for ways to improve the efficiency of our internal processes.

Each manager was responsible for their own area and was required to explain any major deviation from the set threshold. At the beginning only one in seven teams was below the threshold and two more were close to it; more importantly, many managers thought the measurement unfair and the threshold impossible to maintain over time.

Then, with constant pressure from Management and after some months of analyzing the chain of events around defect reporting, more teams started showing green-light indicators. We continued producing and studying these numbers until most teams had considerably improved their metrics and their defect management processes in this area.

The above exercise showed me two things:
(1) Teams can work more effectively by making sure they communicate better, creating less unnecessary garbage and friction in the process.
(2) Many hard things can be achieved once the correct information is placed in the spotlight and enough focus is put in the right places.

Organizations should make a habit of tracking statistics for these kinds of unwanted behaviours. They should set thresholds for the maximum amount of waste their systems produce, and in cases where these thresholds are exceeded they should understand the root cause and fix it.

About PractiTest

PractiTest is an end-to-end test management tool that gives you control of the entire testing process - from manual testing to automated testing and CI.

Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.

With fast professional and methodological support, you can make the most of your time and release products quickly and successfully to meet your users' needs.

