As software development organizations grow and evolve, some of their internally defined processes start losing their effectiveness and/or efficiency; this means that once in a while they need to re-evaluate these processes in order to adapt and improve them. This is the case with the defect lifecycle, which in most organizations needs to be reviewed every 2 to 3 years.
Even if this process appears trivial at first (“Open > Fixed > Closed”, right? – WRONG!), the lifecycle of a bug is directly linked to multiple aspects of the organization, and the flow needs to capture these constraints in order to achieve a correct definition (I wrote “a” and not “the” correct definition, since there are usually multiple alternative flows, each with its pros & cons).
First Stop: What is a bug lifecycle?
The simplest definition of a lifecycle (in our case a bug lifecycle) is a “flow of states” that is followed by every bug in the system. This flow defines the initial (or entry) states, intermediate states, and final (or exit) states; it also defines the paths that may be followed by each bug, intrinsically defining the paths that cannot be followed.
The simplest bug lifecycle I can think of would look something like this: a straight flow along the lines of NEW → OPEN → FIXED → CLOSED, with no alternative paths.
In addition to the states and paths, some lifecycle definitions also include “transition rules” describing who can move bugs between each state and under what conditions.
For example, we may define that only the development manager can move a bug from NEW to OPEN, and that in order to do so the “Responsible Developer” field must be filled in with the developer in charge; or we may define that a developer can only set a bug to FIXED if she has filled the “Comments Field” with an explanation of her fix and suggestions to the testing team on how to test it.
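These rules can be sketched as a small table of guarded transitions. The states, roles, and field names below come from the examples above; the data structures and function names are a hypothetical sketch, not taken from any specific tool:

```python
# A sketch of a bug lifecycle with transition rules. The states, roles,
# and field names follow the examples above; the rest is hypothetical.

class TransitionError(Exception):
    """Raised when a transition is not allowed by the lifecycle rules."""

# (from_state, to_state) -> guard that must hold for the move to happen
RULES = {
    ("NEW", "OPEN"): lambda bug, user: (
        user["role"] == "development manager"
        and bool(bug["fields"].get("Responsible Developer"))
    ),
    ("OPEN", "FIXED"): lambda bug, user: (
        user["role"] == "developer"
        and bool(bug["fields"].get("Comments Field"))
    ),
}

def transition(bug, to_state, user):
    """Move a bug to to_state if a rule exists and its guard passes."""
    guard = RULES.get((bug["state"], to_state))
    if guard is None or not guard(bug, user):
        raise TransitionError(f"{bug['state']} -> {to_state} not allowed")
    bug["state"] = to_state
```

Any pair of states without an entry in the rule table is implicitly forbidden, which is exactly the “paths that cannot be followed” part of the definition.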
Now for the main course: what should we consider when defining our lifecycle?
1. Who logs bugs into the system?
In some cases only the testing team logs bugs into the system; in other cases we give the customer support team the ability to log bugs originating in the field; and some companies have a strong quality culture where each developer logs bugs detected not only in his peers' code but also in his own.
The flow for each of the cases above would be somewhat different. Bugs from QA may go to a “Dispatcher” who assigns the issue to an engineer who is both qualified to handle the defect and available, based on the urgency of the fix. Bugs from customer support may be reviewed first by QA in order to ensure that the issue has not already been submitted into the system (avoiding duplication), that it can be reproduced in-house (or, if it cannot, that we have the means to understand the issue and validate a fix), and that the description has all the information required to handle the bug. Finally, we may decide that bugs a developer logs against his own code can be fixed directly by him, while defects detected in the code of other developers need to pass through the “Dispatcher” for approval and prioritization.
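As a rough sketch, the three cases above boil down to routing each new bug by its origin. The origin values and queue names here are hypothetical illustrations, not part of any specific tool:

```python
# Hypothetical sketch of routing newly logged bugs by their origin,
# following the three reporting cases described above.

def route(bug):
    origin = bug["origin"]
    if origin == "qa":
        return "dispatcher"      # dispatcher assigns a qualified, available developer
    if origin == "customer_support":
        return "qa_review"       # QA checks duplicates, reproducibility, completeness
    if origin == "developer":
        # Developers may fix their own bugs directly; bugs detected in
        # other people's code go through the dispatcher.
        if bug["reporter"] == bug["code_owner"]:
            return "self_fix"
        return "dispatcher"
    raise ValueError(f"unknown origin: {origin}")
```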
2. What information needs to be entered into a bug?
As people from different internal organizations start reporting defects, and as your developers require more information, you may be tempted to create a separate flow for each internal organization; try to avoid this.
Paths should be simple and kept to as few as possible; in many cases we can place an internal validator, such as a product manager or a test manager, to filter bugs and reject those that do not meet reporting standards or are missing required information.
3. How many teams/groups handle issues?
Similarly to the influence of multiple organizations reporting bugs, we may have multiple teams handling them. In such cases you want to assign a dispatcher who understands who should handle each issue and what information they will require; also consider the cases where multiple teams may need to handle a single bug, and how you will manage the transition between the development teams.
4. How does your company handle communication around bugs?
Some people try to avoid formality in their organizations (I know I do), but we need to differentiate between formality and order when we define our bug lifecycle.
A project will have hundreds if not thousands of bugs reported throughout its lifecycle, and there will come a time when we need to review a bug that was handled 2 or 3 years ago in order to understand what was done with it.
When defining your bug flow, make sure to think about logging not only the initial and final information for the defect but also any correspondence or communication around it. The best way to do this is by keeping a comments log where developers and/or QA engineers communicate around the bug (avoiding the use of offline e-mail messaging) and implementing a history tracking mechanism that logs the changes made.
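A minimal sketch of such a mechanism, assuming a simple dictionary-based bug record (all field names here are hypothetical, not from any particular tracker):

```python
from datetime import datetime, timezone

# Sketch of a comments log plus a change-history mechanism on a bug
# record, as suggested above. The record structure is hypothetical.

def add_comment(bug, author, text):
    """Append a timestamped comment instead of using offline e-mail."""
    bug.setdefault("comments", []).append({
        "author": author,
        "text": text,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def set_field(bug, user, field, value):
    """Change a field and record who changed it, from what, to what."""
    bug.setdefault("history", []).append({
        "user": user,
        "field": field,
        "from": bug.get(field),
        "to": value,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    bug[field] = value
```

The point is that every change and every discussion thread stays attached to the bug itself, so a review 2 or 3 years later still finds the full story in one place.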
5. Is your project running multiple product lines? Can bugs be shared?
If your company has multiple product lines you may need to fix the same bug in multiple places or products. Does this mean one defect report, or multiple defects that need to be cloned? Think about who will fix and verify the bugs when making this decision.
6. What do developers do with bugs that they don’t understand or cannot reproduce?
Many companies use a REJECTED status that gives developers a way to request more information about the issues.
The catch with this status is that some engineers abuse it when they are too lazy to try to reproduce the problem themselves. I worked on a project with a developer who would not fix any bug unless the tester sat with him and reproduced it on his machine; he lasted about 2 weeks before he got fired!
7. What happens to bugs that we decide not to fix? (Now or ever?)
Think about how to handle defects that are being postponed or delayed.
Many companies create a “bug limbo” where issues get lost once we decide they will not be handled in the current version; should this be the case?
If you decide a bug will not be fixed in the current version, then make sure to review it for the next one. Avoid treating a POSTPONED status as a way to avoid rejecting a bug, and when rejecting a bug, write a comment indicating the reason in order to keep track of your decision (see point 4).
8. What happens to bugs after they are fixed?
Who will validate the fix: the person who reported the bug, or someone who is currently in charge of the feature? How will we communicate this to the tester? Is bug verification done as soon as we receive the fixed build, or as part of a concentrated effort later in the project lifecycle? How will we return to development bugs that were either not fixed or only partially fixed?
These are only some of the questions to answer when looking at the verification part of our process.
9. How do you handle bug fixing updates to the field (Release Notes or Read-Me’s)?
It is important to close the loop with customers who report defects. If a customer opened a support ticket with a bug, you want to inform him of the outcome of his request, even if the bug was rejected for some reason (this is another reason to keep track of your rejection considerations).
Additionally, some organizations release a list of important fixes as part of their documentation. How will you select the issues to publish?
For dessert: Technical Considerations
The following 2 considerations are external to the process, but they are no less important, and in some cases they will dictate how you work more strongly than many other factors.
10. What information are you looking to get from your defects?
We all use defect statistics to understand the quality of our product, but we can also use them to measure the quality of our process.
For example, do you measure rejection rates to qualify the work of your testers, or the time it takes to fix and release a bug reported from the field to measure your support levels?
You can only measure data that you keep track of; on the other hand, if you log every piece of information, your database will become unmanageable.
Define up-front what information is or may be relevant and keep track of it; in the worst-case scenario you can always start tracking additional data in the middle of the process and reach conclusions by extrapolating from the information you have.
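As an illustration, the two example metrics mentioned above need only a few tracked fields per bug. The record layout here is a hypothetical sketch, not any particular tool's schema:

```python
# Sketch of two process metrics discussed above: rejection rate per
# tester, and average time from report to fix. Field names are
# hypothetical; days are used instead of real timestamps for brevity.

def rejection_rate(bugs, tester):
    """Fraction of a tester's reported bugs that ended up REJECTED."""
    mine = [b for b in bugs if b["reporter"] == tester]
    if not mine:
        return 0.0
    rejected = [b for b in mine if b["status"] == "REJECTED"]
    return len(rejected) / len(mine)

def mean_days_to_fix(bugs):
    """Average report-to-fix time over the bugs that were fixed."""
    fixed = [b for b in bugs if "fixed_day" in b]
    if not fixed:
        return None
    return sum(b["fixed_day"] - b["reported_day"] for b in fixed) / len(fixed)
```

Both metrics depend entirely on fields (status, reporter, dates) being tracked from day one, which is exactly why the decision needs to be made up-front.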
11. What defect management tool(s) do you have in place to handle the process?
Every respectable defect management platform lets you define your own automatic bug flow, but some are more flexible and offer more advanced functionality than others. If you already have a tool in-house, understand what it can do (and how much effort it will take to do it!); if you don’t have a tool yet, define your requirements and then look for one that will be able to handle them (and hope it doesn’t break your cost constraints!).
Many companies have multiple tools that are directly or indirectly involved in this process.
For example, some have separate bug tracking tools for their testing and development teams, while others integrate the flows between their CRM and bug tracking systems. Even if your tools support an automatic integration mechanism, it will always come at a cost and with multiple constraints; make sure you understand these constraints too.
As I said at the start of this post: there is no single best approach, especially as the number of people and teams involved in the process grows.
The best advice I can give you is to try to keep your process as simple as possible. In many cases the best way to approach a complex problem is by giving it a simple solution (and training your users on how to adapt to your simplicity constraints).