Using a Severity Look-up Table for Better and More Accurate Bug Classification

In the past I posted articles on the importance of differentiating between the severity and priority of a bug, and on how to report a bug correctly; but there is one subject I left out and was asked about a couple of days ago:

How do you give bugs the correct (and objective) severity?

I know of at least two ways to set the severity of each bug correctly: the hard way and the easy way.

The hard way: defining the severity each time from scratch

Sadly, most groups work this way, asking their testers and/or bug reporters to set the severity on a scale (of either 3 or 5 levels) according to their subjective understanding of the harm the bug causes the system or end user.

What’s the problem with this approach?

1. It’s not objective, so two people may look at the same bug and assign two different severities.

2. It usually gathers bugs towards the middle of the scale.
From personal experience, if you leave the severity definition to the testers’ gut feeling, the bug distribution at the end of the project will look like a normal curve.
At first you may think this is OK, but think again: there is no logical explanation for this behavior. On the contrary, for most projects you would expect to see shifts to the right or left based on factors such as the maturity of the team, the application, the technology, etc.
My explanation is that we like to do things in a balanced way, so we unconsciously try to balance even the severities of the bugs we report (don’t blame your mother, it’s genetic…).

3. Your perception of severity can shift over time.
You may have seen this in past projects: as testing advances, we may look at a bug we reported earlier and feel we gave it a severity that is too high compared to the ones we are reporting today.
This may be related to the old saying: “just when you thought things can’t get any worse, they usually do”.

The easy way: create a Severity Look-up Table

In my opinion, the best way to assign the correct severity is to have a look-up table that helps us classify our bugs. By thinking ahead of time about most bug cases and creating a table, we can classify bugs more accurately and against a standard system.

An example of a Severity Look-up Table I’ve used in the past is the following:

S1 – Critical

A defect that causes the complete system to stop functioning or that will result in unrecoverable data-loss. The bug has no workaround.
Additional specific bugs:
– Spelling or Grammar Mistakes
– Important bugs that were reported by customers and that we assured them would be fixed.

S2 – High
The defect causes a part of the system to be inaccessible or to stop functioning. The bug has no workaround, or it has a workaround that will not be easily found by users.
Additional specific bugs:
– High visibility GUI issues.

S3 – Medium
The defect causes a non-critical failure of the system but allows users to continue working. The bug has a workaround that is relatively easy to find and will be acceptable to most users.

S4 – Low
The defect causes no dysfunction of the system and may even go unnoticed by most users. The bug has an easy workaround or may not require a workaround at all.

S5 – Enhancement Request or Suggestion
No need to explain.

Notice that at some levels I added specific cases that, even if they don’t fit the definition of the severity, will still be classified at that level. This practice helps you include your specific issue themes in the Severity Look-up Table.

Each organization, team, or project may need to review and fine-tune its Severity Look-up Table once in a while based on its own definitions, requirements, and customer needs.
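A look-up table like this can even be encoded directly in a bug-tracking script or triage tool, so the special-case themes are applied consistently before the general definitions. Below is a minimal sketch in Python; the tag names, function name, and flags are all illustrative assumptions, not part of the article's table.

```python
from enum import IntEnum

class Severity(IntEnum):
    S1_CRITICAL = 1
    S2_HIGH = 2
    S3_MEDIUM = 3
    S4_LOW = 4
    S5_ENHANCEMENT = 5

# Specific bug themes that override the general definitions,
# mirroring the "additional specific bugs" entries in the table above.
# (These tag names are hypothetical examples.)
SPECIAL_CASES = {
    "spelling-or-grammar": Severity.S1_CRITICAL,
    "customer-commitment": Severity.S1_CRITICAL,
    "high-visibility-gui": Severity.S2_HIGH,
}

def classify(tags, system_stops=False, data_loss=False,
             part_inaccessible=False, easy_workaround=False,
             enhancement=False):
    """Return a bug's severity based on the look-up table."""
    # Specific themes win even when the bug does not fit
    # the general severity definition.
    for tag in tags:
        if tag in SPECIAL_CASES:
            return SPECIAL_CASES[tag]
    if enhancement:
        return Severity.S5_ENHANCEMENT
    if system_stops or data_loss:
        return Severity.S1_CRITICAL
    if part_inaccessible and not easy_workaround:
        return Severity.S2_HIGH
    if not easy_workaround:
        return Severity.S3_MEDIUM
    return Severity.S4_LOW

# A typo on the welcome screen is classified S1 by the theme rule,
# regardless of its functional impact.
print(classify(["spelling-or-grammar"]).name)  # S1_CRITICAL
```

The point of putting the special cases first is exactly the one made above: the table, not the individual tester's gut feeling, decides where a recurring theme lands.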

I remember once telling a group I was training about this practice. A young tester got up and said she thought using a look-up table would make her look inexperienced and unprofessional.
My reply was that I still use lists like this in almost every project I work on, and that it would be less professional to use another approach when she knew this one was easier and gave better results…

  • Hi Joel, I agree heartily with most of what you have written here: useful and plainly set out, thanks.

    The one thing I perhaps disagree on is all spelling or grammar mistakes being “S1 – Critical”, and thereby considered in the same equivalence class as, say, losing a significant volume of user-entered data. Spelling/grammar mistakes would, in my view, only be Critical if they led to complete non-performance of the application, and this would certainly be rare in the kind of business software I have seen over the last decade. Perhaps if the application is a language dictionary, a translation tool, a software code or letter generation tool or similar, then that might be fair; but in most cases this is exactly the kind of issue that would end up with a high priority and low severity. The priority would be higher for more prominent and embarrassing typos (e.g. on the first welcome screen) because of the impact on reputation, perceived quality, etc., but at the end of the day, would the issue severely impact the rest of the system or cost the client $$$?

    I am currently trialing PractiTest and so far I’m very impressed. I was, however, very surprised to find, after reading the above article, that your application demo delivers the Issue management module with only a Priority field “out of the box”. Yes, Priority and Severity do often end up being in sync, as one of your customers stated in the linked article, but I don’t agree with that being a reason to consider them the same. Even you say “I think that in most projects Priority and Severity should be 2 separate fields.”

    Having had a very useful initial call with one of your account managers, I am now content that we would be able to configure your SaaS app to accommodate our needs & wants, but I do think it’s worth considering showing this module with the richer reporting capability at first sight. I certainly could have decided against PractiTest as my tool choice had I not been a reader of this blog.

    S Pattisson

    http://www.seymourito.com/

  • joelmonte

    Hi Seymour,

    I think your point about spelling/grammar mistakes is a valid one, but in the end (like with everything else!) it will depend on context; in this case the most important factor to consider will be the type of application under development, as well as the company, audience, etc.

    My feeling is that for commercial software products, issues like these will still carry an S1 in my book because (as you said yourself) they carry reputation implications with them, and with the speed and reach of the Internet today, companies will succeed and also fail many times on reputation alone.

    Still, not all bugs are created equal, so if the spelling mistake is on a page with a lot of wording (e.g. a license agreement, or help), it may be considered less critical.

    Regarding your comment about PractiTest, I think you also have a valid point there. We try to make the demo as simple as possible while still giving users a feel for the product, but this point is important and I will make sure to raise it with the team.

    Thanks for your feedback & I hope you continue enjoying PractiTest!

    Cheers,

    -joel
