This topic may seem simple, but I’ve been asked about it a couple of times in the last month, and I also saw it come up in multiple questions on QA Forums, SoftwareTestingClub, and other places I visit once in a while.
Let’s start by admitting we have a problem in the Testing Community. Not only do we call the same process by multiple names, but sometimes some of us also use the same name for different processes. This is starting to sound like the Bible’s story of the Tower of Babel, but it is a fact, and maybe part of the reason for the confusion.
So instead of trying to propose a Definitive Naming Convention, I will present mine and hope it makes sense (pay attention to the content and, if necessary, disregard the names!).
Since this can become a very long post, I will condense it by naming each Test Type and then answering 3 simple questions:
– When (do we perform it)?
– What (is the content of the test)?
– Why (is it important)?
1. Development Integration Testing
WHEN? Before each build is delivered to QA.
WHAT? End-to-end scenarios covering the main path of the application. Should not take more than 30 minutes to an hour.
WHY? Ensure that development does not “throw their builds over the wall” without making sure they meet the minimal stability requirements.
2. Build Acceptance Test
WHEN? The minute QA receives a build from development, and before the whole team starts deploying it to their testing environments.
WHAT? A 1- to 2-hour intensive test suite intended to ensure the build is good enough for the whole team to commit to it. This is a great candidate for automation (and if automated, it can be run each day as part of a nightly build system).
WHY? Even after developers perform their Integration Tests, that is not always enough for QA to start testing. We need to make sure there are no blockers or other stability issues that would prevent you from executing your test plan.
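To make the idea concrete, here is a minimal sketch of what an automated build-acceptance gate could look like. All the check names are hypothetical placeholders; real checks would drive your actual application, and only failures marked as blockers stop the team from committing to the build.

```python
# Hypothetical build-acceptance gate. Each check is paired with a flag
# saying whether its failure is a blocker for the whole team.
def check_login():          return True
def check_create_record():  return True
def check_print_preview():  return False   # simulated minor failure

ACCEPTANCE_CHECKS = [
    (check_login, True),           # blocker
    (check_create_record, True),   # blocker
    (check_print_preview, False),  # nice-to-have
]

def accept_build(checks):
    """Return (accepted, failed): reject the build on any blocker failure."""
    failed = [(c.__name__, blocker) for c, blocker in checks if not c()]
    accepted = not any(blocker for _, blocker in failed)
    return accepted, failed

accepted, failed = accept_build(ACCEPTANCE_CHECKS)
print(accepted, failed)  # True [('check_print_preview', False)]
```

A gate like this is what you would wire into a nightly build system: the build is published to the team only when `accepted` is true, and the `failed` list is mailed back to development either way.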
3. New Feature Testing
WHEN? On each build that includes new features or functionality.
WHAT? Test the new functionality as deeply and widely as you can; go over all the scenarios and functionality you can think of.
WHY? The trivial answer: new features come with a high risk of containing bugs and thus need to be thoroughly tested.
4. Bug Verification Testing
WHEN? I do it on each new build, together with or immediately after the New Feature Testing. Some organizations do it at the end of the project, but I think this is risky.
WHAT? For each bug that was fixed (or in some cases a family of bugs), you need to test the reported reproduction scenario. Then, using the information provided by the developer about the root cause of the defect and the fix they implemented, you should test additional scenarios that may still contain bugs, or new bugs that may appear as a side effect of the fix itself.
WHY? Similar to New Feature Testing: places where people touch the code (even if only to fix a bug) can contain additional defects.
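The pattern of "repro first, then root-cause-informed scenarios" can be sketched in a few lines. The bug and fix below are invented for illustration: an off-by-one error in a paging calculation, where the root cause (boundary handling) tells you which neighbouring scenarios to verify beyond the original report.

```python
# Hypothetical fixed function: page_count used to report one page too
# many when total_items was an exact multiple of page_size.
def page_count(total_items, page_size):
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return (total_items + page_size - 1) // page_size  # the fix

# 1) The reported reproduction scenario from the bug itself:
assert page_count(10, 10) == 1

# 2) Extra scenarios suggested by the root cause (boundary handling),
#    checking the fix did not break the neighbouring cases:
assert page_count(11, 10) == 2
assert page_count(0, 10) == 0
print("bug verification passed")
```

Notice that only the first assertion comes from the bug report; the others exist because the developer told you the root cause was a boundary condition, so you probe the boundaries around it.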
5. Regression Testing
WHEN? Once the new features of the product reach a minimal level of stability and you want to start testing and looking for bugs on other less-trivial areas.
WHAT? This is the most complex suite to design. On the one hand, it cannot be too extensive, since you never have enough time to run all your tests; on the other hand, it should cover all or most of the application and provide a good indication that the AUT, or the specific area being tested, is CLEAN and stable.
WHY? Software development is very risky; changes in one place can have unexpected negative effects in other areas, and it is our job to lower the risk of these bugs being released to the field.
6. Sanity Testing
WHEN? When you need to check the application at a high level and don’t want to, or cannot, run a Regression Test (e.g. after all tests are done and you want to check the final CD that was burned and sent to mass production).
WHAT? Similar to Regression Testing in that you choose specific scenarios covering the whole AUT or application area, but shorter and based on the highest risk factors.
WHY? Same as Regression Testing, but here you are required to make a risk-based judgment call and compromise on part of your testing.
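One way to make that risk judgment call explicit is to score your regression scenarios and let a time budget decide what survives into the sanity suite. The scenario names and risk scores below are made up for illustration; in practice the scores would come from your own risk analysis.

```python
# Hypothetical regression scenarios as (name, risk score) pairs,
# where a higher score means a riskier, more business-critical area.
REGRESSION_SUITE = [
    ("login", 9), ("checkout", 10), ("search", 6),
    ("profile-edit", 3), ("export", 4), ("admin-report", 2),
]

def sanity_suite(scenarios, budget=3):
    """Keep only the `budget` riskiest scenarios from the full suite."""
    ranked = sorted(scenarios, key=lambda s: -s[1])
    return [name for name, risk in ranked[:budget]]

print(sanity_suite(REGRESSION_SUITE))  # ['checkout', 'login', 'search']
```

Writing the selection down like this also documents the compromise: anyone reading the suite can see exactly which scenarios were dropped and why.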
7. User Acceptance Test
WHEN? When the users receive the final product in their environment; but since the testing team often has these scenarios up-front, they can be run before the release of the product.
WHAT? What the user decides to test. Usually End-to-End scenarios for their most important features and functionality.
WHY? Because they want to make sure the product works.
There is one additional test type I want to mention that does not really fit in the group of tests I already described, but since it is very useful I will add it anyway.
8. Smoke Test
WHEN? When you don’t have time and need a fast assessment of the application in order to understand whether you should run more tests on a specific area.
WHAT? Even shorter than the Sanity Test, the smoke test includes only quick scenarios that point at major areas of the product. The idea is to probe these areas in search of smoke that signals there is a FIRE hiding below the surface.
WHY? You either don’t have time or you don’t know where to start and you need a quick test to serve as a very high level indicator for your version or product.
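A smoke run can be as simple as one quick probe per major area, collecting the areas that "show smoke" so you know where to dig deeper. The probe names below are hypothetical; each would wrap one very fast check against the real product.

```python
# Hypothetical one-probe-per-area smoke run. Each probe returns True
# when the area looks healthy at a glance.
def probe_ui():       return True
def probe_database(): return False   # simulated smoke in this area
def probe_reports():  return True

def smoke_run(probes):
    """Return the names of the areas whose quick probe failed."""
    return [p.__name__ for p in probes if not p()]

suspect_areas = smoke_run([probe_ui, probe_database, probe_reports])
print(suspect_areas)  # ['probe_database']
```

The output is not a verdict on the build; it is a pointer telling you which areas deserve a deeper suite next.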
I am sure I missed some test types, and I will be happy to add them to my personal list if you care to comment on them, but these are my main and most important test types.
At the end of the day, these types will only serve as a starting template, and you will need to create custom test suites that suit the needs of your product and project.
Good Luck & Have Fun!