I’ve been working with a couple of start-ups lately; I enjoy the informality and the adrenaline rush you get from interacting with such teams.
One of the things that characterize young software firms is their need to react quickly, to be able to put something together fast and release as soon as possible. This requirement/constraint may be frustrating to a tester, especially if her testing approach does not match the need to provide answers fast.
How do you reconcile speed and assurance?
The answer rests on how you approach your tasks as a tester.
I personally don’t see my job as being responsible for all the bugs in the system (do you?). I see myself as the person in charge of providing visibility into the product under development. WHAT DOES THIS MEAN?
In principle, it means that:
1. I work for an Internal Customer.
The identity of my internal customers varies from company to company, but it is usually the Director of the Development team together with the person in charge of the product (e.g. the Product Manager).
From my internal customers I need to understand which parts of the product are more important and which parts are more delicate. Once I understand this, I can decide which areas to test more broadly and more deeply.
2. I work based on Company Time Constraints.
Since there is never enough time (NEVER!), we need to make do with what we have.
When starting the testing phase I make sure that my Internal Customers understand how much can be covered in the available time, and together with them I create a plan around what to test and in what order.
Once you stop assuming you will cover the whole application, you can spend your time deciding what to test and what to leave out.
3. You are not the Gate Keeper.
At some point in the process you will be required to provide your opinion. This means saying whether, based on the priorities and requirements defined with your Internal Customers, you believe the product is ready to be released.
I do this by reviewing what tests I ran, correlating them with the results and bugs I found, and complementing that with what I did not manage to cover.
As a rule, you will be required to provide a simple answer (e.g. Ready vs. Not Ready), but make sure you add the risks and additional suggestions your testing process surfaced.
How do we do all this?
It starts by thinking of your tests in terms of coverage levels, and defining distinct but complementary suites of scripts that you can run on your system under test.
As an example, I structure my tests as Smoke, Sanity, Progression and Regression suites, with each level covering the same areas of the product in increasing depth.
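To make the idea concrete, here is a minimal sketch in Python of how tests might be tagged with a coverage level and then selected by how much time you have. The suite names come from the text above, but the decorator, registry, and test names are illustrative assumptions, not an actual framework (a real project would typically use something like pytest markers instead):

```python
# Sketch: tag each test with a coverage level, then select which tests
# to run given the deepest level you have time for.

SUITE_ORDER = ["smoke", "sanity", "progression", "regression"]

REGISTRY = []  # (level, test function) pairs, in registration order


def suite(level):
    """Decorator that tags a test function with its suite level."""
    def wrap(fn):
        REGISTRY.append((level, fn))
        return fn
    return wrap


# Hypothetical tests covering the same area (login) at increasing depth.
@suite("smoke")
def test_login_page_loads():
    pass

@suite("sanity")
def test_login_with_valid_credentials():
    pass

@suite("regression")
def test_password_reset_email():
    pass


def select_tests(max_level):
    """Return names of all tests at or below max_level, in suite order."""
    cutoff = SUITE_ORDER.index(max_level)
    return [fn.__name__ for lvl, fn in REGISTRY
            if SUITE_ORDER.index(lvl) <= cutoff]


print(select_tests("sanity"))
# → ['test_login_page_loads', 'test_login_with_valid_credentials']
```

With a structure like this, agreeing with your Internal Customers on "we will run up to Sanity before this release" becomes a one-line decision rather than a renegotiation of the whole test plan.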
I have already touched on some of these tests in the past, but since it is a broad subject I plan to cover it in a dedicated post sometime in the future.