Leveraging Customers' Rejections into Your Testing Environment

Some years ago I managed the testing team for an off-the-shelf platform aimed at enterprise companies. One of our biggest problems arose each time we changed the platform's DB schema and released a utility to upgrade the system. In too many cases the automatic upgrade would get stuck in the middle of the process, leaving customers dead in the water with a half-upgraded system and no way to roll back to the previous version. It got so bad that we instructed our users to always back up their databases before upgrading, and for some months after each major release we kept a team of 3 engineers ready to fly around the world to fix the issues and upgrade the platforms manually.

We analyzed the problem and concluded that we were tripping over optimizations and customizations that users had made to their databases; we had not tested all possible configurations and were getting stuck on setups that were still within the boundaries of a “supported environment”. Covering all possible settings for all the parameters at stake would have meant testing around 25,000 configurations. Since that was not feasible, we tested the 20 configurations we judged to be the most popular and went out with our next release.

The results were not significantly better; we needed another approach.

During a brainstorming session one of the testers had a brilliant idea: he said the problem would disappear if we could test the upgrade on every customer database before releasing the product. That solution was also not feasible, but it got us thinking in the right direction: since we could not test on all databases, we could at least test on all the databases we already knew had problems.

We created a plan together with our Customer Support organization: whenever a client called with a DB upgrade issue, in parallel to solving their specific problem we would ask them for a copy of their database project. We told them that, to ensure the problem would not return in the future, we would specifically test our upgrade procedure on their database and correct any issues before the release. With this approach we obtained copies of over 75% of the database projects for which customers had reported problems.

At the beginning we ran the tests manually: we restored the projects onto our DB servers, tested the upgrade, ran a short sanity check, and reported any defects we found. The effort gave good results; even where the bugs from previous upgrades had been corrected, we found new ones related to changes made for the current upgrade utility. The issues we found were valuable, but running these tests once every 2 to 3 weeks was taking too much effort, especially as we were receiving an additional 5 to 10 new database projects a month.

We decided to invest some resources (both development time and machines) and built a complete environment specifically intended to test the upgrade procedure almost automatically. It had multiple database servers to run tests in parallel, and an automatic process that systematically restored each customer's project, tried to upgrade the DB schema, performed a simple sanity check, and reported the results, one project after the other. It took 2 to 3 days to upgrade around 100 projects, and we ran the complete test once a week!

After 2 releases, and once we had around 150 customer projects, we were able to declare our upgrade procedure a non-issue. And to the best of my knowledge the system is still in place within the testing organization.

I was reminded of this testing system this week, while talking with a customer who told me about the issues they were having with all sorts of server configurations on the user side, and about their inability to test every possible environment configuration as part of their testing efforts.

My take on the subject is that, at the end of the day, you don't need to test all the environment configurations; it is enough to have a sample of environments that is (1) large enough to be representative, and (2) close enough to real-world environments to reproduce the same issues and effects your customers experience.

About PractiTest

PractiTest is an end-to-end test management tool that gives you control of the entire testing process, from manual testing to automated testing and CI.

Designed for testers by testers, PractiTest can be customized to your team's ever-changing needs.

With fast professional and methodological support, you can make the most of your time and release products quickly and successfully to meet your users' needs.
