A pair-wise solution for planning your testing environment matrix

Most development projects need to handle multiple user environments; this is especially true for web applications and J2EE systems that support multiple back-end and front-end configurations at once. A major concern in these projects is how to test all the platform combinations needed to achieve the required level of environment support.

Since there is no economical or realistic way to test all the possible combinations, I use a method based on the pair-wise (or all-pairs) approach: testing the most common interactions between every two relevant environment variables. Citing Wikipedia and a description by Rex Black:

“…the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing…”

The question then becomes: how do we plan the testing strategy to efficiently cover all the possible pairs? For this I use an iterative heuristic based on simultaneous coverage matrices. The procedure is not as complicated as it sounds, and I will demonstrate it with a simplified example from a typical J2EE application testing process.

Step 1.

List all the relevant parameters and their possible values. Since the application under test is a J2EE system, we have both server-side and client-side components:

Server Side:

a. Server O/S

b. App Server

c. Database Server

Client Side:

d. Client O/S

e. Client Browser

f. Upgrading User Client vs. New User Client (this reflects whether the user has a clean environment or already has some of our previous components installed on their machine).

Step 2.

Define the testing coverage distribution rate for each parameter. One way of doing this is by thinking of the percentage of users on each environment. In our example we will use the following distributions:

Server Side:

a. Win 2003 30% — AIX 30% — RH Linux 40%

b. JBoss 50% — WebLogic 50%

c. Oracle 9i 20% — Oracle 10g 50% — DB2 30%

Client Side:

d. Win XP SP2 75% — Win Vista 25%

e. IE 6.5 20% — IE 7 40% — FireFox 40%

f. Upgrading Users 60% — New Users 40%
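To make the scale of the problem concrete, here is a minimal Python sketch (the dictionary layout and variable names are my own) that encodes the parameters and rates above and compares the exhaustive combination count with the number of value pairs that pair-wise coverage has to hit:

```python
from itertools import combinations

# The six parameters from Step 1 with the values and coverage rates
# from Step 2 (rate = the fraction of tests to run on that value).
rates = {
    "server_os":  {"Win 2003": 0.30, "AIX": 0.30, "RH Linux": 0.40},
    "app_server": {"JBoss": 0.50, "WebLogic": 0.50},
    "database":   {"Oracle 9i": 0.20, "Oracle 10g": 0.50, "DB2": 0.30},
    "client_os":  {"Win XP SP2": 0.75, "Win Vista": 0.25},
    "browser":    {"IE 6.5": 0.20, "IE 7": 0.40, "FireFox": 0.40},
    "user_type":  {"Upgrading": 0.60, "New": 0.40},
}

# Exhaustive testing would need one environment per full combination.
exhaustive = 1
for values in rates.values():
    exhaustive *= len(values)                 # 3*2*3*2*3*2 = 216

# Pair-wise testing only needs every value pair of every two parameters
# to appear somewhere in the plan.
pairs = sum(len(rates[a]) * len(rates[b]) for a, b in combinations(rates, 2))

print(exhaustive, pairs)                      # 216 combinations vs. 93 pairs
```

Even for this small matrix, exhaustive coverage would mean 216 distinct environments, while only 93 value pairs need to appear somewhere in the plan (fewer still once the server and client sides are treated independently, as we do below).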

Step 3.

We make a list of all our tests and perform an initial random distribution of the testing environments based on the rates defined above.

From this step onward I will use MS Excel and Pivot Tables, mainly because I have not found any testing tool that provides these views in a way that makes these operations comfortable.
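That said, the same initial distribution is easy to sketch in a few lines of Python if you prefer code to spreadsheets (the 100-test plan size, the fixed seed, and all variable names below are my own assumptions):

```python
import random

# Server-side rates from Step 2 (the client side works the same way);
# the dictionary layout and the 100-test plan size are my own choices.
rates = {
    "server_os":  {"Win 2003": 0.30, "AIX": 0.30, "RH Linux": 0.40},
    "app_server": {"JBoss": 0.50, "WebLogic": 0.50},
    "database":   {"Oracle 9i": 0.20, "Oracle 10g": 0.50, "DB2": 0.30},
}

random.seed(7)  # fixed seed so the example is reproducible

# Give every test in the plan a randomly drawn environment, weighted
# by the coverage rates above.
plan = []
for test_id in range(1, 101):
    env = {param: random.choices(list(values), weights=list(values.values()))[0]
           for param, values in rates.items()}
    plan.append({"test": f"T{test_id:03d}", **env})

print(plan[0])  # the first test with its randomly assigned environment
```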

Step 4.

For each relevant combination, we create a Pivot Table of two of the parameters:

Server Side:
i. Server O/S vs. App Server
ii. Server O/S vs. DB Server
iii. DB Server vs. App Server

Client Side:
iv. Client O/S vs. Browser
v. Client O/S vs. User Type
vi. Browser vs. User Type

All the information for the tables is taken from our test list. The Grand Total for each parameter represents the target number of tests we want to perform on that individual environment, while the body of the matrix shows the two-parameter environment combinations being covered under our current Test Execution Plan.

Note that in order to refresh the data in the Pivot Tables after changing the test distribution in the Execution Plan, you need to right-click on the table itself and choose Refresh.
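For reference, here is what one of these Pivot Tables looks like when mimicked outside Excel with pandas' crosstab; the plan data is randomly generated for illustration and the column names are my own:

```python
import random
import pandas as pd

random.seed(7)

# A stand-in for the Test Execution Plan from Step 3: one row per test
# with its assigned server-side environment (data invented for illustration).
plan = pd.DataFrame({
    "server_os":  random.choices(["Win 2003", "AIX", "RH Linux"],
                                 weights=[30, 30, 40], k=100),
    "app_server": random.choices(["JBoss", "WebLogic"],
                                 weights=[50, 50], k=100),
})

# The equivalent of the first Pivot Table above: Server O/S vs.
# App Server, with the Grand Totals in the margins.
pivot = pd.crosstab(plan["server_os"], plan["app_server"],
                    margins=True, margins_name="Grand Total")
print(pivot)
```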

Step 5.

Our iterative heuristic consists of leveling the numbers within the Pivot Tables while making sure that the Grand Totals are maintained as they are right now.

By “leveling” we mean that each row and column needs to have a statistical distribution similar to its respective Grand Totals (these being the distribution rates we defined in Step 2 above for each of the individual parameters).

I will start with the first Pivot Table in the server-side Excel tabs; in principle, it’s not important which table or tab you start with, since we will need to cover all of them.

I already explained that each cell in the table represents a testing environment defined by the parameters in its respective row and column, and that the number in the cell is the number of tests we are currently planning to run on that environment. We now need to redistribute the tests across these environments based on our coverage rates.

For example, to level the first table:

1. I will move 10 tests from RH-Linux/WebLogic to RH-Linux/JBoss

2. Then I’ll move 3 tests from AIX/JBoss to AIX/WebLogic and another 7 from Win-2003/JBoss to Win-2003/WebLogic in order to maintain the grand total numbers with the original distribution (50% JBoss & 50% WebLogic; 30% AIX & 40% RH-Linux & 30% Win-2003).

It’s important to remember to make all the changes in the Test Execution Plan tabs and then refresh the table to see the updated distribution.
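As a sanity check on these manual moves: the leveled value for every cell is simply its row total times its column total divided by the grand total. Here is a minimal sketch of that arithmetic (the starting counts are invented, chosen so that the resulting deltas reproduce the moves described above):

```python
# Starting counts for the Server O/S vs. App Server table; these are
# invented so that the deltas match the moves in the example above.
current = {
    ("Win 2003", "JBoss"): 22, ("Win 2003", "WebLogic"): 8,
    ("AIX",      "JBoss"): 18, ("AIX",      "WebLogic"): 12,
    ("RH Linux", "JBoss"): 10, ("RH Linux", "WebLogic"): 30,
}

grand = sum(current.values())
row_totals, col_totals = {}, {}
for (row, col), n in current.items():
    row_totals[row] = row_totals.get(row, 0) + n
    col_totals[col] = col_totals.get(col, 0) + n

# A cell is "level" when it holds row_total * col_total / grand tests;
# the delta tells us how many tests to move in (+) or out (-).
for (row, col), n in sorted(current.items()):
    target = row_totals[row] * col_totals[col] / grand
    print(f"{row:>9} / {col:<8}: have {n:2d}, target {target:4.1f}, "
          f"delta {target - n:+.1f}")
```

The +10 delta on RH-Linux/JBoss and the +3 and +7 deltas on AIX/WebLogic and Win-2003/WebLogic correspond exactly to the moves in the numbered list above.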

Step 6.

We perform the same operation for the second Pivot Table (DB Server vs. Server O/S), remembering that our goal is to maintain the Grand Totals while leveling the internal numbers.

Step 7.

The third table is handled the same way as the first two.

Step 8.
If you look now at our first Pivot Table in the Excel sheet, you will see that the internal rates we set at the beginning were distorted by our operations in Steps 6 and 7. Don’t worry, this is a normal side effect of the method!
To solve this, we return to the first Pivot Table and repeat the leveling operations in order to bring the internal distributions back in line with our target Grand Totals.

After finishing this operation we’ll see that all three Pivot Tables provide the internal and external coverage rates we are looking for, which means we can stop at this point. If the first correction pass had not achieved this, we would have continued making incremental changes across the Pivot Tables until we reached the desired distribution (FYI: larger numbers of tests tend to take two to three complete iterations over all the Pivot Tables to reach the desired leveling).
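For larger plans the whole back-and-forth can be automated. The sketch below is entirely my own construction (not the Excel procedure described above): it repeatedly re-levels every two-parameter table toward the row-total times column-total over grand-total target by flipping one parameter of over-represented tests, and a few sweeps settle the plan:

```python
import random
from collections import Counter
from itertools import combinations

def level_pair(tests, a, b):
    """Re-level the (a, b) cross-tab: within each row, flip parameter b
    of surplus tests so every cell approaches row * column / grand."""
    a_tot = Counter(t[a] for t in tests)
    b_tot = Counter(t[b] for t in tests)
    grand = len(tests)
    for av in a_tot:
        # Integer targets for this row; largest-remainder rounding keeps
        # the row total exact (column totals may drift by a test or two).
        raw = {bv: a_tot[av] * b_tot[bv] / grand for bv in b_tot}
        tgt = Counter({bv: int(x) for bv, x in raw.items()})
        short = a_tot[av] - sum(tgt.values())
        for bv, _ in sorted(raw.items(), key=lambda kv: kv[1] % 1,
                            reverse=True)[:short]:
            tgt[bv] += 1
        kept, movers = Counter(), []
        for t in (t for t in tests if t[a] == av):
            if kept[t[b]] < tgt[t[b]]:
                kept[t[b]] += 1
            else:
                movers.append(t)          # cell is over target: move out
        deficits = [bv for bv in tgt for _ in range(tgt[bv] - kept[bv])]
        for t, bv in zip(movers, deficits):
            t[b] = bv                     # the "move" from Step 5, in code

# An invented 100-test server-side plan drawn with the Step 2 rates.
random.seed(7)
rates = {"server_os": (["Win 2003", "AIX", "RH Linux"], [30, 30, 40]),
         "app_server": (["JBoss", "WebLogic"], [50, 50]),
         "database": (["Oracle 9i", "Oracle 10g", "DB2"], [20, 50, 30])}
tests = [{p: random.choices(v, weights=w)[0] for p, (v, w) in rates.items()}
         for _ in range(100)]

for _ in range(3):                        # 2-3 sweeps usually settle the plan
    for a, b in combinations(rates, 2):
        level_pair(tests, a, b)

print(Counter((t["server_os"], t["app_server"]) for t in tests))
```

Because leveling Server O/S vs. App Server perturbs the App Server vs. DB Server table and vice versa, the outer loop plays the role of the correction passes in Step 8.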

Step 9.
After finishing the server-side configuration, we do the same for the client side.
Since the server-side and client-side parameters are independent of each other, the changes made to the client-side tables will have no effect on the three matrices we just worked on.

End Product:

After all our iterations we can go back to our Test Execution Plan and extract from it our new leveled and efficient Environment Testing Matrix.
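Continuing the sketch from Step 8 (it reuses the tests list built there), extracting that matrix is just a matter of counting tests per full environment:

```python
from collections import Counter

# "tests" is the leveled plan from the previous sketch; the final
# Environment Testing Matrix is one row per full environment.
matrix = Counter((t["server_os"], t["app_server"], t["database"])
                 for t in tests)
for env, n in sorted(matrix.items()):
    print(" / ".join(env), "->", n, "tests")
```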

This process is not infallible, but it has helped me plan my testing cycles for over 10 years now, with very good results.

