Most development projects need to handle multiple user environments; this is especially true for web applications and J2EE systems that support multiple back-end and front-end configurations at once. A major concern in these projects is how to test all the platform combinations needed to achieve the required level of environment support.
Since there is no economical or realistic way to test all the possible combinations, I use a method based on the pair-wise (or all-pairs) approach, testing the most common interactions between every two relevant environment variables. Citing Wikipedia and a description by Rex Black:
“…the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing…”
The question then becomes: how do we plan the testing strategy to cover all the possible pairs efficiently? For this I use an iterative heuristic based on simultaneous coverage matrices. The procedure is not as complicated as it sounds, and I will demonstrate it with a simplified example from a typical J2EE application testing process.
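To make the scale of the savings concrete, here is a small Python sketch (the parameter names and values are simply the ones from the example below) comparing the number of exhaustive full combinations with the number of distinct value pairs that pair-wise coverage needs to hit:

```python
from itertools import combinations, product

# The six parameters from the example below, with their possible values.
params = {
    "server_os": ["Win 2003", "AIX", "RH Linux"],
    "app_server": ["JBoss", "WebLogic"],
    "database": ["Oracle 9i", "Oracle 10g", "DB2"],
    "client_os": ["Win XP SP2", "Win Vista"],
    "browser": ["IE 6.5", "IE 7", "Firefox"],
    "user_type": ["Upgrading", "New"],
}

# Exhaustive testing: every full combination of all six parameters.
exhaustive = 1
for values in params.values():
    exhaustive *= len(values)

# Pair-wise testing only needs every value pair of every two parameters
# to appear at least once somewhere in the test set.
pairs = set()
for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
    for v1, v2 in product(v1s, v2s):
        pairs.add(((p1, v1), (p2, v2)))

print(exhaustive)   # 3*2*3*2*3*2 = 216 full combinations
print(len(pairs))   # 93 distinct pairs to cover
```

A well-leveled test set covers those 93 pairs with a small fraction of the 216 full environments.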
List all the relevant parameters and their possible values. Since our application under test is a J2EE system, we have both server-side and client-side components:
a. Server O/S
b. Server App Server
c. Server Database
d. Client O/S
e. Client Browser
f. Upgrading User Client vs. New User Client (this reflects whether the user has a clean environment or already has some of our previous components installed).
Define the testing coverage distribution rate for each parameter. One way of doing this is by estimating the percentage of users in each environment. In our example we will use the following distributions:
a. Win 2003 30% — AIX 30% — RH Linux 40%
b. JBoss 50% — WebLogic 50%
c. Oracle 9i 20% — Oracle 10g 50% — DB2 30%
d. Win XP SP2 75% — Win Vista 25%
e. IE 6.5 20% — IE 7 40% — Firefox 40%
f. Upgrading Users 60% — New Users 40%
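Turning these rates into concrete test counts is simple arithmetic; here is a small sketch of how it can be done (`allocate` is a hypothetical helper I wrote for illustration, not part of any tool mentioned here):

```python
# Hypothetical helper: split a total test count across the values of one
# parameter according to its coverage rates, using largest-remainder
# rounding so the parts always sum back to the total.
def allocate(total, rates):
    raw = {value: total * pct / 100 for value, pct in rates.items()}
    floored = {value: int(x) for value, x in raw.items()}
    remainder = total - sum(floored.values())
    # Hand the leftover tests to the values with the largest fractions.
    by_fraction = sorted(raw, key=lambda v: raw[v] - floored[v], reverse=True)
    for value in by_fraction[:remainder]:
        floored[value] += 1
    return floored

# Example: 100 tests spread across the Server O/S rates above.
print(allocate(100, {"Win 2003": 30, "AIX": 30, "RH Linux": 40}))
# {'Win 2003': 30, 'AIX': 30, 'RH Linux': 40}
```

With round numbers like 100 tests the split is exact; the remainder handling only matters for awkward totals.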
We make a list of all our tests and randomly assign each test an initial environment based on the rate numbers defined above.
Starting from this step I will use MS-Excel and Pivot Tables, mainly because I have not found any testing tool that provides these views in a way that makes these operations comfortable.
For each relevant combination, we create a Pivot Table with two of the parameters:
i. Server O/S vs. App Server
ii. Server O/S vs. DB Server
iii. DB Server vs. App Server
iv. Client O/S vs. Browser
v. Client O/S vs. User type
vi. Browser vs. User type
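For readers who prefer scripting over Excel, the same two-parameter views can be produced with pandas (the column names and the five sample test rows below are illustrative, not from the actual plan):

```python
import pandas as pd

# A tiny stand-in for the Test Execution Plan: one row per planned test,
# with the environment currently assigned to it.
plan = pd.DataFrame({
    "server_os":  ["AIX", "AIX", "RH Linux", "Win 2003", "RH Linux"],
    "app_server": ["JBoss", "WebLogic", "JBoss", "WebLogic", "JBoss"],
})

# The same view as the first Pivot Table: Server O/S vs. App Server,
# with row and column Grand Totals in the margins.
table = pd.crosstab(plan["server_os"], plan["app_server"],
                    margins=True, margins_name="Grand Total")
print(table)
```

Each cell counts the tests planned for that two-parameter environment, exactly like the body of the Excel Pivot Table.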
All the information for the tables is taken from our test list. The Grand Total for each parameter represents the target number of tests we want to perform on that individual environment, while the body of the matrix shows the 2-parameter environment combinations that are being covered based on our current Test Execution Plan.
Notice that in order to refresh the data in the Pivot Tables after changing the test distribution in the Execution Plan, you need to right-click the table itself and select Refresh.
Our iterative heuristic consists of leveling the numbers within the pivot tables while making sure that the Grand Totals are maintained as they are.
By “leveling” we mean that each row and column needs to have a statistical distribution similar to its respective Grand Total (these being the distribution rates we defined in Step 2 above for each of the individual parameters).
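Leveling has a simple numeric target: each cell should approach row total × column total ÷ grand total. A short sketch computing these targets for the first server-side table, assuming a plan of 100 tests and the rates from Step 2:

```python
# Targets for the Server O/S vs. App Server table, assuming 100 tests.
# A cell is "level" when it holds roughly row_total * col_total / grand_total.
row_totals = {"Win 2003": 30, "AIX": 30, "RH Linux": 40}   # Server O/S rates
col_totals = {"JBoss": 50, "WebLogic": 50}                 # App Server rates
grand_total = 100

targets = {
    (os_, srv): row_totals[os_] * col_totals[srv] / grand_total
    for os_ in row_totals
    for srv in col_totals
}
for (os_, srv), n in targets.items():
    print(f"{os_} / {srv}: {n:.0f} tests")
# e.g. RH Linux / JBoss: 20 tests, AIX / WebLogic: 15 tests
```

These are the numbers the manual moves in the next step are driving toward.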
I will start with the first Pivot Table in the server-side Excel tabs; in principle it is not important which table or tab we start with, since we will need to cover all of them.
I already explained that each cell in the table represents a testing environment defined by the parameters in its respective row and column, and the number in the cell represents the number of tests we are planning to run on each of these environments based on our current distribution. We now need to distribute the tests for each testing environment based on our coverage rates.
For example, to level the first table:
1. I will move 10 tests from RH-Linux/WebLogic to RH-Linux/JBoss
2. Then I’ll move 3 tests from AIX/JBoss to AIX/WebLogic and another 7 from Win-2003/JBoss to Win-2003/WebLogic in order to maintain the grand total numbers with the original distribution (50% JBoss & 50% WebLogic; 30% AIX & 40% RH-Linux & 30% Win-2003).
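The two moves can be checked quickly in code. The starting cell counts below are illustrative (reconstructed to be consistent with the moves in the text); only the moves themselves come from the example:

```python
# Illustrative starting counts for Server O/S vs. App Server, 100 tests total.
table = {
    ("RH Linux", "JBoss"): 10, ("RH Linux", "WebLogic"): 30,
    ("AIX", "JBoss"): 18,      ("AIX", "WebLogic"): 12,
    ("Win 2003", "JBoss"): 22, ("Win 2003", "WebLogic"): 8,
}

def move(table, n, src, dst):
    table[src] -= n
    table[dst] += n

move(table, 10, ("RH Linux", "WebLogic"), ("RH Linux", "JBoss"))  # step 1
move(table, 3, ("AIX", "JBoss"), ("AIX", "WebLogic"))             # step 2
move(table, 7, ("Win 2003", "JBoss"), ("Win 2003", "WebLogic"))   # step 2

jboss = sum(v for (os_, srv), v in table.items() if srv == "JBoss")
weblogic = sum(v for (os_, srv), v in table.items() if srv == "WebLogic")
print(jboss, weblogic)  # 50 50 -- the 50%/50% App Server split is preserved
```

Because every move stays within a single row, the Server O/S Grand Totals never change; pairing opposite moves across rows keeps the App Server totals intact too.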
It’s important to remember to make all the changes in the Test Execution Plan tabs and then refresh the table to see the correct distribution.
We perform the same operation for the second Pivot Table (DB Server vs. Server O/S), remembering that our goal is to maintain the Grand Totals while leveling the internal numbers.
The third table is handled the same way as the first two.
If you look now at our first Pivot Table in the Excel sheet, you will see that the internal rates we set at the beginning were distorted by the leveling we performed on the second and third tables. Don’t worry, this is a normal side-effect of the method!
To solve this we return to the first table and repeat the leveling operations until the internal distributions again match our target Grand Totals.
After finishing this operation we’ll see that all three Pivot Tables provide the internal and external coverage rates we are looking for, which means we can stop at this point. If the first correction pass had not achieved this, we would have continued with incremental changes on the following Pivot Tables until we reached the desired distribution (for reference, larger numbers of tests tend to take two to three complete iterations over all the Pivot Tables to reach the desired leveling).
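These repeated leveling passes behave very much like iterative proportional fitting: alternately scale each row and each column toward its target total until the matrix settles. A sketch of the same idea on a single distorted matrix (the starting numbers are illustrative):

```python
# Iterative-proportional-fitting-style leveling: alternately rescale rows
# and columns toward their target totals until the matrix settles.
def level(matrix, row_targets, col_targets, passes=3):
    for _ in range(passes):
        for i, row in enumerate(matrix):           # level each row
            s = sum(row)
            matrix[i] = [x * row_targets[i] / s for x in row]
        for j in range(len(col_targets)):          # level each column
            s = sum(row[j] for row in matrix)
            for row in matrix:
                row[j] *= col_targets[j] / s
    return matrix

# Distorted starting point: 100 tests; row targets 30/30/40, column
# targets 50/50 (the Server O/S and App Server rates from Step 2).
m = level([[25.0, 5.0], [10.0, 20.0], [30.0, 10.0]], [30, 30, 40], [50, 50])
for row in m:
    print([round(x, 1) for x in row])
```

After a few passes the column totals are exact and the row totals are within a fraction of a test of their targets, mirroring the two-to-three manual iterations the article describes; the final fractional values just need rounding back to whole tests.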
After finishing the server-side configuration we need to do the same for the client side.
Since the server-side and client-side configurations and their respective parameters are independent of each other, the changes made here will have no effect on the three matrices we just leveled.
After all our iterations we can go back to our Test Execution Plan and extract from it our new leveled and efficient Environment Testing Matrix.
This process is not infallible, but it has helped me plan my testing cycles for over 10 years now, with very good results.