
DesignWise allows you to adjust testing coverage so that more thorough coverage is focused on selected, high-priority areas.

One straightforward risk-based testing technique (at the value level) is to simply enter a specific value multiple times in the “Inputs” screen. Entering “Male, Male, Female, Male” as Values, for example, would result in “Male” appearing three times as often as “Female.”
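To see why duplicating a value skews its frequency, here is a minimal sketch (not DesignWise's actual algorithm) that combines the weighted "Gender" values with a second, made-up parameter using a full cross product. The 3:1 listing ratio carries straight through to the generated rows:

```python
from itertools import product

# Entering a value multiple times weights it: "Male" listed three
# times yields a 3:1 ratio over "Female" in the generated rows.
gender_values = ["Male", "Male", "Female", "Male"]
regions = ["US", "CA"]  # a second, hypothetical parameter

rows = list(product(gender_values, regions))
male_rows = sum(1 for g, _ in rows if g == "Male")
female_rows = sum(1 for g, _ in rows if g == "Female")
print(male_rows, female_rows)  # "Male" rows outnumber "Female" rows 3:1
```

A pairwise generator will not emit the full cross product, but the same weighting effect applies: values that appear more often in the inputs appear more often in the scenarios.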

A more powerful risk-based testing strategy (at the parameter level) is generating Mixed-Strength test sets, as described in the following example. This feature can be helpful to use in many (if not most) of the plans you will create.



Let’s set the context and the problem we’re trying to address

We have a mission-critical application that includes several large changes in this release. We’ve filled in parameters for our System Under Test, and we see that a 2-way solution based on our variation ideas would require 87 scenarios.

Curious to see how many tests would be needed for a more thorough test plan? Generate a set of 3-way tests! Unfortunately, this more-thorough solution requires almost 5 times as many tests. Ugh! We don’t have time for all of those!
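A quick way to see why 3-way test sets balloon is to count the coverage targets: the number of distinct value combinations that must each land in at least one test. A short sketch, using hypothetical parameter sizes (only the three high-priority parameters from this example, with 3, 6, and 14 values, are real; the rest are made up):

```python
from itertools import combinations
from math import prod

# Hypothetical plan: the first three sizes match this example's
# high-priority parameters (3, 6, 14); the others are invented.
sizes = [3, 6, 14, 2, 4, 5]

def t_way_targets(sizes, t):
    """Number of distinct t-way value combinations that full t-way
    coverage must place into at least one test."""
    return sum(prod(group) for group in combinations(sizes, t))

print(t_way_targets(sizes, 2))  # 2-way coverage targets
print(t_way_targets(sizes, 3))  # 3-way targets: several times more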


Determine which Values we want to devote more thorough testing to

What we want to do now is to generate a set of scenarios that will focus extra coverage on the high-priority parameters below, while maintaining pairwise coverage for every Value in our plan. Creating a “Mixed-strength” model will allow us to do exactly that!

For this example, let's assume "User Type", "Customer Authorization Limit", and "Transaction Exchange (Country)" are the most important parameters, and that we must cover all triplets of values across them. Select “Mixed-strength interactions” from the drop-down list on the “Scenarios” screen:

Next, select the 3-way coverage option for the critical parameters we identified above. Click on “Reapply” to generate a new set of scenarios:

Another way to look at the set of Risk-Based Scenarios we have just created is to imagine temporarily removing everything from our plan except the high-priority Parameters. With 3 values for "User Type", 6 for "Customer Authorization Limit", and 14 for "Transaction Exchange (Country)", simple math tells us that testing every possible combination requires 3 × 6 × 14 = 252 tests. All 252 of these 3-way combinations are included in the Mixed-Strength (“Risk-Based Testing”) scenarios we created.
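That arithmetic is easy to verify by enumerating the triplets directly (the value names below are placeholders, not the real plan's values):

```python
from itertools import product

# Placeholder values for the three high-priority parameters.
user_types = [f"UT{i}" for i in range(3)]    # 3 values
auth_limits = [f"AL{i}" for i in range(6)]   # 6 values
countries = [f"TX{i}" for i in range(14)]    # 14 values

triplets = list(product(user_types, auth_limits, countries))
print(len(triplets))  # 3 * 6 * 14 = 252 distinct triplets
```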

In addition to achieving comprehensive 3-way coverage of the high priority parameters we identified, the 252 Mixed-strength scenarios we created ALSO made sure that we tested every single pair of Values together in at least one test case.

The DesignWise test generation algorithm is able to achieve both of these objectives in only 252 tests. We were able to focus the additional coverage where we wanted it without generating lots of additional scenarios that we do not have time to execute.
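One way such a result *could* be produced is sketched below. This is emphatically not DesignWise's actual algorithm, just a simple greedy illustration under made-up assumptions: three high-priority parameters get full 3-way coverage as base rows, and two invented low-priority parameters ("Browser", "OS") are then filled in greedily so that every remaining pair of values is also covered, without adding any rows beyond the 252:

```python
from itertools import combinations, product

# Hypothetical plan. The three high-priority parameters match this
# example's sizes (3, 6, 14); "Browser" and "OS" are invented.
params = {
    "UserType": [f"UT{i}" for i in range(3)],
    "AuthLimit": [f"AL{i}" for i in range(6)],
    "Country": [f"TX{i}" for i in range(14)],
    "Browser": ["Chrome", "Firefox"],
    "OS": ["Windows", "macOS", "Linux"],
}
high = ["UserType", "AuthLimit", "Country"]
low = [p for p in params if p not in high]

# Base rows: every triplet of the high-priority values (252 rows).
tests = [dict(zip(high, combo))
         for combo in product(*(params[p] for p in high))]

# Pairs still needing coverage: any pair involving a low parameter.
uncovered = set()
for p1, p2 in combinations(params, 2):
    if p1 in high and p2 in high:
        continue  # already covered inside the 252 base rows
    for v1, v2 in product(params[p1], params[p2]):
        uncovered.add(frozenset({(p1, v1), (p2, v2)}))

for test in tests:
    # Greedily pick the low-priority value covering the most new pairs.
    for p in low:
        def gain(v):
            trial = set({**test, p: v}.items())
            return sum(1 for pair in uncovered if pair <= trial)
        test[p] = max(params[p], key=gain)
    for pair in combinations(test.items(), 2):
        uncovered.discard(frozenset(pair))

print(len(tests), "tests,", len(uncovered), "pairs left uncovered")
```

Because every value appears in many of the 252 base rows, the greedy fill-in has plenty of room to place every pair, which is why the extra pairwise coverage comes "for free" in this sketch.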



Use "Mixed-Strength" scenarios more often than "regular" higher-strength ones


Explaining why it is often recommended to use Mixed-Strength coverage rather than 3-way or 4-way options.

Written by Scott Johnson
Updated over a week ago

This lesson explains why testers will usually find that Mixed-Strength scenarios provide a better balance of "additional thoroughness" vs. "additional time required" than regular 3-way, 4-way, 5-way, or 6-way tests.


Many testers new to this kind of testing are too quick to use sets of 3-way tests when they are seeking more-thorough coverage than pairwise sets can provide

Instead of executing an entire set of higher-strength tests, it is usually more efficient and effective to execute a set of Mixed-Strength tests. These tests can add thoroughness in selected areas (where you want the extra thoroughness), and sets of Mixed-Strength scenarios will usually contain fewer tests than "regular" higher-strength models.


There are only two possible reasons that a set of 2-way tests could fail to trigger a software defect:

  1. There was a missing test "idea" (e.g., the only way the defect could be found is if the application were tested using a specific operating system and that specific system was not included as one of the parameters/values).

  2. All of the test ideas and test conditions were included as values, but the defect could only be triggered by the scenario that included three or more of those existing test conditions together at the same time.

In our experience working with hundreds of software teams, the first reason (not thinking to include a particular test idea) is responsible for more defects slipping by testing than the second one (specific combinations of 3 or more already-included ideas).
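The second reason can be made concrete with a toy example. Below, a hypothetical defect fires only when three specific values occur together. A classic 4-test pairwise-covering set for three binary parameters exercises every pair of values, yet never triggers the defect, because the failing triple never appears in any single test:

```python
from itertools import combinations, product

def buggy(x, y, z):
    # Hypothetical defect: triggered only by this exact 3-way combination.
    return "crash" if (x, y, z) == (1, 1, 1) else "ok"

# A classic 4-test pairwise-covering set for three binary parameters.
pairwise_tests = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Every 2-way combination of values appears in at least one test...
for i, j in combinations(range(3), 2):
    seen = {(t[i], t[j]) for t in pairwise_tests}
    assert seen == set(product([0, 1], repeat=2))

# ...yet every test passes, because the defect needs all three at once.
results = [buggy(*t) for t in pairwise_tests]
print(results)
```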


Accordingly, this is what is recommended for testers who are planning to manually execute sets of 3-way scenarios:

  • Don't*

  • At least not until you first experiment a bit with executing well-thought-through sets of Mixed-Strength scenarios

  • Why?  90% of the extra thoroughness you might be looking for can probably be achieved by a well-thought-out set of mixed-strength tests. These tests might be half as numerous as the full, "regular" higher-strength test set.

  • The team should use the time saved by not executing all of those additional tests to add more wrinkles/testing ideas into their testing.

*This advice is for manual testing projects (where the cost of executing extra tests is relatively high), not for automated test execution projects (where the cost of executing extra tests is relatively low).


Examples of good sources for additional testing ideas include these:
