
Learn how to use 4 “levers” in Test Case Designer to adjust your suite size to your needs.

Overview

The default 2-way set of scenarios generated by Test Case Designer is often a solid choice regarding the balance of quantity and quality. However, it may not be right for you, depending on the project scope, timelines, etc.

There are 4 strategies that can help you optimize the scenario suite further and tailor it to your goals. We will look at the “metamorphosis” of a single model and discuss those features in increasing order of effort.

The base model:

[Image: the base model]


The “benchmark” scenario count: 152 at 2-way strength.


Info
We have chosen a small number of parameters to make it easier to track the impact. The steps below are applicable regardless of the model size, but the count reduction metric would, of course, depend on the model shape.
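
To see where a number like 152 comes from conceptually: 2-way strength means every pair of values across every pair of parameters must appear together at least once, and each scenario satisfies many such pairs at the same time. TCD performs this bookkeeping internally; as a rough back-of-the-envelope sketch in Python (the parameter names and values below are illustrative assumptions, not the exact model from the screenshot):

    from itertools import combinations

    # Illustrative stand-in model (assumption; the real model in the
    # screenshot above has a different shape and scenario count).
    model = {
        "Applicant Age": ["16", "17-25", "26-60", "61+"],
        "State": ["CA", "IN", "TX", "NY"],
        "Roof Shape": ["Gable", "Hip", "Flat"],
        "Coverage Tier": ["Basic", "Standard", "Premium"],
    }

    # 100% 2-way coverage = every value pair across every parameter pair
    # appears in at least one generated scenario.
    total_pairs = sum(
        len(model[a]) * len(model[b]) for a, b in combinations(model, 2)
    )
    print(total_pairs)  # 16 + 12 + 12 + 12 + 12 + 9 = 73 pairs to cover

Because each scenario covers one pair for every combination of two parameters, the optimized suite is far smaller than the full Cartesian product of all values.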


Strategy 1 – Select a subset of tests

This method leverages the Analysis features of TCD, specifically the Coverage Matrix in this example:

...

Based on your system knowledge, project scope, and risk tolerance, there may be a better stopping point than 100% pairwise coverage. The visualization above will help you find it, as the tooltip tells you exactly which interactions are “sacrificed”.

Info
If you are generally happy with the 80% state but there are a few gaps you want to close, Forced Interactions could be used to achieve that efficiently (if moving the slider does not).

Impact – reduces the count by 42 at 80%.

Please note – when you export or sync test sets, 100% of your tests will be migrated, so you would need to remove the extra scenarios after the export/sync operation.
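
Conceptually, the slider walks a cumulative coverage curve: each additional scenario covers some not-yet-seen pairs, with steep gains early and diminishing returns near 100%. A minimal sketch of that arithmetic (assuming scenarios are represented as parameter-to-value dicts; this mirrors the idea behind the Coverage Matrix, not TCD’s actual implementation):

    from itertools import combinations

    def pairs_of(test):
        """All 2-way (parameter, value) pairs exercised by one scenario."""
        return {
            ((a, test[a]), (b, test[b])) for a, b in combinations(sorted(test), 2)
        }

    def coverage_curve(tests):
        """Cumulative pairwise coverage (%) after each scenario, in suite order."""
        # A 100%-coverage suite touches every pair, so its union is the universe.
        universe = set().union(*(pairs_of(t) for t in tests))
        seen, curve = set(), []
        for t in tests:
            seen |= pairs_of(t)
            curve.append(round(100 * len(seen) / len(universe), 1))
        return curve

    # Pick the first index where the curve clears your target, e.g. 80%:
    # cutoff = next(i for i, c in enumerate(coverage_curve(tests), 1) if c >= 80)

Everything after that cutoff is the “tail” of diminishing returns that the slider trims off.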


Strategy 2 – Adjust the coverage dial setting away from 2-way

Commonly, parameters in the model do not all have the same level of significance; therefore, risk-based testing is applied. In Test Case Designer, that feature is called “mixed-strength testing”.

Let’s say, in this example, Applicant Age is not as relevant, and getting each value once would be sufficient (i.e., its interactions don’t matter). We can reduce the setting for that parameter to 1-way: 

...

Potential downside – especially early in the project, the exact settings are often “best guesses” based on personal judgment.
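
The count drops because demoting a parameter to 1-way shrinks the set of interactions the generator must cover: each of its values only needs to appear once, instead of pairing with every value of every other parameter. A rough sketch of that bookkeeping (same illustrative model as earlier; an assumption, not TCD’s algorithm):

    from itertools import combinations

    model = {
        "Applicant Age": ["16", "17-25", "26-60", "61+"],
        "State": ["CA", "IN", "TX", "NY"],
        "Roof Shape": ["Gable", "Hip", "Flat"],
        "Coverage Tier": ["Basic", "Standard", "Premium"],
    }
    one_way = {"Applicant Age"}  # demoted from 2-way to 1-way

    two_way = [p for p in model if p not in one_way]
    required = sum(
        len(model[a]) * len(model[b]) for a, b in combinations(two_way, 2)
    ) + sum(len(model[p]) for p in one_way)  # each demoted value appears once
    print(required)  # 12 + 12 + 9 + 4 = 37 interactions, vs. 73 at full 2-way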


Strategy 3 – Change how you describe your inputs (e.g., with optimizations to Parameters, Values, and/or Value Expansions)

It is also common that not all parameter values are equally important – the system rules can be structured in such a way that ages within a certain range behave similarly, state groups have the same laws, and roof shapes have the same coefficients in the pricing engine.

The TCD features to account for that are the variations of the “Equivalence Classes” approach – Ranged Values and Value Expansions:


Keep the system rules in mind when applying this method, as it is impossible to apply constraints to value expansions in TCD.

Info
It is not required to have all values grouped into expansions or to have ranges cover the whole continuum (i.e., it’s ok to have a break between 50 and 55 as long as the ranged values don’t overlap).

Impact – reduces the count by 128 with the restructuring of all parameters.

Potential downside – excessively aggressive grouping may lead to some value expansions being completely removed from scenarios (e.g., if there are 10 expansions but only 8 scenarios with the value, the last 2 expansions would never be used).
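
The reduction is large because grouping collapses the value counts that drive the pairwise arithmetic. A hedged sketch (the raw and grouped value lists here are invented for illustration):

    from itertools import combinations

    def pair_count(model):
        """Number of distinct 2-way interactions a pairwise suite must cover."""
        return sum(
            len(model[a]) * len(model[b]) for a, b in combinations(model, 2)
        )

    raw = {
        "Applicant Age": [str(a) for a in range(16, 76)],  # 60 individual ages
        "State": ["CA", "IN", "TX", "NY"],
        "Roof Shape": ["Gable", "Hip", "Flat"],
    }
    grouped = {
        "Applicant Age": ["16-25", "26-60", "61-75"],  # ranged values
        "State": ["CA", "IN", "TX", "NY"],
        "Roof Shape": ["Gable", "Hip", "Flat"],
    }
    print(pair_count(raw), pair_count(grouped))  # 432 vs. 33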

Strategy 4 – Add "artificial" constraints

The general logic of this method is invalidating interactions that can happen but that we are not interested in testing.

For this example, let’s say fewer than 1% of our customers a) have a Gable roof in IN; b) are 16 y.o. in CA, so we exclude those pairs:


Info
One-off situations can still be incorporated via Forced Interactions overriding the constraint.

Impact – reduces the count by 5 with 2 artificial constraints.

Potential downside – 1) keeping track of which constraints are artificial vs. real; 2) needing enough artificial constraints to move the needle in scenario count.
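
In interaction terms, each artificial constraint removes pairs from the set the generator must cover (and forbids them from appearing in any scenario). A small sketch, reusing the illustrative model, with two exclusions mirroring the Gable-in-IN and 16-in-CA examples above:

    from itertools import combinations

    model = {
        "Applicant Age": ["16", "17-25", "26-60", "61+"],
        "State": ["CA", "IN", "TX", "NY"],
        "Roof Shape": ["Gable", "Hip", "Flat"],
    }

    # Every 2-way interaction, as ((param, value), (param, value)) tuples
    # ordered by the model's insertion order.
    universe = {
        ((a, va), (b, vb))
        for a, b in combinations(model, 2)
        for va in model[a]
        for vb in model[b]
    }

    # "Artificial" constraints: combinations that do occur in production
    # but are too rare to be worth a scenario.
    excluded = {
        (("Applicant Age", "16"), ("State", "CA")),
        (("State", "IN"), ("Roof Shape", "Gable")),
    }
    print(len(universe), len(universe - excluded))  # 40 -> 38 pairs to cover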


Conclusion

The methods described above are not mutually exclusive. At the extreme end of the spectrum, you can use all 4 together and settle on, say, 80% of mixed-strength coverage after the parameter restructure and 2 artificial constraints (which results in just 18 scenarios – a count reduction of 134 compared to the benchmark).

The reverse is also true – you can boost the thoroughness by increasing the algorithm settings, “unpacking” value expansions into standalone values, etc.

As you can see, the “test case count” metric is very volatile in model-based testing, so you don’t need to settle for an excessive (or insufficient) count with the default settings. Instead, re-evaluate the model elements and identify which ones need to be tweaked – the impact on the count is often disproportionately large.

...