The default 2-way set of scenarios generated by the Test Case Designer is often a solid choice in terms of the balance between quantity and quality. However, it may not be right for you, depending on the project scope, timelines, etc.

There are 4 strategies that can help you optimize the scenario suite further and tailor it to your goals. We will look at the “metamorphosis” of a single model and discuss those features in increasing order of effort.

The base model:

[Image: the base model in Test Case Designer]


The “benchmark” scenario count: 152 at 2-way strength.
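To make the 2-way idea concrete outside the tool, here is a minimal pairwise-generation sketch in Python using the open-source allpairspy library. The parameter names are hypothetical stand-ins for an insurance-style model, not the actual TCD model above:

```python
from allpairspy import AllPairs

# Hypothetical stand-ins for the model's parameters (not the real TCD model)
parameters = [
    ["CA", "TX", "NY"],            # State
    ["Gable", "Hip", "Flat"],      # Roof Shape
    ["18-25", "26-49", "50+"],     # Applicant Age
    ["Yes", "No"],                 # Prior Claims
]

# AllPairs yields a row set in which every pair of values across any two
# parameters appears at least once - the same 2-way guarantee TCD provides.
for i, row in enumerate(AllPairs(parameters), 1):
    print("{:2d}: {}".format(i, row))
```

With parameters of sizes 3/3/3/2, pairwise needs only around 9–11 rows instead of the 54 exhaustive combinations – the same quantity/quality trade-off the benchmark count of 152 reflects at full model scale.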

...

Info
If you are generally happy with, say, the 80% state but there are a few gaps you want to close, Forced Interactions could be used to achieve that efficiently (if moving the slider does not).

...

Strategy 2 – Adjust the coverage dial setting away from 2-way

Commonly, parameters in the model do not all have the same level of significance; therefore, risk-based testing is applied. In Test Case Designer, that feature is called “mixed-strength testing”.

Let’s say, in this example, Applicant Age is not as relevant, and getting each value once would be sufficient (i.e., its interactions don’t matter). We can reduce the setting for that parameter to 1-way: 

...
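Outside the tool, the effect of dropping a parameter to 1-way can be sketched by generating 2-way rows for the significant parameters only and then cycling the minor parameter’s values across those rows so each appears at least once (parameter names and the allpairspy library are illustrative assumptions):

```python
from itertools import cycle
from allpairspy import AllPairs

# Full 2-way interactions for the significant parameters only
strong_params = [
    ["CA", "TX", "NY"],            # State
    ["Gable", "Hip", "Flat"],      # Roof Shape
    ["Yes", "No"],                 # Prior Claims
]

# Applicant Age is 1-way: each value must show up at least once,
# but its pairings with the other parameters are not tracked.
ages = cycle(["18-25", "26-49", "50+"])

rows = [list(row) + [next(ages)] for row in AllPairs(strong_params)]
for i, row in enumerate(rows, 1):
    print("{:2d}: {}".format(i, row))
```

Because Applicant Age no longer multiplies the pair-coverage requirements, the row count is driven by the remaining 3×3 pairs (around 9 rows) rather than growing with the number of age values.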

It is also common that not all parameter values are equally important – the system rules can be structured in such a way that ages within a certain range behave similarly, state groups have the same laws, and roof shapes have the same coefficients in the pricing engine.

The TCD features to account for that are the variations of the "Equivalence Classes" approach - Ranged Values and Value Expansions:

...
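The underlying idea can be sketched as classic equivalence partitioning: the generator works with one representative per class, and a concrete value is drawn from the class only when the test is executed (all names below are hypothetical, and TCD handles this step internally):

```python
import random

# Hypothetical classes: one "ranged value" per age band and one
# "value expansion" per state group that shares the same laws.
ranged_ages = {
    "18-25": range(18, 26),
    "26-49": range(26, 50),
    "50+":   range(50, 86),
}
state_groups = {
    "Group A": ["CA", "NY"],
    "Group B": ["TX", "FL", "OH"],
}

def concretize(age_band: str, state_group: str) -> tuple:
    """Turn class representatives into one executable concrete test."""
    age = random.choice(list(ranged_ages[age_band]))
    state = random.choice(state_groups[state_group])
    return age, state

print(concretize("26-49", "Group B"))  # e.g., (34, 'FL')
```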

Keep the system rules in mind when applying this method, as it is not possible to apply constraints to value expansions in TCD.

Info
It is not required to have all values grouped into expansions or to have ranges cover the whole continuum (i.e., it’s ok to have a break between 50 and 55 as long as the ranged values don’t overlap).

...

Potential downside – excessively aggressive grouping may lead to some value expansions being completely removed from scenarios (i.e., if there are 10 expansions but only 8 scenarios with the value, the last 2 expansions would never be used).

...

The general logic of this method is invalidating interactions that can happen in the system but that we are not interested in testing.
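As a sketch of the same idea in code, a pairwise generator can be handed a “fake” constraint that rejects a combination which is perfectly valid in the system but not worth scenario budget (again using allpairspy and hypothetical parameter names; in TCD this is done through the constraints feature):

```python
from allpairspy import AllPairs

parameters = [
    ["CA", "TX", "NY"],            # State
    ["Gable", "Hip", "Flat"],      # Roof Shape
    ["18-25", "26-49", "50+"],     # Applicant Age
]

def artificial_constraint(row):
    """Reject TX + Flat - a combination the system allows, but whose
    interactions we have decided not to spend scenarios on."""
    if len(row) >= 2 and row[0] == "TX" and row[1] == "Flat":
        return False
    return True

# filter_func is called with partial rows while the suite is being built,
# so the TX/Flat pair never has to be covered and the count can shrink.
for i, row in enumerate(AllPairs(parameters, filter_func=artificial_constraint), 1):
    print("{:2d}: {}".format(i, row))
```

Whether a given fake constraint actually reduces the count depends on how many pair-coverage obligations it removes, which leads to the downsides noted below.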

...

Potential downsides – 1) keeping track of which constraints are fake vs. real; 2) implementing enough fake constraints to move the needle on the scenario count.

...

The methods described above are not mutually exclusive. At the extreme end of the spectrum, you can use all 4 together and settle on, say, 80% of mixed-strength after the parameter restructure and 2 artificial constraints (which results in just 18 scenarios – a count reduction of 134 compared to the benchmark).

The reverse is also true – you can boost the thoroughness by increasing the algorithm settings, “unpacking” value expansions into standalone values, etc.

As you can see, the “test case count” metric is very volatile in model-based testing, so you don’t need to settle for the excessive (or insufficient) count produced by the default settings. Instead, re-evaluate the model elements and identify which ones need to be tweaked – the impact on the count is often disproportionately large.

...