
Learn how to use 4 “levers” in DesignWise to adjust your suite size to your needs.

Overview

The default 2-way set of scenarios generated by DesignWise is often a solid choice when it comes to the balance of quantity and quality. However, it may not be right for you depending on the project scope, timelines, etc.

There are 4 strategies that can help you optimize the scenario suite further and tailor it to your goals. We will look at the “metamorphosis” of a single model and discuss these features in increasing order of effort.

The base model:


The “benchmark” scenario count: 152 at 2-way strength.

Side note: we have chosen a small number of parameters to make it easier to track the impact. The steps below are applicable regardless of the model size, but the count reduction metric would of course depend on the plan shape.
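To make the 2-way benchmark more tangible, here is a minimal Python sketch (not DesignWise code; the parameters and values are hypothetical stand-ins for the real model) that counts the value pairs a 100% pairwise suite must cover and contrasts that with the exhaustive combination count:

from itertools import combinations, product

# Hypothetical model, loosely inspired by the example (the real plan has more values).
model = {
    "State": ["CA", "IN", "TX"],
    "Roof Shape": ["Gable", "Hip", "Flat"],
    "Applicant Age": ["16-25", "26-50", "51+"],
    "Prior Claims": ["0", "1", "2+"],
}

# A 2-way suite must cover every value pair of every parameter pair at least once.
pair_targets = sum(len(model[p1]) * len(model[p2]) for p1, p2 in combinations(model, 2))

print("value pairs to cover at 2-way strength:", pair_targets)             # 54
print("exhaustive combinations:", len(list(product(*model.values()))))     # 81

Because one scenario covers many pairs at once, a pairwise generator can satisfy those 54 targets in roughly a dozen scenarios, which is why the 2-way suite is far smaller than the exhaustive one.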


Strategy 1 – Select a subset of tests

This method leverages the Analysis features of DesignWise, specifically the Coverage Matrix in this example:

Based on your system knowledge, project scope, and risk tolerance, there may be a better stopping point than 100% pairwise coverage. The visualization above will help you find it: the tooltip tells you exactly which interactions are “sacrificed” at any given point on the curve.

Side note: if you are generally happy with, say, the 80% state but there are a few gaps you want to close, forced interactions could be used to achieve that efficiently (if moving the slider does not).

Impact – reduces the count by 42 scenarios at 80% pairwise coverage.

Please note – when you export test sets, 100% of your tests will be exported, so you would need to remove the extra scenarios post-export.
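To illustrate what the slider is doing, the sketch below replays a small, hand-written suite in order and prints the cumulative pairwise coverage after each test – essentially the curve the Coverage Matrix chart plots. The test rows here are hypothetical, not the exported DesignWise suite:

from itertools import combinations

# Hypothetical test rows (columns: State, Roof Shape, Applicant Age).
tests = [
    ("CA", "Gable", "16-25"),
    ("IN", "Hip",   "26-50"),
    ("TX", "Flat",  "51+"),
    ("CA", "Hip",   "51+"),
    ("IN", "Flat",  "16-25"),
    ("TX", "Gable", "26-50"),
    ("CA", "Flat",  "26-50"),
    ("IN", "Gable", "51+"),
    ("TX", "Hip",   "16-25"),
]

columns = range(len(tests[0]))
values = [sorted({row[i] for row in tests}) for i in columns]

# Every value pair a 100% pairwise suite must cover.
all_pairs = {
    ((i, a), (j, b))
    for i, j in combinations(columns, 2)
    for a in values[i]
    for b in values[j]
}

covered = set()
for n, row in enumerate(tests, start=1):
    covered |= {((i, row[i]), (j, row[j])) for i, j in combinations(columns, 2)}
    print(f"after test {n}: {len(covered) / len(all_pairs):.0%} pairwise coverage")

Run against your own exported suite, this kind of check shows the row at which coverage reaches your chosen stopping point, so you know how many trailing scenarios to drop post-export.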


Strategy 2 – Adjust the coverage dial setting away from 2-way

It is common for the parameters in a model to differ in significance, which is where risk-based testing comes in. In DesignWise, that feature is called “mixed-strength testing”.

Let’s say, in this example, Applicant Age is not as relevant, and getting each value once would be sufficient (i.e. its interactions don’t matter). We can reduce the setting for that parameter to 1-way: 

Impact – reduces the count by 52 with the largest parameter set to 1-way.

Potential downside – especially early in the project, the exact settings are often “best guesses” based on personal judgment.
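A rough way to see why this reduces the count is to tally the coverage obligations under each setting. The sketch below uses hypothetical parameters and does not reproduce the DesignWise algorithm; it only counts the targets the generator has to satisfy:

from itertools import combinations

model = {
    "State": ["CA", "IN", "TX", "NY"],
    "Roof Shape": ["Gable", "Hip", "Flat"],
    "Applicant Age": ["16-25", "26-50", "51-65", "66+"],  # hypothetical bands
}
strength = {"State": 2, "Roof Shape": 2, "Applicant Age": 1}  # mixed-strength settings

# At 2-way, every value pair between two 2-way parameters is a target.
two_way_targets = sum(
    len(model[p1]) * len(model[p2])
    for p1, p2 in combinations(model, 2)
    if strength[p1] == 2 and strength[p2] == 2
)
# At 1-way, each value of the parameter only needs to appear once.
one_way_targets = sum(len(vals) for p, vals in model.items() if strength[p] == 1)

uniform = sum(len(model[p1]) * len(model[p2]) for p1, p2 in combinations(model, 2))
print("targets with everything at 2-way:", uniform)                                # 40
print("targets with Applicant Age at 1-way:", two_way_targets + one_way_targets)   # 16

Fewer targets generally translates into fewer generated scenarios, which is the effect the mixed-strength dial exploits.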


Strategy 3 – Change how you describe your inputs (e.g., with changes to Parameters, Values, and/or Value Expansions)

It is also common that not all parameter values are equally important – the system rules can be structured in such a way that ages within a certain range behave similarly, state groups have the same laws, and roof shapes have the same coefficients in the pricing engine.

The DesignWise features to account for that are Ranged Values and Value Expansions:

Keep the system rules in mind when applying this method, as it is not possible to apply constraints to value expansions.

Side note: it is not required to have all values grouped into expansions or to have ranges cover the whole continuum (i.e. it’s ok to have a break between 50 and 55 as long as the ranged values don’t overlap).

Impact – reduces the count by 128 after restructuring all parameters.

Potential downside – excessively aggressive grouping may lead to some value expansions never appearing in any scenario (e.g., if a value has 10 expansions but appears in only 8 scenarios, the last 2 expansions would never be used).
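Conceptually, a value expansion lets the design work with a small set of group labels while each exported scenario substitutes one concrete member of the group. The sketch below is a hypothetical illustration of that substitution step; the groupings and the random pick are assumptions for illustration, not how DesignWise necessarily assigns members:

import random

# Hypothetical groupings: the model is designed against the group labels.
value_expansions = {
    "Age 16-25": ["16", "18", "21", "25"],
    "Age 26-50": ["26", "35", "50"],
    "West states": ["CA", "WA", "OR"],
    "Midwest states": ["IN", "OH", "MI"],
}

def expand(label: str, rng: random.Random) -> str:
    """Swap a group label for one of its concrete members at export time."""
    members = value_expansions.get(label)
    return rng.choice(members) if members else label

rng = random.Random(0)
designed_row = {"Applicant Age": "Age 16-25", "State": "West states", "Roof Shape": "Gable"}
exported_row = {param: expand(value, rng) for param, value in designed_row.items()}
print(exported_row)

Because the generator only has to pair up the group labels (a few labels instead of every concrete member), the number of interactions it must cover, and therefore the scenario count, drops sharply.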

Strategy 4 – Add artificial constraints

The general logic of this method is to invalidate interactions that can occur in reality but are not interesting enough to test.

For this example, let’s say fewer than 1% of our customers a) have a Gable roof in IN, or b) are 16 y.o. in CA, so we exclude those pairs:

Side note: one-off situations can still be incorporated via forced interactions, which override the constraint.

Impact – reduces the count by 5 with 2 artificial constraints.

Potential downside – 1) keeping track of which constraints are artificial vs. real; 2) implementing enough artificial constraints to move the needle on the scenario count.
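As a thought experiment, the two artificial constraints can be viewed as a filter over candidate combinations. The sketch below uses hypothetical values and exhaustive combinations to show the filtering idea; in a generated suite the reduction is smaller (5 scenarios here), because the generator only needs to avoid the two excluded pairs rather than drop whole combinations:

from itertools import product

model = {
    "State": ["CA", "IN", "TX"],
    "Roof Shape": ["Gable", "Hip", "Flat"],
    "Applicant Age": ["16", "30", "55"],
}

def violates_artificial_constraints(row: dict) -> bool:
    """Pairs that are possible in production but too rare to be worth testing."""
    if row["State"] == "IN" and row["Roof Shape"] == "Gable":
        return True
    if row["State"] == "CA" and row["Applicant Age"] == "16":
        return True
    return False

all_rows = [dict(zip(model, combo)) for combo in product(*model.values())]
valid_rows = [row for row in all_rows if not violates_artificial_constraints(row)]
print(len(all_rows), "exhaustive combinations,", len(valid_rows), "left after the 2 artificial constraints")

Keeping the constraint logic in one clearly named place (whether in the tool’s notes or in a helper like this) also makes it easier to remember which constraints are artificial and which reflect real system rules.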


Conclusion

The methods described above are not mutually exclusive. At the extreme end of the spectrum, you can use all 4 together and settle on, say, 80% coverage of the mixed-strength suite after the parameter restructure and 2 artificial constraints (which results in just 18 scenarios – a reduction of 134 compared to the benchmark of 152).

The reverse is also true – you can boost the thoroughness by increasing the algorithm settings, “unpacking” value expansions into standalone values, etc.

As you can see, the “test case count” metric is very malleable in model-based testing, so you don’t need to settle for an excessive (or insufficient) count produced by the default settings. Instead, re-evaluate the model elements and identify which ones need to be tweaked – the impact on the count is often disproportionately large.

Finding the initial quality/quantity balance often requires at least a few subjective assumptions based on SME knowledge, so it is important to establish a longer-term feedback loop and track the execution results, specifically the scenario-to-defect ratio. If it stays consistently in the double digits, that may be a sign to try the methods described in this article.

