Learn about optimizing coverage, traceability, and the E2E scenario count in TCD, using a Guidewire implementation as an example.

Core Testing Challenge

While functional testing is a common entry point for Test Case Designer implementations, and we have multiple articles describing the benefits achieved, integration testing (both system-to-system and E2E) is often an even better fit because of the increase in scale and in the number of dependencies (and, consequently, in the number of important possible interactions).

However, with that increase comes greater difficulty in decision-making, leading to the major challenge in testing complex systems – “how much testing is enough?”

In such situations, teams use TCD to quickly determine the optimal answers to both (1) “how many tests?” and (2) “which specific tests?”.

We will describe how these benefits come to life using a generalized example of a Guidewire suite implementation at a large insurance client (specifically, the “Auto Policy Bind and Rewrite” workflow).

Test Case Designer Solution & Modeling Process

Before we start the design, the following decisions need to be made with feedback from all stakeholders involved in the process:

Note 1: The variation % of 70 is used for example purposes and highly depends on the project infrastructure, rule complexity, etc. But that value is fairly common across the implementations we have seen.

Note 2: The execution/design split is rarely 100 to 0 in either direction, so the decision is more between, e.g., 70/30 or 30/70.

Note 3: Release history and functional/integration testing changes could alter the approach decisions throughout the project lifecycle.

In this article, we will focus on a more “strategic” level (“Yes” for the first diamond) and will briefly touch upon the extra steps/considerations to achieve the “No -> Design” tree path.

Building a Model – Step 1 – Parameter & Value Selection

The general logic of “Parameter = a step in the flow that impacts system outcomes and can be represented with finite variation” still applies throughout the article; the notes below build on top of it.

The general rules for E2E parameter & value identification:

  1. prioritize parameters that affect more than 1 system;
  2. prioritize values heavily involved in business rule/integration triggers;
  3. consider using value expansions for less impactful options.
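As a toy illustration of those rules, a draft E2E model can first be captured as plain data before being entered into TCD (all parameter and value names below are hypothetical, not taken from the client implementation):

```python
# Hypothetical draft of an E2E model: parameters that span multiple systems,
# values tied to business-rule/integration triggers.
model = {
    "Transaction Type": ["New Business", "Full Term", "Rewrite"],
    "Payment Method": ["ACH", "Credit Card", "Check"],
    "Policy Duration": ["6 months", "12 months"],
    "Drivers": ["1", "2+"],
    "Vehicles": ["1", "2+"],
}

# Value expansions: one abstract value maps to a few execution-ready options
# for less impactful variation, without inflating the combinatorial scope.
expansions = {"Credit Card": ["Visa", "Mastercard", "Amex"]}
```

Keeping the model at this level of abstraction makes the stakeholder review in the next step much faster than debating individual test cases.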

Drop-down menus in the Policy Creation system itself can supply most of the initial ideas to get you started. Often, though, you will need to add insights into the optimal values for elements such as dates, payment methods, or user/agent profiles.

Therefore, the stakeholders from each involved system should provide input about a) integration factors that matter to them (e.g., “given our objectives in this set of tests, it is important to include policy duration and payment methods, but not delivery preferences”), and b) the appropriate level of detail for those (e.g., “when it comes to payment methods, it is the category of payment method that’s most important to vary; be sure to include some scenarios with ACH and some with Credit Card”).

Those 2 points apply whether it is billing…

Or licensing services…

Or expected results for document generation…

Or customer front end, etc.

But don’t overdo it here – don’t add extra tests to cover 20 different types of credit cards combinatorially, because that would result in more tests than we need; instead, sprinkle a few common card types into the test set.

When it comes to transactions like Change or Rewrite, the nature of values in “strategic” E2E models shifts from imperative to declarative or directional:

I.e., the E2E scenarios generated from the inputs above would be highly valuable (because they would cover the most important kinds of variation in E2E scenarios), but they would be “incomplete” – they would not specify what exact kind of SNI is added, which exact coverage(s) is increased, the precise amount(s) of the increase, etc. Making such values more robust would be part of the transition to the “No -> Design” approach from the decision tree above.

The “nested” values deserve a special mention:

In some situations, you want to have more control over interactions, and achieving that via constraints becomes too cumbersome. User Profile is a common example, especially because it often depends on the availability of vendor test data that is outside of your control. In that case, the pre-determined combination of factors becomes a single value (while used in this example, the pipe delimiter format is not special or required).

Such nesting also lets test designers “connect” TCD models, since the same profile can be reused in the models responsible for parallel (create Umbrella policies) or sequential (test Renewals later) steps.
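The mechanics of such nested values can be sketched in a few lines (the profile content and the helper below are illustrative, not a TCD feature):

```python
# A "nested" value packs a pre-determined combination of factors into one
# string; the pipe delimiter is an arbitrary convention, not a requirement.
nested_profile = "Agent|Licensed in OH|Commercial Lines Authority"

def unpack(value: str) -> list[str]:
    """Split a nested value back into its constituent factors."""
    return [part.strip() for part in value.split("|")]

# The same profile string can be reused across models covering parallel or
# sequential steps, which keeps those models "connected".
factors = unpack(nested_profile)
```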

Value expansions can further serve as a test data specification if needed:

For maintenance reasons, we recommend keeping TCD values at the category level (especially dates) and setting up execution so that it calculates the actual date, queries the credit database given the score range, etc.

Repetitive model elements can be handled by appending the “extra detail” to the parameter name (e.g., "- Original State"):


For convenience, we suggest keeping the parameters in the flow order on the first screen. For the same purpose, elements like this can serve as visual dividers on the Parameters/Scenarios screens and are optional:

Once the draft model is created, the auto-generated Mind Map presents an intuitive view of the model elements and can be easily shared for collaboration and approval.

This approach allows teams to communicate clearly and collaborate efficiently by confirming early in the testing process that 1) all critical aspects have been accounted for, and 2) they have been included at the right level of detail (which is one part of the Test Case Designer answer to “how much testing is enough?”).

Building a Model – Step 2 – Generating Optimal Scenarios

Implementing system logic via TCD constraints would occur at this step, but there are no aspects unique to E2E testing about constraint handling, so we are skipping it.

When it comes to efficiently covering all critical interactions in a system using as few tests as possible, the TCD test generation algorithm dramatically outperforms humans in speed, accuracy, thoroughness, and number of tests used. Even if there are more than 10,000 critical interactions in a system, the algorithm will cover every single one, and do so in minutes using as few tests as mathematically possible – neither manual selection nor production data comes close.
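The scale of “critical interactions” is easy to quantify: for 2-way coverage, it is the number of distinct value pairs across all parameter pairs. A few lines of Python (an illustration of the math, not part of TCD) show how quickly it grows:

```python
from itertools import combinations

def count_pairwise_interactions(value_counts: list[int]) -> int:
    """Number of distinct 2-way value combinations a pairwise suite
    must cover: sum over all parameter pairs of |Vi| * |Vj|."""
    return sum(a * b for a, b in combinations(value_counts, 2))

# 25 parameters with 6 values each already exceed 10,000 interactions:
total = count_pairwise_interactions([6] * 25)  # 300 pairs * 36 = 10,800
```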

Having said that, the algorithm would not have any way of knowing whether a specific combination involving, say, 10 values is important to include in a single “special” test. So the test designer, ideally collaborating with a subject matter expert, should force the inclusion of any specific “high-priority scenarios” into the generated suite.

To accomplish that, we use Forced Interactions to specify all the factors that constitute the “core” scenarios. For example, let us say we have 3 such high-priority use cases:

  • Happy Path: Full Term, No change in premium, single driver, single car, monthly, etc.
  • High complexity 1: New Business, Increase in premium, 2 drivers, 2 vehicles, etc.
  • High complexity 2: Full Term, Decrease in premium, 1 driver, 2 vehicles, etc.

This is how they would look inside Test Case Designer:

Happy Path - full list of forced values

All 3 high-priority E2E scenarios after being entered into Test Case Designer

It would be best to let the TCD algorithm handle the factors that are not explicitly specified. In other words, if your model contains 10 parameters but the special edge case requires 4 specific values, then specify only those 4. TCD will fill in the blanks automatically and, while doing so, select the remaining 6 values to maximize variation and coverage and minimize wasteful repetition.
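The “fill in the blanks” behavior can be illustrated with a toy greedy all-pairs generator (a sketch of the general technique, not TCD's actual algorithm): forced partial scenarios are completed first, then tests are added until every 2-way combination is covered.

```python
from itertools import combinations

def generate_pairwise(params: dict, forced=()) -> list:
    names = list(params)
    # Every 2-way value combination that still needs covering.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}

    def pairs_of(test):
        present = [n for n in names if n in test]
        return {((a, test[a]), (b, test[b]))
                for a, b in combinations(present, 2)}

    def complete(partial):
        test = dict(partial)
        for name in names:
            if name not in test:
                # Greedily pick the value covering the most uncovered pairs.
                test[name] = max(params[name], key=lambda v: len(
                    pairs_of({**test, name: v}) & uncovered))
        return test

    suite = []
    for partial in forced:            # forced scenarios go in first
        suite.append(complete(partial))
        uncovered -= pairs_of(suite[-1])
    while uncovered:                  # then cover everything that is left
        (a, va), (b, vb) = next(iter(uncovered))
        suite.append(complete({a: va, b: vb}))
        uncovered -= pairs_of(suite[-1])
    return suite

# Toy model with a forced "special" scenario (hypothetical names):
params = {"Transaction": ["New Business", "Full Term"],
          "Premium": ["Increase", "Decrease", "No change"],
          "Drivers": ["1", "2"]}
suite = generate_pairwise(params, forced=[{"Transaction": "Full Term",
                                           "Premium": "No change"}])
```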

Pro tip: one subtle trick for one-off testing in TCD is that Forced Interactions can overwrite Constraints and vice versa. You can use that workaround for scenarios deemed “very low probability” by the business but that still need to be tested from an IT standpoint.

The last point in this step is to select an appropriate level of thoroughness for your needs. It is rare for E2E TCD models to utilize anything other than 2-way (at the “Strategic” level) or Mixed-strength (at any level). 3-way or higher coverage strengths would typically be overkill. The dropdown settings in Mixed-strength are generally chosen based on the following parameter logic:

  • Does it impact 2+ systems and have numerous rules/dependencies associated with it? -> Include using at least a 2-way coverage selection.
  • Does it impact 2+ systems and have few/no rules/dependencies associated with it? -> Include with 2-way selection given short value lists + value expansions or with 1-way otherwise.
  • Does it impact only 1 system but have numerous rules/dependencies associated with it? -> Include with 1-way coverage selection and a fairly exhaustive list of values (because of the constraints).
  • Does it impact only 1 system and have few/no rules/dependencies associated with it? -> Likely should not have been included in the model, but 1-way otherwise.
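That decision table can be encoded as a small helper (the inputs and return values are our illustrative simplification, not actual TCD settings):

```python
def coverage_strength(systems_impacted: int, many_rules: bool,
                      short_value_list: bool = True) -> int:
    """Suggested mixed-strength setting (1-way or 2-way) for one parameter."""
    if systems_impacted >= 2 and many_rules:
        return 2                       # at least 2-way
    if systems_impacted >= 2:
        # 2-way only while short value lists + expansions keep scope in check.
        return 2 if short_value_list else 1
    # Single-system parameters: 1-way (or reconsider model inclusion).
    return 1
```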

The resulting scenarios table could look like this:

The ability to iterate at this step and quickly regenerate test cases for review (based on, e.g., requirement updates) helps immensely in clarifying ambiguities much earlier in the process.

The effective combination of the level of detail in Parameters and the business-relevant coverage strength in Scenarios guarantees that the Test Case Designer algorithm optimizes your total model scope to have a minimal number of tests that cover all the important interactions.

And next, we will discuss the last piece of the core testing “puzzle” – given the total scope, how we can use TCD visualizations to select the right stopping point.

Building a Model – Step 3 – Coverage Results Comparison

When end-to-end scenarios are created by hand, they often represent a fragmented system view and struggle with redundancy or omissions. Instead, Test Case Designer maximizes the interaction coverage in fewer scenarios and provides complete control and traceability for each test case's steps.

If we now analyze the coverage achieved across, e.g., 8 critical parameters and compare it with the typical manual solution, the results would often look like this:

As you can see, TCD-generated tests benefit from Intelligent Augmentation that ensures coverage of both (1) all specified requirements and (2) every critical system interaction. Our scenarios consistently find more defects than hand-selected test sets because interactions are a major source of system issues.
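The kind of comparison shown above can be reproduced with a small coverage checker (toy model and suite below, not the client's data):

```python
from itertools import combinations

def pairwise_coverage(params: dict, suite: list) -> float:
    """Fraction of all 2-way value combinations hit by a test suite."""
    names = list(params)
    all_pairs = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    hit = {((a, t[a]), (b, t[b]))
           for t in suite for a, b in combinations(names, 2)}
    return len(hit & all_pairs) / len(all_pairs)

model = {"A": ["a1", "a2"], "B": ["b1", "b2"], "C": ["c1", "c2"]}
# Hand-written suites often repeat the same pairs instead of varying them:
manual_suite = [{"A": "a1", "B": "b1", "C": "c1"},
                {"A": "a1", "B": "b2", "C": "c1"}]
coverage = pairwise_coverage(model, manual_suite)  # 5 of 12 pairs
```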

Taking this analysis a step further, given typical schedule deadlines, etc., we can identify the exact subset of the total scope that will be sufficient for the immediate testing goals and communicate that decision clearly to the management with the combination of the Mind Map + Coverage Matrix.

Building a Model – Step 4 – Scripting & Export

Some teams may choose to execute directly from the test cases table. They would leverage the “Save As” dropdown on the Scenarios screen and skip this section.

We are observing more and more teams switching to BDD, so we will cover TCD Automate in this article, but most general principles also apply to Manual Scripts.

First, the overall script structure is completely up to you. The number of steps, the length of each, the number of parameters per step, etc., depend on your guidelines for both test design and execution. Test Case Designer has the flexibility to support a wide range of preferences.

Second, for review and export efficiency, we will use {[]} filters to separate Full Term and New Business scenarios (assuming, for example purposes, that they have different validation steps).

You can check the “Usage” button on the Automate screen for more details about the syntax rules.

The sequential time aspect can be accounted for by “On Day X,…” parts of the steps. The system-to-system transition can be reflected in a similar manner as well as in commented-out lines. Parameters that didn’t “qualify” for model inclusion and static validations can be hard coded (i.e., you don’t need to include <> syntax in every line). Test data generated during the execution can be captured using steps like this:

One tricky area is conditional/“vague” steps vs. further script subsetting. In our example, Full Term and New Business differ significantly validation-wise, so separating them with {[]} is an obvious choice. For smaller differences, however, such as 2 steps out of 50 differing between IN and OH, you will be better off allowing conditional steps like these:

Otherwise, if two scenario blocks are created, maintaining the 48 common steps will be harder than it needs to be.

Lastly, we strongly recommend sharing the TCD models with automation engineers early in the process to allow them enough time to review and provide feedback on step wording, value names, etc.

If value expansions are present, only they are exported in the script (the same is true for CSV), so you can keep abstract/categorical value names in the model and provide execution-ready details in the expansions.

Once the script is finalized, you can proceed to export the scenarios in a format compatible with the test management tools and/or automation frameworks. E.g., without any extra actions, you can generate the CSV for Xray alongside Java files.

Tool-specific dialogs support metadata entry/selection that further smooths the integration process.

This step enables accelerated, optimized automation because you can:

  • Rapidly create clear, consistent steps that leverage Behavior Driven Development principles.
  • Export 1 Scenario block into multiple scripts based on the references to the data table.
  • Improve collaboration across business and technical teams to understand the testing scope as a group.

Building a Model – What Is Different for the “No -> Design” Decision Tree Path

The extension from “strategic” to “highly detailed” still follows the same steps, but there are 3 nuances.

First, the “hard” Test Case Designer limits are 256 parameters and 5,000 tests per model. Highly detailed models will require you to consider non-traditional TCD parameters (e.g., test data elements, more expected results) that can exhaust those limits fairly quickly, so prioritizing the scope and the balance between design & execution becomes even more critical.

Second, the extension requires more attention to how parameters & values are organized (e.g., value vs. value expansion, nested vs. standalone) and to mixed-strength settings. Keep in mind that even a single additional parameter with a long list of values and a 2-way strength setting will result in a disproportionately large increase in the scenario count.

Lastly, additional model elements may require more precise scripting (if conditional or “vague” steps are not an option). It becomes even more important to keep track of 1) which {[]} filters are used; 2) how mixed-strength affects the possible combinations of {[]} filters (you may not need to create a scenario block for each possible combo).

Summary & Case Studies

In conclusion, the combination of TCD features will allow you to generate the optimal set of scenarios quickly and answer the “how much testing is enough?” question with clarity & confidence.

The image above should be familiar from our other educational materials. Hopefully, it underscores the notion that the process & methodology are not strongly dependent on the type of testing, type of system, industry, etc.

The goal of applying Test Case Designer is to address the core challenges of manual test creation: prolonged and error-prone scenario selection, gaps in test data coverage, tedious documentation, and excessive maintenance.
