Learn about two ways to design data validation models in DesignWise for use cases like consumer-driven contract testing

In the help article related to APIs, we looked at the basics of applying DesignWise to testing the business logic (e.g., a successful payment given a multitude of factors, their interactions, and the associated rules).

In this one, let’s talk about the data as it relates to the communication between systems, using this guide by Pact as a reference (note, however, that these DesignWise methods are not limited to or specific to contract testing).

...

Some products are new and have not been evaluated and/or sold yet

Modeling in DesignWise

Both approaches described below are viable, and the choice will depend on the specific system and testing goals, as discussed in the “Decision points” section later.

Approach 1 – Whole response profile per test case

This is similar to the plan model designed in the API article. We will have a DesignWise parameter for each eligible request and response attribute that a) varies in a finite manner; b) needs to be validated.
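
For illustration, here is a minimal sketch of what a generated scenario outline could look like under this approach (the attribute names, step wording, and values are invented for the sketch and not taken from the original model); each DesignWise parameter becomes a column, and each generated test case validates a whole response profile:

Feature: Product API data validation (Approach 1 sketch)

  Scenario Outline: Validate the whole product response
    Given a product of type "<Product Type>" with status "<Product Status>" exists
    When the consumer requests the product details
    Then the response status is "<Response Code>"
    And the "price" field format is "<Price Format>"
    And the "lastSoldDate" field is "<Last Sold Date>"

    Examples:
      | Product Type | Product Status    | Response Code | Price Format | Last Sold Date |
      | Standard     | Previously sold   | 200           | Decimal      | Populated      |
      | New          | Not evaluated yet | 200           | Decimal      | Blank          |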

...

Side note: if there are attributes that don’t have format variations and can’t be blank, and therefore wouldn’t become DesignWise parameters, the steps and validations for those would be hardcoded (i.e., without the <> syntax) in the script.
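
For instance, continuing the sketch above (the “currency” field is invented purely for illustration), a fixed attribute sits directly in the step text, while varying attributes keep their <> placeholders:

    # "currency" never varies and can't be blank, so it is hardcoded rather than modeled as a parameter
    And the "currency" field equals "USD"
    # "price" varies, so it stays tied to the <Price Format> parameter from the test case table
    And the "price" field format is "<Price Format>"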

Approach 2 – Attribute per test case

To enable this in DesignWise, we will “transpose” our thinking from Approach 1. We will have a pair of parameters: “Validation Element” (the list of all non-status response attributes we need to check) and “Validation Value” (the list of all non-status response values we need to check).

...

A single script with a single “Then” line would cover all scenarios, because the key wording is dynamically tied to the TC table:
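
As a rough sketch (the field names, step wording, and values below are illustrative and not taken from the original model), the scenario outline could look like this, with the Examples table populated directly from the generated test case table:

Feature: Product API data validation (Approach 2 sketch)

  Scenario Outline: Validate one response attribute per test
    Given the standard product request from the shared setup is sent
    Then the response field "<Validation Element>" equals "<Validation Value>"

    Examples:
      | Validation Element | Validation Value |
      | type               | Standard         |
      | price              | 27.99            |
      | lastSoldDate       | (blank)          |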


Decision points

Approach 1 – Pros:

  • If there is any validation dependency between response attributes, this approach has a much higher chance of catching defects.
  • Less vulnerable to setup costs per TC (e.g., in a deliberately extreme example, if each test requires a unique API token that costs $1,000, then executing a test per response profile is much cheaper than a test per attribute).

...

  • More complex and less flexible execution-wise (i.e., more steps to get to the end of the scenario).
  • More vulnerable to test data availability (if the request is sent against the real database, or against a mock built only from a production sample, the “free” combinations that the DesignWise algorithm generates may result in “record not found” too often).

...


Side note: “number of tests” becomes irrelevant as a metric in this comparison, since the number of steps per test, and the corresponding execution time/effort, differ too much between the two approaches.

Conclusion

Hopefully this article has demonstrated how DesignWise can be applied to use cases where n-way interactions are no longer really the priority. The speed of scenario generation and the one-to-many scripting move into the spotlight, so the tool can still deliver benefits with either approach.

Extra consideration: a shared DesignWise model can serve as another collaboration artifact between the consumer and the provider, which could help uncover mismatched expectations between the two sides much faster.

...