

In the help article related to APIs, we looked at the basics of applying TCD to testing the business logic (e.g. a successful payment given a multitude of factors, their interactions, and the associated rules).

In this one, let’s talk about the data as it relates to the communication between systems, using this guide by Pact as a reference (note, however, that these Test Case Designer methods are not limited or specific to contract testing).

We will leverage a slightly modified setup. There is still a consumer (Order Web) and its provider (the Order API), and we will still be submitting the request for product information from Web to API, but we will “enrich” the attributes a bit:

Request:

  • Market (e.g. NA, EU)
  • Product Category (e.g. A, B)

At least one of the two factors must be present for a successful request.

Response:

(keeping in mind we care more about the structure and format compatibility than about the business logic calculations)

  • Quantity (e.g. whole, partial)
  • Value (e.g. whole, decimal)
  • Date last sold (e.g. mm/dd/yyyy format, dd/mm/yyyy format)
  • Primary vendor ID (e.g. company itself, partner, unrelated 3rd party)
  • Status (200 for the scope of this article, but other non-error codes could be included in the same model with the relevant triggers)

Some products are new and have not been evaluated and/or sold yet.
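
To make the setup concrete, here is a minimal, purely illustrative sketch of the exchange in Gherkin-style wording (the step phrasing, field names, and example values are assumptions for illustration, not the actual Order API contract):

    Scenario: Illustrative product information exchange
      # Request: at least one of Market / Product Category must be present
      When Order Web requests product information for market "NA" and product category "A"
      # Response: the focus is structure and format compatibility, not the calculations
      Then the response status is 200
      And the response body resembles:
        """
        {
          "quantity": 12,
          "value": 3.75,
          "dateLastSold": "02/03/2022",
          "primaryVendorId": "PARTNER-001"
        }
        """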

Modeling in Test Case Designer

Both approaches described below are viable, and the choice will depend on the specific system and testing goals, as discussed in the “Decision Points” section later.

Approach 1 – Whole response profile per test case

This is similar to the model designed in the API article. We will have a TCD parameter for each eligible request and response attribute that a) varies in a finite manner and b) needs to be validated.

Value expansions play the role of the data specification in such models (e.g. 2/3/2022), since they would be the only element populated in the CSV or Automate export (which are typically used for this type of testing).

Each row in the Scenarios table would describe all parameterizable response attributes for a given request combination.
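
For illustration, two generated rows could look like this (the specific value combinations are examples, not actual TCD output; Status is omitted since it is fixed at 200 for this scope):

    Market | Product Category | Quantity | Value   | Date Last Sold | Primary Vendor ID
    NA     | A                | whole    | decimal | mm/dd/yyyy     | partner
    EU     | (blank)          | partial  | whole   | dd/mm/yyyy     | unrelated 3rd party

The blank Product Category in the second row exercises the “at least one of the two factors” request rule.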



With data validation testing, there are usually fewer interactions/dependencies between the attributes. So a mixed-strength setting, with 2-way coverage on the request side and mostly 1-way on the response side, is expected.

One script with a “Then” line per attribute would cover all scenarios:
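
A sketch of what that script could look like is below (the step wording is an assumption; the <> placeholders map to the TCD parameters above and would be filled in by the Automate export, so no Examples table is shown):

    Scenario Outline: Product information response validation
      Given product data exists for market "<Market>" and product category "<Product Category>"
      When Order Web requests the product information from the Order API
      Then the response status is 200
      And the "quantity" field is a <Quantity> number
      And the "value" field is a <Value> number
      And the "date last sold" field uses the <Date Last Sold> format
      And the "primary vendor ID" field refers to the <Primary Vendor ID>

The status check is hardcoded to 200 for this scope, which previews the side note below.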



Side note: if there are attributes that don’t have format variations and can’t be blank, and therefore wouldn’t become TCD parameters, the steps and validations for those would be hardcoded (i.e. without the <> syntax) in the script.

Approach 2 – Attribute per test case

To enable this in TCD, we will “transpose” our thinking from Approach 1. We will have a pair of parameters – “Validation Element” (the list of all non-status response attributes we need to check) and “Validation Value” (the list of all non-status response values we need to check).

Then we will constrain each Validation Element to only its relevant Values.
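
For example, using the response attributes from the setup above, the constraints could pair each element with its values roughly like this:

  • Quantity → whole, partial
  • Value → whole, decimal
  • Date last sold → mm/dd/yyyy format, dd/mm/yyyy format
  • Primary vendor ID → company itself, partner, unrelated 3rd party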




A single script with a single “Then” line would cover all scenarios, because the key wording is dynamically tied to the TC table:
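
As an illustrative sketch (again, the step wording is an assumption, and the placeholders would be filled in by the export):

    Scenario Outline: Single response attribute validation
      Given product data exists for market "<Market>" and product category "<Product Category>"
      When Order Web requests the product information from the Order API
      Then the response attribute "<Validation Element>" matches "<Validation Value>"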



Decision Points

Approach 1 – Pros:

  • If there is any validation dependency between response attributes, this approach has a much higher chance of catching defects.
  • Less vulnerable to setup costs per TC (e.g., in an extreme example, if each test requires a unique API token that costs $1000, then executing one test per response profile is much cheaper than one test per attribute).

Approach 1 – Cons:

  • More complex and less flexible execution-wise (i.e. more steps to get to the end of the scenario).
  • More vulnerable to test data availability (if the request is sent against the real database, or against a mock built only from a production sample, the “free” combinations that the Test Case Designer algorithm generates may result in “record not found” too often).


Unsurprisingly, the points below are a mirror image of the points above.


Approach 2 – Pros:

  • Quicker and more flexible execution of “componentized” TCs (i.e. if only 1 API response attribute changes, you don’t need to re-execute all the steps just to get to that one).
  • Less vulnerable to test data availability (if a valid standalone attribute value is not present in the mock/real database, that’s probably not a good sign and should be solved separately).

Approach 2 – Cons:

  • Will have a much lower chance of catching any interaction defects (e.g. if Value is retrieved incorrectly only when an unrelated 3rd-party vendor is involved).
  • More vulnerable to setup costs per TC (higher total setup cost in the “$1000 per token” example above).


Side note: “number of tests” as a metric becomes irrelevant in this comparison, since the number of steps per test and the corresponding execution time/effort are too different.

Conclusion

Hopefully this article has demonstrated how Test Case Designer can be applied to use cases where n-way interactions are no longer the main priority. The speed of scenario generation and the one-to-many scripting move into the spotlight, so the tool can still deliver benefits with either approach.

Extra consideration: a shared TCD model can serve as another collaboration artifact between the consumer and the provider teams, which could help uncover mismatched expectations between them much faster.


