
Overview

Systems in general, both software and hardware, usually have input parameters that affect the output the system produces. The number of parameters and their ranges of possible values vary: they can be limited, huge but finite, or even infinite.

Taking a simple flight booking site as an example, we can easily have thousands of combinations for the Flying From, Flying To, Class, (number of) Adults, and (number of) Children input parameters.



This leads to a potentially high number of scenarios to be tested. For the previous example, even considering a model where each parameter has just a limited number of possible values, that would still lead to 3*3*3*2*3=162 scenarios.
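The scenario count is simply the product of the domain sizes. A quick sketch (the sizes below mirror the example; the parameter names are from the booking site model):

```python
from math import prod

# Domain sizes assumed from the example: 3 origins, 3 destinations,
# 3 classes, 2 possible adult counts, 3 possible children counts.
domain_sizes = {"Flying From": 3, "Flying To": 3, "Class": 3,
                "Adults": 2, "Children": 3}

total = prod(domain_sizes.values())
print(total)  # 162
```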



Systems produce different results not only due to changes in the input parameters but also due to using the system in different contexts (e.g., configurations, operating systems, timezones, cloud providers).



In general, we could think that if we aim to test systems like this in depth, we would need to test the system with all the possible combinations of values of these parameters.

But does that even make sense? Are some of those scenarios redundant? In other words, is there a manageable subset of scenarios that can be tested and still help us find bugs?

In this tutorial we'll learn about the testing challenges of these systems and how to overcome them efficiently.

Initial testing options

Test using some examples for the parameters

The first strategy that we may come up with would be adopting data-driven testing.

Data-driven testing is a technique where a well-defined test script is executed multiple times, taking into account a "table" of parameters and corresponding values.

Usually, data-driven testing is used as a way to inject data into test automation scripts, but it can also be used for manually performing the same test multiple times against different data.
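As a minimal sketch of the idea (the booking function and the dataset rows below are hypothetical), the same test logic is executed once per row of the table:

```python
# Hypothetical booking function standing in for the system under test.
def search_flights(origin, destination, travel_class, adults, children):
    if origin == destination:
        raise ValueError("origin and destination must differ")
    return {"origin": origin, "destination": destination,
            "class": travel_class, "seats": adults + children}

# The "table" of parameter rows that drives the same test script.
DATASET = [
    ("LIS", "JFK", "Economy", 1, 0),
    ("LIS", "LHR", "Business", 2, 1),
    ("OPO", "JFK", "First", 1, 2),
]

def test_search_flights():
    # One iteration of the same well-defined test per data row.
    for origin, destination, travel_class, adults, children in DATASET:
        result = search_flights(origin, destination, travel_class,
                                adults, children)
        assert result["seats"] == adults + children
```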

The exact combination of parameter values to be used is beyond the scope of data-driven testing. However, testers usually include parameter combinations that represent examples coming as a direct consequence of acceptance criteria or from well-known "happy paths".


Learn more

Xray has built-in support for datasets where testers can explicitly enumerate parameters and the combination of values to be tested.


Please see Parameterized Tests for more info.

Test every parameter/value combination

Testing every possible combination of parameters is only viable if we have very few parameters with very few possible values for each one of them.

In general, testing all combinations:

  1. takes considerable time
  2. is costly from a human-resources or infrastructure perspective
  3. may be inefficient (more on this ahead)


Combinatorial testing is a "black-box test technique in which test cases are designed to exercise specific combinations of values of several parameters" (ISTQB). 
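As a sketch, the full set of combinations for the flight example can be generated as the Cartesian product of the value domains (the concrete values below are illustrative):

```python
from itertools import product

# Illustrative value sets; the real domains would come from the model.
parameters = {
    "Flying From": ["LIS", "OPO", "FAO"],
    "Flying To":   ["JFK", "LHR", "CDG"],
    "Class":       ["Economy", "Business", "First"],
    "Adults":      [1, 2],
    "Children":    [0, 1, 2],
}

# Every possible combination of parameter values, one dict per scenario.
all_combinations = [dict(zip(parameters, values))
                    for values in product(*parameters.values())]
print(len(all_combinations))  # 162
```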


Learn more

Xray also supports combinatorial parameters, where the user defines the values for each parameter and Xray calculates all the possible combinations, turning that into the dataset to be used.


It's possible to remove some values from the combinations to be generated. For example, we can exclude the "First" Class. That would lead to fewer scenarios to test (e.g., 162 => 108) but could still not be enough if we aim to have a limited set of tests.


Please see Parameterized Tests for more info.
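The reduction mentioned above can be verified with a quick sketch (values are illustrative; removing one of the three Class values shrinks the product from 162 to 108):

```python
from itertools import product

parameters = {
    "Flying From": ["LIS", "OPO", "FAO"],
    "Flying To":   ["JFK", "LHR", "CDG"],
    "Class":       ["Economy", "Business", "First"],
    "Adults":      [1, 2],
    "Children":    [0, 1, 2],
}

# Excluding "First" shrinks the Class domain from 3 values to 2.
reduced = dict(parameters, Class=["Economy", "Business"])
reduced_total = len(list(product(*reduced.values())))
print(reduced_total)  # 108
```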

Test using random combination of parameter values

Random testing is always an option, but it doesn't ensure we test the combinations that matter unless we perform a very high number of tests, which would probabilistically include a certain percentage of the combinations, or even all of them if we spent infinite time testing randomly.

Nobody wants to perform testing endlessly, without any sort of criteria. Random testing doesn't ensure we cover combinations that matter with a very limited set of tests.

Empirical data

Several studies indicate that the vast majority of defects (67%-84%) related to input values are due either to a problem in a single parameter value (single-value fault) or to a problem in a combination of two parameter values (2-way interaction fault).

Single-value faults are mostly due to typical mistakes, such as the off-by-one bug (e.g., using the < operator instead of <= in a loop). Faults in the interaction of 2 parameters may be due to bugs in cascading conditional logic (e.g., if statements or similar) involving those parameters/variables.

Bugs related to the interaction of parameters become rarer as the number of interacting parameters increases; in other words, finding these rare bugs requires many more tests to be performed, leading to more time and costs.


Pairwise and n-wise Testing

Given the empirical data, adopting pairwise testing to test all the combinations of pairs of parameter values (sometimes also called "all pairs testing") is a technique that is not only feasible but also provides great results in terms of fault detection.
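Pairwise generation itself can be sketched with a simple greedy algorithm (an illustration of the idea only, not the algorithm any particular tool uses): repeatedly pick the full combination that covers the most not-yet-covered value pairs.

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy generation of a 2-way (pairwise) covering set.

    parameters: dict mapping parameter name -> list of values.
    Returns a list of dicts, ordered so that earlier tests cover
    more new pairs than later ones.
    """
    names = list(parameters)
    domains = [parameters[n] for n in names]
    # Every pair of values, for every pair of parameters, to be covered.
    uncovered = {(i, vi, j, vj)
                 for i, j in combinations(range(len(names)), 2)
                 for vi in domains[i] for vj in domains[j]}
    tests = []
    while uncovered:
        # Brute-force greedy step: pick the full combination that
        # covers the largest number of still-uncovered pairs.
        best = max(product(*domains),
                   key=lambda combo: sum(
                       combo[i] == vi and combo[j] == vj
                       for i, vi, j, vj in uncovered))
        tests.append(dict(zip(names, best)))
        uncovered = {(i, vi, j, vj) for i, vi, j, vj in uncovered
                     if not (best[i] == vi and best[j] == vj)}
    return tests

# Illustrative domains mirroring the flight example (3*3*3*2*3 = 162).
flight = {
    "Flying From": ["LIS", "OPO", "FAO"],
    "Flying To":   ["JFK", "LHR", "CDG"],
    "Class":       ["Economy", "Business", "First"],
    "Adults":      [1, 2],
    "Children":    [0, 1, 2],
}
suite = pairwise_tests(flight)
print(len(suite))  # far fewer than the 162 exhaustive combinations
```

The brute-force inner loop keeps the sketch short; real tools use far more scalable heuristics.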

Reducing the number of test scenarios

Recalling the previous example, instead of having XXX test scenarios to perform, we would need just XX.

Sometimes, we may need to test some parameters more thoroughly, and for those we may choose 3-way testing, for example, to ensure that we cover all the combinations of values of 3 relevant parameters. In this case we have mixed-strength scenarios, where combinations of certain parameters are tested more thoroughly than others.



Having a limited set of tests generated, we can then execute them. Usually, algorithms generate these tests in an order such that coverage grows fastest with the first tests and slowest with the last ones. This way, if we stop testing at a given moment, we can still track coverage and know we have tested the most combinations possible.
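That ordering property can be tracked explicitly. The sketch below (with hypothetical, deliberately tiny parameters) computes the fraction of all value pairs covered after each test in a given order:

```python
from itertools import combinations

def cumulative_pair_coverage(tests, parameters):
    """Fraction of all value pairs covered after each test, in order."""
    names = list(parameters)
    all_pairs = {(i, vi, j, vj)
                 for i, j in combinations(range(len(names)), 2)
                 for vi in parameters[names[i]]
                 for vj in parameters[names[j]]}
    covered, fractions = set(), []
    for t in tests:
        vals = [t[n] for n in names]
        for i, j in combinations(range(len(names)), 2):
            covered.add((i, vals[i], j, vals[j]))
        fractions.append(len(covered) / len(all_pairs))
    return fractions

# Two tests over two 2-valued parameters (4 value pairs in total):
fractions = cumulative_pair_coverage(
    [{"A": 1, "B": 1}, {"A": 2, "B": 2}],
    {"A": [1, 2], "B": [1, 2]})
print(fractions)  # [0.25, 0.5]
```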


In sum, there is a balance between the number of tests we execute and the coverage we will obtain.



Please note

Let's say that we have 5 parameters. In this case, 5-way testing would generate all the possible combinations of these 5 parameters. Therefore, 5-way testing is precisely the same as saying that we're going to test all combinations of parameter values.


Optimizing further the test scenarios to be performed

The first level of optimization is further reducing the number of generated test scenarios.

Even if we use pairwise testing, or n-wise testing in general, to dramatically reduce the number of test scenarios, not all of these combinations may make sense for several reasons.

For example, in our flight booking scenario the Departure and Destination parameters need to be different. Also, we may have some rules in place where, for example, the First class is not available to children.

These are restrictions that we can use to limit the generation of parameter combinations used by our test scenarios.
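As a sketch of the idea, such constraints can be expressed as a predicate used to filter the generated combinations (the parameter values below are illustrative):

```python
from itertools import product

parameters = {
    "Flying From": ["LIS", "JFK", "LHR"],
    "Flying To":   ["LIS", "JFK", "LHR"],
    "Class":       ["Economy", "Business", "First"],
    "Children":    [0, 1, 2],
}

def satisfies_constraints(combo):
    # Departure and destination must differ.
    if combo["Flying From"] == combo["Flying To"]:
        return False
    # First class is not available to children.
    if combo["Class"] == "First" and combo["Children"] > 0:
        return False
    return True

valid = [combo for combo in
         (dict(zip(parameters, vs)) for vs in product(*parameters.values()))
         if satisfies_constraints(combo)]
print(len(valid))  # 42 of the 81 raw combinations survive
```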


Example with Xray's Test Case Designer

In Test Case Designer we can apply "constraints" involving the combination of 2 parameter values. We can apply several constraints, as shown in the following example: Class=First cannot exist together with either Children=1 or Children=More than 1.



The second level of optimization is about including important scenarios first.

Not all combinations of parameters may be equally representative. Sometimes there are parameter combinations we know are highly important, as they represent heavily used happy paths or have a high business impact.

We can force these to appear in the generated scenarios and to be the first ones.


Example with Xray's Test Case Designer

In Test Case Designer we can force specific interactions to be included. In the following example, we considered an interaction we need to test due to hypothetical legislation where a warning must be shown to users who are departing from the USA, flying First class, and traveling with more than 1 child. That scenario will be added to the generated ones.




Challenges

Pairwise or t-wise testing, even though useful, is not a silver bullet.

Some challenges or limitations to be aware of include:

  1. test oracle: this technique doesn't address finding the proper test oracle for the generated scenarios. How do we know a scenario is behaving as expected? How do we know whether a given scenario has issues or not?
  2. modeling: depicting a "good" model requires the intervention of testers. Testers, with the help of other team members, are the ones able to figure out the representative and important scenarios to model: their parameters, the values for those parameters, constraints, etc.


Using pairwise and t-wise for scripted testing and exploratory testing

Whenever we generate an optimized dataset (i.e., multiple "rows" of values for the parameters), it will typically be used to data-drive a scripted test case (e.g., a "manual" test composed of steps, or an automated test script).

In that case, testers specify the steps to follow and include references to the parameters in those steps. To perform testing, the test is iterated multiple times, as many times as there are rows in the generated dataset (i.e., combinations of parameter values). In each iteration, the parameters are replaced by the corresponding values from the dataset row.
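As a sketch of this iteration mechanism (the step templates and parameter names below are hypothetical), each dataset row is substituted into the same step templates:

```python
# Hypothetical step templates; the {placeholders} match the dataset's
# parameter names and are filled in on each iteration.
STEP_TEMPLATES = [
    "Open the booking page",
    "Search a flight from {origin} to {destination} in {travel_class} class",
    "Select {adults} adult(s) and {children} child(ren), then book",
]

# Each row is one iteration of the same scripted test.
dataset = [
    {"origin": "LIS", "destination": "JFK", "travel_class": "Economy",
     "adults": 1, "children": 0},
    {"origin": "OPO", "destination": "LHR", "travel_class": "Business",
     "adults": 2, "children": 1},
]

rendered = [[step.format(**row) for step in STEP_TEMPLATES]
            for row in dataset]
```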


Generating these combinations is useful not only for this testing approach though.

Pairwise and t-wise testing don't tell us how to actually perform testing; they just generate the combinations of parameters. Therefore, we can also use this technique if we choose to adopt a more exploratory testing approach, for example for certain configurations of hardware/software.

Xray datasets and Xray Test Case Designer



| Feature | Xray (available in all Xray versions) | Xray Test Case Designer (part of Xray Enterprise only) |
| --- | --- | --- |
| Parameters | | |
| define parameters | x | x |
| parameters: enumerate possible values | x | x |
| parameters: range of values | - | x |
| custom dataset (i.e., enumeration of values for all parameters) | x | - |
| generation of all combinations of parameters/values | x | x |
| generation of a partial combination of parameters | x | - |
| smart generation of scenarios using pairwise (2-way testing) | - | x |
| smart generation of scenarios using n-way testing | - | x |
| constraints/rules on generation of scenarios | - | x |
| forced interactions | - | x |
| Creation of tests using generated data | | |
| authoring test cases (definition of steps) using the generated data | x | x |
| generation of test automation code skeleton for multiple testing frameworks, using the generated data | - | x |
| Reporting | | |
| tracking n-way coverage | - | x |



