Data-driven testing (DDT) is the practice of keeping the execution script separate from the test data. In other words, you have a single script that references a data table containing multiple rows, each representing a scenario. DDT offers numerous benefits in both effectiveness and efficiency, so it is no surprise that most modern test management tools support this feature, although the implementations can differ.
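As a quick, tool-agnostic illustration of that idea (the login() stub, the data values, and the use of pytest below are just one possible way to express it and are not part of either ALM Octane or Xray), a single parameterized script driven by a data table could look like this:

```python
import pytest

# Placeholder system under test; in a real suite this would call the application.
def login(user: str, password: str) -> bool:
    return bool(user) and password != "wrong-password"

# One script, many rows: each tuple below is one row of the data table,
# and each row drives one iteration of the same test.
LOGIN_DATA = [
    ("standard_user", "secret1", True),
    ("standard_user", "wrong-password", False),
    ("admin_user", "secret2", True),
]

@pytest.mark.parametrize("user,password,expected", LOGIN_DATA)
def test_login(user, password, expected):
    assert login(user, password) == expected
```

Adding a row to the table adds an iteration without touching the script, which is exactly the separation DDT is about.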
In this tutorial, we will cover how to achieve the same goal - executing a test against different datasets - in both ALM Octane and Xray.
The information in this tutorial is based on the "23.4-24.3" version of the ALM Octane documentation for Data Sets. In HP QC, the same functionality was referred to as Test Configurations. While there may be visual/terminology differences between ALM versions, the overall advice for the Xray steps after migration should be fairly consistent. We also assume an intermediate level of familiarity with how this feature works in ALM Octane.
The DDT feature we want to analyze is called “Data Set” in ALM Octane and “Dataset” in Xray.
At its core, both represent the same concept - a collection of data in tabular format where every column is a particular parameter/variable, and each cell in a row corresponds to a given value of that parameter. The number of rows in your dataset determines the number of iterations the test will execute.
Therefore, for the base use case of just wanting to run a given test through N iterations of a single dataset, the process mapping between the tools is pretty much 1-to-1:
However, there are differences for the advanced use cases, specifically:
That is why, once we extend the use case scope to multiple datasets, the process mapping gets a bit more interesting. Yet a similar enough approach can be implemented in both tools, which is what we will explore below.
The specific example for this tutorial is a Login test with variables for User, Password, System, and Language. Based on the Language values, we want to have 3 datasets, as illustrated in the sketch below.
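To make that split concrete, here is a minimal sketch of what the iteration rows and the Language-based grouping could look like; the user, password, system, and language values are placeholders invented for illustration:

```python
# Hypothetical iteration rows for the Login test; all values are placeholders.
rows = [
    {"User": "ana",   "Password": "pw1", "System": "web",    "Language": "EN"},
    {"User": "jon",   "Password": "pw2", "System": "mobile", "Language": "EN"},
    {"User": "eva",   "Password": "pw3", "System": "web",    "Language": "DE"},
    {"User": "marco", "Password": "pw4", "System": "mobile", "Language": "DE"},
    {"User": "yuki",  "Password": "pw5", "System": "web",    "Language": "JP"},
]

# Splitting by Language yields the three datasets used throughout this tutorial;
# within each dataset, every row becomes one iteration of the Login test.
datasets = {}
for row in rows:
    datasets.setdefault(row["Language"], []).append(row)

print({lang: len(iterations) for lang, iterations in datasets.items()})
# {'EN': 2, 'DE': 2, 'JP': 1}
```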
ALM Octane Test details view - Data Set setup
5. On the Runs tab, you will see an execution row per Data Set, each with its corresponding unique ID, status, etc.
Three key points to keep in mind before we dive in:
Those points mean that if our end goal includes 3 different execution runs, the process in Xray will involve either 3 Test Execution issues or 3 Test entities (issues or versions), and we will need an alternative method to distinguish between the datasets.
There are two primary approaches we recommend - Dataset Scope (available in Xray Standard and Enterprise) and Test Versioning (only available in Xray Enterprise).
The initial steps are the same:
You can define a dataset in Xray at multiple levels; the most relevant for our goal is the Test Execution level (i.e., in the corresponding Test Run for that Test, within the scope of a given Test Execution).
4. Using the Test Runs tab (the green highlight on the screenshot above), we can add our test to existing Test Executions or create new ones. For the tutorial goal, we will create 3 new Test Executions, naming each one to match the Language value of its iteration group (a scripted alternative is sketched after the note below). If mentioning the language value in the Summary is not sufficient, you can apply labels or other metadata values to the Test Executions.
In each Test Execution, we will define a different dataset for our test.
In the Test Run Details screen, you may need to choose “Merge Test Definition” for the parameter values from the dataset to properly integrate with the parameter names in the test steps.
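If you prefer to script this part of the setup rather than click through the UI, the Test Executions themselves are regular Jira issues, so they can be created with the standard Jira REST API. The sketch below is only an illustration: the base URL, credentials, project key, and language values are placeholders, and the issue type name assumes Xray's default "Test Execution". Defining the dataset for each Test Run is still done in the Xray UI as described above.

```python
# Hypothetical sketch: create one Test Execution per language-based dataset
# via the standard Jira REST API. Base URL, credentials, and project key are
# placeholders; the issue type name assumes Xray's default "Test Execution".
import requests

JIRA_URL = "https://your-company.atlassian.net"    # placeholder
AUTH = ("jira-user@example.com", "api-token")      # placeholder credentials

for language in ["EN", "DE", "JP"]:                # placeholder language values
    payload = {
        "fields": {
            "project": {"key": "DEMO"},               # placeholder project key
            "issuetype": {"name": "Test Execution"},  # Xray's default issue type name
            "summary": f"Login test - {language} dataset",
            "labels": [f"dataset-{language}"],        # optional extra metadata
        }
    }
    response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    response.raise_for_status()
    print(language, "->", response.json()["key"])

# After adding the Test to each of these executions, the language-specific
# dataset is defined on the corresponding Test Run, as described in the steps above.
```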
5. The run results will be shown per dataset (i.e. per Test Execution) on the Test Runs tab of our test.
While not particularly relevant for the scope of this tutorial, you can also keep datasets at different levels at the same time and then leverage the override functionality.
You can learn more about the feature overall in our documentation.
For the tutorial goal, two characteristics are important:
4. We rename the default version and add the corresponding dataset to it (red highlight on the screenshot at the beginning of this section). Then we create two more versions with the “copy from the original” checkbox enabled. Lastly, we edit the dataset values for each version to match our requirements.
5. Similar to the Dataset Scope approach, you will need to create 3 Test Executions. When assigning the test to each, select the corresponding version from the dropdown you see below.
The execution results across versions will be displayed on the Test Runs tab again. You can add the “Test Version” column to clearly see the assignments.
The advantage of this approach is that you handle the different datasets and their names at the more reusable Test level (whereas with the Dataset Scope approach you would need to redefine the datasets each time you run the iterations again, e.g. in Sprint 2 after Sprint 1).
As a side note, if you are using Xray Enterprise Test Case Designer to generate datasets, you can save different model versions as labeled revisions.
While they are not recommended due to the level of effort required, we wanted to mention these other options for the sake of completeness:
References