Data-driven testing (DDT) is the practice of keeping the execution script separate from the test data. In other words, you have a single script that references a data table containing multiple rows of scenarios. DDT offers numerous benefits in both effectiveness and efficiency, so it’s no surprise that most modern test management tools support this feature, although the implementations differ slightly.
In this tutorial, we will cover how to achieve the same goal of executing different datasets for a single test in both ALM Octane and Xray.
Concept mapping summary
The DDT feature we want to analyze is called “Data Set” in ALM Octane and “Dataset” in Xray.
At the core level, both represent the same concept - a collection of data in a tabular format where every column name is a particular parameter/variable, and each row cell corresponds to a given value of that parameter. The number of rows in your dataset determines the number of iterations the test will execute.
Therefore, for the base use case of just wanting to run a given test through N iterations of a single dataset, the process mapping between the tools is pretty much 1-to-1:
- Create a test
- Add parameterized steps to the test
- Assign one dataset to the test and run it, then report at the iteration level if desired.
However, there are differences for the advanced use cases, specifically:
- Data Set in ALM Octane has a unique identifier (Name), so multiple instances of a Data Set can be associated with a given Test.
- Dataset in Xray does not have a unique identifier, and only one instance of a Dataset can be associated with a given Test. Instead, you can assign a Dataset at different levels throughout the Test lifecycle (Test, Test Version, Test Plan, Test Execution).
That is why, once we extend the use case scope to multiple datasets, the process mapping gets a bit more interesting. Yet a similar enough approach can be implemented in both tools, which is what we will explore below.
Tutorial goal
- For the same test, we want to execute multiple datasets, each having the same variables but different data values.
- Furthermore, we want to easily switch between those datasets, assuming each represents the scope to be executed for a specific goal or during different sprints or other milestones.
- Execution results for each dataset should be visible at the same time.
The specific example for this tutorial is a Login test with variables for User, Password, System, and Language. Based on the Language values, we want to have 3 datasets.
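For illustration, the “English” dataset might look like the table below (all values are hypothetical); the “Dutch” and “German” datasets would share the same columns but carry their own values:

```
User   | Password | System | Language
alice  | pass-01  | web    | English
bob    | pass-02  | mobile | English
```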
Implementation in ALM Octane
1. Create the test
2. Add steps to the test
3. Use parameters in the steps (see the example after this list)
4. Use the “Data Table” section of the Steps tab to add 3 Data Sets, one for each language, with the corresponding Data Set name and values for each parameter.
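As a hypothetical illustration of steps 2-3, the parameterized steps could read as follows, using ALM Octane’s angle-bracket syntax for parameters (the step wording itself is an assumption for this example):

```
1. Open the <System> login page in <Language>
2. Log in with <User> and <Password>
3. Verify the home page is displayed in <Language>
```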
ALM Octane Test details view - Data Set setup
5. On the Runs tab, you will see one execution row per Data Set, each with its own corresponding ID, status, and other run details.
Implementation in Xray
Three key points to keep in mind before we dive in:
- Between an Xray Test and its run result there is always at least one more entity - Xray’s Test Execution issue (i.e. the run does not “belong” to the test directly).
- Within a given Test Execution, the relationship between an Xray Test and its Test Run is 1-to-1, so a single Test Execution cannot hold several runs of the same test.
  - While you can overwrite the run details within the same Test Execution, that approach is not viable for our goal, since we wouldn’t be able to preserve execution results per dataset at the same time.
- Datasets in Xray do not have their own identifiers (names, labels, etc.), unlike “English”, “Dutch”, “German” in the ALM Octane dropdown.
These points mean that if our end goal includes 3 different execution runs, the process in Xray will involve either 3 Test Execution issues or 3 Test entities (issues or versions), and we will need an alternative method to distinguish between the datasets.
There are two primary approaches we recommend: Dataset Scope (available in Xray Standard and Enterprise) and Test Versioning (available only in Xray Enterprise).
The initial steps are the same:
1. Create the test
2. Add steps to the test
3. Use parameters in the steps (note the syntax difference: ${} instead of <>; see the example below)
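The same hypothetical steps from the ALM Octane section, rewritten with Xray’s parameter syntax:

```
1. Open the ${System} login page in ${Language}
2. Log in with ${User} and ${Password}
3. Verify the home page is displayed in ${Language}
```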
Dataset Scope
You can define a dataset in Xray at multiple levels; the most relevant for our goal is the Test Execution level (i.e., in the corresponding Test Run for that Test, within the scope of a given Test Execution).
4. Using the Test Runs tab (the green highlight on the screenshot above), we can add our test to existing Test Executions or create a new one. For the tutorial goal, we will create 3 new Test Executions, naming each one to match the Language value of its iteration group. If mentioning the language value in the Summary is not sufficient, you can apply labels or other metadata values to the Test Executions.
In each Test Execution, we will define a different dataset for our test.
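If you prefer to script this setup, here is a minimal sketch that creates the 3 Test Executions through the standard Jira REST API (the base URL, credentials, project key, and naming/label conventions are all assumptions for this example; the datasets themselves are still defined per Test Run in the Xray UI, as described above):

```python
import requests

JIRA_URL = "https://your-company.atlassian.net"  # assumption: your Jira Cloud base URL
AUTH = ("user@example.com", "api-token")         # assumption: basic auth with an API token

# Create one Test Execution per language, so each one can carry its own dataset.
for language in ["English", "Dutch", "German"]:
    payload = {
        "fields": {
            "project": {"key": "DEMO"},                    # hypothetical project key
            "issuetype": {"name": "Test Execution"},       # issue type provided by Xray
            "summary": f"Login test - {language} dataset", # name matches the iteration group
            "labels": [f"lang-{language.lower()}"],        # optional extra metadata, as noted in step 4
        }
    }
    response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    response.raise_for_status()
    print(f"{language}: created {response.json()['key']}")
```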
5. The run results will be shown per dataset (i.e. per Test Execution) on the Test Runs tab of our test.
Test Versioning
You can learn more about the feature overall in our documentation.
For the tutorial goal, two characteristics are important:
- You can assign different datasets to different Test versions.
- You can rename different Test versions.
4. We rename the default version and add the corresponding dataset to it (red highlight on the screenshot at the beginning of this section). Then we create two more versions with the “copy from the original” checkbox enabled. Lastly, we edit the dataset values for each version to match our requirements.
5. Similar to the Dataset Scope approach, you will need to create 3 Test Executions. When assigning the test to each, select the corresponding version from the dropdown you see below.
The execution results across versions will again be displayed on the Test Runs tab. You can add the “Test Version” column to see the assignments clearly.
The advantage of this approach is that you handle the different datasets and their names at the more reusable Test level (with the Dataset Scope approach, by contrast, you would need to redefine the datasets each time you run the iterations, e.g. again in Sprint 2 after Sprint 1).
While they are not recommended due to the level of effort required, we wanted to mention these other options for completeness’ sake:
- Separate Test issues - you can clone the original test twice and assign a different dataset to each issue at the Test level. This would allow you to assign all 3 tests (and therefore all 3 datasets) to a single Test Execution issue. However, maintaining 3 test issues will be more challenging overall.
- Dataset CSV import is supported, so you could store multiple CSVs elsewhere and rotate which one you import at the Test level before each execution (see the example below). Distinguishing between the datasets (properly naming the tests/executions, etc.) and reporting on them simultaneously will be difficult.
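For example, you could keep one hypothetical CSV per language (the file names and values are assumptions) and import the relevant one before each run:

```
# login_english.csv
User,Password,System,Language
alice,pass-01,web,English
bob,pass-02,mobile,English

# login_dutch.csv and login_german.csv follow the same structure with their own values.
```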
References