This report lists details of the selected requirements (Stories or Epics) in Xray, enabling them to be extracted in Excel format.
Possible usage scenarios:
- analyzing trends and the current testing status
- processing this information to generate metrics
- sharing it with someone who does not have access to Jira
This report can be generated from different places/contexts; general information about all the existing places available to export from, and how to perform the export, is available on the Exporting page.
This report is applicable to requirement issue types (by default, Story and Epic).
The standard output format is .XLSX, so you can open it in Microsoft Excel, Google Sheets, and other tools compatible with this format. From those tools, you can also generate a .CSV file.
The template makes a set of assumptions that your Jira/Xray environment must comply with. If any of these assumptions is not met, you need to update the template or the environment accordingly.
To export details of Stories from a given project: from the Issue Navigator/Search, search by the issue type (i.e., "Story") from your project (e.g., "BOOK") and then use bulk export or Export->Xporter:
project = "BOOK" and issuetype = "Story" ORDER BY created DESC
To export details of Epics from a given project for a given release: from the Issue Navigator/Search, search by Epics with the corresponding fix version (e.g., "1.2") and then use bulk export or Export->Xporter:
project = "BOOK" and issuetype = "Epic" and fixVersion = 1.2 ORDER BY created DESC
To export details of Stories for a given Test Environment: from the Issue Navigator/Search, search by Stories assigned to that Test Environment (e.g., "chrome") and then use bulk export or Export->Xporter:
project = "BOOK" and issuetype = "Story" and testEnvironments = chrome ORDER BY created DESC
The report shows information about the Requirements in list form.
The report is composed of two sheets - "TestingEffortOverview" and "DetailedExecutionBreakdown" - with the information on the "Requirements" (by default, Stories and Epics). Each sheet presents one line per Requirement.
The DC version has two extra sheets with graphs based on the "DetailedExecutionBreakdown" tab (the variables needed for the graphs are not yet available in Cloud).
The following table shows the columns you should expect on the "TestingEffortOverview" sheet.
Column | Notes |
---|---|
Requirement Key | Issue key of the requirement |
Summary | Summary of the requirement |
Component | Component(s) of the requirement |
Priority | Priority of the requirement |
Assignee | Assignee of the requirement |
Workflow Status | Workflow status of the requirement |
Total Workflow Lifetime | The difference between "Created" and "Resolved" dates. If resolution is empty, the calculation will fail. |
Total Number of Defects | Total number of linked defects (directly or through linked Tests) |
Number of Unique Test Run Assignees | Number of unique assignees across all the associated test runs. "Unassigned" will count as 1. |
Total Number of Test Runs | Total number of associated test runs |
Total Number of Failed Test Runs | Total number of associated test runs with the "Fail" status |
Total Number of Test Runs with Comments | Total number of associated test runs that have comments (regardless of the status) |
Total Time Spent in Execution | Total elapsed time calculated from "Started On" and "Finished On" fields from test runs |
Total Number of Linked Tests | Sum of the four test-type columns below (see the note after this table) |
Total Number of Linked Manual Tests | Total number of linked tests with "manual" type (regardless of the link type) |
Total Number of Linked Cucumber Tests | Total number of linked tests with "cucumber" type (regardless of the link type). By default, "cucumber" is defined as the list 'cucumber,gherkin,behave,behat'; see the customization section below. |
Total Number of Linked Exploratory Tests | Total number of linked tests with "exploratory" type (regardless of the link type) |
Total Number of Linked Non-Exploratory Generic Tests | Total number of linked tests with "generic" type (regardless of the link type) |
Total Time Specifying Tests (in minutes) | Total aggregated time calculated from logged time across test runs. If time is not being logged, the calculation will need to be adjusted based on "Created"/"Resolved" dates (see "Total Workflow Lifetime" column) or similar date fields. |
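Note: as a quick sanity check on the exported sheet, the "Total Number of Linked Tests" value should equal the sum of the four per-type columns. Assuming the default layout (where, per the customization section below, columns O-R hold the per-type counts; adjust the references if you add or remove columns), a spreadsheet formula like the following reproduces it for the first data row:
=SUM(O2:R2)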
The "DetailedExecutionBreakdown" sheet contains the following columns.
Column | Notes |
---|---|
Key | Issue key of the requirement |
Summary | Summary of the requirement |
Component | Component(s) of the requirement (repeated on this tab in the DC version only to support graphs) |
Priority | Priority of the requirement (repeated on this tab in the DC version only to support graphs) |
Requirement Status (DC) / TestRunStatus (Cloud) | DC: overall requirement coverage status; Cloud: list of test run statuses |
Test Run Comments | List of test run comments |
List of Unique Test Run Assignees | List of unique assignees across test runs. "Unassigned" will be represented as "[]" |
Total Number of Defects | Total number of linked defects (directly or through linked Tests) |
Number of Unique Test Run Assignees | Number of unique assignees across all the associated test runs |
Passed | Number of runs in the passed status. |
Passed (%) | Percentage of runs in the passed status. |
Failed | Number of runs in the failed status. |
Failed (%) | Percentage of runs in the failed status. |
Executing | Number of runs in the executing status. |
Executing (%) | Percentage of runs in the executing status. |
To do | Number of runs in the to do status. |
To do (%) | Percentage of runs in the to do status. |
Aborted | Number of runs in the aborted status. |
Aborted (%) | Percentage of runs in the aborted status. |
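As a worked example of the status columns (assuming each percentage is taken over the requirement's total number of test runs): a requirement with 8 test runs, of which 6 passed and 2 failed, would show Passed = 6, Passed (%) = 75%, Failed = 2, Failed (%) = 25%, and 0 / 0% in the remaining status columns.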
The common customization actions are described below.
As this report is column-based, if some columns are not relevant to you, you can simply delete them. Before doing so, make sure the cells of those columns do not create temporary variables that are used in subsequent columns, as sketched below.
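For example (the variable name here is hypothetical; the syntax is the same ${set(...)}/${...} mechanism used in the customization examples below), a cell in one column might define a value:
${set(manualCount, 3)}
which a cell in a later column then reads back, just as the template reads ${genericTestTypes}:
${manualCount}
Deleting the first column would then break the second.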
At the top of the template, locate this line:
#{if (%{'${IssueTypeName}'.equals('Epic') || '${IssueTypeName}'.equals('Story')})}
Add, remove, or rename types in the brackets after "equals"; for example:
#{if (%{'${IssueTypeName}'.equals('Story') || '${IssueTypeName}'.equals('Request') || '${IssueTypeName}'.equals('FeatureIdea')})}
On the "TestingEffortOverview" tab, columns O-R provide the breakdown by 4 common test types. You can add more columns or modify the code in one of the existing ones, depending on which type you are interested.
For example, if you have a custom "Robot" type, you can add it to the first line of the "Non-exploratory Generic" column, then the conditional line below will automatically account for the new entry, no further changes are needed.
${set(genericTestTypes, ‘generic, Robot’)}
...
#{if (%{',${genericTestTypes},'.indexOf(',${Links[j].Test Type},'.toLowerCase()) >= 0})}
Keep in mind that if you add columns, you will also need to update the formula in the "Total Number of Linked Tests" column.
You can further fine-tune the content and formatting via JavaScript; you can find more useful snippets in this tutorial for Xporter and DocGen.
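As a minimal illustration (the field used here is just an example), the same %{...} JavaScript notation seen in the conditional lines above can transform a value inline:
%{'${Summary}'.toUpperCase()}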
Performance can be impacted by the amount of information that is rendered and by how that information is collected and processed.
Depending on the scenario, the number of Test Executions and Test Runs can be considerably high, especially with CI/CD. As this report aggregates quite a lot of information, use it wisely.