...

This report lists some details of the selected requirements (Stories or Epics) in Xray, enabling them to be extracted in an Excel format. With the ability to extract the report, you can use it to analyze trends and the current testing status, process the information to generate metrics, or share it with someone who does not have access to Jira.

Possible usage scenarios:

  • get an overview of Requirements by Component and/or Priority
  • track the time spent on specifying and executing tests associated with a given Requirement
  • see the Test Run status counts for a given Requirement
  • see the linked Test Plans and Defects
  • track the Requirements that are taking the most time


Store - DC; Cloud

Output Example(s)

The following table shows an example of the columns/rows you should expect.



How to use

This report can be generated from different places/contexts, including:

...

This report is applicable to:

  • 1 or more Story or Epic issues

Output format

The standard output format is .XLSX so you can open it in Microsoft Excel, Google Sheets, and other tools compatible with this format. From those tools, you can generate a .CSV file.

...

  1. Issue types having the names "Story" and "Epic".
  2. Direct issue links existing between requirements and tests (otherwise you get blank cells for most metrics related to the testing effort).
  3. Time calculations require certain statuses or logging (see the "Understanding the report" section below).

If any of these assumptions is not met, you need to update the template or the environment accordingly.

Usage examples

Export

...

Stories of your project

  1. from the Issue Navigator/Search, search by the issueType (i.e., "Story") from your project (e.g., "BOOK") and then use bulk export or Export->Xporter

Code Block
title: example of JQL expression to use
project = "BOOK" and issuetype = "Story" ORDER BY created DESC

Export

...

Epics associated with a given fix version

  1. from the Issue Navigator/Search, search by the release (i.e., "fixVersion") of your project (e.g., "BOOK") and then use bulk export or Export->Xporter

Code Block
title: example of JQL expression to use
project = "BOOK" and issuetype = "Epic" and fixVersion = 1.2 ORDER BY created DESC

Export

...

Stories obtained on a given Test Environment

  1. from the Issue Navigator/Search, search by Stories assigned to that Test Environment (e.g., "chrome") and then use bulk export or Export->Xporter

...

Code Block
title: example of JQL expression to use
project = "BOOK" and issuetype = "Story" and testEnvironments = chrome ORDER BY created DESC

Export

...

Story from its detailed view

  1. open the Story issue and export it using this template

...

The report shows information about the Requirements in a list form.

Layout

The report is composed of two sheets - "TestingEffortOverview" and "DetailedExecutionBreakdown" - with the information on the Requirements. By default, and to avoid overload/redundancy of information, only the requirement issue types (Stories and Epics) are rendered; all the other issue types will not be parsed. Each sheet will present a line per each Requirement.

The DC version has 2 extra sheets with graphs based on the "DetailedExecutionBreakdown" tab (variables needed for the graphs are not yet available in Cloud).

"TestingEffortOverview" sheet


Column | Notes

Requirement Key

Issue key of the requirement

Summary

Summary of the requirement

Component

Component(s) of the requirement

Priority

Priority of the requirement

Assignee

Assignee of the requirement

Workflow Status

Workflow status of the requirement

Total Workflow Lifetime

The difference between "Created" and "Resolved" dates. If resolution is empty, the calculation will fail.

Total Number of Defects

Total number of linked defects (directly or through linked Tests)

Number of Unique Test Run Assignees

Number of unique assignees across all the associated test runs. "Unassigned" will count as 1.

Total Number of Test Runs

Total number of associated test runs

Total Number of Failed Test Runs

Total number of associated test runs with the "Fail" status

Total Number of Test Runs with Comments

Total number of associated test runs that have comments (regardless of the status)

Total Time Spent in Execution

Total elapsed time calculated from "Started On" and "Finished On" fields from test runs

Total Number of Linked Tests

Sum of the 4 columns below

Total Number of Linked Manual Tests

Total number of linked tests with "manual" type (regardless of the link type)

Total Number of Linked Cucumber Tests

Total number of linked tests with "cucumber" type (regardless of the link type). By default, "cucumber" is defined as a list of ‘cucumber,gherkin,behave,behat’. See the customization section below.

Total Number of Linked Exploratory Tests

Total number of linked tests with "exploratory" type (regardless of the link type)

Total Number of Linked Non-Exploratory Generic Tests

Total number of linked tests with "generic" type (regardless of the link type)

Total Time Specifying Tests (in minutes)

Total aggregated time calculated from logged time across test runs. If time is not being logged, the calculation will need to be adjusted based on "Created"/"Resolved" dates (see "Total Workflow Lifetime" column) or similar date fields.


"DetailedExecutionBreakdown" sheet


Column | Notes

Key

Issue key of the requirement

Summary

Summary of the requirement

Component

Component(s) of the requirement (repeated on this tab in the DC version only to support graphs)

Priority

Priority of the requirement (repeated on this tab in the DC version only to support graphs)

Requirement Status (DC) / TestRunStatus (Cloud)

DC: overall requirement coverage status. Cloud: list of test run statuses.

Test Run Comments

List of test run comments

List of Unique Test Run Assignees

List of unique assignees across test runs. "Unassigned" will be represented as "[]"

Revision

Revision defined in the Test Execution

Begin Date

Timestamp of when the Test Execution started, in the format "dd-MM-yyyy hh:mm:ss"

End Date

Timestamp of when the Test Execution ended, in the format "dd-MM-yyyy hh:mm:ss"

Test Environment

Test Environment(s) defined in the Test Execution

Test Plan

Test Plan(s) linked to the Test Execution

Defects

Defects linked to the Test Execution (at either the test run or the test step level)

Elapsed Time

Sum of elapsed time of all associated test runs in HH:MM:SS format. Please note that it will output "00:00:00" when there are no tests associated with the execution (or your executions are really fast) and " " when there are tests in progress (i.e. "Executing" or "To Do" status).

#Test Runs

Number of test runs that are part of the Test Execution.

Passed

Number of runs in the passed status.

Passed (%)

Percentage of runs in the passed status.

Failed

Number of runs in the failed status.

Failed (%)

Percentage of runs in the failed status.

Executing

Number of runs in the executing status.

Executing (%)

Percentage of runs in the executing status.

To do

Number of runs in the to do status.

To do (%)

Percentage of runs in the to do status.

Aborted

Number of runs in the aborted status.

Aborted (%)

Percentage of runs in the aborted status.

...

  • adding/removing columns
  • changing the level of detail in the columns

Exercise 1: add a field from the related requirement issue

...

We can copy the column "Summary" and adapt it.

  1. Copy the "Summary" column 
  2. Insert the copied content in a column next to the “Component”
    1. Change the header of the column to be “Description”
    2. Change the cell content from ${Summary} to ${Description} (see the sketch after this list)
    3. Save the template and upload it
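
For reference, the adapted column could end up looking something like this (a minimal sketch; the exact cell positions depend on the template version you are using):

Code Block
title: sketch of the adapted column
Header cell:  Description
Data cell:    ${Description}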

As this report is column-based, if some columns are not relevant to you, you should be able to delete them. Before doing so, make sure that the cells of those columns do not define temporary variables that are used in subsequent columns.
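
As a purely hypothetical illustration of such a dependency (reusing the ${set(...)} and #{if (...)} constructs shown in Exercise 2 below; the column names and the myTypes variable are made up): if a cell in one column defines a variable that a cell in a later column reads, deleting the first column will silently break the second.

Code Block
title: hypothetical cell-level dependency between two columns
Column X data cell:  ${set(myTypes, 'manual,generic')}
Column Y data cell:  #{if (%{',${myTypes},'.indexOf(',${Links[j].Test Type},'.toLowerCase()) >= 0})} ...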

Exercise 1: add a custom issue type that belongs to Requirements in your configuration

At the top of the template, locate this line

    #{if (%{'${IssueTypeName}'.equals('Epic') || '${IssueTypeName}'.equals('Story')})}

Add, remove, or rename the types in the brackets after "equals", for example:

#{if (%{'${IssueTypeName}'.equals('Story') || '${IssueTypeName}'.equals('Request') || '${IssueTypeName}'.equals('FeatureIdea') })}


Exercise 2: track custom test types

On the "TestingEffortOverview" tab, columns O-R provide the breakdown by 4 common test types. You can add more columns or modify the code in one of the existing ones, depending on which type you are interested.

For example, if you have a custom "Robot" type, you can add it (in lowercase, so it matches the comparison below) to the first line of the "Non-exploratory Generic" column; the conditional line below will then automatically account for the new entry, and no further changes are needed.

${set(genericTestTypes, 'generic,robot')}
...
#{if (%{',${genericTestTypes},'.indexOf(',${Links[j].Test Type},'.toLowerCase()) >= 0})}

Keep in mind that if you add columns, you will also need to update the formula in the "Total Number of Linked Tests" column.
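
As a purely illustrative example (the actual cell references depend on the template version; the SUM formula below is an assumption, not the template's real content): if the total were a plain Excel SUM over the four type columns O-R, adding a fifth type column in S would mean extending that range.

Code Block
title: hypothetical formula update after adding a type column
Before:  =SUM(O2:R2)
After:   =SUM(O2:S2)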

...

Let's say you want to see not only the Test Plan Key, but also the Test Plan Summary, and make it a hyperlink while you are at it.

  1. Navigate to the "Test Plan" column 
  2. Edit the statement from ${Test Plan} to ${link:title=${Test Plan},href=${BaseURL}/browse/${Test Plan}}

You can apply similar changes to the Defects column by editing the following statements:

  • ${TestRuns[n].ExecutionDefects[d].Key}  (line 6)
  • ${TestRuns[n].Iterations[it].TestSteps[r].Defects[dc].Key} (line 22)

with syntax like the following (the example is for line 6):
@{title=${TestRuns[n].ExecutionDefects[d].Key}|href=${BaseURL}/browse/${TestRuns[n].ExecutionDefects[d].Key}} | ${TestRuns[n].ExecutionDefects[d].Summary} | ${TestRuns[n].ExecutionDefects[d].Priority}
Keep in mind that only 1 hyperlink will be active per Excel cell (the one associated with the first line item). Also, the ${Test Plan} value is treated as a single list, so if you have multiple test plans linked to the same execution and you want to separate the URLs, you will need to further customize the code from the example above. On a related note, as you may have noticed, Test Plan and Test Environment are displayed slightly differently from Defects by default - comma separation and square brackets (the DocGen version has the brackets for empty Test Plan cells but not for non-empty Test Environment cells).


You can further fine-tune the content and formatting via JavaScript; you can find more useful snippets in this tutorial for Xporter and DocGen.

...