This report enables you to extract the details of a Test Execution, such as its Tests, Defects, Requirements and iteration details, so that you can generate a document report focused on what matters most to your team, or share it with someone who doesn't have access to Jira.

Possible usage scenarios:

  • see all the requirements covered by the Tests in the Test Execution
  • see all the defects linked to the Tests in this Test Execution
  • see an overall status summary of the Test Execution
  • check specific details of a Test Run (such as evidence, attachments, assignee, etc.)

Output Example(s)

The following screenshots show an example of the sections you should expect in this report.

How to use

This report can be generated from the Issue details screen.

Learn more

General information about all the existing places available to export from, and how to do so, is available in the Exporting page.

Source data

This report is applicable to:

  • 1 Test Execution issue

Output format

The standard output format is .DOCX, so you can open it in Microsoft Word, Google Docs, and other tools compatible with this format. 

Report assumptions

The template makes a set of assumptions; make sure your Jira/Xray environment complies with them:

  1. Issue types having these names:
    1. "Test", "Test Set", "Test Plan", "Test Execution"

If any of these assumptions is not met, you need to update the template accordingly.

Usage examples

Export all details obtained in the context of a given Test Execution

  1. open the Test Execution issue and export it using this template

Understanding the report

The report shows detailed information about the Test Execution provided.


The report is composed of several sections. Two major sections are available: Introduction and Test Runs details.

By default, all the details of the Test Run will be rendered; you can change this behavior in the template (more info ahead).

"Introduction" section

This section is divided into 6 sub-sections that give an overview of the Test Execution we have just exported:

  1. Document Overview
  2. Test Execution Details
  3. Requirements covered by the Tests in this Test Execution
  4. Overall Execution Status
  5. Defects
  6. Test Runs

Each of these sections is explained below.

Document Overview

Brief description of what you will find in this report and how it was generated.

Test Execution Details

In this section we extract the Test Execution key into the header and show the Begin and End Date (formatted as demonstrated below), along with the Summary and the Description present in the Test Execution.

Field | Description | Sample Code
Description | Description of the Test Execution (as this field accepts wiki markup, we use the "wiki:" prefix in the code so it is interpreted by the document) | ${wiki:Description}
Begin Date | Timestamp of the Begin Date field present in the Test Execution (with proper format) | ${dateformat("dd-MM-yyyy HH:mm:ss"):Begin Date}
End Date | Timestamp of the End Date field present in the Test Execution (with proper format) | ${dateformat("dd-MM-yyyy HH:mm:ss"):End Date}
Revision | Revision of the Test Execution |
Test Environments | Test Environments associated with this Test Execution | ${TestEnvironments}
Test Plan | Test Plan associated with this Test Execution (if any) | #{for testPlans}${TestPlans[n].Key} - ${TestPlans[n].Summary} #{end}

The output will contain the information above; notice that, as the Description field supports wiki markup, we use the "wiki:" keyword so that it is correctly interpreted.
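As a quick sketch, the fields above could be combined into a header block like this (the layout is illustrative; the placeholders are the ones documented in the table above):

```
Begin Date: ${dateformat("dd-MM-yyyy HH:mm:ss"):Begin Date}
End Date:   ${dateformat("dd-MM-yyyy HH:mm:ss"):End Date}
Test Plan:  #{for testPlans}${TestPlans[n].Key} - ${TestPlans[n].Summary} #{end}

${wiki:Description}
```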

Requirements covered by the Tests in this Test Execution

In this section we have an overview of all the requirements covered by Tests in this Test Execution; we extract the Key, Summary and Workflow Status, removing all repeated entries.

In the Server version we have a query that fetches the Requirements linked with a Test: testRequirements('${Tests[n].Key}').

Field | Description | Sample Code
Key | Key of the Requirement (in this case we add it as a link) | @{title=${Tests[n].Links[k].Key}|href=${BaseURL}/browse/${Tests[n].Links[k].Key}}
Summary | Summary of the Requirement | ${Tests[n].Links[k].Summary}
Workflow Status | Workflow Status of the Requirement |


The requirements are listed in a table with the information explained above.

Overall Execution Status

As the name suggests, this is an overview of the executions of the Tests in this Test Execution: how many Test Runs it contains and what the statuses of their executions are.

To obtain this information we are using:

Field | Description | Sample Code
TestsRunsCount | The total number of Test Runs in this Test Execution | ${TestsCount}

To extract the count of the overall execution status for each <status> (TO DO, EXECUTING, PASSED, FAILED, ABORTED):

${Overall Execution Status.<status>.Count}

To extract the percentage of the overall execution status for each <status> (TO DO, EXECUTING, PASSED, FAILED, ABORTED):

${Overall Execution Status.<status>.Percentage}
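For example, instantiating <status> with each of the documented statuses gives a block like this (the layout is illustrative; the placeholders follow the pattern documented above):

```
TO DO:     ${Overall Execution Status.TO DO.Count} (${Overall Execution Status.TO DO.Percentage})
EXECUTING: ${Overall Execution Status.EXECUTING.Count} (${Overall Execution Status.EXECUTING.Percentage})
PASSED:    ${Overall Execution Status.PASSED.Count} (${Overall Execution Status.PASSED.Percentage})
FAILED:    ${Overall Execution Status.FAILED.Count} (${Overall Execution Status.FAILED.Percentage})
ABORTED:   ${Overall Execution Status.ABORTED.Count} (${Overall Execution Status.ABORTED.Percentage})
```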

This will produce the following output:


Defects

In this section we list all the defects associated with this Test Execution; we consider defects associated with Test Runs, defects in Test Steps, and defects found during iterations. We do not print duplicates.

Field | Description | Sample Code
Key | Key of the Defect |
  • TestRun: @{title=${TestRuns[n].ExecutionDefects[d].Key}|href=${BaseURL}/browse/${TestRuns[n].ExecutionDefects[d].Key}}
  • TestSteps: @{title=${TestRuns[n].TestSteps[j].Defects[m].Key}|href=${BaseURL}/browse/${TestRuns[n].TestSteps[j].Defects[m].Key}}
  • Iteration TestSteps: @{title=${TestRuns[n].Iterations[it].TestSteps[r].Defects[dc].Key}|href=${BaseURL}/browse/${TestRuns[n].Iterations[it].TestSteps[r].Defects[dc].Key}}
Summary | Summary of the Defect |
  • TestRun: ${TestRuns[n].ExecutionDefects[d].Summary}
  • TestSteps: ${TestRuns[n].TestSteps[j].Defects[m].Summary}
  • Iteration TestSteps: ${TestRuns[n].Iterations[it].TestSteps[r].Defects[dc].Summary}
Priority | Priority of the Defect |
  • TestRun: ${TestRuns[n].ExecutionDefects[d].Priority}
  • TestSteps: ${TestRuns[n].TestSteps[j].Defects[m].Priority}
  • Iteration TestSteps: ${TestRuns[n].Iterations[it].TestSteps[r].Defects[dc].Priority}

The Defects appear in the document as a table with information regarding the defects found during the executions of the Test Execution.

Test Runs

In this section we have a table with information regarding the Test Runs in this Test Execution. You can find the following information about each Test Run:

Field | Description | Sample Code
Key | Key of the Test Run | ${TestRuns[n].Key}
Summary | The Summary of the Test Run |
Test Type | The type of Test that was executed in this Test Run | ${TestRuns[n].TestType}
#Req | Number of Requirements associated with this Test Run |
  ${set(count, 0)}
  #{for t=TestsCount}
  #{if (%{'${Tests[t].Key}'.equals('${TestRuns[n].Key}')})}
  #{for j=Tests[t].LinksCount}
  #{if (%{'${Tests[t].Links[j].LinkType}'.equals('tests')})}
  ${set(count, %{${count} + 1})}
  #{end}
  #{end}
  #{end}
  #{end}
  ${count}
#Def | Calculation of the number of Defects associated with this Test Run | %{var total=${TestRuns[n].ExecutionDefectsCount} + ${TestRuns[n].TestStepsDefectsCount}; var total2=''+total; var totalParts=total2.split('.'); totalParts[0];}
Test Sets | Test Set Key (if this Test Run was part of a Test Set) |
Assignee | Full name of the Assignee | ${fullname:TestRuns[n].AssigneeId}
Status | Status of the Test Run | ${TestRuns[n].Execution Status}

This information is presented in a table as we can see below:
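As a sketch, one row of that table combines the documented placeholders (the column layout is illustrative):

```
${TestRuns[n].Key} | ${TestRuns[n].TestType} | ${fullname:TestRuns[n].AssigneeId} | ${TestRuns[n].Execution Status}
```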

Some particularities of the code used to render the Test Runs section are worth highlighting:

  • Usage of ${fullname:Tests[n].AssigneeId}: this allows us to fetch the full name of the assignee instead of the key associated with it.

Test Run Details

This section gathers all the information related to each Test Run of each Test in the Test Execution, with all the possible details.

It is composed of several sub-sections that will be filled with information if it is available, or with a message stating that no information is available.

Test Executions Summary

This section has a table with information regarding each Test Run in this Test Execution (these sections are repeated for each Test Run). The information is presented in a table with the following fields:

Field | Description | Sample Code
Execution status | Execution Status of the Test Run | ${TestRuns[n].Execution Status}
Assignee | Full Name of the Assignee of the Test Run | ${fullname:TestRuns[n].AssigneeId}
Executed By | Full Name of the entity that executed this Test Run | ${fullname:TestRuns[n].Executed By}
Started On | Timestamp of the Started Date from the Test Run | ${dateformat('dd-MM-yyyy HH:mm:ss'):TestRuns[n].Started On}
Finished On | Timestamp of the Finished Date from the Test Run | ${dateformat('dd-MM-yyyy HH:mm:ss'):TestRuns[n].Finished On}
Versions | Fix Version field associated with the Test Run |
Revision | Revision assigned to the Test Run |
All of these fields have code to handle empty values. The resulting table looks like the one below.

Execution Defects

If any Defects were found and associated globally with a Test Run, they will appear here in the form of a table with the following fields:

Key | Description | Sample Code
Key | Jira Key of the Defect in the form of a link | @{title=${TestRuns[n].ExecutionDefects[d].Key}|href=${BaseURL}/browse/${TestRuns[n].ExecutionDefects[d].Key}}
Summary | Summary of the Defect | ${wiki:TestRuns[n].ExecutionDefects[d].Summary}
Priority | Priority associated with the Defect | ${TestRuns[n].ExecutionDefects[d].Priority}

The table will be similar to the one below.

Execution Evidences

If any Evidence was attached to the Test Run, we show it in a table with the File Name.

To obtain that information we have used the following code:

Key | Description | Sample Code
File Name | The File Name of the Evidence attached to the Test Run | @{title=${TestRuns[n].ExecutionEvidences[d].Name}|href=${TestRuns[n].ExecutionEvidences[d].FileURL}}
Author | Author of the Evidence | ${TestRuns[n].ExecutionEvidences[d].Author}
File Size | File Size of the Evidence in bytes | ${TestRuns[n].ExecutionEvidences[d].Size}
Evidence | The Evidence attached to the Execution | !{${TestRuns[n].ExecutionEvidences[d].Evidence|maxwidth=100}}

If the Evidence is an image, the table will look like the following:


Comment

The comment associated with the Test Run (${wiki:TestRuns[n].Comment}).

Test Description

The description of the TestRun (${wiki:TestRuns[n].Description}).

Test Issue Attachments

This section only appears if you have any attachments associated to the Test Run.

Key | Description | Sample Code
File Name | File Name of the Attachment |
Author | The Author of the Attachment |
File Size | File Size of the Attachment in bytes |


This appears in the document in a table form:


Precondition

This section only appears if you have a Precondition associated with the Test Run.

Key | Description | Sample Code
Key | Key of the Precondition |
Definition | Definition field present in the Precondition | ${wiki:TestRuns[n].PreConditions[l].PreCondition.Definition}

A sub-section will appear with the precondition definitions.


Parameters

This section lists the existing parameters of the Test Run (we iterate through the Parameters of the Test Run with: #{for m=TestRuns[n].ParametersCount}).

Key | Description | Sample Code
Name | Key of the parameter |
Value | Value of the parameter | ${TestRuns[n].Parameters[m].Value}

It will list the Key and the Value of each parameter in a table.


Iterations

This section uses a sentence to show how many iterations the Test Run has; we will go into more detail in the next sections.

Key | Description | Sample Code
Iterations | The iteration count of the Test Run | ${TestRuns[n].IterationsCount}

A sentence is added to the document with this information.
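For instance, the sentence could be built like this (the wording is illustrative, not the template's exact text; the placeholder is the documented one):

```
This Test Run has ${TestRuns[n].IterationsCount} iteration(s).
```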

Iteration Overall Execution Status

To obtain the overall execution status of the iteration we use two variables:

Key | Description | Sample Code
List of Statuses | Shows the list of statuses | ${TestRuns[n].IterationsOverallExecutionStatus}
Overall Execution Status per Status | Count of iterations in a given status | ${TestRuns[n].IterationsOverallExecutionStatus.TO DO}





The above code will produce the table below.
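Assuming the same pattern applies to the other statuses (an extrapolation from the TO DO example above; confirm the status names in your environment), a per-status breakdown could look like:

```
TO DO:     ${TestRuns[n].IterationsOverallExecutionStatus.TO DO}
EXECUTING: ${TestRuns[n].IterationsOverallExecutionStatus.EXECUTING}
PASSED:    ${TestRuns[n].IterationsOverallExecutionStatus.PASSED}
FAILED:    ${TestRuns[n].IterationsOverallExecutionStatus.FAILED}
ABORTED:   ${TestRuns[n].IterationsOverallExecutionStatus.ABORTED}
```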

Test Run details

In this section we show, for each iteration of the Test Run, its Name, Status and Parameters.

We extract that information using the following fields:

Key | Description | Sample Code
Iteration Name | Name of the iteration | ${TestRuns[n].Iterations[m].Name}
Status | Status of the iteration |
Total Parameters | Total number of parameters |
Parameters | Lists all parameters in the form of Key=Value | ${TestRuns[n].Iterations[m].Parameters}

This section will look like the following:

Iteration precondition definition

If a precondition is present, we use the following fields to extract that information:

Key | Description | Sample Code
Key | Iteration precondition key | ${TestRuns[n].Iterations[m].PreConditions[l].Key}
Definition | Iteration precondition definition |


This will produce an entry like the one below:

Iteration parameters details

For the given Iteration we list the parameters used; that information is extracted with the following fields:

Key | Description | Sample Code
Name | Parameter Key | ${TestRuns[n].Iterations[m].Parameters[l].Key}
Value | Parameter Value | ${TestRuns[n].Iterations[m].Parameters[l].Value}

It generates a table of the following form:

Iteration Test Step Details

In this section we list the details of an iteration: each step present, with its details. The code we use for that purpose is presented in the table below.

Key | Description | Sample Code
Step | The Step Number | ${TestRuns[n].Iterations[m].TestSteps[r].StepNumber}
Action | Action defined in the Test Step | ${wiki:TestRuns[n].Iterations[m].TestSteps[r].Action}
Data | Data defined in the Test Step | ${wiki:TestRuns[n].Iterations[m].TestSteps[r].Data}
Expected Result | Expected Result defined in the Test Step | ${wiki:TestRuns[n].Iterations[m].TestSteps[r].ExpectedResult}
Attachments | Attachments present in each Test Step (showing the FileURL and a screenshot in case the Attachment is an image) |
Defects | Defects associated with this Iteration |
Evidence | FileURL and screenshot (if it is an image) of the Evidence | @{title=${TestRuns[n].Iterations[m].TestSteps[r].Evidences[e].Name}|href=${TestRuns[n].Iterations[m].TestSteps[r].Evidences[e].FileURL}} by ${TestRuns[n].Iterations[m].TestSteps[r].Evidences[e].Author} - ${TestRuns[n].Iterations[m].TestSteps[r].Evidences[e].Size}
Status | Test Step Status |


The above information is gathered in a table like the one below:

Test Details

This section shows the Test details. For that, we consider the different possible Test types in Xray: Generic, Manual and Cucumber. For each type we fetch different information.

It may seem similar to the Iteration Test Step Details section, but here we show the Test details themselves (not instantiated in each Iteration as in the previous section).

Type | Key | Description | Sample Code
Generic | Test Type | Test Type field | ${TestRuns[n].Test Type}
Generic | Specification | Definition of the Generic test | ${TestRuns[n].Generic Test Definition}
Cucumber | Test Type | Test Type field | ${TestRuns[n].Test Type}
Cucumber | Gherkin Specification | Gherkin specification of the Test | ${TestRuns[n].Cucumber Scenario}
Manual | Step | Step Number | ${TestRuns[a].TestSteps[r].StepNumber}
Manual | Action | Action of the Test Step | ${TestRuns[a].TestSteps[r].Action}
Manual | Data | Data of the Test Step | ${TestRuns[a].TestSteps[r].Data}
Manual | Expected Result | Expected Result of the Test Step | ${TestRuns[a].TestSteps[r].ExpectedResult}
Manual | Attachments | Attachment of the Test Step |
Manual | Comment | Comment of the Test Step | ${wiki:TestRuns[a].TestSteps[r].Comment}
Manual | Defects | Defects associated with the Test Step |
Manual | Evidence | Evidence associated with the Test Step |
Manual | Status | Status of the Test Step |


Requirements linked with this test

For each Test we list the linked Requirements:

Key | Description | Sample Code
Requirement Key | Key of the Requirement | ${Tests[t].Links[j].Key}
Requirement Summary | Summary of the Requirement | ${Tests[t].Links[j].Summary}
Workflow Status | Workflow status of the Requirement |


This section presents a table with that information, like the one below:

Appendix A: Approval

This section is added for the cases where you need to have a signature validating the document.

Customizing the report

Sections that can be hidden or shown

The report has some variables/flags that can be used to show or hide some sections whose logic is already implemented in the template.

These variables are defined at the top of each sheet in the report template; the variables are scoped just to the current sheet.

On the template, use one of these values for flag-type variables:

  • 0: to hide a section
  • 1: to show a section

The format for other types of variables is detailed ahead.

Example of setting a variable to, in this case, render information on the "Test Executions" section:

${set(showTestRunDetails, 1)}

The flags available in this template are:

  • showTestRunDetails — render the Test Run details section; format: 0 or 1. Example: ${set(showTestRunDetails, 0)}
  • showTestRunEvidences — render the Evidences section; format: 0 or 1 (make sure showTestRunDetails is set to 1). Example: ${set(showTestRunEvidences, 0)}
  • showTestRunAttachments — render the Attachments section; format: 0 or 1 (make sure showTestRunDetails is set to 1). Example: ${set(showTestRunAttachments, 0)}
  • showTestRunIterations — render the Iterations section; format: 0 or 1 (make sure showTestRunDetails is set to 1). Example: ${set(showTestRunIterations, 0)}

Adding or removing information to/from the report

As this report is a document with different sections, if some sections are not relevant to you, you can simply delete them. Make sure that no temporary variables created in that section are used in subsequent sections, and that all conditional blocks remain properly closed.

To add additional information, usually we're thinking of adding:

  • fields of the Test Execution itself
  • fields of the Tests associated with the Test Execution
  • fields of the Test Runs associated with the Test Execution

Eventually, also:

  • fields of the Test Plan (if associated to any)
  • fields of the covered issue(s) associated with the Test that is associated with the Test Execution

The latter may be harder to implement, so we won't consider them here.

Exercise: add a field from the related Test issue 

Let's say we have a "Severity" field on the Defect that is connected to the Test Execution, and we want to show it in the report.

We can copy the column "Comment" from the "Tests Details" section and adapt it.

  1. insert new column in the table
  2. on the "Tests Details" section,
    1. copy "Comment" (i.e., insert a column  next to it and copy the values from the existing "Comment" column)
    2. change
      1. ${wiki:TestRuns[n].TestSteps[r].Comment} to ${TestRuns[n].TestSteps[r].Severity}
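The change in step 2.2 amounts to swapping the field in the sample code (the "Severity" field name comes from this exercise's assumption, not from the stock template):

```
Before: ${wiki:TestRuns[n].TestSteps[r].Comment}
After:  ${TestRuns[n].TestSteps[r].Severity}
```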


Performance

Performance can be impacted by the information that is rendered and by how that information is collected/processed.

The number of Test Runs and Tests can be considerably high, depending on the scenario, especially with CI/CD. As this report sums up quite a lot of information, please use it wisely.

Data-driven tests may also add an overhead, as iterations need to be individually processed, for collecting all the reported/linked defects for example.

Some tips

  • Use the variables/flags to adjust which sections will be processed/shown in the report; more info in "Customizing the report"
  • limit the number of input issues; in Xporter there's a global setting for this purpose

Known limitations

  • Test Execution comments are not formatted
  • Gherkin Scenario Outlines are not considered as data-driven (i.e., only one Test Run will appear)
