Page History
...
- see all the Test Runs for a given Test, Test Execution, Test Plan Execution, Test Plan, Test Set, or Story
- see the failed Test Runs and understand what happened and its impact at a high level
- see the defects linked to failed Test Runs
- see the requirements linked to failed Test Runs
- see the failed Test Runs that have no defects
- track the Test Runs that are taking the most time
...
This report is applicable to:
- 1 or more Test Plan issues
- 1 or more Test Set issues
- 1 or more Test Plan Execution issues
- 1 or more Test Execution issues
- 1 or more Story issues
- a combination of the previous
Please note: to avoid redundancy in the output, we recommend not mixing source types, especially if the entities are also linked with each other (e.g. passing the issue keys for Stories and Test Executions linked to them to the template at the same time)
Output format
The standard output format is .XLSX, so you can open it in Microsoft Excel, Google Sheets, and other tools compatible with this format. From those tools you may be able to generate a .CSV file.
...
- Issue types having these names:
- "Test", "Test Set", "Test Plan", "Test Execution", "Story"
If any of these assumptions is not met, you need to update the template accordingly.
...
Column | Notes |
---|---|
Test Key | issue key of the Test associated to this Test Run |
Test Summary | Summary of the Test associated to this Test Run |
Test Type | test type of the Test associated to this Test Run (e.g., Manual, Cucumber, Generic) |
Test Label(s) | labels of the Test associated to the Test Run, delimited by comma (e.g., "UI,selenium") |
Test Component(s) | Components field of the Test associated to the Test Run, delimited by comma (e.g., "core,backend") |
Test Priority | Priority field of the Test associated to the Test Run (e.g., "Major") |
Total Parameters | number of test parameters, for parameterized tests |
Parameters | list of test parameters, for parameterized tests (e.g., "key: <param_name> value: <param_value>") |
Requirements | list of covered items (e.g., requirements, stories) by the Test associated to the Test Run, delimited by comma (e.g., "CALC-1") |
Test Plan Key | issue key of the Test Plan(s) associated to the Test Execution that, in turn, is associated to the Test Run |
Test Execution Key | issue key of the Test Execution associated to the Test Run |
FixVersion(s) | FixVersion field of the Test Execution associated to the Test Run |
Revision | Revision field of the Test Execution associated to the Test Run |
Execution Status | the status/result reported for this Test Run |
TE assignee | the display name of the assignee of the Test Execution associated to the Test Run |
TE planned begin date | planned date for starting the Test Execution, in the format "dd/MM/yyyy" |
TE planned end date | planned date for finishing the Test Execution, in the format "dd/MM/yyyy" |
Assignee | (full) name of the user assigned to the Test Run |
Executed by | (full) name of the user who has executed the Test Run |
Test Environment(s) | Test Environment(s) of the Test Execution associated to the Test Run, delimited by comma (e.g., "firefox", "edge,windows") |
Started on | timestamp of when the Test Run started, in the format "dd/MM/yyyy hh:mm:ss" |
Finished on | timestamp of when the Test Run finished, in the format "dd/MM/yyyy hh:mm:ss" |
Elapsed | elapsed time, in the format "hh:mm:ss", for the Test Run; for data-driven tests, this corresponds to the overall Test Run elapsed time |
Elapsed (sec) | elapsed time, in seconds, for the Test Run; for data-driven tests, this corresponds to the overall Test Run elapsed time |
Comment | Test Run comment |
Total Defects | number of unique defects linked/reported to the Test Run, either globally or at step level, including from iterations in data-driven tests |
Defects | list of issue keys of unique defects linked/reported to the Test Run, either globally or at step level, including from iterations in data-driven tests, delimited by comma |
...
- insert a column
- on the "Test Runs" sheet, copy "Test Summary" (i.e., insert a column next to it and copy the values from the existing "Test Summary" column)
- change
- ${TestRuns[n].Summary} to ${TestRuns[n].Severity}
${TestExecutions[j].TestRuns[n].Summary} to ${TestExecutions[j].TestRuns[n].Severity}
- follow a similar approach for the "Test Runs including iterations" sheet
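For reference, the resulting cell expressions would look like the following (a sketch; "Severity" is only an example target field and must exist, and be mapped, in your Jira/Xray configuration for the expression to resolve):

```
${TestRuns[n].Severity}
${TestExecutions[j].TestRuns[n].Severity}
```

The first form applies to rows that iterate Test Runs directly; the second applies to rows reached through a Test Execution, mirroring the original "Test Summary" column.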
Exercise: add total Evidence count
Let's say we want to know the total count of evidence items for each run and iteration. You can add the column next to the "Comment" one, then the code logic is similar to the defect count. Please see the snippet below (this exact syntax is for the "Test Runs" tab):
```
${set(totalStepEvidenceCount, 0)}
${set(totalEvidenceCount, 0)}
#{if (%{!${TestExecutions[j].TestRuns[n].IsDataDriven}})}
#{for m=TestExecutions[j].TestRuns[n].TestStepsCount}
#{for l=TestExecutions[j].TestRuns[n].TestSteps[m].EvidencesCount}
${set(totalStepEvidenceCount,%{${totalStepEvidenceCount} + 1 })}
#{end}
#{end}
${set(totalEvidenceCount,%{${totalStepEvidenceCount} + ${TestExecutions[j].TestRuns[n].ExecutionEvidencesCount} })}
#{end}
#{if (%{${TestExecutions[j].TestRuns[n].IsDataDriven}})}
#{for m=TestExecutions[j].TestRuns[n].IterationsCount}
#{for k=TestExecutions[j].TestRuns[n].Iterations[m].TestStepsCount}
#{for l=TestExecutions[j].TestRuns[n].Iterations[m].TestSteps[k].EvidencesCount}
${set(totalStepEvidenceCount,%{${totalStepEvidenceCount} + 1 })}
#{end}
#{end}
${set(totalEvidenceCount,%{${totalStepEvidenceCount} + ${TestExecutions[j].TestRuns[n].ExecutionEvidencesCount} })}
#{end}
#{end}
${totalEvidenceCount}
```
Keep in mind that the core part of the path to the target (i.e. "TestExecutions[j].TestRuns[n]." above) is dependent on the source issue type, so you will need to adjust it based on the row in the template. You can refer to other cells in the same row for the path syntax.
For the "Test Runs including Iterations" tab, you will only need the content of one of the two "if" blocks for each row, and, for the data-driven rows, you will need to drop the outer "for" loop over iterations (i.e., "#{for m=TestExecutions[j].TestRuns[n].IterationsCount}" in the second "if" block of the snippet above), since each row already corresponds to a single iteration.
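That adaptation can be sketched as follows for a data-driven row on the "Test Runs including Iterations" tab. Assumptions here: the iteration index "m" is provided by the row's context, matching the other cells in that row; "ExecutionEvidencesCount" is still added once per row (whether that is desired per iteration depends on your report); and the "TestExecutions[j].TestRuns[n]." prefix must be adjusted to your source issue type, as noted above.

```
${set(totalStepEvidenceCount, 0)}
${set(totalEvidenceCount, 0)}
#{for k=TestExecutions[j].TestRuns[n].Iterations[m].TestStepsCount}
#{for l=TestExecutions[j].TestRuns[n].Iterations[m].TestSteps[k].EvidencesCount}
${set(totalStepEvidenceCount,%{${totalStepEvidenceCount} + 1 })}
#{end}
#{end}
${set(totalEvidenceCount,%{${totalStepEvidenceCount} + ${TestExecutions[j].TestRuns[n].ExecutionEvidencesCount} })}
${totalEvidenceCount}
```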
Performance
Performance can be impacted by the amount of information rendered and by how that information is collected and processed.
...