Slack is a popular communication platform adopted by many development and non-development teams.
Teams use it daily to collaborate, make calls, share information, ask for help, and get notified of certain events, increasing visibility into the status of the projects they're working on.
In this article, we'll see an everyday use case: sharing test results with the team in their collaboration tool (in this case, Slack). However, many more use cases can be implemented with Xray and Jira's built-in automation capabilities.
To integrate with Slack, we can set up an incoming webhook in a channel and send notifications (i.e., messages) to it.
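For reference, an incoming webhook accepts an HTTP POST with a JSON body (Content-Type: application/json) sent to the unique URL Slack generates for the channel. A minimal payload looks like the sketch below (the text is purely illustrative; the richer Block Kit payloads shown later in this article are sent the same way):

```json
{
  "text": "Test results were reported for project Calculator."
}
```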
This common use case is about sharing test results, usually from test automation, to a Slack channel so the team is aware of testing progress and can analyze the results if needed.
At a high level, we will:

1. Trigger an automation rule when a Test Execution has results reported for all of its tests.
2. Compare the number of passed tests against the total number of tests on the Test Execution.
3. Send a message to a Slack channel summarizing the results, with different content depending on whether all tests passed.
Note: Xray provides an automatic mechanism to transition Test Execution issues once all of their tests have a final status (i.e., once all tests have reported results). This is useful when uploading test automation results from a CI/CD pipeline. Please review the available configuration options to enable this under the Miscellaneous tab in Xray's global settings.
We can compare the number of tests that passed against the total number of tests on the Test Execution to identify whether all test results were successful. We'll use information from the "Test Execution Status" custom field associated with the Test Execution issue; if you want to see what's within this field, you can use the "Add value to the audit log" action:

- Passed tests: {{Test Execution Status.statuses.get(0).statusCount}}
- Total tests: {{Test Execution Status.count}}
- Failed tests: {{Test Execution Status.statuses.get(1).statusCount}}
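As a concrete illustration (the numbers are made up): for a Test Execution with 10 tests of which 8 passed and 1 failed, the condition compares 8 against 10 and, since they differ, takes the "not all tests passed" branch. The smart-value math expression used for "Other tests" in the payloads below would then evaluate as:

```
{{#=}}{{Test Execution Status.count}} - {{Test Execution Status.statuses.get(0).statusCount}} - {{Test Execution Status.statuses.get(1).statusCount}}{{/}}
```

which renders as {{#=}}10 - 8 - 1{{/}}, i.e., 1 test with a status other than PASS or FAIL.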
If not all tests passed, we can send the following content.
{ "blocks": [ { "type": "context", "elements": [ { "type": "mrkdwn", "text": "Test results were reported for project *{{project.name}}*" } ] }, { "type": "context", "elements": [ { "type": "mrkdwn", "text": "{{issue.summary}}" } ] }, { "type": "section", "text": { "type": "mrkdwn", "text": "Some tests reported by {{issue.reporter.displayName}} failed. Please check the results." }, "accessory": { "type": "image", "image_url": "https://docs.getxray.app/s/e1tqew/8402/f0863dd17de361916f7914addff17e0432a0be98/_/images/icons/emoticons/error.png", "alt_text": "test results with failures" } }, { "type": "divider" }, { "type": "section", "text": { "text": "Test Execution details, including its Test Runs", "type": "mrkdwn" }, "fields": [ { "type": "mrkdwn", "text": "*Test Execution*\n<{{issue.url}}|{{issue.key}}>" }, { "type": "mrkdwn", "text": "*Version*\n{{issue.fixVersions.name}}" }, { "type": "mrkdwn", "text": "*Revision*\n{{issue.Revision}}" }, { "type": "mrkdwn", "text": "*Test Environment(s)*\n{{issue.Test Environments}}" }, { "type": "mrkdwn", "text": "*Test Plan*\n<{{baseUrl}}/browse/{{issue.Test Plan}}|{{issue.Test Plan}}>" }, { "type": "mrkdwn", "text": "*Total tests*\n{{Test Execution Status.count}}" }, { "type": "mrkdwn", "text": "*Passed tests*\n{{Test Execution Status.statuses.get(0).statusCount}}" }, { "type": "mrkdwn", "text": "*Failed tests*\n{{Test Execution Status.statuses.get(1).statusCount}}" }, { "type": "mrkdwn", "text": "*Other tests*\n{{#=}}{{Test Execution Status.count}} - {{Test Execution Status.statuses.get(0).statusCount}} - {{Test Execution Status.statuses.get(1).statusCount}}{{/}}" } ] } ] } |
If all tests passed, we can send the following content within the "Else" section of the If/Else condition block.
{ "blocks": [ { "type": "context", "elements": [ { "type": "mrkdwn", "text": "Test results were reported for project *{{project.name}}*" } ] }, { "type": "context", "elements": [ { "type": "mrkdwn", "text": "{{issue.summary}}" } ] }, { "type": "section", "text": { "type": "mrkdwn", "text": "All tests reported by {{issue.reporter.displayName}} passed. Nothing to worry about." }, "accessory": { "type": "image", "image_url": "https://docs.getxray.app/s/e1tqew/8402/f0863dd17de361916f7914addff17e0432a0be98/_/images/icons/emoticons/check.png", "alt_text": "test results without failures" } }, { "type": "divider" }, { "type": "section", "text": { "text": "Test Execution details, including its Test Runs", "type": "mrkdwn" }, "fields": [ { "type": "mrkdwn", "text": "*Test Execution*\n<{{issue.url}}|{{issue.key}}>" }, { "type": "mrkdwn", "text": "*Version*\n{{issue.fixVersions.name}}" }, { "type": "mrkdwn", "text": "*Revision*\n{{issue.Revision}}" }, { "type": "mrkdwn", "text": "*Test Environment(s)*\n{{issue.Test Environments}}" }, { "type": "mrkdwn", "text": "*Test Plan*\n<{{baseUrl}}/browse/{{issue.Test Plan}}|{{issue.Test Plan}}>" }, { "type": "mrkdwn", "text": "*Total tests*\n{{Test Execution Status.count}}" }, { "type": "mrkdwn", "text": "*Passed tests*\n{{Test Execution Status.statuses.get(0).statusCount}}" }, { "type": "mrkdwn", "text": "*Failed tests*\n{{Test Execution Status.statuses.get(1).statusCount}}" }, { "type": "mrkdwn", "text": "*Other tests*\n{{#=}}{{Test Execution Status.count}} - {{Test Execution Status.statuses.get(0).statusCount}} - {{Test Execution Status.statuses.get(1).statusCount}}{{/}}" } ] } ] } |
| Information to obtain | Jira Automation smart value | Notes |
|---|---|---|
| Total tests | {{Test Execution Status.count}} | |
| Number of passed tests (i.e., reported as "PASS") | {{Test Execution Status.statuses.get(0).statusCount}} | Assumes "PASS" is the first status in the list; a more accurate expression would look the status up by name rather than by position. |
| Number of failed tests (i.e., reported as "FAIL") | {{Test Execution Status.statuses.get(1).statusCount}} | Assumes "FAIL" is the second status in the list; a more accurate expression would look the status up by name rather than by position. |
| Test Environments | {{issue.Test Environments}} | |
| Linked Test Plan(s) | {{issue.Test Plan}} | |
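To make the substitutions concrete, here is how a few of the fields in the payloads above might render for a hypothetical Test Execution DEMO-101 with 10 tests (8 passed, 1 failed, 1 inconclusive); the issue key, URL, and counts are illustrative only:

```json
[
  { "type": "mrkdwn", "text": "*Test Execution*\n<https://yourcompany.example.com/browse/DEMO-101|DEMO-101>" },
  { "type": "mrkdwn", "text": "*Total tests*\n10" },
  { "type": "mrkdwn", "text": "*Passed tests*\n8" },
  { "type": "mrkdwn", "text": "*Failed tests*\n1" },
  { "type": "mrkdwn", "text": "*Other tests*\n1" }
]
```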