...

Info

At a high level, we will:

  1. make an authentication request to Xray's API to get an authentication token to be used in the GraphQL query
  2. make a GraphQL query to obtain the Test Execution details, including recorded test results
  3. process the response, storing relevant information in variables, using the "Test Execution Status" custom field of the corresponding Test Execution issue
  4. build an HTTP request to send a notification with the test results summary by invoking a webhook on MS Teams
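The flow above can also be sketched outside Jira. The snippet below only builds the payload and headers of the first two requests; the endpoint URLs assume Xray cloud and are illustrative, and the credentials are placeholders:

```python
import json

# Illustrative endpoints (Xray cloud); adjust for your own deployment.
XRAY_AUTH_URL = "https://xray.cloud.getxray.app/api/v2/authenticate"
XRAY_GRAPHQL_URL = "https://xray.cloud.getxray.app/api/v2/graphql"

def build_auth_body(client_id: str, client_secret: str) -> str:
    # Body of the "Send web request" action that obtains the token.
    return json.dumps({"client_id": client_id, "client_secret": client_secret})

def build_graphql_headers(token: str) -> dict:
    # The token returned by the authentication request is sent as a bearer token.
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

body = build_auth_body("my-client-id", "my-client-secret")
headers = build_graphql_headers("abc123")
print(headers["Authorization"])  # Bearer abc123
```

Only the payload shapes matter here; the automation rule performs the same two calls with its own "Send web request" actions.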

...

  1. In Jira's project settings, under Automation, create the Jira automation rule
    1. define the trigger; we can have a manual trigger that provides an action from the issue screen, or a trigger based on the creation of a specific issue type. In this case, it probably makes the most sense to trigger on the transition to some status (e.g., "Done"). We can also restrict this to the issue types "Test Execution" and "Sub-Test Execution" using an IF condition.
    2. create an action using the "Send web request" template to make the authentication request and obtain a token to be used in the GraphQL queries; we need a client_id and a client_secret (please see how to create API keys).
    3. save the response in a variable (e.g., "token")

      Info
        Please note

        Xray provides an automatic mechanism to transition Test Execution issues once all of their tests have a final status (i.e., once all tests have reported results). This is useful whenever uploading test automation results from a CI/CD pipeline. Please read the available configuration options to enable this under the Miscellaneous tab in Xray global settings.

  2. make an IF THEN ELSE block to adjust the text message and image of the notification, to tailor it for successful and unsuccessful testing cases
    1. We can compare the number of tests that passed against the total number of tests on the Test Execution, to identify whether all test results were successful or not. We'll use information from the "Test Execution Status" custom field that is associated with the Test Execution issue; if you want to see what's within this field, you can use the "Add value to the audit log" action.

      Code Block
        {{Test Execution Status.statuses.get(1).name}} {{Test Execution Status.statuses.get(1).statusCount}}
        
        ...
        
        {{Test Execution Status.count}}
    2. Note: we could also do it by looking at the number of failed tests, using the expression {{Test Execution Status.statuses.get(0).statusCount}}
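To make the comparison concrete, here is a small sketch that mimics the rule's logic on sample data. The list layout (index 0 = FAILED, index 1 = PASSED) mirrors the smart-value expressions above; the sample numbers are made up:

```python
# Sample of what the "Test Execution Status" custom field could expose.
status_field = {
    "count": 10,  # {{Test Execution Status.count}}
    "statuses": [
        {"name": "FAILED", "statusCount": 1},  # .statuses.get(0)
        {"name": "PASSED", "statusCount": 7},  # .statuses.get(1)
    ],
}

# IF condition: did every test pass?
all_passed = status_field["statuses"][1]["statusCount"] == status_field["count"]

# "Other tests", as computed by the {{#=}} math expression below:
# total - failed - passed
other = (status_field["count"]
         - status_field["statuses"][0]["statusCount"]
         - status_field["statuses"][1]["statusCount"])

print(all_passed, other)  # False 2
```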
  3. send the notification by using a "Send web request" action to invoke the webhook on Teams (two requests: one for success and another if tests failed)
    1. If not all tests passed, we can send the following content.

      Code Block
        {
            "@type": "MessageCard",
            "@context": "http://schema.org/extensions",
            "themeColor": "0076D7",
            "summary": "{{issue.summary}}",
            "sections": [{
                "activitySubtitle": "{{issue.summary}}",
                "activityTitle": "Test results for project {{project.name}}",
                "activityImage": "https://docs.getxray.app/s/e1tqew/8402/f0863dd17de361916f7914addff17e0432a0be98/_/images/icons/emoticons/error.png",
                "facts": [
               {
                    "name": "Test Execution",
                    "value": "[{{issue.key}}]({{issue.url}})"
                }, {
                    "name": "Reporter",
                    "value": "{{issue.reporter.displayName}}"
                }, {
                    "name": "Version",
                    "value": "{{issue.fixVersions.name}}"
                }, {
                    "name": "Revision",
                    "value": "{{issue.Revision}}"
                }, {
                    "name": "Test Environment(s)",
                    "value": "{{issue.Test Environments}}"
                }, {
                    "name": "Test Plan",
                    "value": "[{{issue.Test Plan}}]({{baseUrl}}/browse/{{issue.Test Plan}})"
                }, {
                    "name": "Total tests",
                    "value": "{{Test Execution Status.count}}"
                }, {
                    "name": "Passed tests",
                    "value": "{{Test Execution Status.statuses.get(1).statusCount}}"
                }, {
                    "name": "Failed tests",
                    "value": "{{Test Execution Status.statuses.get(0).statusCount}}"
                }, {
                    "name": "Other tests",
                    "value": "{{#=}}{{Test Execution Status.count}} - {{Test Execution Status.statuses.get(0).statusCount}} - {{Test Execution Status.statuses.get(1).statusCount}}{{/}}"
                }],
                "markdown": true
            }]
        }
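Because the card body is assembled by string substitution of smart values, a malformed value can easily break the JSON. A quick sanity check like the following, with made-up sample values standing in for the smart values, can catch that before wiring up the webhook:

```python
import json

# Made-up sample values in place of the Jira smart values.
card = {
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "0076D7",
    "summary": "Nightly regression",
    "sections": [{
        "activitySubtitle": "Nightly regression",
        "activityTitle": "Test results for project Demo",
        "facts": [
            {"name": "Test Execution", "value": "[DEMO-123](https://example.atlassian.net/browse/DEMO-123)"},
            {"name": "Total tests", "value": "10"},
        ],
        "markdown": True,
    }],
}

# Serialize and parse again: if this round trip fails, the payload is invalid.
payload = json.dumps(card)
round_trip = json.loads(payload)
print(round_trip["@type"])  # MessageCard
```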
    2. If all tests passed, within the "ELSE" section of the IF block, we can send the following content.

      Code Block
        {
            "@type": "MessageCard",
            "@context": "http://schema.org/extensions",
            "themeColor": "0076D7",
            "summary": "{{issue.summary}}",
            "sections": [{
                "activitySubtitle": "{{issue.summary}}",
                "activityTitle": "Test results for project {{project.name}}",
                "activityImage": "https://docs.getxray.app/s/e1tqew/8402/f0863dd17de361916f7914addff17e0432a0be98/_/images/icons/emoticons/check.png",
                "facts": [
               {
                    "name": "Test Execution",
                    "value": "[{{issue.key}}]({{issue.url}})"
                }, {
                    "name": "Reporter",
                    "value": "{{issue.reporter.displayName}}"
                }, {
                    "name": "Version",
                    "value": "{{issue.fixVersions.name}}"
                }, {
                    "name": "Revision",
                    "value": "{{issue.Revision}}"
                }, {
                    "name": "Test Environment(s)",
                    "value": "{{issue.Test Environments}}"
                }, {
                    "name": "Test Plan",
                    "value": "[{{issue.Test Plan}}]({{baseUrl}}/browse/{{issue.Test Plan}})"
                }, {
                    "name": "Total tests",
                    "value": "{{Test Execution Status.count}}"
                }, {
                    "name": "Passed tests",
                    "value": "{{Test Execution Status.statuses.get(1).statusCount}}"
                }, {
                    "name": "Failed tests",
                    "value": "{{Test Execution Status.statuses.get(0).statusCount}}"
                }, {
                    "name": "Other tests",
                    "value": "{{#=}}{{Test Execution Status.count}} - {{Test Execution Status.statuses.get(0).statusCount}} - {{Test Execution Status.statuses.get(1).statusCount}}{{/}}"
                }],
                "markdown": true
            }]
        }
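The two cards differ only in the activityImage icon (error vs. check). A small helper like this hypothetical one makes that IF/ELSE choice explicit:

```python
# Icon URLs taken from the two card payloads above.
ERROR_IMG = "https://docs.getxray.app/s/e1tqew/8402/f0863dd17de361916f7914addff17e0432a0be98/_/images/icons/emoticons/error.png"
CHECK_IMG = "https://docs.getxray.app/s/e1tqew/8402/f0863dd17de361916f7914addff17e0432a0be98/_/images/icons/emoticons/check.png"

def activity_image(passed: int, total: int) -> str:
    # ELSE branch (all tests passed) gets the check icon; the IF branch the error icon.
    return CHECK_IMG if passed == total else ERROR_IMG

print(activity_image(7, 10).rsplit("/", 1)[-1])   # error.png
print(activity_image(10, 10).rsplit("/", 1)[-1])  # check.png
```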


Example of the output

Further ideas to try out

  • send the notification only if there are test failures
  • send the notification only if there is a specific label or custom field on the Test Execution
  • add comments and/or logs of the failed tests to the notification

References