Add custom steps to a manual test

In this simple scenario we add manual steps to a previously created Test, using Xray's GraphQL API. Even though this use case is mostly for demonstration purposes, it's a good example of how to take advantage of the Xray API to query and modify information on the Xray side.

Automation configuration

  1. create a new rule and define the "When" (i.e. when it should be triggered) to be "Manually triggered".
  2. for this example, we have added a condition so that this automation can only be triggered by Jira issues of type Test.
  3. The next component to be added is of type "Send web request", where we will perform the authentication with the Xray API.
    1. Make sure to fill the fields with correct information:
      1. Web request URL: Authentication URL of Xray API.
      2. Headers: Add the "Content-Type" header with the value: "application/json".
      3. Define the HTTP method as POST.
      4. Choose the "Custom data" as the Web request body.
      5. Add the JSON request with a valid client_id and client_secret (obtained in your Jira instance); a sketch is shown after this list.
      6. Tick the "delay execution..." option at the bottom, as we want to use the token generated by this request in the next ones.
  4. Next, add another "Send web request" component to make the GraphQL request.
    1. Make sure to fill the fields with correct information:
      1. Web request URL with the GraphQL endpoint of the Xray API.
      2. Add two headers:
        1. "Content-Type" with "application/json".
        2. "Authorization" with "Bearer <token>". Notice that we are using a special way to get the token {{webResponse.body}} provided by Jira automation, this will get the value from the last web request made and fetch the body of the answer.
      3. Define the HTTP method as POST.
      4. Choose the Web request body to be Custom data.
      5. Fill the Custom data with a properly formatted GraphQL request (see the sketch after this list; make sure the format is correct or the request will fail).
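
For reference, here is a minimal sketch of the two request bodies. Treat them as assumptions rather than exact payloads: the client_id/client_secret values are placeholders, and the GraphQL body uses the addTestStep mutation with a hypothetical step (check the Xray GraphQL API reference for the exact schema).

  sample authentication request body
  { "client_id": "YOUR_CLIENT_ID", "client_secret": "YOUR_CLIENT_SECRET" }

  sample GraphQL request body (adds one step to the triggering Test)
  { "query": "mutation { addTestStep( issueId: \"{{issue.id}}\", step: { action: \"Open the application\", data: \"\", result: \"The application opens successfully\" } ) { id action } }" }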

Usage

Once the automation is defined, we can access it from the detail view of a Test in Jira. On the right side there is an entry called "Automation".

After clicking on it, another screen loads with all the automation rules defined and the possibility to run each of them. Once you choose the correct rule and press "Run", the rule is executed.


The status of the execution is shown in the "Recent rule executions" section.


When the automation rule is executed successfully, a new test step is added to the Test.

Copy fields from requirement/Story to Test whenever creating a Test or linking it to a story

Sometimes it may be useful to copy some fields from the requirement/Story to the Tests that cover it.


Automation configuration

On the Jira side we will use the Automation capabilities that it provides out of the box, so within the administration area go to the automation entry in the system settings and: 

  1. create a new rule and define the "When" (i.e. when it should be triggered) to be "Issue linked". Since Xray, by default, uses the issue link type "Test" to establish the coverage relation between a Test and the requirement, we can take advantage of that to trigger the rule whenever such issue link is created.
  2. create a condition to ensure that the rule only runs for Test issues
  3. use an "Edit issue" action to set the fields on the Test based on the fields of the linked requirement/Story (i.e., the destination issue of the linking event). In this example, we'll copy the values of Labels and Priority custom fields.  


This rule will run:

  • whenever a Test is created from the requirement/Story issue screen
  • whenever a Test has been initially created and later on linked to the requirement

Reopen/transition Tests linked to a requirement whenever the requirement is transitioned or changed

Whenever you change the specification of a requirement/story, you will most probably need to review the Tests that you have already specified.

The following rule tries to perform a transition of all Tests linked to a requirement.

Automation configuration

On the Jira side we will use the Automation capabilities that it provides out of the box, so within the administration area go to the automation entry in the system settings and: 

  1. create a new rule and define the "When" (i.e. when it should be triggered) to be "Field value changed"
    1. Note: we could also define the trigger to be based on the transition of the requirement issue to a certain workflow status; in that case we would define it, for example, as shown below.
  2. create a condition to ensure that the rule only runs for Story and Epic issues; adjust these to include all the "requirement" issue types (i.e., the ones that you can cover with Tests)
  3. create a "branch rule / related issues" to obtain related Tests using JQL and the requirementTests() JQL function and run one, or more, action(s) on them
  4. under the "For JQL" block, create a action "Transition the issue to" in order to reopen the related Test issues

Create a new Test Execution for the Tests that failed on a previous Test Execution

Whether you're using Test Plans or not, you may want to rerun the failed Tests from a given Test Execution by scheduling a new Test Execution containing just those Tests.

In this example, we'll associate the new Test Execution with the same Test Environments and the same Test Plan as the Test Execution from which we triggered the rule (i.e., the one that contains some tests that failed).

Automation configuration

On the Jira side we will use the Automation capabilities that it provides out of the box.

This use case requires a slightly more complex flow to set up due to some limitations of Jira Automation.

We need to create 2 rules:

  1. an automation rule to associate the new Test Execution (created by the next rule) with the same Test Plan, if required
  2. an automation rule to create the Test Execution with only the Tests that failed on the original Test Execution

Automation rule 1

  1. we start by creating the rule to associate the Test Execution with a Test Plan; we need to make sure this rule can be triggered by another rule
  2. in this rule, the "When" trigger should be "Incoming webhook"
  3. use an "IF" block to only proceed if there are Test Plan AND Test Execution issue ids
  4. make a GraphQL API call to associate the Test Execution that will be created with the Test Plan
    1. sample GraphQL query
      { "query": "mutation { addTestExecutionsToTestPlan(  issueId: \"{{webhookData.testPlanId}}\", testExecIssueIds: [\"{{webhookData.testExecutionId}}\"] ) { warning }}" }

Automation rule 2

  1. create a new rule (i.e., the one that will actually create a new Test Execution and trigger the previous rule) and define the "When" (i.e. when it should be triggered) to be "Manual Trigger"
  2. create an action using the "Send web request" template to make the authentication request and obtain a token to be used on the GraphQL operations; we need a client_id and a client_secret (please see how to create API keys).
  3. save the response in a variable (e.g., "token"); a sketch of this action is shown after this list
  4. make a GraphQL query to obtain the details of the Test Execution and its results
    1. Use the GraphQL API, namely the getTestExecution function. We'll need the issue id of the Test Execution that was already completed; we can use the smart values feature from Jira automation to obtain it. If we want to set the "Revision" custom field later on, we also need to obtain its custom field id from the "Issue Fields" section of your Jira administration.

    2. Escape the GraphQL query (e.g., using the online GraphQL to JSON Body Converter tool)


      sample GraphQL query
      {
        "query": "{ getTestExecution(issueId: \"{{issue.id}}\") { issueId jira(fields: [\"key\",\"customfield_10033\"]) testRuns(limit: 100){ results{ status{ name } test { issueId jira(fields: [\"key\"]) } } } testEnvironments testPlans(limit: 1) { results{ issueId jira(fields: [\"key\"]) } } }}"
      }
  5. get the linked Test Plan issue id and store it in a variable
    1. {{webResponse.body.data.getTestExecution.testPlans.results.get(0).issueId}}
  6. get the associated Test Environments and store them in a variable
    1. {{webResponse.body.data.getTestExecution.testEnvironments.asJsonStringArray}}
  7. post-process them and store the result in another variable, as we need to escape some characters to embed the Test Environments in the GraphQL request later on
    1. {{testEnvironments.replaceAll("\"","\\\\\"")}}
  8. get the number of failed tests and store it in a variable (note that this is an approximate value as GraphQL results can be limited and paginated)
    1. {{webResponse.body.data.getTestExecution.testRuns.results.status.name.match(".*(FAILED).*").size|0}} 
  9. store the issue ids of the failed tests in a variable (note that this list may be incomplete as GraphQL results can be limited and paginated)
    1. {{#webResponse.body.data.getTestExecution.testRuns.results}}{{#if(equals(status.name,"FAILED")) }}"{{test.issueId}}" {{/}}{{/}}
  10. post-process them in two steps, storing each result in another variable, as we need to escape some characters to embed the Test issue ids in the GraphQL request later on
    1. {{testIds.split(" ").join(",")}}
    2. {{testIds.replaceAll("\"","\\\\\"")}}
  11. use an "IF" block to only proceed if there are failed tests
  12. make a GraphQL API request to create a new Test Execution containing just the Tests that failed
    1. {
        "query": "mutation { createTestExecution( testEnvironments: {{escapedTestEnvironments}}, testIssueIds: [{{escapedTestIds}}], jira: { fields: { summary: \"dummy Test Execution\", project: {key: \"{{project.key}}\"} } } ) { testExecution { issueId jira(fields: [\"key\"]) } warnings }}"
      }
  13. store the issue id of the new Test Execution in a variable
    1. {{webResponse.body.data.createTestExecution.testExecution.issueId}}
  14. use the "Send web request" action to trigger the incoming webhook we defined in the "Automation rule 1" mentioned at start  
    1. {
        "issues": [
          "{{newTestExecutionId}}",
          "{{testPlanId}}"
        ],
        "testExecutionId": "{{newTestExecutionId}}",
        "testPlanId": "{{testPlanId}}",
        "token": "{{token}}"
      }
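
Going back to step 3, here is a minimal sketch of the "Create variable" action used to store the token, assuming the authentication endpoint returns the token directly in the response body:

  sample variable definition
  Variable name: token
  Smart value: {{webResponse.body}}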
