Add custom steps to a manual test
In this simple scenario we add manual steps to a previously created Test, using Xray's GraphQL API. Even though this use case is mostly for demonstration purposes, it's a good example of how to take advantage of the Xray API to query and modify information on the Xray side.
Automation configuration
- create a new rule and define the "When" (i.e., when it should be triggered) to be "Manually triggered".
- for this example, we have added a condition so that this automation can only be triggered by Jira issues of type Test.
- The next component to be added is of type "Send web request", where we will perform the authentication with the Xray API.
- Make sure to fill the fields with correct information:
- Web request URL: Authentication URL of Xray API.
- Headers: Add the "Content-Type" header with the value: "application/json".
- Define the HTTP method as POST.
- Choose the "Custom data" as the Web request body.
- Add the JSON request, as seen in the picture and sketched below, with a valid client_id and client_secret (obtained in your Jira instance).
- Tick the "delay execution..." option at the bottom as we want to use the token generated in this request in the next ones.
- Next add another "Send web request" component to make the GraphQL request.
- Make sure to fill the fields with correct information:
- Web request URL: the endpoint of the Xray GraphQL API.
- Add two headers:
- "Content-Type" with "application/json".
- "Authorization" with "Bearer <token>". Notice that we are using a special way to get the token {{webResponse.body}} provided by Jira automation, this will get the value from the last web request made and fetch the body of the answer.
- Define the HTTP method as POST.
- Choose the Web request body to be Custom data.
- Fill the Custom data with a properly formatted GraphQL request, as seen in the picture and sketched below (make sure the format is correct or the request will fail).
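As a minimal sketch, assuming the addTestStep mutation of Xray's GraphQL API, the {{issue.id}} smart value for the triggering Test issue, and illustrative step contents, the Custom data could look similar to this:
{ "query": "mutation { addTestStep( issueId: \"{{issue.id}}\", step: { action: \"do something\", data: \"some data\", result: \"expected result\" } ) { id action data result } }" }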
Usage
Once the automation is defined, we can access it in the detail view of a Test in Jira. On the right side there is an entry called "Automation".
After clicking on it, another screen will load with all the automation rules defined and the possibility to run each of them. Once you choose the correct rule and press "Run", the rule will be executed.
The status of the execution is shown in the section "Recent rule executions".
When the automation rule is executed successfully, a new test step will be added to the Test, as we can see below.
Copy fields from requirement/Story to Test whenever creating a Test or linking it to a Story
Sometimes it may be useful to copy some fields from the requirement/Story to the Tests that cover it.
Automation configuration
On the Jira side we will use the Automation capabilities that it provides out of the box, so within the administration area go to the automation entry in the system settings and:
- create a new rule and define the "When" (i.e. when it should be triggered) to be "Issue linked". Since Xray, by default, uses the issue link type "Test" to establish the coverage relation between a Test and the requirement, we can take advantage of that to trigger the rule whenever such issue link is created.
- create a condition to ensure that the rule only runs for Test issues
- use an "Edit issue" action to set the fields on the Test based on the fields of the linked requirement/Story (i.e., the destination issue of the linking event). In this example, we'll copy the values of Labels and Priority custom fields.
This rule will run:
- whenever a Test is created from the requirement/Story issue screen
- whenever a Test has been initially created and later on linked to the requirement
Reopen/transition Tests linked to a requirement whenever the requirement is transitioned or changed
Whenever you change the specification of a requirement/story, you most probably will need to review the Tests that you have already specified.
The following rule tries to perform a transition of all Tests linked to a requirement.
Automation configuration
On the Jira side we will use the Automation capabilities that it provides out of the box, so within the administration area go to the automation entry in the system settings and:
- create a new rule and define the "When" (i.e. when it should be triggered) to be "Field value changed"
- Note: we could also define the trigger to be based on the transition of the requirement issue to a certain workflow status; in that case we would define it, for example, as shown below.
- create a condition to ensure that the rule only runs for Story and Epic issues; adjust these to include all the "requirement" issue types (i.e., the ones that you can cover with Tests)
- create a "branch rule / related issues" to obtain related Tests using JQL and the
requirementTests()
JQL function and run one, or more, action(s) on them - under the "For JQL" block, create a action "Transition the issue to" in order to reopen the related Test issues
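As an example, assuming requirementTests() accepts the requirement's issue key and that {{issue.key}} refers to the requirement that triggered the rule, the JQL in the branch could be:
issue in requirementTests("{{issue.key}}")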
Create a new Test Execution for the Tests that failed on a previous Test Execution
Whether you're using Test Plans or not, you may want to rerun the failed Tests from a given Test Execution by scheduling a new Test Execution containing just those Tests.
In this example, we'll associate the new Test Execution to the same Test Environments and the same Test Plan of the Test Execution where we triggered the rule from (i.e., the one that contains some tests that failed).
Automation configuration
On the Jira side we will use the Automation capabilities that it provides out of the box.
This use case requires a slightly more complex flow to set up due to some limitations of Jira Automation.
We need to create 2 rules:
- an automation rule to associate the new Test Execution (created on the next rule) to the same Test Plan, if required
- an automation rule to create the Test Execution with only the Tests that failed on the original Test Execution
Automation rule 1
- we start by creating the rule to associate the Test Execution with a Test Plan; we need to make sure this rule can be triggered by another rule
- in this rule, the "When" trigger should be "Incoming webhook"
- use an "IF" block to only proceed if there are Test Plan AND Test Execution issue ids
- make a GraphQL API call to associate the newly created Test Execution with the Test Plan
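A sketch of that request body, assuming the addTestExecutionsToTestPlan mutation of Xray's GraphQL API and the {{webhookData}} smart value exposing the payload sent by "Automation rule 2" (see its last step below):
{ "query": "mutation { addTestExecutionsToTestPlan( issueId: \"{{webhookData.testPlanId}}\", testExecIssueIds: [\"{{webhookData.testExecutionId}}\"] ) { addedTestExecutions warning } }" }
The Authorization header of this web request can reuse the token passed in the payload (e.g., "Bearer {{webhookData.token}}").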
Automation rule 2
- create a new rule (i.e., the one that will actually create a new Test Execution and trigger the previous rule) and define the "When" (i.e. when it should be triggered) to be "Manual Trigger"
- create an action using the "Send web request" template to make the authentication request and obtain a token to be used on the GraphQL operations; we need a client_id and a client_secret (please see how to create API keys).
- save the response in a variable (e.g., "token")
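Assuming the authentication endpoint returns the token as the whole response body, the value of the variable can simply be:
{{webResponse.body}}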
- make a GraphQL query to obtain the details of the Test Execution and its results
Use the GraphQL API, namely the getTestExecution function. We'll need the issue id of the Test Execution that was already completed; we can use the smart values feature of Jira Automation to obtain it. If we want to set the "Revision" custom field later on, we also need to obtain its custom field id under the "Issue Fields" section of your Jira administration.
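A sketch of such a query, before escaping, assuming {{issue.id}} holds the issue id of the completed Test Execution (the field list mirrors the smart values used in the following steps):
{ getTestExecution(issueId: "{{issue.id}}") { issueId testEnvironments testPlans(limit: 1) { results { issueId } } testRuns(limit: 100) { results { status { name } test { issueId } } } } }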
- Escape the GraphQL query (e.g., using the online GraphQL to JSON Body Converter tool)
- get the linked Test Plan issue id and store it in a variable
{{webResponse.body.data.getTestExecution.testPlans.results.get(0).issueId}}
- get the associated Test Environments and store it in a variable
{{webResponse.body.data.getTestExecution.testEnvironments.asJsonStringArray}}
- post-process it and store it in another variable, as we need to escape some characters to embed the Test Environments on the GraphQL request later on
{{testEnvironments.replaceAll("\"","\\\\\"")}}
- get the number of failed Tests and store it in a variable (note that this is an approximate value, as GraphQL results can be limited and paginated)
{{webResponse.body.data.getTestExecution.testRuns.results.status.name.match(".*(FAILED).*").size|0}}
- store the issue ids of the failed Tests in a variable (note that this list may be incomplete, as GraphQL results can be limited and paginated)
{{#webResponse.body.data.getTestExecution.testRuns.results}}{{#if(equals(status.name,"FAILED")) }}"{{test.issueId}}" {{/}}{{/}}
- post-process it and store it in another variable, as we need to turn the space-separated list into a comma-separated one and escape some characters to embed the Test issue ids on the GraphQL request later on
{{testIds.split(" ").join(",")}}
{{testIds.replaceAll("\"","\\\\\"")}}
- use an "IF" block to only proceed if there are failed tests
- make a GraphQL API request to create a new Test Execution containing just the Tests that failed
{ "query": "mutation { createTestExecution( testEnvironments: {{escapedTestEnvironments}}, testIssueIds: [{{escapedTestIds}}], jira: { fields: { summary: \"dummy Test Execution\", project: {key: \"{{project.key}}\"} } } ) { testExecution { issueId jira(fields: [\"key\"]) } warnings }}" }
- store the issue id of the new Test Execution in a variable
{{webResponse.body.data.createTestExecution.testExecution.issueId}}
- use the "Send web request" action to trigger the incoming webhook we defined in the "Automation rule 1" mentioned at start
{ "issues": [ "{{newTestExecutionId}}", "{{testPlanId}}" ], "testExecutionId": "{{newTestExecutionId}}", "testPlanId": "{{testPlanId}}", "token": "{{token}}" }