
What you'll learn

  • How to configure the remote jobs triggering feature
  • How to trigger remote jobs from Test Plans
  • How to configure and validate shipping the test results in Jira

Source-code for this tutorial

  • code is available in GitHub


Overview


The Remote Jobs Trigger (RJT) feature allows users to configure and invoke remote jobs in different CI/CD tools without leaving Xray, improving tester performance and streamlining the workflow.

Most pipelines are triggered by a commit action, but sometimes we need to trigger a remote job to perform specific actions, such as:

  • Validate a change in a specific feature
  • Validate a new deployment or new environment
  • Validate new tests on the fly
  • Run automation from Xray Test Plan or Test Execution

The remote job can perform all sorts of tasks, including building, deploying the project to an environment, and/or running automated tests.

The most common use is to trigger the execution of automated tests.


In this example, we configure a Remote Jobs Trigger for Jenkins that executes Playwright tests and sends the execution results back to Xray.

Prerequisites


For this example, we will use Jenkins as the CI/CD tool to execute the Playwright tests.


What you need:

  • Access to a Jenkins instance
  • Xray Enterprise installed in your Jira instance
  • A Jenkins job that you can adapt and invoke remotely
  • Familiarity with the Jenkinsfile syntax


Configure a new RJT for Jenkins in Xray


Configure Jenkins using a jenkinsfile
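
The original page illustrates this configuration with screenshots. As a rough sketch, a declarative Jenkinsfile for this job could look like the following (the repository URL, stage contents, and the Xray import step parameters are illustrative assumptions, not the exact pipeline from this tutorial):

```groovy
// Hypothetical Jenkinsfile sketch; adapt names and parameters to your setup.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Assumed repository location
                git url: 'https://github.com/<your-org>/<your-repo>.git'
            }
        }
        stage('Test') {
            steps {
                sh 'npm ci'
                // Same command used later in this tutorial
                sh 'npx folio -p browserName=chromium --reporter=junit,line --test-match=login.spec.ts'
            }
        }
    }
    post {
        always {
            // Pipeline equivalent of the "Xray: Results Import Task" post-build
            // action; exact parameters depend on the Xray plugin version installed.
            step([$class: 'XrayImportBuilder',
                  endpointName: '/junit',
                  importFilePath: 'junit.xml',
                  projectKey: 'COM',
                  serverInstance: '<JIRA_INSTANCE_ID>'])
        }
    }
}
```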


Configure a Remote Jobs Trigger in Xray for Jenkins


For this tutorial, the Playwright tests use a simple page object that encapsulates the interactions with the login page:

models/Login.js
const config = require("../config.json");

class LoginPage {
    constructor(page) {
        this.page = page;
    }

    async navigate() {
        await this.page.goto(config.endpoint);
    }

    async login(username, password) {
        await this.page.fill(config.username_field, username);
        await this.page.fill(config.password_field, password);
        await this.page.click(config.login_button);
    }

    async getInnerText() {
        return this.page.innerText("p");
    }
}

module.exports = { LoginPage };

plus a configuration file that holds the endpoint and the selectors that match the elements on the page:

config.json
{
    "endpoint": "https://robotwebdemo.onrender.com/",
    "login_button": "id=login_button",
    "password_field": "input[id=\"password_field\"]",
    "username_field": "input[id=\"username_field\"]"
}


Then we define the tests that assert whether the operation is successful or not:


login.spec.ts
import {it, describe, expect} from "@playwright/test"
import { LoginPage } from "./models/Login";

describe("Login validations", () => {

    it('Login with valid credentials', async({page}) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login succeeded. Now you can logout.');
    });

    it('Login with invalid credentials', async({page}) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode1");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login failed. Invalid user name and/or password.');
    });
}) 


The Playwright Test runner provides a Jest-like way of describing test scenarios; here you can see that it uses 'it', 'describe', and 'expect'.

These are simple tests that validate the login functionality by accessing the demo site, inserting the username and password (one test with valid credentials and another with invalid ones), clicking the login button, and checking that the returned page matches the expectation.

For the example below, we will make a small change to force a failure: in the login.spec.ts file, remove "/or" from the expectation of the 'Login with invalid credentials' test. This is the end result:

login.spec.ts
import { test, expect } from "@playwright/test"
import { LoginPage } from "./models/Login";

test.describe("Login validations", () => {

    test('Login with valid credentials', async({ page }) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login succeeded. Now you can logout.');
    });

    test('Login with invalid credentials', async({ page }) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode1");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login failed. Invalid user name and password.');
    });
}) 

Once the code is implemented (we make the 'Login with invalid credentials' test fail on purpose, due to the missing word, to show the failure reports), it can be executed with the following command:


npx folio -p browserName=chromium --reporter=junit,line --test-match=login.spec.ts


Here we define one extra parameter, "browserName", in order to execute the tests only with the Chromium browser; otherwise, the default behaviour is to execute the tests in all three available browsers (chromium, firefox, and webkit).
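
Note that the folio CLI shown above belongs to an early Playwright Test release. With current @playwright/test versions, a roughly equivalent invocation (an assumption; adjust to your installed version and project configuration) would be:

```
npx playwright test login.spec.ts --project=chromium --reporter=junit,line
```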

The results are immediately available in the terminal.


In this example, one test has failed and the other has succeeded; the output generated in the terminal is shown above, and the corresponding JUnit report is below:

JUnit Report
<testsuites id="" name="" tests="2" failures="1" skipped="0" errors="0" time="2.592">
<testsuite name="login.spec.ts" timestamp="1617094735952" hostname="" tests="2" failures="1" skipped="0" time="2.37" errors="0">
<testcase name="Login validations Login with valid credentials" classname="login.spec.ts Login validations" time="1.358">
</testcase>
<testcase name="Login validations Login with invalid credentials" classname="login.spec.ts Login validations" time="1.012">
<failure message="login.spec.ts:14:5 Login with invalid credentials" type="FAILURE">
  login.spec.ts:14:5 › Login validations Login with invalid credentials ============================
  browserName=webkit, headful=false, slowMo=0, video=false, screenshotOnFailure=false

    Error: expect(received).toBe(expected) // Object.is equality

    Expected: &quot;Login failed. Invalid user name and password.&quot;
    Received: &quot;Login failed. Invalid user name and/or password.&quot;

      17 |         await loginPage.login(&quot;demo&quot;,&quot;mode1&quot;);
      18 |         const name = await loginPage.getInnerText();
    > 19 |         expect(name).toBe('Login failed. Invalid user name and password.');
         |                      ^
      20 |     });
      21 | }) 

        at /Users/cristianocunha/Documents/Projects/Playwrighttest/login.spec.ts:19:22
        at runNextTicks (internal/process/task_queues.js:58:5)
        at processImmediate (internal/timers.js:434:9)
        at WorkerRunner._runTestWithFixturesAndHooks (/Users/cristianocunha/Documents/Projects/Playwrighttest/node_modules/folio/out/workerRunner.js:198:17)

</failure>
</testcase>
</testsuite>
</testsuites>
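
Before uploading, a quick sanity check of the report can confirm it contains the expected number of tests and failures. A minimal sketch (it generates a small sample report inline so the commands are self-contained; in practice, point the grep commands at the junit.xml produced above):

```shell
# Illustrative only: create a small sample report so the check below is
# self-contained; run it against your real junit.xml instead.
cat > sample-junit.xml <<'EOF'
<testsuites tests="2" failures="1">
  <testsuite name="login.spec.ts" tests="2" failures="1">
    <testcase name="Login with valid credentials"/>
    <testcase name="Login with invalid credentials">
      <failure message="expect(received).toBe(expected)"/>
    </testcase>
  </testsuite>
</testsuites>
EOF

# Count test cases and failures in the report (grep -c counts matching lines).
tests=$(grep -c "<testcase" sample-junit.xml)
failures=$(grep -c "<failure" sample-junit.xml)
echo "tests=$tests failures=$failures"
```

grep -c only counts matching lines, which is enough for reports that put one testcase per line; use a real XML tool for anything stricter.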

Repeat this process for each browser type in order to generate a report for each browser.

Notes:

  • By default, tests are executed in all three available browsers (that is why we force the execution in only one)
  • By default, all tests are executed in headless mode
  • The Folio command line searches for and executes all tests matching the pattern: "**/?(*.)+(spec|test).[jt]s"
  • In order to get the JUnit test report, please follow this section



Integrating with Xray

As the example above produces JUnit reports with the test results, it is now a matter of importing those results into your Jira instance. You can do this by submitting the automation results to Xray through the REST API, by using one of the available CI/CD plugins (e.g., for Jenkins), or by using the Jira interface.


API

Once you have the report file available, you can upload it to Xray through a request to the JUnit REST API endpoint. To do that, follow the first step of the instructions for v1 or v2 (depending on your usage) to obtain the token we will use in the subsequent requests.


JUnit XML results

We will use the API request to define some common fields of the Test Execution, such as the target project and the Test Plan.

In the first version of the API, the authentication used a login and password (not the token that is used in Cloud).

curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'http://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9'

With this command, you will create a new Test Execution linked to the referred Test Plan, with a generic summary and two Tests whose summaries are based on the test names.
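
The same import is also available in version 2.0 of Xray's REST API; a sketch of the equivalent request (the endpoint path is the v2 variant of the one above; available authentication options depend on your Jira and Xray versions):

```
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'http://<LOCAL_JIRA_INSTANCE>/rest/raven/2.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9'
```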


On Xray, you can see the tests and you can identify which tests are failing or passing. Below you can see two tests (for valid and invalid credentials):

You can also notice that the summary is now defined based on the files we used for uploading the test results.

Jenkins

As you can see below, we add a post-build action using the "Xray: Results Import Task" (from the available Xray plugin), which offers several options. For now, we will focus on two of them: "JUnit XML" (simpler) and "JUnit XML multipart" (both are explained below; the latter requires two extra files).


JUnit XML

In this option, we define:

  • the Jira instance (where you have your Xray instance installed)
  • the format as "JUnit XML"
  • the test results file we want to import
  • the project key of the Jira project where the results will be imported

Tests implemented with the Playwright Test runner will have a corresponding Test entity in Xray. Once results are uploaded, Test issues corresponding to those tests are auto-provisioned, unless they already exist.


Xray uses a concatenation of the suite name and the test name as the unique identifier for the test.

In Xray, results are stored in a Test Execution, usually a new one. The Test Execution contains one Test Run for each test executed by the Playwright Test runner.

Detailed results, including logs and exceptions reported during the execution of the test, can be seen in the execution details of each Test Run:


As you can see here:





Tips

  • After results are imported into Jira, Tests can be linked to existing requirements/user stories, so you can track the impact on their coverage.
  • Results from multiple builds can be linked to an existing Test Plan in order to facilitate the analysis of test result trends across builds.
  • Results can be associated with a Test Environment, in case you want to analyze coverage and test results by environment later on. A Test Environment can be a testing stage (e.g. dev, staging, preprod, prod) or an identifier of the device/application used to interact with the system (e.g. browser, mobile OS).
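
Following the last tip, the JUnit import endpoints accept a testEnvironments parameter; a sketch reusing the v1 request from the API section (the parameter name comes from Xray's import API; the value here is illustrative):

```shell
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'http://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9&testEnvironments=chromium'
```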



References

  • Jenkinsfile

Jira interface

Create a Test Execution for the tests that you have.

Fill in the necessary fields and press "Create."

Open the Test Execution and import the JUnit report. 


Choose the results file and press "Import."


The Test Execution is now updated with the test results imported. 
