Info
titleWhat you'll learn
  • How to define tests using Playwright
  • How to run the tests and push the test report to Xray
  • How to validate that the test results are available in Jira
Note
iconfalse
titleSource-code for this tutorial
typeInfo


Overview

Playwright is a recent browser automation tool that provides an alternative to Selenium.


Prerequisites


Expand

For this example we will use the Playwright Test Runner, which is based on Folio - a customizable test framework for building higher-level test frameworks, with Jest-like assertions that accommodate the needs of end-to-end testing. It does everything you would expect from a regular test runner.

Playwright Test Runner is still fairly new as you can see in the official documentation:

UI Text Box

Zero config cross-browser end-to-end testing for web apps. Browser automation with Playwright, Jest-like assertions and built-in support for TypeScript.

Playwright test runner is available in preview and minor breaking changes could happen. We welcome your feedback to shape this towards 1.0.

If you prefer, you can use other runners (e.g. Jest, AVA, Mocha).


What you need:

Implementing tests

To start using the Playwright Test Runner, follow the Get Started documentation.
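
If you are starting from an empty project, the setup is roughly as follows (a minimal sketch, assuming npm and a recent @playwright/test release; the exact commands for the preview-era runner may differ):

Code Block
languagebash
# create a project and add the test runner as a dev dependency
npm init -y
npm install --save-dev @playwright/test
# download the browser binaries (Chromium, Firefox and WebKit)
npx playwright install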

The test validates the login feature (with valid and invalid credentials) of the demo site, for which we have created a page object that represents the login page:


Code Block
languagejs
title./models/Login.js
collapsetrue
const config = require("../config.json");

// models/Login.js
class LoginPage {

    constructor(page) {
      this.page = page;
    }

    async navigate() {
      await this.page.goto(config.endpoint);
    }
    
    async login(username, password) {
        await this.page.fill(config.username_field, username);
        await this.page.fill(config.password_field, password);
        await this.page.click(config.login_button);
    }

    async getInnerText(){
        return this.page.innerText("p");
    }

}

module.exports = { LoginPage };

plus a configuration file holding the selectors that match the elements on the page:

Code Block
languagejs
titleconfig.json
collapsetrue
{
    "endpoint" : "https://robotwebdemo.herokuapp.com/",
    "login_button" : "id=login_button",
    "password_field" :"input[id=\"password_field\"]",
    "username_field" : "input[id=\"username_field\"]"
}


Next, define the tests that assert whether the login operation succeeds or fails:


Code Block
languagejs
firstline1
titlelogin.spec.ts
linenumberstrue
collapsetrue
import { test, expect } from "@playwright/test";
import { LoginPage } from "./models/Login";

test.describe("Login validations", () => {

    test('Login with valid credentials', async({ page }) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login succeeded. Now you can logout.');
    });

    test('Login with invalid credentials', async({ page }) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode1");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login failed. Invalid user name and/or password.');
    });
});


The Playwright Test Runner provides a Jest-like way of describing test scenarios; as you can see, it uses test, test.describe and expect.

These are simple tests that validate the login functionality by accessing the demo site, inserting the username and password (in one test with valid credentials and in another with invalid ones), clicking the login button, and validating that the returned page matches your expectation.

For the example below we will make a small change to force a failure: in the login.spec.ts file, remove "/or" from the expectation in the test 'Login with invalid credentials'. This is the end result:

Code Block
languagejs
firstline1
titlelogin.spec.ts
linenumberstrue
collapsetrue
import { test, expect } from "@playwright/test"
import { LoginPage } from "./models/Login";

test.describe("Login validations", () => {

    test('Login with valid credentials', async({ page }) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login succeeded. Now you can logout.');
    });

    test('Login with invalid credentials', async({ page }) => {
        const loginPage = new LoginPage(page);
        await loginPage.navigate();
        await loginPage.login("demo","mode1");
        const name = await loginPage.getInnerText();
        expect(name).toBe('Login failed. Invalid user name and password.');
    });
});

Once the code is implemented (with the 'Login with invalid credentials' test failing on purpose, due to the missing word, to showcase the failure reports), it can be executed with the following command:


Code Block
languagebash
themeDJango
firstline1
npx folio -p browserName=chromium --reporter=junit,line --test-match=login.spec.ts


Notice the extra parameter "browserName", used to execute the tests only with Chromium; otherwise, the default behaviour is to execute the tests on the three available browsers (Chromium, Firefox and WebKit).
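
Note that, as the preview warning above says, these flags may change; with the stable @playwright/test runner, the same run is driven by a configuration file instead of folio flags. Below is a minimal sketch, assuming a recent stable release (defineConfig and devices belong to the stable API, not to the preview runner used in this tutorial):

Code Block
languagejs
titleplaywright.config.ts
// rough stable-runner equivalent of the folio command above
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testMatch: 'login.spec.ts',
  // produce a JUnit file and keep the terminal line reporter
  reporter: [['junit', { outputFile: 'junit.xml' }], ['line']],
  // run on Chromium only, mirroring -p browserName=chromium
  projects: [{ name: 'chromium', use: { ...devices['Desktop Chrome'] } }],
});

With this file in place, the run becomes simply npx playwright test.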

The results are immediately available in the terminal.


In this example, one test has failed and the other one has succeeded; the output generated in the terminal is shown above and the corresponding JUnit report is below:

Code Block
firstline1
titleJunit Report
linenumberstrue
collapsetrue
<testsuites id="" name="" tests="2" failures="1" skipped="0" errors="0" time="2.592">
<testsuite name="login.spec.ts" timestamp="1617094735952" hostname="" tests="2" failures="1" skipped="0" time="2.37" errors="0">
<testcase name="Login validations Login with valid credentials" classname="login.spec.ts Login validations" time="1.358">
</testcase>
<testcase name="Login validations Login with invalid credentials" classname="login.spec.ts Login validations" time="1.012">
<failure message="login.spec.ts:14:5 Login with invalid credentials" type="FAILURE">
  login.spec.ts:14:5 › Login validations Login with invalid credentials ============================
  browserName=webkit, headful=false, slowMo=0, video=false, screenshotOnFailure=false

    Error: expect(received).toBe(expected) // Object.is equality

    Expected: &quot;Login failed. Invalid user name and password.&quot;
    Received: &quot;Login failed. Invalid user name and/or password.&quot;

      17 |         await loginPage.login(&quot;demo&quot;,&quot;mode1&quot;);
      18 |         const name = await loginPage.getInnerText();
    > 19 |         expect(name).toBe('Login failed. Invalid user name and password.');
         |                      ^
      20 |     });
      21 | }) 

        at /Users/cristianocunha/Documents/Projects/Playwrighttest/login.spec.ts:19:22
        at runNextTicks (internal/process/task_queues.js:58:5)
        at processImmediate (internal/timers.js:434:9)
        at WorkerRunner._runTestWithFixturesAndHooks (/Users/cristianocunha/Documents/Projects/Playwrighttest/node_modules/folio/out/workerRunner.js:198:17)

</failure>
</testcase>
</testsuite>
</testsuites>

Repeat this process for each browser type in order to have the reports generated for each browser.
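
For instance, a small shell loop can run the suite once per engine (a sketch; how the preview runner's JUnit reporter routes its output to a file is an assumption, so adjust the output handling to avoid one run overwriting another):

Code Block
languagebash
# run the login suite once per browser engine
for browser in chromium firefox webkit; do
  npx folio -p browserName=$browser --reporter=junit,line --test-match=login.spec.ts
done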

Notes:

  • By default, tests are executed on the three available browser types (which is why we force execution on a single browser)
  • By default, all tests are executed in headless mode
  • The Folio command line will search for and execute all tests matching the pattern: "**/?(*.)+(spec|test).[jt]s"
  • In order to get the JUnit test report, please follow this section (see also the snippet below)
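
With the stable @playwright/test runner, the JUnit reporter writes to the file named by an environment variable (this mechanism belongs to the stable runner; whether the preview folio runner honors it is an assumption):

Code Block
languagebash
# write the JUnit report to junit.xml (stable runner)
PLAYWRIGHT_JUNIT_OUTPUT_NAME=junit.xml npx playwright test --reporter=junit,line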



Integrating with Xray

As shown in the example above, we are producing JUnit reports with the results of the tests; it is now a matter of importing those results into your Jira instance. You can do this by simply submitting the automation results to Xray through the REST API, by using one of the available CI/CD plugins (e.g. for Jenkins), or through the Jira interface.


UI Tabs
UI Tab
titleAPI

API

Once you have the report file available, you can upload it to Xray through a request to the JUnit REST API endpoint. To do that, follow the first step of the instructions for v1 or v2 (depending on your usage) to obtain the token to be used in the subsequent requests.
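
For reference, in Xray Cloud the token is obtained by exchanging an API key pair (a sketch, assuming the current Cloud base URL; the server/DC API used in the remainder of this tutorial relies on basic authentication instead, as shown below):

Code Block
languagebash
# exchange the API key pair for a token (Xray Cloud only)
curl -H "Content-Type: application/json" -X POST \
  --data '{ "client_id": "<CLIENT_ID>", "client_secret": "<CLIENT_SECRET>" }' \
  https://xray.cloud.getxray.app/api/v2/authenticate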


JUnit XML results

We will use the API request, defining some common fields of the Test Execution, such as the target project and Test Plan, as query parameters.

In the first version of the API, the authentication used a login and password (not the token that is used in Cloud).

Code Block
languagebash
themeDJango
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'http://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9'

With this command, you will create a new Test Execution in the referred Test Plan with a generic summary and two tests with a summary based on the test name.

JUnit XML results Multipart 

However, there's another endpoint that is more flexible and allows the customization of any field on the target Test Execution; this is the specific JUnit multipart endpoint.

This endpoint follows a JSON-based syntax based on Jira's REST API for updating issues. As an example of uploading the results to a Test Execution with a given summary, we have created two additional files, issueFields.json and testIssueFields.json, where these fields are defined.

Code Block
languageactionscript3
titleissueFields.json
collapsetrue
{
    "fields": {
        "project": {
            "id": "12400"
        },
        "summary": "Login validation [Webkit]",
        "issuetype": {
            "id": "10100"
        },
        "components" : [
            {
            "name":"Interface"
            },
            {
            "name":"Login"
            }
        ]
    }
}
Code Block
languageactionscript3
titletestIssueFields.json
collapsetrue
{
    "fields": {
        "project": {
            "id": "12400"
        }
    }
}

To upload the reports through the JUnit multipart endpoint, use the following command:

Code Block
languageactionscript3
themeDJango
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" -F "info=@xray_multipart/issueFields.json" -F "testInfo=@xray_multipart/testIssueFields.json" 'http://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit/multipart'


On Xray, you can see the tests and you can identify which tests are failing or passing. Below you can see two tests (for valid and invalid credentials):

You can also notice that the summary is now defined based on the files we used for uploading the test results.

UI Tab
titleJenkins

Jenkins

As you can see below, we are adding a post-build action using the "Xray: Results Import Task" (from the available Xray plugin), where we have several options. For now, we will focus on two of those: one called "JUnit XML" (simpler) and another called "JUnit XML multipart" (which requires two extra files); both are explained below.


JUnit XML

For this action, define:

  • the Jira instance (where you have your Xray instance installed)
  • the format as "JUnit XML"
  • the test results file we want to import
  • the Project key corresponding to the project in Jira where the results will be imported

Tests implemented using the Playwright Test Runner will have a corresponding Test entity in Xray. Once results are uploaded, Test issues corresponding to those tests are auto-provisioned, unless they already exist.


Xray uses a concatenation of the suite name and the test name as the unique identifier for the test.

In Xray, results are stored in a Test Execution, usually a new one. The Test Execution contains a Test Run for each test that was executed using the playwright-test runner.

Detailed results, including logs and exceptions reported during the execution of the test, can be seen in the execution details of each Test Run:


As you can see here:

JUnit XML multipart

For this action, define:

  • the Jira instance (where you have your Xray instance installed)
  • the format as "JUnit XML Multipart"
  • the two files already added to the repo: "issueFields.json" and "testIssueFields.json" (in the xray_multipart directory; note that you must update the inner values to have the correct labels, project id, issue type and environments)
  • the results file, in our case "junit.xml"

In this integration we have more control over the import to Jira. In this particular case, the results are imported into the project with the id defined in the file and with a specific summary; all of this is specified in the two files (issueFields.json and testIssueFields.json).

UI Tab
titleJira UI

Jira UI

UI Steps
UI Step

Create a Test Execution for the tests that you have.

UI Step

Fill in the necessary fields and press "Create."

UI Step

Open the Test Execution and import the JUnit report. 


UI Step

Choose the results file and press "Import."


UI Step

The Test Execution is now updated with the test results imported. 

Tests implemented using the Playwright Test Runner will have a corresponding Test entity in Xray. Once results are uploaded, Test issues corresponding to those tests are auto-provisioned, unless they already exist.


Xray uses a concatenation of the suite name and the test name as the unique identifier for the test.

In Xray, results are stored in a Test Execution, usually a new one. The Test Execution contains a Test Run for each test that was executed using the playwright-test runner.

Detailed results, including logs and exceptions reported during the execution of the test, can be seen in the execution details of each Test Run:


As we can see here:



Tips

  • After results are imported into Jira, Tests can be linked to existing requirements/user stories, so you can track the coverage of those requirements.
  • Results from multiple builds can be linked to an existing Test Plan, in order to facilitate the analysis of test result trends across builds.
  • Results can be associated with a Test Environment, in case you want to analyze coverage and test results by environment later on. A Test Environment can be a testing stage (e.g. dev, staging, preprod, prod) or an identifier of the device/application used to interact with the system (e.g. browser, mobile OS); see the example below.
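
As an example, the JUnit import endpoint accepts a testEnvironments parameter, so each per-browser report can be tagged with its own Test Environment (a sketch reusing the server/DC endpoint from the API tab):

Code Block
languagebash
# import the Chromium run and tag it with a matching Test Environment
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'http://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit?projectKey=COM&testEnvironments=chromium'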


