What you'll learn

  • How to define accessibility tests using axe-playwright
  • Run the test and push the test report to Xray
  • Validate that the test results are available in Jira

Source-code for this tutorial


Overview

Playwright is a recent browser automation tool that provides an alternative to Selenium.

Axe is an accessibility test engine for web and other HTML-based user interfaces.

We are using the axe-playwright package to run axe checks alongside the Playwright tests, automating the accessibility testing.

Prerequisites


For this example we will use the Playwright Test Runner, which accommodates the needs of end-to-end testing and does everything you would expect from a regular test runner.

If you want, you can use other runners (e.g. Jest, AVA, Mocha). In addition, we will use the Axe-Playwright library to assist with the accessibility testing.


 What you need:

Implementing tests

To start using the Playwright Test Runner, follow the Get Started documentation.


The test consists of validating the accessibility of the demo site.

First we need to configure the JUnit reporter to take advantage of some Xray integration capabilities that allow us to add additional metadata. This can be done using the xrayOptions available in Playwright; please check below for more information on how to do this.


Once the configuration is done, we can define the tests.

We have created two tests that will show two different ways to use the Axe-Playwright library.

The first test will use the method checkA11y() to perform accessibility validations according to the rules we are defining in the axe options.

./tests/accessibilityTest.spec.js
const { test, expect } = require('@playwright/test');
const { injectAxe, checkA11y } = require('axe-playwright');


test('Playwright dev accessibility', async ({ page }, testInfo) => {

  testInfo.annotations.push({ type: 'test_key', description: 'XT-506' });
  testInfo.annotations.push({ type: 'test_summary', description: 'Accessibility Validations' });
  testInfo.annotations.push({ type: 'requirements', description: 'XT-507' });
  testInfo.annotations.push({ type: 'test_description', description: 'WCAG AA' });
  await page.goto('https://playwright.dev/');

  await injectAxe(page);
  
  await checkA11y(page, null, {
    axeOptions: {
      runOnly: {
        type: 'tag',
        values: ['wcag2a'],
      },
      detailedReport: true,
    },
  })
});


First we are adding annotations to the report using the testInfo.annotations.push method. In the above example, we are defining the test key, test summary, and requirement that are linked to this test.


The next step is to access the page and use the injectAxe() method to inject the axe-core runtime into the page under test. Finally, we perform the accessibility validations by calling checkA11y().

From the example we can see that checkA11y() accepts an options object that lets you configure which accessibility validations to perform; in our case we want to use the rules defined by the tag wcag2a (more information on tags here).

There are many ways to configure your accessibility tests, so please check the documentation to better define it for your needs.
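As an illustration of what such a configuration can look like, here is a hedged sketch of an axe-core run-options object; `runOnly` and `rules` are standard axe-core option names, but the specific tags and the rule id shown are just examples, not values from this tutorial:

```javascript
// Sketch of an axe-core run-options object.
// `runOnly` selects rule sets by tag; `rules` toggles individual rules.
// This object would be passed to checkA11y() as { axeOptions: runOptions }.
const runOptions = {
  runOnly: {
    type: 'tag',
    values: ['wcag2a', 'wcag2aa'], // run WCAG 2.0 A and AA rules
  },
  rules: {
    'color-contrast': { enabled: false }, // example: disable one specific rule
  },
};

console.log(JSON.stringify(runOptions, null, 2));
```

Disabling individual rules like this is useful when a known, accepted violation would otherwise fail every run.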


To execute the test we run the following command:

npx playwright test


The results are immediately available in the terminal.

If an accessibility issue is detected, the test fails automatically and the details are available in the terminal.


A JUnit XML report is also generated, containing the extra information we added with the xrayOptions (test_key, test_summary and requirements) and details of the failure:

xray-report.xml
<testsuites id="" name="" tests="1" failures="1" skipped="0" errors="0" time="2.178456000000238">
<testsuite name="accessibilityTest.spec.js" timestamp="1661510064197" hostname="" tests="1" failures="1" skipped="0" time="1.29" errors="0">
<testcase name="Playwright dev accessibility" classname="[chromium] › accessibilityTest.spec.js:5:1 › Playwright dev accessibility" time="1.29">
<properties>
<property name="test_key" value="XT-506">
</property>
<property name="test_summary" value="Accessibility Validations">
</property>
<property name="requirements" value="XT-507">
</property>
<property name="testrun_evidence">
</property>
</properties>
<failure message="accessibilityTest.spec.js:5:1 Playwright dev accessibility" type="FAILURE">
<![CDATA[  [chromium] › accessibilityTest.spec.js:5:1 › Playwright dev accessibility ========================

    AssertionError [ERR_ASSERTION]: 2 accessibility violations were detected

        at Object.testResultDependsOnViolations (/Users/cristianocunha/Documents/Projects/tutorials/jest-axe-junit/playwright-axe/node_modules/axe-playwright/dist/utils.js:16:26)
        at /Users/cristianocunha/Documents/Projects/tutorials/jest-axe-junit/playwright-axe/node_modules/axe-playwright/dist/index.js:106:13
        at Generator.next (<anonymous>)
        at fulfilled (/Users/cristianocunha/Documents/Projects/tutorials/jest-axe-junit/playwright-axe/node_modules/axe-playwright/dist/index.js:24:58)
]]>
</failure>
<system-out>
<![CDATA[┌─────────┬─────────────┬────────────┬────────────────────────────────────────────────────────────────────────────────┬───────┐
│ (index) │     id      │   impact   │                                  description                                   │ nodes │
├─────────┼─────────────┼────────────┼────────────────────────────────────────────────────────────────────────────────┼───────┤
│    0    │ 'image-alt' │ 'critical' │ 'Ensures <img> elements have alternate text or a role of none or presentation' │  10   │
│    1    │ 'link-name' │ 'serious'  │                     'Ensures links have discernible text'                      │   9   │
└─────────┴─────────────┴────────────┴────────────────────────────────────────────────────────────────────────────────┴───────┘
]]>
</system-out>
</testcase>
</testsuite>
</testsuites>


The second way of defining accessibility tests is to use the getViolations() method, which returns an array of the accessibility violations detected in the page for you to process. This allows you to customize what you want to do with the information returned by the test.

The way to use getViolations() is very similar to checkA11y():

./tests/violationsTest.spec.js
const { test, expect } = require('@playwright/test');
const { injectAxe, getViolations } = require('axe-playwright');
const { parseViolations } = require('../utils/violationsHelper');

test('Playwright dev accessibility getViolations', async ({ page }, testInfo) => {

  testInfo.annotations.push({ type: 'test_key', description: 'XT-506' });
  testInfo.annotations.push({ type: 'test_summary', description: 'Accessibility Validations' });
  testInfo.annotations.push({ type: 'requirements', description: 'XT-507' });
  testInfo.annotations.push({ type: 'test_description', description: 'WCAG AA' });
  await page.goto('https://playwright.dev/');

  await injectAxe(page);
  
  const violations = await getViolations(page, null, {
    runOnly: {
      type: 'tag',
      values: ['wcag2a'],
    },
    detailedReport: true,
    detailedReportOptions: {html: true}
  })

  const results = parseViolations(violations);
  testInfo.annotations.push({ type: 'testrun_comment', description: results });

  expect(violations.length).toBe(0);
});


The getViolations() method returns a list of the violations detected, which we process using a method we created for this purpose: parseViolations(violations). This allows us to add valuable information to the testrun_comment field.

Notice that getViolations() does not fail the test when violations are found, so we added expect(violations.length).toBe(0) to fail it; otherwise the test would succeed even when violations were found.


We added a helper method to format the returned violations in a way that is useful after being imported into Xray.

./utils/violationsHelper.js
function parseViolations(violations) {
    let results = '';
    const separator = ' | ';
    const endline = '\n';

    if (violations.length > 0) {
        results += endline + 'Found ' + violations.length + ' accessibility violations.' + endline;
        violations.forEach((violation, index) => {
            results += index + ' - ' + violation.id + separator + violation.impact + separator + violation.description + endline;
        });
    }
    return results;
}

// Export with CommonJS so the require(...) calls in the tests resolve it.
module.exports = { parseViolations };


In this method, we iterate through the violations and format them so that they are readable after being imported into Xray.
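To make the exact output of this helper concrete (the text that ends up in the testrun_comment field), here is a self-contained run that inlines the helper above and feeds it two hypothetical violation objects shaped like axe-core results; the sample data is illustrative, not output from the tutorial site:

```javascript
// Inlined copy of the parseViolations() helper from above.
function parseViolations(violations) {
    let results = '';
    const separator = ' | ';
    const endline = '\n';

    if (violations.length > 0) {
        results += endline + 'Found ' + violations.length + ' accessibility violations.' + endline;
        violations.forEach((violation, index) => {
            results += index + ' - ' + violation.id + separator + violation.impact + separator + violation.description + endline;
        });
    }
    return results;
}

// Hypothetical sample data; in the real test this array comes from
// getViolations(page, ...).
const sample = [
  { id: 'image-alt', impact: 'critical', description: 'Images must have alternate text' },
  { id: 'link-name', impact: 'serious', description: 'Links must have discernible text' },
];

// Prints one summary line followed by one line per violation.
console.log(parseViolations(sample));
```

An empty violations array produces an empty string, so a passing test adds no comment.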


Once the code is implemented it can be executed with the following command:

npx playwright test


The results are immediately available in the terminal.


Notice that this time the violations are not written to the terminal; we only see whether the test passed or failed.

In the generated JUnit report we can see that the testrun_comment field now holds information about the accessibility violations:

Junit Report
<testsuites id="" name="" tests="1" failures="1" skipped="0" errors="0" time="2.4394780000001193">
<testsuite name="violationsTest.spec.js" timestamp="1661521665034" hostname="" tests="1" failures="1" skipped="0" time="1.503" errors="0">
<testcase name="Playwright dev accessibility getViolations" classname="[chromium] › violationsTest.spec.js:5:1 › Playwright dev accessibility getViolations" time="1.503">
<properties>
<property name="test_key" value="XT-506">
</property>
<property name="test_summary" value="Accessibility Validations">
</property>
<property name="requirements" value="XT-507">
</property>
<property name="test_description">
<![CDATA[WCAG AA]]>
</property>
<property name="testrun_comment">
<![CDATA[
Found 2 accessibility violations.
0 - image-alt | critical | Ensures <img> elements have alternate text or a role of none or presentation
1 - link-name | serious | Ensures links have discernible text
]]>
</property>
<property name="testrun_evidence">
</property>
</properties>
<failure message="violationsTest.spec.js:5:1 Playwright dev accessibility getViolations" type="FAILURE">
<![CDATA[  [chromium] › violationsTest.spec.js:5:1 › Playwright dev accessibility getViolations =============

    Error: expect(received).toBe(expected) // Object.is equality

    Expected: 0
    Received: 2

      24 |   testInfo.annotations.push({ type: 'test_description', description: results });
      25 |
    > 26 |   expect(violations.length).toBe(0);
         |                             ^
      27 | });
      28 |

        at /Users/cristianocunha/Documents/Projects/tutorials/jest-axe-junit/playwright-axe/tests/violationsTest.spec.js:26:29
]]>
</failure>
</testcase>
</testsuite>
</testsuites>


Repeat this process for each browser type in order to have the reports generated for each browser.

Notes:

  • By default it will execute the tests for the 3 available browser types (that is why we are forcing it to execute for only one browser)
  • By default, all the tests will be executed in headless mode
  • If you want to filter test executions, pass the test filename on the command line, for example: npx playwright test tests/violationsTest.spec.js
  • In order to get the JUnit test report, please follow this section
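To run and report on one browser at a time, the Playwright CLI --project flag can be used; the project names assume the default chromium/firefox/webkit configuration:

```shell
# Run the accessibility tests only on the "chromium" project,
# then repeat with firefox and webkit to get one report per browser.
npx playwright test tests/accessibilityTest.spec.js --project=chromium
```

Since the reporter writes to a fixed outputFile, rename or move the generated XML between runs so one browser's report does not overwrite another's.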


Integrating with Xray

As we saw in the example above, where we produce JUnit reports with the test results, it is now a matter of importing those results into your Jira instance. You can do this by simply submitting the automation results to Xray through the REST API, by using one of the available CI/CD plugins (e.g. for Jenkins), or by using the Jira interface.


API

Once you have the report file available you can upload it to Xray through a request to the REST API endpoint for JUnit. To do that, follow the first step of the instructions in v1 or v2 (depending on your usage) to obtain the token we will use in the subsequent requests.

Authentication

The request made will look like:

curl -H "Content-Type: application/json" -X POST --data '{ "client_id": "CLIENTID","client_secret": "CLIENTSECRET" }'  https://xray.cloud.getxray.app/api/v1/authenticate


The response of this request will return the token to be used in the subsequent requests for authentication purposes.
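Since the import request below uses $token, one common way to capture the token in a shell variable is sketched here (assuming the endpoint returns the token as a quoted JSON string; CLIENTID and CLIENTSECRET are placeholders for your Xray API key pair):

```shell
# Authenticate and store the token for subsequent requests.
# The response body is the token wrapped in double quotes, so strip them.
token=$(curl -s -H "Content-Type: application/json" -X POST \
  --data '{ "client_id": "CLIENTID", "client_secret": "CLIENTSECRET" }' \
  https://xray.cloud.getxray.app/api/v1/authenticate | tr -d '"')
```

The variable can then be referenced as "Authorization: Bearer $token" in the import request.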


JUnit XML results

Once you have the token, we use it in the API request together with the definition of some common fields of the Test Execution, such as the target project, project version, etc.

curl -H "Content-Type: text/xml" -X POST -H "Authorization: Bearer $token" --data @"xray-report.xml" "https://xray.cloud.getxray.app/api/v2/import/execution/junit?projectKey=XT&testPlanKey=XT-505"


With this command, you will create a new Test Execution in the referred Test Plan and you will be able to see the overall status directly in the Test Plan.


Looking at each result in more detail, we can see that for the first Test (XT-506) we do not have details of the violations; we only have the overall result and an indication that something is not OK regarding accessibility:


If we look into the second Test (XT-552), since we added a custom test description, we will have that information right in the Test Execution panel:


Passing additional test related information to Xray

We managed to have our contribution to Playwright accepted, and the end result is that you can use the native JUnit reporter to enrich the report with information that Xray treats natively.

Now you can use the testInfo object to add properties in the JUnit report, adding information that is natively supported by Xray.

Configuring the test reporter

To use it, start by adding a configuration file 'playwright.config.js' with the following content:

playwright.config.js
// JUnit reporter config for Xray
const xrayOptions = {
  // Whether to add <properties> with all annotations; default is false
  embedAnnotationsAsProperties: true,

  // By default, annotation is reported as <property name='' value=''>.
  // These annotations are reported as <property name=''>value</property>.
  textContentAnnotations: ['test_description', 'testrun_comment'],

  // This will create a "testrun_evidence" property that contains all attachments. Each attachment is added as an inner <item> element.
  // Disables [[ATTACHMENT|path]] in the <system-out>.
  embedAttachmentsAsProperty: 'testrun_evidence',

  // Where to put the report.
  outputFile: './xray-report.xml'
};

const config = {
  reporter: [ ['junit', xrayOptions] ]
};

module.exports = config;

This configuration sets up properties from particular annotations that are natively interpreted by Xray.


In the tests we can now add information using the available testInfo object:

accessibilityTest.spec.js
const { test, expect } = require('@playwright/test');
const { injectAxe, getViolations } = require('axe-playwright');
const { parseViolations } = require('../utils/violationsHelper');

test('Playwright dev accessibility getViolations', async ({ page }, testInfo) => {

  testInfo.annotations.push({ type: 'test_key', description: 'XT-552' });
  testInfo.annotations.push({ type: 'test_summary', description: 'Accessibility - getViolations' });
  testInfo.annotations.push({ type: 'requirements', description: 'XT-507' });
  testInfo.annotations.push({ type: 'test_description', description: 'WCAG AA' });
  await page.goto('https://playwright.dev/');

  await injectAxe(page);
  
  const violations = await getViolations(page, null, {
    runOnly: {
      type: 'tag',
      values: ['wcag2a'],
    },
    detailedReport: true,
    detailedReportOptions: {html: true}
  })

  const results = parseViolations(violations);
  testInfo.annotations.push({ type: 'testrun_comment', description: results });

  expect(violations.length).toBe(0);
});
 


We added several properties in the test to showcase the capabilities of these annotations but you can use only the ones that are useful in your case.

All annotations will be added as <property> elements on the JUnit XML report. The annotation type is mapped to the name attribute of the <property>, and the annotation description will be added as a value attribute.

Summarizing the annotations we are using:

  • test_key: Link to the test in Xray with the specified key.
  • test_summary: Redefine the summary of the test.
  • test_description: Redefine the test description.
  • requirements: Link to one or several requirements in Xray.


There is a special way to add attachments using the testInfo object; as an example, you could add a screenshot to the previous test with the following code:

test('Playwright dev accessibility getViolations', async ({ page }, testInfo) => {
  ...
  const path = testInfo.outputPath('tmp_screenshot.png');
  await page.screenshot({ path });

  testInfo.attachments.push({ name: 'screenshot.png', path, contentType: 'image/png' });
  ...
});


Seeing additional test information in Xray

If you are using the JUnit reporter defined above, the results will contain information that Xray can process and show to users.

To import these results you should use exactly the same approach as described here because the report generated will be a valid JUnit report with extra information.

Once imported we can see the redefinition of the summary, the screenshot added, the redefinition of the test description and the link added to the requirement.



Tips

  • After the results are imported into Jira, and if you haven't already done so directly in the test code, Tests can be linked to existing requirements/user stories so you can track the impact of their coverage.
  • Results from multiple builds can be linked to an existing Test Plan in order to facilitate the analysis of test result trends across builds.
  • Results can be associated with a Test Environment, in case you want to analyze coverage and test results by that environment later on. A Test Environment can be a testing stage (e.g. dev, staging, preprod, prod) or an identifier of the device/application used to interact with the system (e.g. browser, mobile OS).



References