Playwright is a recent browser automation tool that provides an alternative to Selenium.
Applitools Eyes is a visual AI test automation tool that provides an SDK you can add to your test project to perform visual validations.
For this example we will use the Playwright Test Runner and the Applitools Eyes SDK. We will need:
To start using the Playwright Test Runner please follow the Get Started documentation.
The tests consist of validating three features of the demo site: the Home link, the Find Owners functionality, and the Veterinarians link.
We want to add visual validations to these tests, so we have included the Applitools Eyes SDK to take advantage of the tool's comparison abilities.
Before coding the tests, start by registering on the Applitools Eyes site and obtaining an API key (this is what we will use during test execution to ship screenshots to the tool for comparison); more information on how to do this is available here.
We started by defining PageObjects to represent the pages we will interact with; we have defined three, as shown below:
```javascript
const config = require("../config/config.json");

class OwnersPage {
  constructor(page) {
    this.page = page;
  }

  async navigate() {
    await this.page.goto(config.endpoint);
  }

  async click_find_owners_button() {
    await this.page.click(config.find_owners_button);
  }
}

module.exports = { OwnersPage };
```
```javascript
const config = require("../config/config.json");

class HomePage {
  constructor(page) {
    this.page = page;
  }

  async navigate() {
    await this.page.goto(config.endpoint);
  }

  async getMenuEntry() {
    return await this.page.locator(config.top_menu_entry).first();
  }

  async getHomeText() {
    return config.home_text;
  }
}

module.exports = { HomePage };
```
```javascript
const config = require("../config/config.json");

class VetsPage {
  constructor(page) {
    this.page = page;
  }

  async navigate() {
    await this.page.goto(config.endpoint);
  }

  async getTopMenuEntry() {
    return this.page.locator(config.vet_menu_entry).first();
  }

  async getVetsText() {
    return config.vet_text;
  }
}

module.exports = { VetsPage };
```
Plus a configuration file holding the identifiers that match the elements on the page; this adds an extra abstraction layer to the tests, allowing us to redefine locators or text without changing the code.
```json
{
  "endpoint": "https://xray-essentials-petclinic.herokuapp.com/",
  "owners_link": "a[title=\"find owners\"]",
  "top_menu_entry": "//*[@id=\"main-navbar\"]/ul/li[1]/a",
  "vet_menu_entry": "//*[@id=\"main-navbar\"]/ul/li[3]/a",
  "find_owners_button": "a[title=\"find owners\"]",
  "vet_text": "Veterinarians",
  "home_text": "Home"
}
```
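To illustrate the abstraction this gives us, here is a minimal, self-contained sketch (plain Node, with an inline object standing in for `config.json` and a fake page recording clicks; `OwnersPageDemo` and `fakePage` are illustrative names, not part of the project): the page object resolves its selector through the configuration, so redefining a locator never touches the page-object code.

```javascript
// Inline stand-in for require("../config/config.json")
const config = {
  endpoint: "https://xray-essentials-petclinic.herokuapp.com/",
  find_owners_button: 'a[title="find owners"]'
};

// The page object never hard-codes selectors; it resolves them through config.
class OwnersPageDemo {
  constructor(page, cfg) {
    this.page = page;
    this.cfg = cfg;
  }
  click_find_owners_button() {
    // The locator comes from configuration, not from the code.
    this.page.click(this.cfg.find_owners_button);
  }
}

// Fake "page" that just records which selector was used.
const clicked = [];
const fakePage = { click: (selector) => clicked.push(selector) };

const owners = new OwnersPageDemo(fakePage, config);
owners.click_find_owners_button();

// Redefining the locator requires no change to the page object:
config.find_owners_button = "#find-owners";
owners.click_find_owners_button();

console.log(clicked); // [ 'a[title="find owners"]', '#find-owners' ]
```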
We added a helper file that parses the information returned by Applitools Eyes and adds valuable details to the JUnit report.
```javascript
class Helper {
  constructor() {
  }

  handleTestResults(summary) {
    let ex = summary.getException();
    if (ex != null) {
      console.log("System error occurred while checking target.\n");
    }
    let result = summary.getTestResults();
    if (result == null) {
      console.log("No test results information available\n");
    } else {
      console.log("[Eyes URL|%s] \\\\ AppName = %s \\\\ testname = %s \\\\ status = %s \\\\ different = %s \\\\ Browser = %s \\\\ OS = %s \\\\ viewport = %dx%d \\\\ matched = %d \\\\ mismatched = %d \\\\ missing = %d\\\\ aborted = %s\\\\",
        result.getUrl(),
        result.getAppName(),
        result.getName(),
        result.getStatus(),
        result.getIsDifferent(),
        result.getHostApp(),
        result.getHostOS(),
        result.getHostDisplaySize().getWidth(),
        result.getHostDisplaySize().getHeight(),
        result.getMatches(),
        result.getMismatches(),
        result.getMissing(),
        (result.getIsAborted() ? "aborted" : "no"));
      let steps = result.getStepsInfo();
      steps.forEach(step => {
        console.log("StepName = %s, different = %s\\\\", step.getName(), step.getIsDifferent());
      });
    }
  }
}

module.exports = { Helper };
```
The tests that validate whether the features behave as expected are below; notice that we are using the Applitools Eyes SDK and adding checks to the tests at the moments where we want visual validations.
For the purpose of this tutorial we will focus on the Owners validations (the others are similar, with more or fewer actions).
```typescript
import { test, expect } from '@playwright/test';
import { OwnersPage } from "../models/owners";
import { Helper } from "../models/helper";
const { Eyes, ClassicRunner, Target, Configuration, BatchInfo, MatchLevel,
        TestResultContainer, TestResults } = require('@applitools/eyes-playwright');

test.describe("PetClinic validations", () => {
  let eyes, runner;

  test.beforeEach(async () => {
    // Initialize the Runner for your test.
    runner = new ClassicRunner();
    // Create Eyes object with the runner
    eyes = new Eyes(runner);
    // Initialize the eyes configuration
    const configuration = new Configuration();
    // Create a new batch info instance and set it to the configuration
    configuration.setBatch(new BatchInfo('PetClinic Batch - Playwright - Classic'));
    // Define the match level we need for our tests
    eyes.setMatchLevel(MatchLevel.Strict);
    // Set the configuration to eyes
    eyes.setConfiguration(configuration);
  });

  test('Validate find owners link', async ({ page }) => {
    const ownersPage = new OwnersPage(page);
    await ownersPage.navigate();
    await eyes.open(page, 'PetClinic', 'FindOwnersLink', { width: 800, height: 600 });
    await ownersPage.click_find_owners_button();
    await eyes.check(Target.window().fully());
    await eyes.close();
  });

  test.afterEach(async () => {
    const helper = new Helper();
    // If the test was aborted before eyes.close was called, ends the test as aborted.
    await eyes.abort();
    // We pass false to this method to suppress the exception that is thrown if we
    // find visual differences
    const results = await runner.getAllTestResults(false);
    results.getAllResults().forEach(result => {
      helper.handleTestResults(result);
    });
  });
});
```
Looking at this class in more detail, we can see different areas:
In the "test.beforeEach" we configure the runner that will be used by the Eyes instance; as you can see, we are using the classic one (Eyes has another runner available, the "Visual Grid Runner", which interacts with the Eyes Ultrafast Grid server to render the checkpoint images in the cloud).
We defined a configuration object that holds the configuration for the instance: we define the batch, named 'PetClinic Batch - Playwright - Classic', and the match level (in our case the recommended one, Strict, though others are available).
In the test itself we have a normal Playwright test with additions from the Applitools Eyes SDK; let's look at those in more detail:
Finally, in the "test.afterEach" we make sure to close all Eyes instances by calling the "abort" method and process the results returned by the runner.
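To make the order of those SDK calls explicit, here is a minimal, self-contained sketch using hypothetical stand-in objects (not the real `@applitools/eyes-playwright` API, whose methods are asynchronous and must be awaited): `open` starts the visual test, each `check` adds a checkpoint, `close` ends the test and reports differences, and `abort` only acts if the test never reached `close`.

```javascript
// Hypothetical stubs that only record the call order of the Eyes lifecycle.
const calls = [];
const eyes = {
  open(app, testName) { calls.push(`open:${testName}`); },
  check() { calls.push("check"); },
  close() { calls.push("close"); },
  // abort is a safety net: it does nothing if close already ended the test.
  abort() { if (!calls.includes("close")) calls.push("abort"); },
};

// Mirrors the body of 'Validate find owners link':
eyes.open("PetClinic", "FindOwnersLink"); // start the visual test
eyes.check();                             // one visual checkpoint
eyes.close();                             // end the test, report differences

// Mirrors test.afterEach: a no-op here, since close was reached.
eyes.abort();

console.log(calls.join(" -> ")); // open:FindOwnersLink -> check -> close
```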
Once the code is implemented, we run it to define the baseline (a baseline stores a sequence of reference images) that subsequent test runs will be compared against. We achieve that with the following command:
```shell
APPLITOOLS_API_KEY="API_KEY" PLAYWRIGHT_JUNIT_OUTPUT_NAME=results.xml npx playwright test ./tests/* --browser=chromium --reporter=junit,line
```
Note: if the APPLITOOLS_API_KEY is not defined, the tests will still be executed, but the screenshots will not be sent to Applitools Eyes.
The generated output shows how many tests were executed, produces a JUnit report, and returns the link to check the visual assertions.
In the Applitools Eyes interface we can see that a new application was created with 3 tests:
If we navigate to the test results we will see the three tests properly named, with information about the OS, browser and viewport used, the screenshot taken, an indication of whether it is new, and a date.
At this point we have generated our baseline and the tests behave as expected. We will now introduce a change in the application, removing strings exercised by the Owners test: the test itself will still succeed, but the visual validation will fail because the screenshot no longer matches the baseline, thus failing the run overall.
After the second execution the output terminal will have the following information:
The report generated will contain the following information:
```xml
<testsuites id="" name="" tests="3" failures="1" skipped="0" errors="0" time="23.226">
  <testsuite name="tests/home.spec.ts" timestamp="1639395430782" hostname="" tests="1" failures="0" skipped="0" time="14.955" errors="0">
    <testcase name="PetClinic validations Validate home link" classname="[chromium] › tests/home.spec.ts:32:3 › PetClinic validations › Validate home link" time="14.955">
      <system-out>
[Eyes URL|https://eyes.applitools.com/app/batches/00000251762905362858/00000251762905362326?accountId=o9A_TwFAGkSW8d9i1ZlDBg~~] \\ AppName = PetClinic \\ testname = HomeLink \\ status = Passed \\ different = false \\ Browser = Chrome 97.0 \\ OS = Mac OS X 10.15 \\ viewport = 800x600 \\ matched = 1 \\ mismatched = 0 \\ missing = 0\\ aborted = no\\
StepName = , different = false\\
      </system-out>
    </testcase>
  </testsuite>
  <testsuite name="tests/owners.spec.ts" timestamp="1639395430782" hostname="" tests="1" failures="1" skipped="0" time="21.788" errors="0">
    <testcase name="PetClinic validations Validate find owners link" classname="[chromium] › tests/owners.spec.ts:31:3 › PetClinic validations › Validate find owners link" time="21.788">
      <failure message="owners.spec.ts:31:3 Validate find owners link" type="FAILURE">
[chromium] › tests/owners.spec.ts:31:3 › PetClinic validations › Validate find owners link =======
Error: Test 'FindOwnersLink' of 'PetClinic' detected differences!
See details at: https://eyes.applitools.com/app/batches/00000251762905361920/00000251762905361545?accountId=o9A_TwFAGkSW8d9i1ZlDBg~~

  35 |     await ownersPage.click_find_owners_button();
  36 |     await eyes.check(Target.window().fully());
> 37 |     await eyes.close();
     |     ^
  38 |   });
  39 |
  40 |   test.afterEach(async () => {

    at Eyes.close (/Users/cristianocunha/Documents/Projects/applitoolseyes/node_modules/@applitools/eyes-api/dist/Eyes.js:247:23)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
    at /Users/cristianocunha/Documents/Projects/applitoolseyes/tests/tests/owners.spec.ts:37:5
    at WorkerRunner._runTestWithBeforeHooks (/Users/cristianocunha/Documents/Projects/applitoolseyes/node_modules/@playwright/test/lib/workerRunner.js:478:7)
      </failure>
      <system-out>
[Eyes URL|https://eyes.applitools.com/app/batches/00000251762905361920/00000251762905361545?accountId=o9A_TwFAGkSW8d9i1ZlDBg~~] \\ AppName = PetClinic \\ testname = FindOwnersLink \\ status = Unresolved \\ different = true \\ Browser = Chrome 97.0 \\ OS = Mac OS X 10.15 \\ viewport = 800x600 \\ matched = 0 \\ mismatched = 1 \\ missing = 0\\ aborted = no\\
StepName = , different = true\\
      </system-out>
    </testcase>
  </testsuite>
  <testsuite name="tests/veterinarians.spec.ts" timestamp="1639395430782" hostname="" tests="1" failures="0" skipped="0" time="15.489" errors="0">
    <testcase name="PetClinic validations Validate veterinarians link" classname="[chromium] › tests/veterinarians.spec.ts:31:3 › PetClinic validations › Validate veterinarians link" time="15.489">
      <system-out>
[Eyes URL|https://eyes.applitools.com/app/batches/00000251762905363795/00000251762905363202?accountId=o9A_TwFAGkSW8d9i1ZlDBg~~] \\ AppName = PetClinic \\ testname = VetsLink \\ status = Passed \\ different = false \\ Browser = Chrome 97.0 \\ OS = Mac OS X 10.15 \\ viewport = 800x600 \\ matched = 1 \\ mismatched = 0 \\ missing = 0\\ aborted = no\\
StepName = , different = false\\
      </system-out>
    </testcase>
  </testsuite>
</testsuites>
```
When we access the link provided by Applitools Eyes we can see the visual changes detected:
When accessing the details we can see the actual differences detected between the baseline and the latest test side by side:
Notes:
As we saw in the example above, we are producing JUnit reports with the results of the tests; it is now a matter of importing those results into your Jira instance. This can be done by submitting the automation results to Xray through the REST API, by using one of the available CI/CD plugins (e.g. for Jenkins), or through the Jira interface.
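As a sketch of the REST API route, a JUnit report could be submitted with something like the following; the endpoint, project key, and token handling are assumptions (Xray cloud shown here; server/DC uses a different path), so check the Xray REST API documentation for your deployment. This sketch only builds and prints the command rather than executing it:

```shell
#!/bin/sh
# Hypothetical values - adjust to your own report name, project and instance.
REPORT="results.xml"
PROJECT_KEY="CALC"
# Assumed Xray cloud endpoint for JUnit imports.
ENDPOINT="https://xray.cloud.getxray.app/api/v2/import/execution/junit"

# Build the curl invocation; printed instead of executed, since running it
# needs a valid $XRAY_TOKEN and a reachable instance.
CMD="curl -H \"Authorization: Bearer \$XRAY_TOKEN\" -H \"Content-Type: text/xml\" --data @${REPORT} \"${ENDPOINT}?projectKey=${PROJECT_KEY}\""
echo "$CMD"
```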