Table of Contents

Overview

...

Broadly speaking, it can be used to automate acceptance “test cases” (i.e. scripts) no matter when you decide to do so or which practices your team follows, although it's preferable to do it from the start, involving the whole team in order to build a shared understanding.

In this tutorial, we will specify some tests using Robot Framework, assuming that your team is adopting ATDD, and see how we can gain visibility of the corresponding results in Jira, using Xray.

This tutorial explores the specific integration Xray provides for Robot Framework XML reports.

...

Info

You may find the full source for this example in this GitHub repository, which corresponds in essence to previous work by Pekka Klärck from the Robot Framework Foundation.

...


Common requirements

  • Robot Framework
  • SeleniumLibrary
  • Java (if using the Java variant of Robot Framework)
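
If you're using the Python variant, one possible way to set these up (a minimal sketch; both packages are published on PyPI under these names) is with pip:

No Format
pip install robotframework robotframework-seleniumlibrary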

...

Main example: the full ATDD workflow

In this example we're going to validate a dummy website (provided in the GitHub repository), checking for valid and invalid logins. 
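
As a rough sketch of what such tests could look like (note: the URL, locators, and credentials below are illustrative placeholders, not necessarily the ones used in the repository), consider:

Code Block: login_tests.robot (illustrative sketch)
*** Settings ***
Library           SeleniumLibrary

Suite Setup       Open Browser    ${URL}    Chrome
Suite Teardown    Close All Browsers
Test Setup        Go To    ${URL}

*** Variables ***
${URL}    http://localhost:7272    # placeholder address for the dummy website

*** Test Cases ***
Valid Login
    Input Text         username_field    demo
    Input Text         password_field    mode
    Click Button       login_button
    Title Should Be    Welcome Page

Invalid Login
    Input Text         username_field    demo
    Input Text         password_field    invalid
    Click Button       login_button
    Title Should Be    Error Page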


If the team is adopting ATDD and working collaboratively in order to build a shared understanding of what is going to be developed and why, along with some concrete usage examples, then the flow would be similar to the following diagram.

...

Within the execution screen details, accessible from each row, you can look at the Test Run details, which include the overall result as well as specifics about each keyword, including its duration and status.



Second example: running tests in parallel, against different environments

In this second, more advanced example we're going to run tests in parallel using "pabot".

This example uses a fake travel agency site (kindly provided by BlazeMeter) as the testing target.



We have two tests that use low-level keywords (note: this is not a good practice; it's just for simplicity) and one of those keywords is defined within a SeleniumLibrary plugin (i.e. it extends the keywords provided by SeleniumLibrary).


Code Block: search_flights.robot
*** Settings ***
Library  SeleniumLibrary    plugins=${CURDIR}/MyPlugin.py
Library  Collections

Suite Setup     Open Browser    ${URL}   ${BROWSER}
Suite Teardown  Close All Browsers


*** Variables ***
${URL}          http://blazedemo.com/
${BROWSER}      Chrome
@{allowed_destinations}  Buenos Aires   Rome    London  Berlin  New York    Dublin  Cairo

*** Test Cases ***
The search page presents valid options for searching
    [Tags]    1
    Go To    ${URL}
    Title Should Be     BlazeDemo
    Element Should Be Visible    css:input[type='submit']
    Wait Until Element Is Enabled    css:input[type='submit']
    Wait Until Element Is Clickable    input[type='submit']
    ${values}=  Get List Items    xpath://select[@name='fromPort']   values=True
    Log  ${values}
    ${allowed_departures}=  Create List  Paris  Philadelphia  Boston  Portland  San Diego  Mexico City  São Paolo
    Lists Should Be Equal    ${allowed_departures}   ${values}
    ${values}=  Get List Items    xpath://select[@name='toPort']   values=True
    Log  ${values}
    Lists Should Be Equal    ${allowed_destinations}   ${values}


The user can search for flights
    [Tags]         search_flights
    Go To    ${URL}
    Select From List By Value   xpath://select[@name='fromPort']  Paris
    Select From List By Value   xpath://select[@name='toPort']    London
    Click Button    css:input[type='submit']
    @{flights}=  Get WebElements    css:table[class='table']>tbody tr
    Should Not Be Empty     ${flights}

Code Block: MyPlugin.py
from SeleniumLibrary.base import LibraryComponent, keyword

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import element_to_be_clickable
from selenium.webdriver.common.by import By
class MyPlugin(LibraryComponent):

    def __init__(self, ctx):
        LibraryComponent.__init__(self, ctx)

    @keyword
    def wait_until_element_is_clickable(self, selector):
        """Adds a new keyword: Wait Until Element Is Clickable.

        Waits up to 10 seconds for the element matching the given CSS
        selector to become clickable and returns it.
        """
        self.info('Wait Until Element Is Clickable')
        wait = WebDriverWait(self.driver, 10)
        # fail fast if no element matches the selector at all
        self.element_finder.find("css:" + selector)
        return wait.until(element_to_be_clickable((By.CSS_SELECTOR, selector)))
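
To run these tests locally (assuming the driver for the chosen browser, e.g. "chromedriver", is available on the PATH), we can simply invoke Robot Framework; by default this produces the output.xml report that can later be submitted to Xray:

No Format
robot search_flights.robot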


Running the tests in parallel is possible using pabot.
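
If not yet available, pabot can be installed with pip (the package is published on PyPI as "robotframework-pabot"):

No Format
pip install -U robotframework-pabot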

Tests can be parallelized in different ways; here we'll split them on a test basis, so that individual tests (and not whole suites) are distributed among the parallel processes.

We can also specify some variables; in this case, we'll use an argument file to set the "BROWSER" variable, which is passed on to SeleniumLibrary.


Code Block: chromebrowser.txt
--variable BROWSER:Chrome

No Format
pabot --argumentfile1 ffbrowser.txt --argumentfile2 chromebrowser.txt --argumentfile3 headlessffbrowser.txt --argumentfile4 safaribrowser.txt --testlevelsplit 0_basic/search_flights.robot


Running these tests will produce one report per "argumentfileX" parameter (i.e. per browser). We can then submit those to Xray (e.g. using "curl" and the REST API), assigning them to distinct Test Executions, each one in turn assigned to a specific Test Environment that identifies the browser.


No Format
#!/bin/bash

# browsers, in the same order as the pabot argument files (argumentfile1..4)
BROWSERS=(firefox chrome headlessff safari)
PROJECT=CALC
TESTPLAN=CALC-6424

# submit each pabot output file to Xray, assigning it to the Test Plan
# and to the Test Environment that identifies the browser
i=1
for browser in "${BROWSERS[@]}"; do
 curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@pabot_results/output$i.xml" "http://192.168.56.102/rest/raven/1.0/import/execution/robot?projectKey=$PROJECT&testPlanKey=$TESTPLAN&testEnvironments=$browser"
 i=$((i+1))
done
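
Note that "projectKey", "testPlanKey" and "testEnvironments" are URL parameters of Xray's REST API endpoint for Robot Framework results; they link each submitted report to the right project, to the Test Plan, and to the Test Environment identifying the browser, respectively.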



Tracking automation results

...

Besides tracking automation results on the Test Execution issues themselves, it's also possible to track them in other places, so the whole team becomes fully aware of them.

On the user story issue screen

...