Overview
In this tutorial, we will create some tests in Behave, which is a Cucumber variant for Python.
The test (specification) is initially created in Jira as a Cucumber Test and afterward, it is exported using the UI or the REST API.
We'll show you how to use both the Behave JSON report format and also the Cucumber JSON report format, in case you need it.
Usage scenarios
Behave (and Cucumber) can be used in diverse scenarios. Below you may find some common usage patterns, even though Behave is mostly recommended only if you are adopting BDD.
- Teams adopting BDD start by defining a user story and clarifying it using Scenario(s); usually, Scenario(s)/Scenario Outline(s) are specified directly in Jira, using Xray
- Teams adopting BDD but that favor a more Git-based approach (e.g. GitOps). In this case, stories would be defined in Jira but Behave .feature files would be specified using some IDE and would be stored in Git, for example
- Teams not adopting BDD but still using Behave, more as an automation framework. Sometimes focused on regression testing; sometimes, on non-regression testing. In this case, Cucumber would be used...
- With a user story or some sort of "requirement" described in Jira
- Without any story/"requirement" described in Jira
You may be adopting, or aiming to adopt, one of the previous patterns.
Before moving into the actual implementation, you need to decide which workflow you'll use: do you want to use Xray/Jira as the master for writing the declarative specification (i.e. the Gherkin based Scenarios), or do you want to manage those outside using some editor and store them in Git, for example?
Learn more
Please see Testing in BDD with Gherkin-based frameworks (e.g. Cucumber) for an overview of the possible workflows.
The place that you'll use to edit the Gherkin Scenarios will affect your workflow. There are teams that prefer to edit Scenarios in Jira using Xray, while there are others that prefer to edit them by writing the .feature files by hand using some IDE.
Example
We'll use some dummy examples from Behave's documentation.
The tests (specifications) are initially created in Jira as Cucumber Tests and afterward exported using the UI or the REST API.
This tutorial has the following requirements:
- Python 3.x
- behave and PyHamcrest Python libraries
In case you need to interact with Xray REST API at low-level using scripts (e.g. Bash/shell scripts), this tutorial uses an auxiliary file with the credentials (more info in Global Settings: API Keys).
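As an illustration of such an auxiliary file, assuming Xray cloud, you could keep a small JSON file (the name cloud_auth.json is an assumption) with the client id/secret pair of an API Key and exchange it for a token before calling the other endpoints:

```shell
# Hypothetical credentials file; the client_id/client_secret pair comes from
# an API Key created in Xray's Global Settings.
cat > cloud_auth.json <<'EOF'
{ "client_id": "YOUR_CLIENT_ID", "client_secret": "YOUR_CLIENT_SECRET" }
EOF

# Exchange the credentials for a token to use in subsequent REST API calls.
token=$(curl -s -H "Content-Type: application/json" -X POST \
  --data @"cloud_auth.json" \
  https://xray.cloud.getxray.app/api/v2/authenticate | tr -d '"')
```

The token is then passed as a Bearer token in the Authorization header of the remaining requests.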
Using Jira and Xray as master
This section assumes using Xray as master, i.e. the place that you'll be using to edit the specifications (e.g. the scenarios that are part of .feature files).
The overall flow would be something like this, assuming Git as the source code versioning system:
- define the story (skip if you already have it)
- create Scenario/Scenario Outline as a Test in Jira; usually, it would be linked to an existing "requirement"/Story (i.e. created from the respective issue screen)
- implement the code related to Gherkin statements/steps and store it in Git, for example. To start, and during development, you may need to generate/export the .feature file to your local environment
- commit previous code to Git
- checkout the code from Git
- generate .feature files based on the specification made in Jira
- run the tests in the CI
- obtain the report in Cucumber JSON format
- import the results back to Jira
Note that steps 5-9, performed by the CI tool, are all automated.
To generate .feature file(s) based on Scenarios defined in Jira (i.e. Cucumber Tests and Preconditions), we can do it directly from Jira, by the REST API, or using a CI tool; we'll see that ahead in more detail.
Step-by-step
Everything starts with a user story or some sort of “requirement” that you wish to validate. This is materialized as a Jira issue and identified by the corresponding issue key (e.g. CALC-1206).
We can promptly check that it is “UNCOVERED” (i.e. that it has no tests covering it, no matter their type/approach).
If you have this "requirement" as a Jira issue, then you can just use the "Create Test" on that issue to create the Scenario/Scenario Outline and have it automatically linked back to the Story/"requirement".
Otherwise, you can create the Test using the standard (issue) Create action from Jira's top menu.
We need to create the Test issue first and fill out the Gherkin statements later on in the Test issue screen.
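As an illustration, a minimal Scenario along the lines of the dummy examples in Behave's documentation could be entered in the Test issue's Gherkin editor:

```gherkin
Feature: Showing off behave

  Scenario: Run a simple test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!
```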
After the Test is created, and since we have done it from the user story screen, it will impact the coverage of the related "requirement"/story.
The coverage and the test results can be tracked on the "requirement" side (e.g. user story). In this case, you may see that coverage changed from being UNCOVERED to NOTRUN (i.e. covered and with at least one test not run).
We repeat the process for additional "requirements" and/or test Scenarios.
The related statement's code is managed outside of Jira and stored in Git, for example.
You can then export the specification of the test to a Cucumber .feature file via the REST API, or the Xray - Export to Cucumber UI action from within the Test/Test Execution issue or even based on an existing saved filter. As a source, you can identify Test, Test Set, Test Execution, Test Plan, or "requirement" issues. A plugin for your CI tool of choice can be used to ease this task.
So, you can either:
- use one of the available CI/CD plugins (e.g. see details of Integration with Jenkins; don't forget to define the issue keys or the filter id)
- use the REST API directly (more info here)
- ... or even use the UI (e.g. from a Test issue)
We will export the features to a new directory named features/.
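As a sketch of the REST API route, assuming Xray cloud and a valid API token in $token (obtained as shown earlier), the export could look like this; the keys parameter may reference Test, Test Set, Test Execution, Test Plan, or "requirement" issues:

```shell
# Export the Gherkin specification of CALC-1206's covering Tests as a zip of
# .feature files, and extract it into the features/ directory.
mkdir -p features
curl -s -H "Authorization: Bearer $token" \
  "https://xray.cloud.getxray.app/api/v2/export/cucumber?keys=CALC-1206" \
  -o features.zip
unzip -o features.zip -d features/
```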
After being exported, the created .feature(s) will contain references to the Test issue key, possibly prefixed (e.g. "TEST_") depending on an Xray global setting, and the covered "requirement" issue key, if applicable. The naming of these files is detailed in Generate Cucumber Features.
The corresponding steps implementation code lives in the following files.
Running tests
In order to run the tests, there are 2 options available:
- Using the native Behave JSON (JSON pretty) report => recommended way
- Using a custom reporter that generates a compatible Cucumber JSON report
If you choose the latter, the following code is based on sample code provided by an open-source contributor, "fredizzimo" (see the original code here), with small changes to make it correctly handle the JSON serialization of status results. You may create this cucumber_json126.py at the root of your project.
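The serialization tweak mentioned above relates to behave representing step statuses as an enum, which Python's standard json module cannot serialize directly. Purely as an illustration of the kind of fix involved (the Status class below is a stand-in, not behave's actual code):

```python
import json
from enum import Enum


# Stand-in for behave's status enum, used only to illustrate the problem;
# json.dumps() raises TypeError when handed an Enum member directly.
class Status(Enum):
    passed = 1
    failed = 2


def status_to_json_value(status):
    # Convert the enum to its name ("passed", "failed", ...), which is the
    # plain string a Cucumber JSON report expects.
    return status.name if isinstance(status, Enum) else status


print(json.dumps({"result": {"status": status_to_json_value(Status.passed)}}))
# → {"result": {"status": "passed"}}
```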
export PYTHONPATH=`pwd`
behave --format=cucumber_json126:PrettyCucumberJSONFormatter -o results/cucumber.json --format=json -o results/behave.json features
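A Behave JSON report is, roughly, a list of feature objects, each with scenario "elements" whose steps carry a "result" status. A quick sanity check of results/behave.json before importing it could be sketched as follows (the embedded report excerpt is simplified; real reports carry many more fields):

```python
import json
from collections import Counter

# Simplified excerpt of what results/behave.json could contain.
report = json.loads("""
[{"name": "Showing off behave",
  "elements": [
    {"type": "scenario", "name": "Run a simple test",
     "steps": [{"name": "we have behave installed", "result": {"status": "passed"}},
               {"name": "we implement a test", "result": {"status": "passed"}}]}
  ]}]
""")

# Count step statuses across all features/scenarios; steps that never ran
# have no "result" key and are counted as "untested".
statuses = Counter(
    step.get("result", {}).get("status", "untested")
    for feature in report
    for scenario in feature.get("elements", [])
    for step in scenario.get("steps", [])
)
print(dict(statuses))
# → {'passed': 2}
```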
Import results
After running the tests and generating the Behave report, it can be imported to Xray via the REST API or the Xray - Import Execution Results action within the Test Execution.
If we use the Cucumber JSON formatter instead, then the endpoint to be used needs to be changed accordingly.
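As a sketch, again assuming Xray cloud and a valid API token in $token, the two variants differ only in the endpoint and the report file submitted:

```shell
# Import the native Behave JSON report.
curl -s -H "Authorization: Bearer $token" -H "Content-Type: application/json" \
  --data @"results/behave.json" \
  "https://xray.cloud.getxray.app/api/v2/import/execution/behave"

# If the Cucumber JSON formatter was used instead, switch to the cucumber endpoint.
curl -s -H "Authorization: Bearer $token" -H "Content-Type: application/json" \
  --data @"results/cucumber.json" \
  "https://xray.cloud.getxray.app/api/v2/import/execution/cucumber"
```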
The execution page provides detailed information, which in this case includes the results for the different examples along with the respective step results.