In this tutorial, we will create some tests in Behave, which is a Cucumber variant for Python.

The test (specification) is initially created in Jira as a Cucumber Test and afterward, it is exported using the UI or the REST API.

We'll show you how to use both the Behave JSON report format and the Cucumber JSON report format, in case you need the latter.

Source-code for this tutorial

Code is available on GitHub; the repo contains some additional tests beyond the scope of this tutorial and some auxiliary scripts.

Usage scenarios

Behave (and Cucumber) can be used in diverse scenarios. Below are some common usage patterns, although using Behave is mostly recommended only if you are adopting BDD.

  1. Teams adopting BDD start by defining a user story and clarifying it using Scenario(s); usually, Scenario(s)/Scenario Outline(s) are specified directly in Jira, using Xray
  2. Teams adopting BDD but that favor a more Git-based approach (e.g. GitOps). In this case, stories would be defined in Jira but Behave .feature files would be specified using some IDE and would be stored in Git, for example
  3. Teams not adopting BDD but still using Behave, more as an automation framework. Sometimes focused on regression testing; sometimes, on non-regression testing. In this case, Cucumber would be used...
    1. With a user story or some sort of "requirement" described in Jira
    2. Without any story/"requirement" described in Jira

You may be adopting, or aiming to adopt, one of the previous patterns.

Before moving into the actual implementation, you need to decide which workflow you'll use: do you want to use Xray/Jira as the master for writing the declarative specification (i.e. the Gherkin based Scenarios), or do you want to manage those outside using some editor and store them in Git, for example?

Learn more

Please see Testing in BDD with Gherkin-based frameworks (e.g. Cucumber) for an overview of the possible workflows.

The place that you'll use to edit the Gherkin Scenarios will affect your workflow. Some teams prefer to edit Scenarios in Jira using Xray, while others prefer to write the .feature files by hand using some IDE.


We'll use some dummy examples from Behave's documentation.

The tests (specifications) are initially created in Jira as Cucumber Tests and afterward exported using the UI or the REST API.

This tutorial has the following requirements:

  • Python 3.x
  • behave and PyHamcrest Python libraries
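Both libraries are available from PyPI; assuming you use pip, they can be installed from a minimal requirements.txt such as:

```text
behave
PyHamcrest
```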

In case you need to interact with the Xray REST API at a low level using scripts (e.g. Bash/shell scripts), this tutorial uses an auxiliary file with the credentials (more info in Global Settings: API Keys).

Example of cloud_auth.json used in this tutorial
{ "client_id": "215FFD69FE4644728C72180000000000","client_secret": "1c00f8f22f56a8684d7c18cd6147ce2787d95e4da9f3bfb0af8f020000000000" }

Using Jira and Xray as master

This section assumes using Xray as master, i.e. the place that you'll be using to edit the specifications (e.g. the scenarios that are part of .feature files).

The overall flow would be something like this, assuming Git as the source code versioning system:

  1. define the story (skip if you already have it)
  2. create Scenario/Scenario Outline as a Test in Jira; usually, it would be linked to an existing "requirement"/Story (i.e. created from the respective issue screen)
  3. implement the code related to Gherkin statements/steps and store it in Git, for example. To start, and during development, you may need to generate/export the .feature file to your local environment
  4. commit previous code to Git
  5. checkout the code from Git
  6. generate .feature files based on the specification made in Jira
  7. run the tests in the CI
  8. obtain the report in Cucumber JSON format
  9. import the results back to Jira

Note that steps 5-9, performed by the CI tool, are fully automated.

To generate .feature file(s) based on the Scenarios defined in Jira (i.e. Cucumber Tests and Preconditions), we can do it directly from Jira, through the REST API, or using a CI tool; we'll see this ahead in more detail.


Everything starts with a user story or some sort of “requirement” that you wish to validate. This is materialized as a Jira issue and identified by the corresponding issue key (e.g. CALC-1206).

We can promptly check that it is “UNCOVERED” (i.e. that it has no tests covering it, no matter their type/approach).

If you have this "requirement" as a Jira issue, then you can just use the "Create Test" on that issue to create the Scenario/Scenario Outline and have it automatically linked back to the Story/"requirement".

Otherwise, you can create the Test using the standard (issue) Create action from Jira's top menu. 

We need to create the Test issue first and fill out the Gherkin statements later on in the Test issue screen.


After the Test is created, and since we have done it from the user story screen, it will impact the coverage of the related "requirement"/story.

The coverage and the test results can be tracked on the "requirement" side (e.g. user story). In this case, you may see that coverage changed from being UNCOVERED to NOTRUN (i.e. covered and with at least one test not run).

We repeat the process for additional "requirements" and/or test Scenarios.

The related statement's code is managed outside of Jira and stored in Git, for example.

You can then export the specification of the test to a Cucumber .feature file via the REST API, or the Xray - Export to Cucumber UI action from within the Test/Test Execution issue or even based on an existing saved filter. As a source, you can identify Test, Test Set, Test Execution, Test Plan, or "requirement" issues. A plugin for your CI tool of choice can be used to ease this task.

So, you can either:

  • use one of the available CI/CD plugins (e.g. see details of Integration with Jenkins; don't forget to define the issue keys or the filter id)
  • use the REST API directly (more info here)
    • example of a shell script to export/generate .features from Xray
      token=$(curl -H "Content-Type: application/json" -X POST --data @"cloud_auth.json" https://xray.cloud.getxray.app/api/v2/authenticate| tr -d '"')
      curl -H "Content-Type: application/json" -X GET -H "Authorization: Bearer $token" "https://xray.cloud.getxray.app/api/v2/export/cucumber?keys=CALC-1206;CALC-1207" -o features.zip
      rm -rf features/*.feature
      unzip -o features.zip  -d features
  • ... or even use the UI (e.g. from a Test issue)

We will export the features to a new directory named features/.

After being exported, the created .feature(s) will contain references to the Test issue key, possibly prefixed (e.g. "TEST_") depending on an Xray global setting, and to the covered "requirement" issue key, if applicable. The naming of these files is detailed in Generate Cucumber Features.

Feature: Showing off behave

	Scenario: Run a simple test
		Given we have behave installed
		When we implement a test
		Then behave will test it for us!

Feature:  Scenario Outline (tutorial04)

	Scenario Outline: Use Blender with <thing>
		Given I put "<thing>" in a blender
		When I switch the blender on
		Then it should transform into "<other thing>"

		Examples: Amphibians
		    | thing         | other thing |
		    | Red Tree Frog | mush        |
		    | apples        | apple juice |
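At runtime, Behave expands a Scenario Outline into one concrete scenario per Examples row, substituting each <placeholder> with the value from the corresponding column. A rough sketch of that substitution (not behave's actual implementation) looks like this:

```python
# Rough sketch of how a Scenario Outline step is expanded from its Examples table.
template = 'Given I put "<thing>" in a blender'
examples = [
    {"thing": "Red Tree Frog", "other thing": "mush"},
    {"thing": "apples", "other thing": "apple juice"},
]

expanded = []
for row in examples:
    step = template
    for name, value in row.items():
        # Replace each <column name> placeholder with the row's value.
        step = step.replace(f"<{name}>", value)
    expanded.append(step)

print(expanded[0])  # Given I put "Red Tree Frog" in a blender
```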

The corresponding steps implementation code lives in the following files.

# file:features/steps/blender.py
# -----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
class Blender(object):
    TRANSFORMATION_MAP = {
        "Red Tree Frog": "mush",
        "apples": "apple juice",
        "iPhone": "toxic waste",
        "Galaxy Nexus": "toxic waste",
    }

    def __init__(self):
        self.thing  = None
        self.result = None

    @classmethod
    def select_result_for(cls, thing):
        return cls.TRANSFORMATION_MAP.get(thing, "DIRT")

    def add(self, thing):
        self.thing = thing

    def switch_on(self):
        self.result = self.select_result_for(self.thing)
# file:features/steps/step_tutorial01.py
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
from behave import given, when, then
@given('we have behave installed')
def step_impl(context):
    pass

@when('we implement a test')
def step_impl(context):
    assert True is not False

@then('behave will test it for us!')
def step_impl(context):
    assert context.failed is False
# file:features/steps/step_tutorial03.py
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
from behave   import given, when, then
from hamcrest import assert_that, equal_to
from blender  import Blender

@given('I put "{thing}" in a blender')
def step_given_put_thing_into_blender(context, thing):
    context.blender = Blender()

@when('I switch the blender on')
def step_when_switch_blender_on(context):

@then('it should transform into "{other_thing}"')
def step_then_should_transform_into(context, other_thing):
    assert_that(context.blender.result, equal_to(other_thing))
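The step code above is plain Python, so the Blender class can also be exercised outside behave. The class is repeated here (mirroring features/steps/blender.py) so the check is self-contained:

```python
class Blender:
    """Mirror of features/steps/blender.py, for a quick standalone check."""
    TRANSFORMATION_MAP = {
        "Red Tree Frog": "mush",
        "apples": "apple juice",
        "iPhone": "toxic waste",
        "Galaxy Nexus": "toxic waste",
    }

    def __init__(self):
        self.thing = None
        self.result = None

    @classmethod
    def select_result_for(cls, thing):
        # Anything not in the map turns into "DIRT".
        return cls.TRANSFORMATION_MAP.get(thing, "DIRT")

    def add(self, thing):
        self.thing = thing

    def switch_on(self):
        self.result = self.select_result_for(self.thing)

blender = Blender()
blender.add("apples")
blender.switch_on()
print(blender.result)  # apple juice
```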

Running tests

To run the tests, there are 2 options available:

  • Using the native Behave JSON (JSON pretty) report => recommended way
  • Using a custom reporter that generates a compatible Cucumber JSON report

If you choose the latter, the following code is based on sample code provided by an open-source contributor "fredizzimo" (see original code here), with small changes to correctly handle the JSON serialization of status results. You may create this cucumber_json126.py at the root of your project.

# -*- coding: utf-8 -*-

from __future__ import absolute_import
from behave.model_core import Status
from behave.formatter.base import Formatter
import base64
import six
import copy
try:
    import json
except ImportError:
    import simplejson as json

# -----------------------------------------------------------------------------
# CLASS: JSONFormatter
# -----------------------------------------------------------------------------
class CucumberJSONFormatter(Formatter):
    name = 'json'
    description = 'JSON dump of test run'
    dumps_kwargs = {}

    json_number_types = six.integer_types + (float,)
    json_scalar_types = json_number_types + (six.text_type, bool, type(None))

    def __init__(self, stream_opener, config):
        super(CucumberJSONFormatter, self).__init__(stream_opener, config)
        # -- ENSURE: Output stream is open.
        self.stream = self.open()
        self.feature_count = 0
        self.current_feature = None
        self.current_feature_data = None
        self._step_index = 0
        self.current_background = None
        self.current_background_data = None

    def reset(self):
        self.current_feature = None
        self.current_feature_data = None
        self._step_index = 0
        self.current_background = None

    def uri(self, uri):
        pass

    def feature(self, feature):
        self.reset()
        self.current_feature = feature
        self.current_feature_data = {
            'id': self.generate_id(feature),
            'uri': feature.location.filename,
            'line': feature.location.line,
            'description': '',
            'keyword': feature.keyword,
            'name': feature.name,
            'tags': self.write_tags(feature.tags),
            'status': feature.status.name,
        }
        element = self.current_feature_data
        if feature.description:
            element['description'] = self.format_description(feature.description)

    def background(self, background):
        element = {
            'type': 'background',
            'keyword': background.keyword,
            'name': background.name,
            'location': six.text_type(background.location),
            'steps': [],
        }
        self._step_index = 0
        self.current_background = element

    def scenario(self, scenario):
        if self.current_background is not None:
            self.add_feature_element(copy.deepcopy(self.current_background))
        element = self.add_feature_element({
            'type': 'scenario',
            'id': self.generate_id(self.current_feature, scenario),
            'line': scenario.location.line,
            'description': '',
            'keyword': scenario.keyword,
            'name': scenario.name,
            'tags': self.write_tags(scenario.tags),
            'location': six.text_type(scenario.location),
            'steps': [],
        })
        if scenario.description:
            element['description'] = self.format_description(scenario.description)
        self._step_index = 0

    @classmethod
    def make_table(cls, table):
        table_data = {
            'headings': table.headings,
            'rows': [list(row) for row in table.rows]
        }
        return table_data

    def step(self, step):
        s = {
            'keyword': step.keyword,
            'step_type': step.step_type,
            'name': step.name,
            'line': step.location.line,
            'result': {
                'status': 'skipped',
                'duration': 0,
            },
        }

        if step.text:
            s['doc_string'] = {
                'value': step.text,
                'line': step.text.line,
            }
        if step.table:
            s['rows'] = [{'cells': [heading for heading in step.table.headings]}]
            s['rows'] += [{'cells': [cell for cell in row.cells]} for row in step.table]

        if self.current_feature.background is not None:
            element = self.current_feature_data['elements'][-2]
            if len(element['steps']) >= len(self.current_feature.background.steps):
                element = self.current_feature_element
        else:
            element = self.current_feature_element
        element['steps'].append(s)

    def match(self, match):
        if match.location:
            # -- NOTE: match.location=None occurs for undefined steps.
            match_data = {
                'location': six.text_type(match.location) or "",
            }
            self.current_step['match'] = match_data

    def result(self, result):
        self.current_step['result'] = {
            'status': result.status.name,
            'duration': int(round(result.duration * 1000.0 * 1000.0 * 1000.0)),
        }
        if result.error_message and result.status == Status.failed:
            # -- OPTIONAL: Provided for failed steps.
            error_message = result.error_message
            result_element = self.current_step['result']
            result_element['error_message'] = error_message
        self._step_index += 1

    def embedding(self, mime_type, data):
        step = self.current_feature_element['steps'][-1]
        step.setdefault('embeddings', []).append({
            'mime_type': mime_type,
            'data': base64.b64encode(data).decode('ascii'),
        })

    def eof(self):
        """
        End of feature
        """
        if not self.current_feature_data:
            return

        # -- NORMAL CASE: Write collected data of current feature.
        self.update_status_data()

        if self.feature_count == 0:
            # -- FIRST FEATURE:
            self.write_json_header()
        else:
            # -- NEXT FEATURE:
            self.write_json_feature_separator()

        self.write_json_feature(self.current_feature_data)
        self.current_feature_data = None
        self.feature_count += 1

    def close(self):
        self.write_json_footer()
        self.close_stream()

    def add_feature_element(self, element):
        assert self.current_feature_data is not None
        if 'elements' not in self.current_feature_data:
            self.current_feature_data['elements'] = []
        self.current_feature_data['elements'].append(element)
        return element

    @property
    def current_feature_element(self):
        assert self.current_feature_data is not None
        return self.current_feature_data['elements'][-1]

    @property
    def current_step(self):
        step_index = self._step_index
        if self.current_feature.background is not None:
            element = self.current_feature_data['elements'][-2]
            if step_index >= len(self.current_feature.background.steps):
                step_index -= len(self.current_feature.background.steps)
                element = self.current_feature_element
        else:
            element = self.current_feature_element

        return element['steps'][step_index]

    def update_status_data(self):
        assert self.current_feature
        assert self.current_feature_data
        self.current_feature_data['status'] = self.current_feature.status.name

    def write_tags(self, tags):
        return [{'name': f'@{tag}', 'line': tag.line if hasattr(tag, 'line') else 1} for tag in tags]

    def generate_id(self, feature, scenario=None):
        def convert(name):
            return name.lower().replace(' ', '-')
        id = convert(feature.name)
        if scenario is not None:
            id += ';'
            id += convert(scenario.name)
        return id

    def format_description(self, lines):
        description = '\n'.join(lines)
        description = '<pre>%s</pre>' % description
        return description

    # -- JSON-WRITER:
    def write_json_header(self):
        self.stream.write('[\n')

    def write_json_footer(self):
        self.stream.write('\n]\n')

    def write_json_feature(self, feature_data):
        self.stream.write(json.dumps(feature_data, **self.dumps_kwargs))
        self.stream.flush()

    def write_json_feature_separator(self):
        self.stream.write(",\n\n")
# -----------------------------------------------------------------------------
# CLASS: PrettyJSONFormatter
# -----------------------------------------------------------------------------
class PrettyCucumberJSONFormatter(CucumberJSONFormatter):
    """Provides readable/comparable textual JSON output."""
    name = 'json.pretty'
    description = 'JSON dump of test run (human readable)'
    dumps_kwargs = { 'indent': 2, 'sort_keys': True }
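For illustration, the id and tag conventions the formatter emits (lowercase names joined by ';', tags prefixed with '@') can be exercised standalone. This sketch reimplements the same two helpers outside the class:

```python
# Standalone sketch of the id/tag conventions used by the formatter above.
def generate_id(feature_name, scenario_name=None):
    convert = lambda name: name.lower().replace(" ", "-")
    id_ = convert(feature_name)
    if scenario_name is not None:
        id_ += ";" + convert(scenario_name)
    return id_

def write_tags(tags):
    # behave tag objects carry a line attribute; plain strings default to line 1.
    return [{"name": f"@{tag}", "line": getattr(tag, "line", 1)} for tag in tags]

print(generate_id("Showing off behave", "Run a simple test"))
# showing-off-behave;run-a-simple-test
print(write_tags(["TEST_CALC-1234"]))
# [{'name': '@TEST_CALC-1234', 'line': 1}]
```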

example of a Bash script to run the tests
export PYTHONPATH=`pwd`
behave --format=cucumber_json126:PrettyCucumberJSONFormatter -o results/cucumber.json  --format=json -o results/behave.json features
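Behave's native JSON report is a list of features, each with elements (scenarios) and steps carrying a result status. A small sketch that tallies step statuses from such a report; the sample here is inlined and fabricated for illustration, whereas in practice you would json.load the results/behave.json produced by the command above:

```python
import json
from collections import Counter

# Inline sample shaped like (a fragment of) behave's JSON report.
sample = json.loads("""
[
  {"name": "Showing off behave",
   "elements": [
     {"type": "scenario", "name": "Run a simple test",
      "steps": [
        {"name": "we have behave installed", "result": {"status": "passed"}},
        {"name": "we implement a test", "result": {"status": "passed"}}
      ]}
   ]}
]
""")

# Count step results across all features/scenarios in the report.
statuses = Counter(
    step["result"]["status"]
    for feature in sample
    for element in feature.get("elements", [])
    for step in element.get("steps", [])
)
print(dict(statuses))  # {'passed': 2}
```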

Import results

After running the tests and generating the Behave report, it can be imported to Xray via the REST API or the Xray - Import Execution Results action within the Test Execution.

example of a Bash script to import results using the standard Behave endpoint
token=$(curl -H "Content-Type: application/json" -X POST --data @"cloud_auth.json" "$BASE_URL/api/v2/authenticate"| tr -d '"')
curl -H "Content-Type: application/json" -X POST -H "Authorization: Bearer $token"  --data @"results/behave.json" "$BASE_URL/api/v2/import/execution/behave"

If we use the Cucumber JSON formatter instead, then the endpoint to be used needs to be changed accordingly.

example of a Bash script to import results using the standard Cucumber endpoint
token=$(curl -H "Content-Type: application/json" -X POST --data @"cloud_auth.json" "$BASE_URL/api/v2/authenticate"| tr -d '"')
curl -H "Content-Type: application/json" -X POST -H "Authorization: Bearer $token"  --data @"results/cucumber.json" "$BASE_URL/api/v2/import/execution/cucumber"

The execution page provides detailed information, which in this case includes the results for the different examples along with the respective step results.