Overview
Taurus is an open-source automation tool for both load and functional testing.
Taurus acts as an abstraction layer on top of different load and functional testing tools, allowing users to easily maintain YAML/JSON-based test scripts by storing them in a source control system.
Taurus can interact with JMeter, Gatling, Robot Framework and other tools.
By using its CLI, Taurus can easily be integrated into a CI pipeline.
It can produce XML/CSV-based reports as well as JUnit XML reports. Reports can also be uploaded to BlazeMeter, where they can be analyzed in greater depth.
Taurus provides an SLA/SLO-like mechanism based on "pass/fail" criteria. Criteria can be defined based on typical load testing metrics (i.e. the "runtime" criteria) and/or on monitoring data obtained from the services running the target system. Thus, Taurus can be used as a way to extend JMeter, for example, and provide it with advanced SLAs.
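For reference, a minimal sketch of such a passfail configuration could look like this (the threshold values mirror the criteria used later in this example):

```yaml
reporting:
- module: passfail
  criteria:
  # mark the run as failed if the average response time
  # stays above 10ms for at least 7 seconds
  - avg-rt>10ms for 7s, stop as failed
```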
Requirements
- Taurus
- JMeter (installed automatically)
- Jenkins (optional)
Description
In the following example, we'll perform testing on a fictitious "Simple Travel Agency" site (kindly provided by BlazeMeter for demo purposes).
The overall approach for gaining visibility of the performance results in Xray is as follows:
- run Taurus in command line
- generate multiple test reports
- standard results in CSV and XML formats
- custom JUnit XML report with additional info
- submit results to Xray along with the previously generated report assets
- fill out the "Description" field of the corresponding created Test Execution issue with
- link to project/job in Jenkins
- link to BlazeMeter report
- summary results formatted as a table
Load testing example, with SLAs
In this example, we will load test the site.
The testing scenario exercises 10 users, with a ramp-up period of 40s, doing a (partial) reservation interaction: go to the site, and reserve a flight from Paris to Buenos Aires.
There are several "labels" (i.e. transactions), grouping one or more actions (i.e. HTTP requests). However, there are no explicit assertions.
The scenario itself and execution-related details are described in a YAML file.
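A sketch of such a configuration, assuming the BlazeMeter demo site (the scenario name, labels and form fields are illustrative):

```yaml
execution:
- scenario: purchase-user-path
  concurrency: 10   # 10 virtual users...
  ramp-up: 40s      # ...ramped up over 40 seconds

scenarios:
  purchase-user-path:
    default-address: http://blazedemo.com
    requests:
    - label: OpenHomepage
      url: /
    - label: FindFlights
      url: /reserve.php
      method: POST
      body:
        fromPort: Paris
        toPort: Buenos Aires
```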
Multiple reporting modules are configured to process the results. A custom module produces a JUnit XML report; this module is a customized variant of the standard junit-xml module.
This custom-junit-xml module accepts one additional setting, "classname", that can be used to customize the classname attribute on the target <testcase> elements of the XML report; otherwise, "bzt" will be used. We can use it to assign a unique identifier such as "PurchaseUserPath".
The module also provides additional information that may be shown in Xray.
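A possible reporting configuration combining these modules could look like the following sketch (the file names are illustrative; the custom-junit-xml settings follow the description above):

```yaml
reporting:
- module: final-stats
  dump-csv: aggregate_results.csv   # standard CSV results
- module: custom-junit-xml
  filename: junit_report.xml        # report to be submitted to Xray
  classname: PurchaseUserPath       # unique identifier on <testcase> elements
- module: blazemeter                # upload results to BlazeMeter
```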
Test scenarios can be run using the bzt command line tool.
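For example, assuming the configuration shown earlier was saved as taurus_config.yml:

```bash
bzt taurus_config.yml
```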
We'll use Jenkins as our CI tool and we'll configure a freestyle project for performing our load testing.
Please note
If we aim to send the reports to BlazeMeter, we need to configure the API token used by Taurus' bzt. Instead of hard-coding this in the Taurus configuration files, it can be set in the Jenkins user's home settings (see the BlazeMeter Reporter documentation in the references).
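A minimal sketch of such a personal settings file (the token value is a placeholder):

```yaml
# ~/.bzt-rc, in the Jenkins user's home directory
modules:
  blazemeter:
    token: <api_key_id>:<api_key_secret>   # BlazeMeter API key
```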
Setup: checking out the JMeter project and setting up auxiliary variables
We need to set up some variables related to the Jira instance so that we can attach some files to the Test Execution issue later on, if we want to, using the attach_files_to_issue.sh shell script.
These are somewhat redundant with the Xray instance configuration but are necessary if we wish to expose them.
We start by defining one variable for the Jira server base URL as a build parameter.
Using the Credentials Binding plugin, we will populate two variables for the Jira instance's username and password; these will be, in turn, obtained from the credentials already stored and linked to the Xray instance configuration in Jenkins.
The "code" will be checked out from our source code versioning system (e.g. Git), which contain the Taurus configuration files along with some additional scripts.
Configuring the Build steps
The "build" is composed of several steps, starting with the one that runs Taurus.
Here we may decide to enforce the build step as successful, as shown in the screenshot, or let it fail (or not) depending on the load testing results. The latter would require defining the additional build steps as post-build actions, though.
We'll store the artifacts inside the artifacts directory. We'll also customize the report name in BlazeMeter so that it contains the Jenkins build number.
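One way of achieving both is through command-line overrides, as in this sketch (the configuration file name is illustrative):

```bash
# run Taurus, overriding the artifacts directory and the
# name of the report shown in BlazeMeter
bzt taurus_config.yml \
  -o settings.artifacts-dir=artifacts \
  -o modules.blazemeter.report-name="Jenkins build ${BUILD_NUMBER}"
```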
Optionally, we'll add two build steps to store the tabular aggregate report in an environment variable (e.g. AGGREGATE_TABLE) as a string. This requires the Environment Injector plugin.
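A rough sketch of the first of those steps follows; the file and property names are hypothetical, and the second step would use the plugin's "Inject environment variables" action to load the generated properties file:

```bash
# flatten the previously saved aggregate table into a single-line
# property so it can be injected as AGGREGATE_TABLE
echo "AGGREGATE_TABLE=$(awk '{printf "%s\\n", $0}' artifacts/aggregate_table.txt)" > inject.properties
```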
Configuring the Post-build actions
Bonus tip!
The Jenkins' Performance plugin can optionally be used to create some trend charts in Jenkins and also as means to mark the build as failed or unstable depending on absolute or relative thresholds.
Test results can be submitted to Xray either by using a command line tool (e.g. curl) or by using a specific CI plugin, which in our case will be the "Xray – Test Management for Jira Plugin".
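For reference, a command-line submission to an Xray server/data center instance could look like this sketch, reusing the auxiliary variables set up earlier (variable names and project key are illustrative):

```bash
# submit the JUnit XML report to Xray, creating a Test Execution
curl -u "$JIRA_USERNAME:$JIRA_PASSWORD" \
  -F "file=@junit_report.xml" \
  "$JIRA_BASEURL/rest/raven/1.0/import/execution/junit?projectKey=CALC"
```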
We could choose "JUnit XML" as the format in the "Xray: Results Import Task"; that would be simpler to set up.
However, if we use the "JUnit XML multipart" format, we can further customize the Test Execution issue. We'll use this as a means to provide a link to the Jenkins build along with a link to more in-depth details on the BlazeMeter site. We may also provide the aggregate report table previously stored in an environment variable.
If using this format, you'll need to provide the Test Execution's issue type name (or the id) and the project key.
You may also specify the Test Plan, Revision and Test Environments fields but you'll need to obtain their custom field ID from Jira's administration. Note that these IDs are specific to each Jira instance. In the following example, "customfield_10033" corresponds to the Revision CF, "customfield_11805" to the Test Environments CF and "customfield_11807" to the Test Plan CF.
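A sketch of the issue fields JSON used by the "JUnit XML multipart" format, using the custom field IDs mentioned above (the summary, description and field values are illustrative, and we're assuming Jenkins environment variables get expanded):

```json
{
  "fields": {
    "project": { "key": "CALC" },
    "summary": "Taurus load test results - build ${BUILD_NUMBER}",
    "issuetype": { "name": "Test Execution" },
    "description": "Jenkins build: ${BUILD_URL}\n\n${AGGREGATE_TABLE}",
    "customfield_10033": "${GIT_COMMIT}",
    "customfield_11805": ["load-testing"]
  }
}
```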
Bonus tip!
You may also attach some files (e.g. logs, reports) to the created Test Execution issue.
The Jenkins plugin exports the XRAY_TEST_EXECS variable containing the issue key of the Test Execution that was created.
For the time being, the Jenkins plugin can't upload other files; however, we can make a basic shell script (e.g. attach_files_to_issue.sh) for that.
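A minimal sketch of such a script, using Jira's standard attachments REST endpoint along with the auxiliary variables set up earlier (variable names are illustrative):

```bash
#!/bin/bash
# attach_files_to_issue.sh <issue_key> <file> [<file> ...]
ISSUE_KEY=$1; shift
for FILE in "$@"; do
  curl -u "$JIRA_USERNAME:$JIRA_PASSWORD" \
    -X POST \
    -H "X-Atlassian-Token: no-check" \
    -F "file=@$FILE" \
    "$JIRA_BASEURL/rest/api/2/issue/$ISSUE_KEY/attachments"
done
```

It could then be invoked in a shell build step as, e.g., ./attach_files_to_issue.sh $XRAY_TEST_EXECS artifacts/*.csv (file names illustrative).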
After running the Jenkins job, some performance information will be directly available in Jenkins; this is provided by the Performance plugin (if you've previously configured it as mentioned earlier), either on the project page or on the build page.
As we submitted the processed test results to Xray (e.g. junit_report.xml), we can now track them in Jira.
A Test Execution will be created containing a summary of results along with some useful links to access additional information in Jenkins.
The attachments section on the Test Execution issue provides direct access to some reports and also to a zipped file containing the dashboard report generated by JMeter.
Unstructured (i.e. "Generic") Test issues will be auto-provisioned (unless they already exist), one per SLA criterion defined in the passfail module configuration. The "Generic Definition" field acts as the unique test identifier for subsequent imports and is composed of a prefix along with the criterion (e.g. "PurchaseUserPath.avg-rt of >10ms for 7s", "bzt.avg-rt of >10ms for 7s").
If the classname setting has been specified in the custom-junit-xml module, it will be used as the prefix; otherwise, "bzt" will be used.
The execution details of a specific Test Run show whether the pass/fail criteria (i.e. our SLA) passed or not.
The following screenshot showcases the details of a failed criterion (e.g. average response time greater than 10ms for 19 seconds).
Results can be further analyzed on the BlazeMeter site (if you've previously configured the "blazemeter" reporter).
You can access the report by using a specific link within the Jenkins build screen, or by using the link provided inside the Test Execution's description field in Xray.
Bonus tip!
After Tests are auto-provisioned in Xray, they can be manually linked (i.e. set to cover) to a performance-related requirement/user story issue. This gives you the ability to track coverage directly on the requirement issue.
Load testing example, without SLAs
This example is similar to the previous one (please have a look at it first), except that we won't define SLAs using the passfail module.
Multiple reporting modules are configured to process the results. A custom module (see code in the previous example) produces a JUnit XML report; this module is a customized variant of the standard junit-xml module.
Upon submission to Xray (e.g. junit_report.xml), each labeled request is converted to an unstructured (i.e. Generic) Test, uniquely identified by a prefix (e.g. PurchaseUserPath) along with the label.
Detailed reports can be analyzed on the BlazeMeter site.
Functional testing example
Taurus can also be used for functional testing. It can execute Selenium-based tests directly or invoke other test automation frameworks (e.g. JUnit) to run these types of tests.
To assist you in writing UI/Selenium test scripts, you may use a specific Chrome extension that is able to generate a Taurus-friendly YAML configuration file.
This file can then be further customized to your needs (e.g. for adding assertions).
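A sketch of such a scenario, using the Selenium executor and the actions syntax from the Taurus documentation (the labels, selectors and assertion are illustrative):

```yaml
execution:
- executor: selenium
  scenario: reserve-flight

scenarios:
  reserve-flight:
    browser: Chrome
    requests:
    - label: FindFlights
      actions:
      - go(http://blazedemo.com)
      - selectByName(fromPort): Paris
      - selectByName(toPort): Buenos Aires
      - clickByCSS(input.btn-primary)
      # an assertion added manually after generating the file
      - assertTextByCSS(h3): Flights from Paris to Buenos Aires
```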
After the test is run, a JUnit XML report is produced (e.g. junit_report.xml). This can be submitted to Xray.
An unstructured (i.e. Generic) Test will be created for each action.
The unique identifier for the Test (i.e. the value of the Generic Definition field) is composed of a prefix along with the label.
Results can be further analyzed on the BlazeMeter site.
Please note
Unless you run your tests in BlazeMeter using the remote webdriver, test results will appear under the Performance section and you won't be able to see action level pass/fail information.
Please check the Selenium Executor and the Apiritif Executor documentation.
Our test was run against two different browsers, so we can see two distinct results.
On each test result, it is possible to evaluate the result of the step/group of actions (i.e. the label) along with its inner actions. It's also possible to watch a recording of the test session.
It's also possible to access an executive summary report.
Please note
Assertion errors can be tracked in the Errors Report section of the overall test results.
Room for improvement
- abstract the whole Taurus test as a single Test
- use Robot Framework XML report instead of JUnit to provide more granular details
- provide the possibility of linking test(s) to an existing requirement in Xray
References
- https://github.com/Blazemeter/taurus
- https://gettaurus.org/
- https://gettaurus.org/docs/PassFail/
- https://gettaurus.org/learn/
- https://gettaurus.org/docs/JMeter/#Assertions
- https://gettaurus.org/kb/SeleniumActions/
- https://chrome.google.com/webstore/detail/blazemeter-the-continuous/mbopgmdnpcbohhpnfglgohlbhfongabi?hl=en
- https://gettaurus.org/kb/Reporting/
- https://www.blazemeter.com/blog/how-to-perform-local-GUI-functional-test-through-Taurus
- https://gettaurus.org/docs/BlazemeterReporter/