Overview

JMeter is an open-source tool used for performance and load testing.

Typically used to measure website performance, it can also be used in broader contexts.

Out of the box, it provides a reasonable set of samplers and reports; these can be extended using plugins.

JMeter does not provide an SLA/SLO mechanism by default, though basic SLAs may be implemented using assertions (e.g. a Duration/Response Assertion or a custom assertion).

JMeter has a GUI, but it can also be run from the command line (CLI). It can produce JTL (CSV-based) results or XML-based results; the latter provide additional information.
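
For instance, a headless run producing a JTL (CSV) results file can be triggered as shown below (file names are illustrative); switching the jmeter.save.saveservice.output_format property to xml should yield the richer XML-based results instead.

# non-GUI run: -n (no GUI), -t (test plan), -l (results file)
jmeter -n -t testplan.jmx -l results.jtl

# same run, saving XML-based results instead of CSV
jmeter -n -t testplan.jmx -l results.xml -Jjmeter.save.saveservice.output_format=xml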

JMeter concepts

The following table provides an overview of JMeter concepts; if you're already familiar with JMeter, you can probably skip it.

With these concepts in mind, we can reflect on how they map to Xray.


JMeter concept | What it means
Test Plan | a high-level testing scope, consisting of multiple "users"/threads performing multiple actions
Thread Group | the users
Controller | what drives the actions and flow of tests
Sampler (controller) | a request
Logic (controller) | a way to group and determine which samplers to run
Transaction (controller) | one type of logic controller that groups multiple samplers and their samples (i.e. requests) in order to obtain an additional sample based on them
Sample | the obtained sample (i.e. the "response")
(sampler) Assertion | an additional check performed on a sampler, validating its samples against some criteria and marking them as successful or not
Listener | a test results/samples listener (e.g. for producing reports)

Mapping of concepts to Xray

JMeter is not a functional testing tool; it's essentially a load testing tool that simulates multiple users (threads) performing several actions, as they would in a typical usage scenario.

Mapping its concepts may not be straightforward though.

If we aim to have visibility of the performance testing results, we need to think about the following questions:

  • What can we consider the Test?
  • How can we assess whether it was successful or not?
  • What information is relevant for analysis?

Test

The Test could be the whole JMeter test plan; this is a valid and simple approach, although it depends on how you use the test plan.

A Test could also represent each user/thread in that test plan; however, this would create many meaningless Tests, as they would not clearly identify anything in particular and could not be reused.

Another approach would be to use each sampler as a Test. However, samplers are normally grouped and nested under other controllers. Thus, a better approach would be to represent all controllers (samplers and logic controllers) as Tests.

Test status

Determining whether a test was successful or not depends, first of all, on what you define as the "Test".

In this tutorial we'll consider each controller as a Test in Xray. Classifying it as failed or not can be done based on the nested assertion results or simply on the sampler's implicit (un)successful classification.

Other relevant performance test results

As part of performance testing, the following metrics are common:

  • errors (count, %)
  • total elapsed time (e.g. average, min, max, std dev, 90th percentile)
  • latency time/TTFB (e.g. average, min, max)
  • connect time (e.g average, min, max)
  • requests throughput/requests per time unit (e.g. average)
  • received bytes (total, throughput)
  • sent bytes (total, throughput)
  • requests (count)

Some of these may be considered KPIs and used to define SLAs/SLOs. As mentioned, JMeter does not provide a built-in way to implement SLAs.

SLAs are usually classified as successful/met, warning, or failed/unmet.
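
Even though JMeter has no SLA concept, a rudimentary SLO gate can be scripted on top of the generated CSV metrics. The sketch below is only an illustration: the threshold value, the CSV path and, in particular, the column index used for the 90th percentile are assumptions that need to be adapted to the actual layout of your aggregate/synthesis report.

#!/bin/bash
# hypothetical SLO gate: fail (exit 1) if any sampler's 90th percentile
# exceeds the threshold; column 5 is an assumption, adjust to your CSV layout
THRESHOLD_MS=2000
violations=$(awk -F',' -v max="$THRESHOLD_MS" 'NR>1 && $5+0 > max {print $1}' reports/aggregate_results.csv)
if [ -n "$violations" ]; then
  echo "SLO not met (90th percentile > ${THRESHOLD_MS} ms) for: $violations"
  exit 1
fi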

Requirements

  • JMeter
  • JMeter Plugins Manager and some plugins (jmeter-http, jpgc-httpraw, jpgc-graphs-basic, jpgc-graphs-additional, jpgc-synthesis, jpgc-cmd)
  • Jenkins (optional)

Description

The overall approach for gaining visibility of the performance results in Xray is as follows (a condensed sketch is shown right after this list):

  1. run JMeter in command-line (non-GUI) mode
  2. generate results in JTL (CSV based) format
  3. post-process results to
    1. generate a JUnit XML report, mapping each controller as a Test
    2. generate dashboard report, containing multiple reports/charts
    3. produce aggregate report or similar (e.g. synthesis report) in CSV
    4. produce one or more charts
  4. submit results to Xray along with the previously generated report assets
    1. fill out the "Description" field of the corresponding created Test Execution issue with
      1. link to project/job in Jenkins
      2. link to dashboard HTML report in Jenkins workspace
      3. aggregate report content formatted as a table
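
Condensed, and using the same commands and script names that appear later in this tutorial, the pipeline could look roughly like this:

# 1-2. run JMeter headless, producing results.jtl (CSV) plus the HTML dashboard report
jmeter -n -t examples/jpetstore/jpetstore_configurable_host.jmx -l results.jtl -e -o dashboard

# 3. post-process the JTL: JUnit XML for Xray, plus CSV/PNG report assets (see the scripts below)
./convert.sh "jmeter.jpetstore"

# 4. submit alternate_junit.xml to Xray, either with the Jenkins plugin or via the REST API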

JPetStore example

In this example, we're load testing a fictitious pet store site named JPetStore (kindly provided by Octoperf for demo purposes).

The testing scenario simulates 20 users, with a ramp-up period of 240s, following a standard user path: go to the site, log in, view a category, then a product, add it to the cart, buy it and log out.

There are several transactions, grouping one or more HTTP requests (i.e. using the HTTP Request sampler).

However, there are no explicit assertions; thus, all failures (i.e. samples marked as being unsuccessful) will be based on the standard HTTP response codes.


Tests can be run using the JMeter GUI or from the command line (jmeter), the latter being the preferred approach if you wish to make them part of your CI.
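
Assuming the test plan reads the target host from a JMeter property (the property name "host" below is hypothetical, as is the host itself), a CI-friendly invocation could look like this:

# non-GUI run against a configurable target host
jmeter -n -t examples/jpetstore/jpetstore_configurable_host.jmx -Jhost=jpetstore.example.com -l results.jtl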

We'll use Jenkins as our CI tool and we'll configure a freestyle project for running our tests.


Setup: checking out the JMeter project and setting up auxiliary variables

We need to set up some variables related to the Jira instance so that, if we want to, we can later attach some files to the Test Execution issue using the attach_files_to_issue.sh shell script.

These are somewhat redundant with the Xray instance configuration, but they are necessary if we wish to expose them.

We start by defining a variable for the Jira server base URL as a build parameter.

Using the Credentials Binding plugin, we will populate two variables for the Jira instance's username and password; these will be, in turn, obtained from the credentials already stored and linked to the Xray instance configuration in Jenkins.


The "code" will be checked out from our source code versioning system (e.g. Git), which contains the JMeter project(s) saved in .jmx format along with some additional scripts.

Configuring the Build steps

The "build" is composed of several steps, starting with the one that runs JMeter.


./run_petstore_octoperf.sh
#!/bin/bash

JMETERPLUGINSCMD=JMeterPluginsCMD.sh

./cleanup.sh

# run jmeter and produce a JTL csv report
jmeter -n -t  examples/jpetstore/jpetstore_configurable_host.jmx -l results.jtl -e -o dashboard

# process the JTL and convert it to synthesis and aggregate reports in CSV
$JMETERPLUGINSCMD --generate-csv synthesis_results.csv --input-jtl results.jtl --plugin-type SynthesisReport
$JMETERPLUGINSCMD --tool Reporter --generate-csv reports/aggregate_results.csv --input-jtl results.jtl --plugin-type AggregateReport

$JMETERPLUGINSCMD --generate-png reports/ResponseTimesOverTime.png --input-jtl results.jtl --plugin-type ResponseTimesOverTime --width 800 --height 600
$JMETERPLUGINSCMD --generate-png reports/TransactionsPerSecond.png --input-jtl results.jtl --plugin-type TransactionsPerSecond --width 800 --height 600

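# convert the JTL into a JUnit XML report suitable for Xray (see convert.sh below)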
./convert.sh "jmeter.jpetstore"


We need to process the JTL file and produce a report that can be submitted to Xray; we'll use a JUnit XML-based report that will be generated using a specific tool.


About JMeter to JUnit XML converters

There are several JMeter JTL to JUnit XML converters out there. However, most of them neither implement a useful mapping of concepts nor provide additional information about the failures.

This tutorial uses a modified version (pre-built JAR) of the jmeter-junit-xml-converter code.

It will produce a JUnit XML report containing:

  • one Test Suite per thread
  • multiple <testcase> elements, one per controller
  • information about the duration (i.e. the "time" attribute) on each <testcase>
    • a failure message, if available


The modified jmeter-junit-xml-converter utility will produce a junit.xml and an alternate_junit.xml file; we want the latter as it better suits our needs. We'll call it using the convert.sh shell script along with a parameter that will allow us to uniquely identify the Tests afterwards (e.g. "jmeter.jpetstore").

./convert.sh
#!/bin/bash

if [ $# -eq 1 ]; then
 TESTSUITE=$1
else
 TESTSUITE="jmeter"
fi

JAR=./converters/jmeter-junit-xml-converter-0.0.1-SNAPSHOT-jar-with-dependencies.jar

# convert the JTL results into junit.xml and alternate_junit.xml, prefixing test names with $TESTSUITE
java -jar "$JAR" results.jtl junit.xml "$TESTSUITE"


Optionally, we'll add two build steps to store the tabular aggregate report as a string in an environment variable (e.g. AGGREGATE_TABLE); a sketch of how this can be wired together is shown after the script below. This requires the Environment Injector plugin.

./process_aggregate.sh
#!/bin/bash

# convert the aggregate CSV into Jira wiki-markup table rows: commas become cell
# separators, each row is wrapped in |...| and terminated with an escaped
# newline/continuation, and the first (header) row gets || (Jira header) separators
cat reports/aggregate_results.csv  |tr "," "|" | sed -e 's/^/|/' | sed -e 's/$/|\\\\n\\/' | sed -e '1 s/|/||/g'
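
As an illustration of how this output might be wired into the environment variable, and assuming the Environment Injector step is configured to read a properties file, the second build step could be something along these lines:

# store the wiki-formatted table as a (multi-line) property named AGGREGATE_TABLE,
# to be loaded by an "Inject environment variables" build step
echo -n "AGGREGATE_TABLE=" > aggregate.properties
./process_aggregate.sh >> aggregate.properties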

Configuring the Post-build actions


Bonus tip!

The Jenkins Performance plugin can optionally be used to create some trend charts in Jenkins and also as a means to mark the build as failed or unstable, depending on absolute or relative thresholds.


Test results can be submitted to Xray either by using a command line tool (e.g. curl) or by using a specific CI plugin which in our case will be the "Xray – Test Management for Jira Plugin".
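
For reference, the command-line route against an Xray server/DC instance would be a single request to Xray's REST API; the sketch below uses placeholder credentials and base URL and should be checked against the REST API documentation for your deployment. In this tutorial, however, we'll use the Jenkins plugin.

# submit the JUnit XML report to Xray, creating a Test Execution in project CALC
curl -u admin:admin -F "file=@alternate_junit.xml" "http://yourjiraserver.example.com/rest/raven/1.0/import/execution/junit?projectKey=CALC"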

We could choose "JUnit XML" as the format in the "Xray: Results Import Task"; that would be simpler to set up.

However, if we use the "JUnit XML multipart" format, we can further customize the Test Execution issue. We'll use this as a means to provide a link to the Jenkins build along with a link to the dashboard report generated by JMeter. We may also include the aggregate report table previously stored in an environment variable.


If using this format, you'll need to provide the Test Execution's issue type name (or the id) and the project key.

Test Execution fields (JSON content) - example1
{
   "fields": {
      "project": {
         "key": "CALC"
      },
      "summary": "JMeter performance results",
      "description": "Build URL:  ${BUILD_URL}.\n\nDetailed dashboard report at: ${JOB_URL}ws/dashboard/index.html\n\n*Aggregate results summary*\n\n ${AGGREGATE_TABLE}\n",
      "issuetype": {
         "name": "Test Execution"
      }
   }
}


You may also specify the Test Plan, Revision and Test Environments fields, but you'll need to obtain their custom field IDs from Jira's administration. Note that these IDs are specific to each Jira instance. In the following example, "customfield_10033" corresponds to the Revision CF, "customfield_11805" to the Test Environments CF and "customfield_11807" to the Test Plan CF.

Test Execution fields (JSON content) - example2
{
   "fields": {
      "project": {
         "key": "CALC"
      },
      "summary": "JMeter performance results",
      "description": "Build URL:  ${BUILD_URL}.\n\nDetailed dashboard report at: ${JOB_URL}ws/dashboard/index.html\n\n*Aggregate results summary*\n\n ${AGGREGATE_TABLE}\n",
      "issuetype": {
         "name": "Test Execution"
      },
      "customfield_10033": "123", 
	  "customfield_11805" : [
            "staging"
      ],
      "customfield_11807": [
         "CALC-1200"
      ]

   }
}



Bonus tip!

You may also attach some files (e.g. charts, reports) to the created Test Execution issue. 

The Jenkins plugin exports the XRAY_TEST_EXECS variable containing the issue key of the Test Execution that was created.


For the time being, the Jenkins plugin can't upload other files; however, we can make a basic shell script (e.g. attach_files_to_issue.sh) for that.

attach_files_to_issue.sh
#!/bin/bash

# Jira connection details; defaults are placeholders, normally overridden by the
# JIRA_BASEURL/JIRA_USERNAME/JIRA_PASSWORD variables set up in the Jenkins job
BASEURL=${JIRA_BASEURL:-http://yourjiraserver.example.com}
USERNAME=${JIRA_USERNAME:-admin}
PASSWORD=${JIRA_PASSWORD:-admin}

# first argument: the issue key; remaining arguments: files to attach
ISSUEKEY=$1

for file in "${@:2}"
do
 curl -D- -u "$USERNAME:$PASSWORD" -X POST -H "X-Atlassian-Token: nocheck" -F "file=@$file" "$BASEURL/rest/api/2/issue/$ISSUEKEY/attachments"
done
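
For example, after the Xray import step, a Jenkins shell build step could attach the generated charts and the aggregate CSV to the newly created Test Execution (XRAY_TEST_EXECS is exported by the Xray Jenkins plugin, as mentioned above):

./attach_files_to_issue.sh "$XRAY_TEST_EXECS" reports/ResponseTimesOverTime.png reports/TransactionsPerSecond.png reports/aggregate_results.csv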


After running the Jenkins job, we may track some performance trend charts directly on the project's page. This requires prior configuration of the Performance plugin, as mentioned earlier.


As we submitted the processed test results to Xray (alternate_junit.xml), we can now track them in Jira.

A Test Execution will be created containing a summary of results along with some useful links to access additional information in Jenkins.


Using the link provided in the description field of the Test Execution, we can access an extensive dashboard report generated by JMeter and stored in the Jenkins project's workspace.

In order to correctly view it, you may need to change one setting in Jenkins: go to Manage Jenkins > Script console and execute:

System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")


Finally, we should be able to correctly display the HTML based dashboard report.


Unstructured (i.e. "generic") Test issues will be auto-provisioned (unless they already exist), one per controller. The "Generic Definition" field acts as the unique test identifier for subsequent imports and is composed of a prefix along with the controller's name (e.g. "jmeter.jpetstore.AddToCart").


The attachments section on the Test Execution issue provides direct access to some reports and also to a zipped file containing the dashboard report generated by JMeter.


The execution details of a specific Test Run show multiple entries, each one representing a sample.

The following screenshot showcases the details of the sample produced by the Transaction Controller named "AddToCart". We can see that it was executed multiple times, in the context of different "users" (i.e. JMeter's threads).

JPetStore with assertions example 

This example (JMeter project file) is similar to the previous one, except that it contains some assertions: one standard Size Assertion and a custom BeanShell assertion that looks at the duration and marks the sample as unsuccessful after "maxErrors" failures.

We'll use a set of variables defined at the JMeter test plan level to assist in the assertion logic.


BeanShell assertion code
debug();

// elapsed time of this sample and the configured threshold (test plan variables)
long elapsed = SampleResult.getTime();
long threshold = Long.parseLong(vars.get("SLA_elapsedTime_threshold"));

if (elapsed > threshold) {

    // one more sample exceeded the threshold; persist the updated counter
    int failureCount = Integer.parseInt(vars.get("SLA_elapsedTime_failures"));
    failureCount++;
    vars.put("SLA_elapsedTime_failures", String.valueOf(failureCount));

    int maxErrors = Integer.parseInt(vars.get("SLA_elapsedTime_maxErrors"));

    if (failureCount >= maxErrors) {
        // too many slow samples: mark the assertion and the sample itself as failed
        Failure = true;
        FailureMessage = "SF: " + failureCount + " requests failed to finish in " + threshold + " ms";
        SampleResult.setSuccessful(false);
        SampleResult.setResponseMessage(failureCount + " requests failed to finish in " + threshold + " ms");
    } else {
        SampleResult.setResponseMessage("duration: " + elapsed + "; failureCount=" + failureCount);
    }
}


After results are imported to Xray, we can see each sample result in the Test Run associated with the controller (i.e. the HTTP Request sampler).

Room for improvement

  • abstract the whole JMeter test plan as a Test
  • use Robot Framework XML report instead of JUnit to provide more granular details
  • provide the possibility of linking test(s) to an existing requirement in Xray
  • implement SLAs on top of results

References