What you'll learn

  • Define tests using Rust
  • Run the test and push the test report to Xray
  • Validate in Jira that the test results are available


Source-code for this tutorial

Overview

Rust is a systems programming language focusing on safety, speed, and concurrency. It accomplishes these goals by being memory-safe without using garbage collection.



Prerequisites


For this example we will use cargo-nextest instead of cargo, Rust's build tool and package manager. cargo-nextest is a drop-in test runner that improves on cargo's built-in test functionality and, importantly for this tutorial, can generate a JUnit result file.


We will need:

  • Rust installed in your environment
  • cargo-nextest installed in your environment
  • The example downloaded from GitHub
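
If you already have a Rust toolchain available, cargo-nextest can be installed through cargo itself (a minimal sketch; the cargo-nextest documentation also offers pre-built binaries):

cargo install cargo-nextest --locked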


We have created a simple Rust application with two modules, plus unit and integration tests to validate them.

The application adds two numbers received from the command line and prints the result to the console.

In the first file, lib.rs, we have defined the modules and their unit tests, as we can see below.

/src/lib.rs
pub mod adder {
    pub fn add(left: u32, right: u32) -> u32 {
        left + right
    }
}

pub mod divider {
    pub fn divide_non_zero_result(a: u32, b: u32) -> u32 {
        if b == 0 {
            panic!("Divide-by-zero error");
        } else if a < b {
            panic!("Divide result is zero");
        }
        a / b
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_addition() {
        assert_eq!(adder::add(2, 2), 4);
    }

    #[test]
    fn test_panic() {
        panic!("Make this test fail");
    }

    #[test]
    fn test_divide() {
        assert_eq!(divider::divide_non_zero_result(10, 2), 5);
    }

    #[test]
    #[should_panic]
    fn test_any_panic() {
        divider::divide_non_zero_result(1, 0);
    }

    #[test]
    #[should_panic(expected = "Divide result is zero")]
    fn test_specific_panic() {
        divider::divide_non_zero_result(1, 10);
    }
}


The adder module has one function that adds two numbers, and the divider module contains the code to perform a division of two numbers.

We created unit tests to validate these functions in a module named tests, marked with the #[cfg(test)] attribute; each test is marked with the #[test] attribute.

The first test validates that the addition of two numbers returns the expected result, while the second forces a failure with panic!. In some cases a panic is not a failure but the expected behavior, and we want to validate that it occurs; for these cases Rust provides the #[should_panic] attribute, as we can see above in the last two tests. The #[should_panic(expected = "Divide result is zero")] variant additionally checks that the panic message contains the given text.

Our application's entry point is the main.rs file, where we use these functions to add two numbers received from the command line.

/src/main.rs
use std::env;
use std::str::FromStr;
use MainTests::adder;

fn main() {
    println!("Welcome to the addition machine!");
    // Collect the command-line arguments; args[0] is the binary name.
    let args: Vec<String> = env::args().collect();

    // Parse both operands; this panics if they are missing or not valid u32 values.
    let num1: u32 = FromStr::from_str(&args[1]).unwrap();
    let num2: u32 = FromStr::from_str(&args[2]).unwrap();

    println!("The sum of the numbers {} and {} is: {}", num1, num2, adder::add(num1, num2));
}


Here we let Rust know that we are using the functions from our lib.rs file with use MainTests::adder. After receiving the values from the command line, we add them using the function defined in the lib above and print the result to the output.

We also have integration tests, defined in the file tests/integration_tests.rs, that use the functions from the above modules and validate them. Rust compiles each file in the top-level tests/ directory as a separate integration-test crate.

/tests/integration_tests.rs
use MainTests::adder;
use MainTests::divider;

#[test]
fn test_add() {
    assert_eq!(adder::add(3, 2), 5);
}

#[test]
fn test_divide() {
    assert_eq!(divider::divide_non_zero_result(10, 2), 5);
}
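
As a side note, cargo-nextest's filter expressions make it possible to run only the integration tests; a sketch, assuming a reasonably recent cargo-nextest version (the kind(test) expression matches integration-test targets):

cargo nextest run -E 'kind(test)'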


The application can be executed using the following command:

cargo run -- 1 2


The Rust application is executed with two parameters, 1 and 2, and prints their sum to the terminal.
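
Given the code above, the application's output looks like this:

Welcome to the addition machine!
The sum of the numbers 1 and 2 is: 3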


As described above, we are using cargo-nextest instead of the original build tool and package manager, cargo; besides the terminal feedback, it generates a JUnit results file from the test execution.

The command used to execute the tests and generate the JUnit test result file is:

cargo nextest run


The tests are executed and the output shows their results. The JUnit result file is written to the /target/nextest/default directory.
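
Note that cargo-nextest only writes a JUnit file when JUnit reporting is enabled in the project's nextest configuration; the example project is assumed to contain something like the following in .config/nextest.toml (the path is relative to the target/nextest/<profile> directory):

/.config/nextest.toml
[profile.default.junit]
path = "junit.xml"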


In this example, one test failed and the others passed; the corresponding JUnit report is shown below:

JUnit report
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="nextest-run" tests="7" failures="1" errors="0" uuid="16e73a21-5bca-4fa6-b9fc-90fdfb223ad5" timestamp="2024-05-16T11:41:25.275+01:00" time="0.049">
    <testsuite name="MainTests::integration_test" tests="2" disabled="0" errors="0" failures="0">
        <testcase name="test_add" classname="MainTests::integration_test" timestamp="2024-05-16T11:41:25.286+01:00" time="0.036">
        </testcase>
        <testcase name="test_divide" classname="MainTests::integration_test" timestamp="2024-05-16T11:41:25.290+01:00" time="0.034">
        </testcase>
    </testsuite>
    <testsuite name="MainTests" tests="5" disabled="0" errors="0" failures="1">
        <testcase name="tests::another" classname="MainTests" timestamp="2024-05-16T11:41:25.277+01:00" time="0.020">
            <failure type="test failure">thread 'tests::another' panicked at src/lib.rs:29:9:
Make this test fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace</failure>
            <system-out>
running 1 test
test tests::test_panic ... FAILED

failures:

failures:
    tests::test_panic

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 4 filtered out; finished in 0.00s

</system-out>
            <system-err>thread 'tests::test_panic' panicked at src/lib.rs:29:9:
Make this test fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
        </testcase>
        <testcase name="tests::exploration" classname="MainTests" timestamp="2024-05-16T11:41:25.279+01:00" time="0.024">
        </testcase>
        <testcase name="tests::test_any_panic" classname="MainTests" timestamp="2024-05-16T11:41:25.281+01:00" time="0.023">
        </testcase>
        <testcase name="tests::test_divide" classname="MainTests" timestamp="2024-05-16T11:41:25.282+01:00" time="0.024">
        </testcase>
        <testcase name="tests::test_specific_panic" classname="MainTests" timestamp="2024-05-16T11:41:25.284+01:00" time="0.033">
        </testcase>
    </testsuite>
</testsuites>



Integrating with Xray

As the example above produces JUnit reports with the test results, it is now just a matter of importing those results into your Jira instance. This can be done by submitting the automation results to Xray through the REST API, by using one of the available CI/CD plugins (e.g. for Jenkins), or by using the Jira interface.

Next, we will showcase how to import the JUnit reports using both the REST API and the Jira interface.

API

Once you have the report file available, you can upload it to Xray through a request to the JUnit endpoint of the REST API. The first step is to follow the instructions in v1 or v2 (depending on your usage) to obtain the token that we will use in the subsequent requests.


Authentication

The request made will look like:

curl -H "Content-Type: application/json" -X POST --data '{ "client_id": "CLIENTID","client_secret": "CLIENTSECRET" }'  https://xray.cloud.getxray.app/api/v2/authenticate

The response to this request contains the token to be used in the subsequent requests for authentication purposes.
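
Since the endpoint returns the token as a quoted JSON string, one way to store it for the next requests is to strip the quotes into a shell variable (a minimal sketch, assuming a POSIX shell):

token=$(curl -H "Content-Type: application/json" -X POST --data '{ "client_id": "CLIENTID","client_secret": "CLIENTSECRET" }' https://xray.cloud.getxray.app/api/v2/authenticate | tr -d '"')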


JUnit XML results

Once you have the token, use it in the API request along with the definition of some common fields of the Test Execution, such as the target project, project version, or associated Test Plan, passed as query parameters.

curl -H "Content-Type: text/xml" -X POST -H "Authorization: Bearer $token"  --data @"junit.xml" https://xray.cloud.getxray.app/api/v2/import/execution/junit?projectKey=XT&testPlanKey=XT-704


With this command we are creating a new Test Execution in the referred Test Plan, with a generic summary and seven tests whose summaries are based on the test names.
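
If you need control over the fields of the Test Execution being created (e.g. its summary), Xray also provides a multipart variant of this endpoint that receives the issue fields as JSON. A sketch, assuming a testExecInfo.json file with those fields (hypothetical filename):

curl -H "Authorization: Bearer $token" -X POST -F info=@testExecInfo.json -F results=@junit.xml https://xray.cloud.getxray.app/api/v2/import/execution/junit/multipart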


Jira UI

  • Create a Test Execution for the tests that you have.
  • Fill in the necessary fields and press "Create".
  • Open the Test Execution and import the JUnit report.
  • Choose the results file and press "Import".
  • The Test Execution is now updated with the imported test results.


Each implemented test will have a corresponding Test entity in Xray. Once results are uploaded, Test issues corresponding to the Rust tests are auto-provisioned, unless they already exist.

Xray uses a concatenation of the suite name and the test name as the unique identifier for the test.

In Xray, results are stored in a Test Execution, usually a new one. The Test Execution contains one Test Run for each test executed by the cargo-nextest runner.

Detailed results, including logs and exceptions reported during execution, can be seen in the execution screen details of each Test Run, accessible through the Execution details.




Tips

  • After results are imported, Tests in Jira can be linked to existing requirements/user stories, so you can track the impact on their coverage.
  • Results from multiple builds can be linked to an existing Test Plan, to facilitate the analysis of test result trends across builds.
  • Results can be associated with a Test Environment, in case you want to analyze coverage and test results by that environment later on (as shown in the example after this list). A Test Environment can be a testing stage (e.g. dev, staging, preprod, prod) or an identifier of the device/application used to interact with the system (e.g. browser, mobile OS).
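
For instance, the Test Environment can be set at import time through the testEnvironments query parameter of the same JUnit endpoint; a sketch, reusing the token from before (check the REST API documentation of your Xray version for the full list of supported parameters):

curl -H "Content-Type: text/xml" -X POST -H "Authorization: Bearer $token" --data @"junit.xml" "https://xray.cloud.getxray.app/api/v2/import/execution/junit?projectKey=XT&testEnvironments=chrome"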



