Info
titleWhat you'll learn
  • Define tests using Rust
  • Run the test and push the test report to Xray
  • Validate in Jira that the test results are available


Note
iconfalse
titleSource-code for this tutorial
typeInfo
  • Code is available in GitHub

Overview

Rust is a systems programming language focusing on safety, speed, and concurrency. It accomplishes these goals by being memory safe without using garbage collection.



Prerequisites


Expand

For this example we will use cargo-nextest instead of cargo, the build tool and package manager for Rust. cargo-nextest allows the generation of a JUnit result file and improves on cargo's test-running functionality.


We will need:

  • Rust installed in your environment
  • cargo-nextest installed in your environment (see the install sketch below)
  • The example code downloaded from GitHub
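
If you do not already have cargo-nextest, one common way to install it is through cargo itself. The commands below are a minimal sketch, assuming Rust (and therefore cargo) is already installed; check the cargo-nextest documentation for prebuilt binaries and other install options.

Code Block
languagebash
themeDJango
firstline1
# confirm that Rust and cargo are available
rustc --version
cargo --version

# install cargo-nextest from crates.io (one common approach)
cargo install cargo-nextest --locked

# confirm that the cargo subcommand is available
cargo nextest --version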


We have created a simple Rust application with two modules, together with unit and integration tests to validate them.

The application adds two numbers received from the command line and prints the result to the console.

In the first file, lib.rs, we have defined the modules and unit tests, as we can see below.

Code Block
languagerust
title/src/lib.rs
collapsetrue
pub mod adder{
    pub fn add(left: u32, right: u32) -> u32 {
        left + right
    }
}

pub mod divider{
    pub fn divide_non_zero_result(a: u32, b: u32) -> u32 {
        if b == 0 {
            panic!("Divide-by-zero error");
        } else if a < b {
            panic!("Divide result is zero");
        }
        a / b
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_addition() {
        assert_eq!(adder::add(2, 2), 4);
    }

    #[test]
    fn test_panic() {
        panic!("Make this test fail");
    }

    #[test]
    fn test_divide() {
        assert_eq!(divider::divide_non_zero_result(10, 2), 5);
    }

    #[test]
    #[should_panic]
    fn test_any_panic() {
        divider::divide_non_zero_result(1, 0);
    }

    #[test]
    #[should_panic(expected = "Divide result is zero")]
    fn test_specific_panic() {
        divider::divide_non_zero_result(1, 10);
    }
}


The adder module has one function that adds two numbers, and the divider module contains the code to perform a division of two numbers.

We created unit tests to validate these methods in a module named tests, identified by the #[cfg(test)] attribute; each test is identified by the #[test] attribute.

The first test validates that the addition of two numbers returns the expected result, and the second test forces a failure with panic!. In some cases we want to validate that a failure occurs; this is not really a failure but the expected behavior, so for these cases Rust provides the #[should_panic] attribute, as we can see above in the last two tests.

Our application's main file is main.rs, where we use these methods to add two numbers received from the command line.

Code Block
languagerust
title/src/main.rs
collapsetrue
use std::env;
use std::str::FromStr;
use MainTests::adder;

fn main() {
    println!("Welcome to the addition machine!");
    let args: Vec<String> = env::args().collect();

    let num1: u32 = FromStr::from_str(&args[1]).unwrap();
    let num2: u32 = FromStr::from_str(&args[2]).unwrap();

    println!("The sum of the numbers {} and {} is: {}", num1, num2, adder::add(num1, num2));
}


Here we are letting Rust know that we are using the methods from our lib module with use MainTests::adder; after receiving the values from the command line, we add them using the function defined in the lib above and print the result to the output.

We also have integration tests defined in the file integration_tests.rs that use the methods from the above modules and validate them.

Code Block
languagerust
firstline1
title/tests/integration_tests.rs
collapsetrue
use MainTests::adder;
use MainTests::divider;

#[test]
fn test_add() {
    assert_eq!(adder::add(3, 2), 5);
}

#[test]
fn test_divide() {
    assert_eq!(divider::divide_non_zero_result(10, 2), 5);
}


The application can be executed using the following command:

Code Block
languagebash
themeDJango
firstline1
cargo run -- 1 2


The Rust application is executed with two parameters, 1 and 2, and performs their sum, returning the result in the output terminal, as we can see below:
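
Based on the println! statements in main.rs, the expected program output is roughly the following (cargo's own build messages are omitted):

Code Block
languagebash
themeDJango
firstline1
# run the application with the two numbers as arguments
cargo run -- 1 2

# expected program output:
#   Welcome to the addition machine!
#   The sum of the numbers 1 and 2 is: 3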


As described above, we are using cargo-nextest instead of the original build tool and package manager, cargo; it generates a JUnit results file from the test execution in addition to the output feedback.

The command used to execute the tests and generate the JUnit test result file is:

Code Block
languagebash
themeDJango
firstline1
cargo nextest run


The tests were executed, the output shows their results, and we can find the JUnit result file in the /target/nextest/default directory.
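
Note that cargo-nextest only produces the JUnit file when JUnit output is enabled in its configuration. The snippet below is a minimal sketch of one way to enable it for the default profile; the .config/nextest.toml location and the junit.xml file name are assumptions consistent with the /target/nextest/default directory mentioned above, so check the cargo-nextest documentation for your setup.

Code Block
languagebash
themeDJango
firstline1
# enable JUnit output for the default nextest profile (minimal sketch)
mkdir -p .config
cat > .config/nextest.toml <<'EOF'
[profile.default.junit]
path = "junit.xml"
EOF

# re-run the tests; the report is written to target/nextest/default/junit.xml
cargo nextest run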


In this example, one test failed and the others succeeded; the output generated in the terminal is shown above, and the corresponding JUnit report is below:

Code Block
firstline1
titleJunit Report
linenumberstrue
collapsetrue
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="nextest-run" tests="7" failures="1" errors="0" uuid="16e73a21-5bca-4fa6-b9fc-90fdfb223ad5" timestamp="2024-05-16T11:41:25.275+01:00" time="0.049">
    <testsuite name="MainTests::integration_test" tests="2" disabled="0" errors="0" failures="0">
        <testcase name="test_add" classname="MainTests::integration_test" timestamp="2024-05-16T11:41:25.286+01:00" time="0.036">
        </testcase>
        <testcase name="test_divide" classname="MainTests::integration_test" timestamp="2024-05-16T11:41:25.290+01:00" time="0.034">
        </testcase>
    </testsuite>
    <testsuite name="MainTests" tests="5" disabled="0" errors="0" failures="1">
        <testcase name="tests::another" classname="MainTests" timestamp="2024-05-16T11:41:25.277+01:00" time="0.020">
            <failure type="test failure">thread 'tests::another' panicked at src/lib.rs:29:9:
Make this test fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace</failure>
            <system-out>
running 1 test
test tests::another ... FAILED

failures:

failures:
    tests::another

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 4 filtered out; finished in 0.00s

</system-out>
            <system-err>thread 'tests::another' panicked at src/lib.rs:29:9:
Make this test fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</system-err>
        </testcase>
        <testcase name="tests::exploration" classname="MainTests" timestamp="2024-05-16T11:41:25.279+01:00" time="0.024">
        </testcase>
        <testcase name="tests::test_any_panic" classname="MainTests" timestamp="2024-05-16T11:41:25.281+01:00" time="0.023">
        </testcase>
        <testcase name="tests::test_divide" classname="MainTests" timestamp="2024-05-16T11:41:25.282+01:00" time="0.024">
        </testcase>
        <testcase name="tests::test_specific_panic" classname="MainTests" timestamp="2024-05-16T11:41:25.284+01:00" time="0.033">
        </testcase>
    </testsuite>
</testsuites>



Integrating with Xray

As we saw in the above example, where we are producing JUnit reports with the test results, it is now a matter of importing those results into your Jira instance. This can be done by simply submitting automation results to Xray through the REST API, by using one of the available CI/CD plugins (e.g. for Jenkins), or by using the Jira interface to do so.

Next, we will showcase how to import the JUnit reports using both the REST API and the Jira interface.


UI Tabs
UI Tab
titleAPI

API

Once you have the report file available, you can upload it to Xray through a request to the REST API endpoint for JUnit. The first step is to follow the instructions for v1 or v2 (depending on your usage) to obtain the credentials or token we will be using in the subsequent requests.


JUnit XML results

We will use the API request with the definition of some common fields on the Test Execution, such as the target project, project version, etc.

In the first version of the API, the authentication used a login and password (not the token that is used in Cloud).

Code Block
languagebash
themeDJango
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'https://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9'


With this command we are creating a new Test Execution in the referenced Test Plan, with a generic summary and seven tests whose summaries are based on the test names.
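
If you are using version 2.0 of the REST API instead, authentication is token-based rather than login/password. The request below is only a sketch of the equivalent call, assuming a Personal Access Token stored in a TOKEN environment variable and the same query parameters; adapt it according to the v2 instructions referenced above.

Code Block
languagebash
themeDJango
curl -H "Content-Type: multipart/form-data" -H "Authorization: Bearer $TOKEN" -F "file=@junit.xml" 'https://<LOCAL_JIRA_INSTANCE>/rest/raven/2.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9'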


UI Tab
titleJira UI

Jira UI

UI Steps
UI Step

Create a Test Execution for the tests that you have

UI Step

Fill in the necessary fields and press "Create"

UI Step

Open the Test Execution and import the JUnit report


UI Step

Choose the results file and press "Import"

UI Step

The Test Execution is now updated with the imported test results


The implemented tests will have a corresponding Test entity in Xray. Once results are uploaded, Test issues corresponding to the Rust tests are auto-provisioned, unless they already exist.

Xray uses a concatenation of the suite name and the test name as the unique identifier for the test.

In Xray, results are stored in a Test Execution, usually a new one. The Test Execution contains a Test Run for each test that was executed by the cargo-nextest runner.

Detailed results, including logs and exceptions reported during the execution of the test, can be seen in the execution details of each Test Run, accessible through the Execution details screen, as we can see here:



Tips

  • after results are imported, Tests in Jira can be linked to existing requirements/user stories, so you can track the impact on their coverage.
  • results from multiple builds can be linked to an existing Test Plan to facilitate the analysis of test result trends across builds.
  • results can be associated with a Test Environment, in case you want to analyze coverage and test results by that environment later on (see the example below). A Test Environment can be a testing stage (e.g. dev, staging, preprod, prod) or an identifier of the device/application used to interact with the system (e.g. browser, mobile OS).
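
As an illustration of the last tip, the JUnit import endpoint also accepts a testEnvironments parameter. The request below is a sketch based on the v1 call shown earlier, with a hypothetical "chrome" Test Environment added; adjust the parameter value to your own environments.

Code Block
languagebash
themeDJango
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=@junit.xml" 'https://<LOCAL_JIRA_INSTANCE>/rest/raven/1.0/import/execution/junit?projectKey=COM&testPlanKey=COM-9&testEnvironments=chrome'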




References

Table of Contents
classtoc-btf

CSS Stylesheet
.toc-btf {
    position: fixed;
}