This document provides guidelines for specifying test cases during testing.

Here you will find recommended tips for better testing in general, and specifically when using Xray.

Although named "best practices", please consider this document informative rather than binding: not every aspect is covered in it, and proper evaluation needs to be performed to ensure your needs are addressed. As Xray and testing methodologies evolve, these guidelines and your process may need to be adjusted and evolve likewise.


Overview

Testing is crucial for the products that you build and that your customers use.

In the same way that you demand high-quality code from an architectural and development point of view, your test cases should meet the same high standard.

As you strive to release more often, while keeping quality a priority, you must cope with limited resources and limited time.

Thus, you need to make choices in order to perform the right testing with the time that you have, while maintaining a certain level of confidence.

In this document you can find some guidelines for enabling great testing, which, as we'll see, is not limited to writing test cases.

Enabling great testing

Testing quality does not depend only on the executed Tests and the bugs found during that process. Testing is a lot more.

But to have great testing you first of all need a testable SUT, properly described and detailed in clear, well-defined requirements, and reviewed from the start by great testers whose skills help build a better product through awesome testing.

A testable SUT

In order for a system to be testable, it must provide certain characteristics (i.e. it must be built in such a way that promotes them), including:

Clear, concise and correct requirements

In order to specify great Tests for existing requirements, the requirements themselves first need to meet certain quality criteria by incorporating some key characteristics:

Being a great tester

To perform valuable testing, a tester needs many skills. Some of these include the ability to:

A great Test

What makes a Test a great Test?

There is a set of characteristics that all scripted tests should have. The following are just a few of them:

When performing other types of testing, such as exploratory testing, some of the former characteristics, including objectivity and consistency among others, do not need to be considered due to the nature of the practice.

Tests in Xray

Xray provides three different "Test Types":

One consequence of the above is that there is no explicit way to identify whether a Test is automated or not. However, you may define your own rules for that.
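One way to define such a rule is a simple label convention. The sketch below assumes a team-agreed label (the name "automated" is an example, not an Xray field) and derives the automation status from it:

```python
# A hypothetical convention: since there is no built-in "automated" flag,
# mark automated Tests with an agreed-upon label and derive the status
# from it. AUTOMATION_LABEL is an assumption, not part of Xray.

AUTOMATION_LABEL = "automated"

def is_automated(labels: list[str]) -> bool:
    """Return True if the Test carries the agreed automation label."""
    # Compare case-insensitively, since Jira labels are case-sensitive.
    return AUTOMATION_LABEL in {label.lower() for label in labels}
```

With this convention, `is_automated(["Regression", "Automated"])` returns `True`, regardless of how the label was capitalized when it was added.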

Best Practices

Process

Testers are part of the team

Involve testers whenever reviewing and discussing requirements or user stories with the development team; make testers part of the team and shift your testing to the left side of your development life cycle.

Requirements/stories should be clear and have acceptance criteria

Clear requirements are essential for a correct understanding of what the requirement aims to address and of the related business needs.

Make sure they are reviewed; you can use Jira's workflow status in the related issue for this.

Test using the whole Pyramid

Have tests for the different layers of the Test Pyramid, starting with unit tests up to E2E/GUI tests.
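As a minimal sketch of the pyramid's base layer, here is a fast, isolated unit test; `apply_discount` is a made-up function for illustration only:

```python
# Base of the Test Pyramid: a unit test that is fast, isolated, and
# deterministic. Higher layers (API, E2E/GUI) follow the same pattern
# but exercise real components, so keep far fewer of them.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
```

Unit tests like this one should make up the bulk of the suite; the slower, broader tests at the top of the pyramid then only need to cover what the lower layers cannot.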

To remember:

Manage changes on requirements

Changes on requirements/stories should be reviewed by testers.

Shift Left Testing

Review the requirements with the "testing team", write Tests, and run them as soon as possible. By making testing part of the whole development life cycle, requirements will be clearer, more understandable, and more testable.

Perform exploratory testing

Define test objectives, goals, and areas of opportunity that allow the tester to use her own creativity and skill.

Check here how to perform Exploratory Testing using Xray in a very basic way.

Store Tests alongside your other project development related issues

Although Xray is quite flexible and supports many different project organization scenarios, we recommend keeping your testing-related artifacts together with other project-related ones, such as requirements/user stories, bugs, tasks, etc. This approach provides a self-contained project that is easier to understand and manage and, best of all, promotes team collaboration.

Organize Tests properly from the start

Tests must be organized in proper ways, either by using lists (i.e. Test Sets) or folders (i.e. within the project's Test Repository).

But besides this more structured way, Tests can and should be properly "organized" right from the start, independently of how they are grouped later on.

This can be achieved by "tagging" the Tests adequately and by identifying the reason for each Test to exist in the first place; with this you can:

It's important that your process clearly states how to apply these practices, namely the tagging; otherwise you will end up with similar Tests tagged in different ways.

Specification

Tests with a purpose

The reason for a Test to exist in the first place should be its purpose/goal.

All the necessary and clear steps

Clear, non-ambiguous steps (actions and expected results) avoid different interpretations and thus different results.

Tests should have the necessary steps to validate their purpose:

Mark the Tests so they can easily be found/managed

Use labels or specific custom fields; if using labels, keep in mind that Jira's default label field type is case-sensitive and does not provide a way to limit its values; a more restricted custom field type may be more appropriate.
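Because labels are free-form and case-sensitive, "Smoke" and "smoke" end up as two different labels. A small consistency check like the one below can help; the vocabulary shown is an example of a team-agreed list, not a fixed set:

```python
# An agreed, lowercase tagging vocabulary (illustrative values only).
ALLOWED_TAGS = {"smoke", "regression", "security", "performance"}

def invalid_tags(labels: list[str]) -> set[str]:
    """Return the labels that are not part of the agreed vocabulary.

    The comparison is deliberately case-sensitive, mirroring how Jira
    treats labels: "Regression" and "regression" are different labels.
    """
    return {label for label in labels if label not in ALLOWED_TAGS}
```

For example, `invalid_tags(["smoke", "Regression"])` flags `"Regression"`, surfacing exactly the kind of inconsistent tagging that a case-sensitive label field allows.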

Tests specific for a given Environment

If Tests are specific for a given environment, use the Environment field to clearly identify that.

Whenever scheduling these Tests for execution, make sure to use the Test Environments feature by setting the Test Environments field on the Test Execution.

More info on Test Environments here.

Avoid too many dependencies

A Test that depends on other Tests being executed previously, possibly in a certain order, is harder to understand and manage.

Try to avoid dependencies in the first place. If really needed, try to isolate them as much as possible on Pre-Conditions.

Note that a Pre-Condition has only an open text field to describe it; it does not provide semantics for explicitly declaring dependencies on other Test cases.

Missing Precondition/requisite for running a Test

If a certain condition is necessary to run a Test, it's best to abstract it in a Pre-Condition.

This makes sense only if you foresee it as something reusable, i.e. useful for other Tests.

Relative importance of Tests for a given requirement 

Use the Priority field to distinguish between different Test cases; Priority is a standard Jira field and thus should be the preferred field for this purpose.

Avoid UI/visual dependencies

Don't make Tests depend on very specific UI aspects, such as element positions or text labels, unless the Tests are UI/visually oriented.

Test in different configuration scenarios

Perform testing using the same test procedure but in different conditions, such as different combinations of enabled features.

In this case, the preferred approach would be the following one:

An alternative approach would be to use Test Environments, defining each one as a specific feature combination. However, this does not scale well if you have several features, as it would lead to the creation of many Test Environments. Also, you may prefer to use Test Environments to identify the target environment instead (i.e. browser, mobile device).
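To see why feature combinations don't scale, it helps to enumerate them. This sketch generates every on/off combination for a list of feature flags (the flag names are illustrative):

```python
# Enumerate every on/off combination of a set of feature flags, to run
# the same test procedure under each configuration. With n flags this
# yields 2**n combinations, which is why this approach scales poorly.

from itertools import product

def feature_combinations(features):
    """Yield every on/off combination of the given feature flags as a dict."""
    for states in product([False, True], repeat=len(features)):
        yield dict(zip(features, states))
```

Two flags already produce 4 combinations, and five flags produce 32; for larger flag sets, techniques such as pairwise testing are usually needed to keep the suite manageable.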

Few Tests (or few testing) per requirement

A requirement covered by only a few test cases may be a symptom of missing test cases, i.e. of testing only the successful path.

Dozens of Tests per requirement

A requirement covered by dozens of test cases may be a symptom that the requirement is too complex or vague.

Avoid having very few Tests, that only validate the obvious

Sometimes testers look at a requirement and write one or two Tests that mimic exactly its description; this can be quite simplistic and give a totally wrong sense of confidence.

Exercise Equivalence Class Partitioning and BVA

Equivalence Class Partitioning and Boundary Value Analysis can provide enhanced coverage without growing the Test suite too much.
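The boundary exercise can be sketched for a numeric range: given the valid limits, derive the classic values just outside, on, and just inside each boundary, plus one representative from the middle of the valid class. (A full ECP exercise would also cover invalid classes such as non-numeric input.)

```python
# Boundary Value Analysis for a numeric range [low, high]: values just
# outside, on, and just inside each boundary, plus one representative
# from the middle of the valid equivalence class.

def boundary_values(low: int, high: int) -> list[int]:
    """Return the classic BVA test values for the inclusive range [low, high]."""
    return [
        low - 1,             # just below the lower boundary (invalid)
        low,                 # lower boundary (valid)
        low + 1,             # just inside the lower boundary (valid)
        (low + high) // 2,   # representative of the valid class
        high - 1,            # just inside the upper boundary (valid)
        high,                # upper boundary (valid)
        high + 1,            # just above the upper boundary (invalid)
    ]
```

For an age field accepting 18 to 65, `boundary_values(18, 65)` yields `[17, 18, 19, 41, 64, 65, 66]`: seven focused values instead of testing all 48 valid ages individually.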

Promote reusability and avoid cloning Test specifications

If Tests are exactly the same, in theory there should be no need to have clones.

When talking about clones, we may mean explicit clones (i.e. cloning the whole Test issue, with all its steps) or implicit/embedded clones (i.e. where the steps of an existing Test are used in another, more high-level Test case).

In general, users may clone Tests for different purposes:

Performance considerations

  1. Avoid scripted Tests with dozens of steps
  2. Avoid Tests linked to many different requirements; try to have just one covered requirement per Test
  3. Avoid huge amounts of Tests per requirement; consider finer-grained requirements

Quick Checklist

Use this to quickly evaluate whether you're on the right track. This list is a very condensed summary of many practices mentioned earlier.


1. Have you defined a process for your testing, with guidance covering the specification?
   Why it matters: Having a well-defined process ensures that users follow the same procedures and use tools in similar ways. Thus, similar things will be done in similar ways, which allows better collaboration grounded on proper understanding. This is key to effectively working as a team and avoiding problems later on.

2. Is the Test understandable? Can you understand the scope and the actions you need to perform?
   Why it matters: If the Test is ambiguous, its results will be likewise. An ambiguous Test is mostly useless and invalid for the purpose it seeks to address.

3. Are your Tests being reviewed?
   Why it matters: Having the Tests reviewed makes them clearer and more robust.

4. Can you clearly identify the type of Test? Are specific fields/labels being used for this purpose?
   Why it matters: Properly identifying Tests by multiple criteria will allow you to find them whenever needed later on, for regression or sanity testing for example.

5. Do you have a way to clearly identify deprecated Tests?
   Why it matters: As your Test suite/database grows, some Tests will become useless and will be kept only for tracking purposes. It's crucial to have a way to clearly identify these Tests and exclude them from coverage calculations.

6. Do your Tests have a moderate number of steps?
   Why it matters: Too few or too many steps are signs that something is either missing/assumed or that too much is being done in the scope of the Test. Tests with a moderate number of steps tend to be clearer, more focused, and easier to manage.

7. Have you materialized all assumptions in Pre-Conditions?
   Why it matters: Using Pre-Conditions fosters reusability by making these assumptions visible and manageable.

8. Are you performing testing at the different layers/levels?
   Why it matters: It is really important to have unit tests, but it is also important to have tests at the different levels of the well-known Test Pyramid, because each level has its own value and addresses different concerns.

9. Are Tests covering one and just one requirement?
   Why it matters: Tests linked to requirements allow traceability and make their goal evident. If a Test covers just one requirement, its purpose is focused and the impact of its results will be easier to interpret.

10. Are you just testing the happy path?
    Why it matters: Edge cases are many times the source of problems. Besides that, by understanding the context of the Test, of the requirement, and of its implementation, additional Tests can be designed to cover potentially impacted areas of the SUT.

11. Are you just performing manual testing?
    Why it matters: You should complement manual testing with automated and exploratory testing. Each practice has its own benefits, and they complement each other.