A coverable issue (e.g., Story, requirement) may be covered by one or multiple Tests. In fact, the test coverage status of a given issue goes way further than the basic covered/not covered information; it takes into account your test results.

As soon as you start running your Tests, each individual test result may take one of many values, and those values will be very specific to your use case.

To make your analysis even more complex, you may be using sub-requirements, for example, and executing related Tests.

Requirements may be validated directly or indirectly through related sub-requirements and associated Test cases.


How do all these factors contribute to the calculation of a coverage status? How is the status of a Test evaluated?

Let’s start by detailing the different possible values of Test, Test Step and coverage statuses. Then, we’ll see how they impact the calculation of the coverage status of an issue in some specific version or Test Plan.

Overview of statuses

When talking about statuses, we may be talking about statuses of requirements, Tests, Test Runs and Test Steps.

The status (i.e., test coverage status) of a coverable issue depends on the status of its "related" Tests.

The status of a Test depends on the status of its "related" Test Runs, which in turn depend on the recorded Test Step statuses for each one of them.


When we're speaking about the status associated with a requirement or with a Test (and even with a Test Set), we may be talking about different things:

  1. (coverage) status for a version, taking into account executions made for that version of the Tests that validate the requirement
  2. workflow status associated with the requirement issue (e.g., "New", "In Progress", "Closed")


In this page, we're referring to #1, i.e., the status of the entities based on the executions made in some context.

Test status

The status of a Test tells you information about its current consolidated state (e.g., the latest recorded result, if one exists). Was it executed? Successfully? In which version?

Thus, when speaking about the "status of a Test", we need to give it additional context (e.g. "In which version?") since it depends on "where" and how you want to analyze it.



The status of a Test indicates its "latest state" in some given context (e.g. for some version, some Test Plan and/or in some Test Environment).


Xray provides some built-in Test statuses (which can’t be modified nor deleted):


Test status | Final status? | Test Coverage Status mapped to
----------- | ------------- | ------------------------------
PASS        | yes           | OK
FAIL        | yes           | NOK
TODO        | no            | NOTRUN
ABORTED     | yes           | NOTRUN
EXECUTING   | no            | NOTRUN
custom      | custom        | OK, NOK, NOTRUN or UNKNOWN


The status (i.e., result) of a Test Run is an attribute of the Test Run (a “Test Run” is an instance of a Test and is not a Jira issue) and is the one taken into account to assess the coverage status of the coverable issue.
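For reference, the built-in Test statuses above can be summarized as plain data. This is a purely illustrative sketch in Python (the names are ours, not an Xray API):

```python
# Illustrative data only (not an Xray API): the built-in Test statuses,
# their "final" flag, and the Test Coverage Status each one maps to.
BUILT_IN_TEST_STATUSES = {
    # name:       (is_final, coverage_status_mapped_to)
    "PASS":      (True,  "OK"),
    "FAIL":      (True,  "NOK"),
    "TODO":      (False, "NOTRUN"),
    "ABORTED":   (True,  "NOTRUN"),
    "EXECUTING": (False, "NOTRUN"),
    # custom Test statuses define their own "final" flag and map to
    # OK, NOK, NOTRUN or UNKNOWN
}
```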


Managing Test Statuses

Creating new Test (Run) statuses may be done in the Global Settings: Test Statuses configuration section of Xray.

When creating/editing a Test status, we have to identify the Test Coverage Status to which we want this Test status to map.



One important attribute of a Test status is the “final” attribute. If the “Final statuses have precedence over non-final statuses” flag is enabled, then Xray gives priority to final statuses when calculating the status of a Test. In other words, if a Test is currently in a final status (e.g., PASS, FAIL) and you schedule a new Test Run for it, that Test Run won't affect the calculation of the status of the Test.

You may use this if you prefer to take into account only the last final/complete recorded result and want to discard Test Runs that are in an intermediate status (e.g., EXECUTING, TODO).

Test Step status

The status of a Test Step indicates the result obtained for that step for some Test Run. 


Statuses reported at Test Step level will contribute to the overall calculation of the status of the related Test Run.


Xray provides some built-in Test Step statuses (which can’t be modified nor deleted).

Test Step status | Test status
---------------- | -----------
PASS             | PASS
TODO             | TODO
EXECUTING        | EXECUTING
FAIL             | FAIL
custom           | custom

Managing Test Step Statuses

Creating new Test Step statuses may be done in the Global Settings: Test Step Statuses configuration section of Xray.


When creating/editing a Test Step status, we have to identify the Test status to which we want this step status to map.


Note that native Test Step statuses can’t be modified nor deleted.
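As with Test statuses, this mapping can be pictured as plain data. The sketch below is illustrative only; "SKIP" is a hypothetical custom step status (borrowed from the use cases at the end of this page), not a built-in one:

```python
# Illustrative data only: built-in Test Step statuses and the Test status each
# one maps to, plus a hypothetical custom entry created in Global Settings.
TEST_STEP_STATUSES = {
    "PASS":      "PASS",
    "TODO":      "TODO",
    "EXECUTING": "EXECUTING",
    "FAIL":      "FAIL",
    "SKIP":      "PASS",   # hypothetical custom step status mapped to PASS
}
```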

Test Coverage status

The (coverage) status of a coverable issue tells you information about its current state, from a quality perspective. Is it covered with test cases? If so, has it been validated successfully? In which version?

Thus, when speaking about the "status of a requirement/Story", for example, we need to give it additional context (e.g., "In which version?") since it depends on "where" and how you want to analyze it.



The (coverage) status of a coverable issue indicates its coverage information along with its "state", depending on the results recorded for the Tests that do validate it.

This status is evaluated in a given context (e.g., for some version, some Test Plan and/or in some Test Environment).



In Xray, for a given coverable issue, considering the default settings, its coverage status may be one of: OK, NOK, NOTRUN, UNKNOWN or UNCOVERED.

It’s not possible to create custom Test Coverage statuses.

To calculate an issue's coverage status for a specific system version, we “just” need to take into account the status of the related Tests for that same version. We’ll come back to this later on.

Calculation of the status for a given Test Run

The status of a given Test Run is an attribute that is often calculated automatically based on the respective recorded step statuses. You can also enforce a specific status for a Test Run, which in turn may implicitly enforce specific step statuses (e.g., setting a Test Run as "FAIL" can set all steps as "FAIL"). 

This calculation is made by following these rules:

  1. Obtain the Test status mapped to each reported Test Step status; this is important because the actual Test Step statuses are not compared directly
  2. Compare all the previously mapped Test statuses together
    1. if any of these statuses (e.g., "PASS") is in turn mapped to the coverage status "OK", then the other status wins; if both are mapped to "OK", then the highest-ranked one wins
    2. if any of these statuses is "FAIL", then the Test Run status will be "FAIL"
    3. if any of these statuses is in turn mapped to the coverage status "NOK", then the Test Run status will be that one
    4. if any of these statuses is final, then it wins over non-final ones
    5. otherwise, the status with the highest ranking wins

The order of the steps is irrelevant for the purpose of the overall Test Run status value.


Consequences:

Configuration Example 1


The following table provides some examples given the Test Step Statuses configuration shown above.


Example # | Statuses of the steps/contexts (order is irrelevant) | Calculated Test Run status | Why?
--------- | ---------------------------------------------------- | -------------------------- | ----
1 | PASS, PASS, PASS | PASS | All steps are PASS, thus the joint value is PASS.
2 | PASS, TODO, PASS | EXECUTING | At least one step status (TODO) is mapped to a non-final Test status.
3 | PASS, FAIL, PASS | FAIL | One of the step statuses (FAIL) has a higher ranking than the others.
4 | XPASS, FAIL, PASS | FAIL | Since one of the steps is FAIL, the run is marked as FAIL.
5 | FAIL, XPASS, FAIL | FAIL | Since one of the steps is FAIL, the run is marked as FAIL.
6 | XFAIL (=>MYFAIL=>NOK), XPASS2 (=>MYFAIL=>NOK), XPASS (=>FAIL=>NOK) | FAIL | All step statuses map to Test statuses associated with "NOK"; since one of them is FAIL, the run is marked as FAIL.


Configuration Example 2

Let's consider the following configuration.


Example # | Statuses of the steps/contexts (order is irrelevant) | Calculated Test Run status | Why?
--------- | ---------------------------------------------------- | -------------------------- | ----
1 | DUMMY_P2 (=>CUSTOM_PASS2=>OK), DUMMY_P1 (=>CUSTOM_PASS=>OK) | CUSTOM_PASS2 | Both steps contribute in a "positive" way (i.e., they are ultimately linked to a successful coverage impact). Both mapped Test statuses are associated with the "OK" coverage; as CUSTOM_PASS2 has a higher ranking than CUSTOM_PASS, the run is marked as CUSTOM_PASS2.
2 | DUMMY_P2 (=>CUSTOM_PASS2=>OK), DUMMY_P1 (=>CUSTOM_PASS=>OK), PASS (=>PASS=>OK) | CUSTOM_PASS2 | Similar to the previous example; any status wins over the "PASS" status.
3 | DUMMY_F2 (=>CUSTOM_FAIL2=>NOK), DUMMY_F1 (=>CUSTOM_FAIL=>NOK) | CUSTOM_FAIL2 | Both steps contribute in a "negative" way (i.e., they are ultimately linked to an unsuccessful coverage impact). Both mapped Test statuses are associated with the "NOK" coverage; as CUSTOM_FAIL2 has a higher ranking than CUSTOM_FAIL, the run is marked as CUSTOM_FAIL2.

Calculation of the status for a given Test

It is possible to calculate the status of a Test either by Version or Test Plan, in a specific Test Environment or globally, taking into account the results obtained for all Test Environments.


Analysis:


What affects the calculation:

Calculate the status of some Test, in version V or Test Plan TP, for Test Environment TE

  1. This takes into account Test Runs in version V (as a result of Test Executions in version V) or Test Runs in Test Plan TP (within Test Executions associated with Test Plan TP)
  2. If a Test Environment is chosen, then only Test Runs on that Environment (e.g., TE) will be considered.
  3. If "Final statuses have precedence over non-final statuses" is true, then:
    1. final Test Run statuses will have higher ranking than non-final ones
    2. only the latest Test Run is taken into account based on its "finished on" date  
  4. If "Final statuses have precedence over non-final statuses" is false, then:
    1. only the latest Test Run is taken into account based on its "created" date (i.e. the creation date of the related Test Run entity - this happens when a Test is added to the Test Execution)
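The per-Environment case just described can be sketched as follows. This is illustrative only, not Xray source code; the Test Run fields (status, status_is_final, finished_on, created_on) are assumptions made for the example:

```python
# Illustrative sketch only, not Xray source code. The Test Run fields
# (status, status_is_final, finished_on, created_on) are assumptions.
def test_status_for_environment(test_runs, final_precedence):
    """Status of a Test in a version/Test Plan, for one Test Environment.

    test_runs: the Test's runs in the chosen version/Test Plan and Environment.
    """
    if final_precedence:
        # Final statuses outrank non-final ones; among the candidates, the run
        # with the most recent "finished on" date wins.
        final_runs = [run for run in test_runs if run.status_is_final]
        candidates = final_runs or test_runs  # fall back when no final run exists
        chosen = max(candidates, key=lambda run: run.finished_on or run.created_on)
    else:
        # Only the most recently created run counts (a Test Run is created when
        # the Test is added to a Test Execution).
        chosen = max(test_runs, key=lambda run: run.created_on)
    return chosen.status
```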

Calculate the status of some Test, in version V or Test Plan TP, for "All Environments"

  1. calculate the Test status for each Test Environment, based on all the implicit Test Environments from the relevant Test Executions (i.e., Test Executions in version V or Test Executions associated with Test Plan TP)
  2. calculate the joint value for the Test status
    1. PASS has the lowest ranking (i.e., for the calculated value to be PASS, the calculated statuses in all the different Test Environments must be PASS)
    2. if one is FAIL, then the calculated value will be FAIL
    3. otherwise, use the ranking of Test statuses
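Under the rules just listed, the "All Environments" joint value can be sketched like this (illustrative only; the numeric ranking is an assumption, with PASS given the lowest rank as stated in rule 1):

```python
# Illustrative sketch only. The ranking values are assumptions.
def joint_test_status(per_environment_statuses, rank):
    """Joint Test status across Test Environments."""
    # Rule 2: if the status for any environment is FAIL, the joint value is FAIL.
    if "FAIL" in per_environment_statuses:
        return "FAIL"
    # Rules 1 and 3: otherwise the highest-ranked status wins, so PASS only
    # "wins" when every environment is PASS.
    return max(per_environment_statuses, key=lambda status: rank[status])

# e.g., with assumed ranks {"PASS": 0, "TODO": 1, "MYPASS2": 5}, example 2a below
# ("PASS", "MYPASS2", "PASS") yields "MYPASS2".
```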


Examples

The following table provides some examples given the Test Statuses configuration shown above in the Managing Test Statuses section.


Example # | Statuses of the Test Runs (ordered by time of execution/creation, ascending) | Final statuses have precedence? | Calculated Test status | Why?
--------- | ----------------------------------------------------------------------------- | ------------------------------- | ---------------------- | ----
1a | PASS, PASS, TODO | true | PASS | The latest executed Test Run having a final status (2) was PASS.
1b | PASS, PASS, TODO | false | TODO | The latest created Test Run (3) was TODO.
2a | PASS (env1), MYPASS2 (env2), TODO (env2), PASS (env3) | true | MYPASS2 | The latest executed final Test Runs on each environment were PASS, MYPASS2 and PASS, respectively. Since MYPASS2 has the highest ranking, the calculated status is MYPASS2.
2b | PASS (env1), MYPASS2 (env2), TODO (env2), PASS (env3) | false | TODO | The latest created Test Runs on each environment were PASS, TODO and PASS, respectively. Since PASS has the lowest ranking, TODO (3) "wins" and the calculated status is TODO.
3 | PASS (env1), TODO (env2), PASS (env3) | true | TODO | The latest Test Runs on each environment were PASS, TODO and PASS, respectively. Although Test Environment "env2" only has a non-final Test Run, since there is no other run for that environment, it is taken as the calculated status for that environment. Since PASS has the lowest ranking, TODO "wins" and the calculated status is TODO.
4 | PASS (env1), FAIL (env2), PASS (env3) | true (or false) | FAIL | The latest executed (or created) Test Runs on each environment were PASS, FAIL and PASS, respectively. Since the calculated status for one of the environments is FAIL, the calculated status is FAIL.
5 | PASS (env1), MYPASS2 (env2), TODO (env2), MYFAIL (env3) | true | MYPASS2 | The latest executed final Test Runs on each environment were PASS, MYPASS2 and MYFAIL, respectively. MYPASS2 has a higher ranking than the others, thus the overall calculated value is MYPASS2.
6 | PASS (env1), MYPASS2 (env2), TODO (env2), MYFAIL (env3) | false | MYFAIL | The latest created Test Runs on each environment were PASS, TODO and MYFAIL, respectively. MYFAIL has a higher ranking than the others, thus the overall calculated value is MYFAIL.


Calculation of the coverage status for a given issue

It is possible to calculate the test coverage status of a coverable issue either by Version or Test Plan, in a specific Test Environment or globally, taking into account the results obtained for all Test Environments.


Analysis:


The algorithm is similar to the overall calculation of the Test status, taking into account the results obtained for different Test Environments.

In other words, the status for each linked and "relevant" Test case is calculated and at the end, a joint calculation is done for a virtual Test case. The coverage status will correspond to the mapped value for the status that was calculated for this virtual Test.

The Tests that will be considered as covering the issue are not just the ones directly linked to the issue. In fact, they may either be direct ones or ones linked to "child" issues (e.g., sub-requirements). 


Algorithm:

  1. Obtain the list of Tests that directly or indirectly through "child" issues (e.g., sub-requirements) cover the issue
    1. This depends on the Test Coverage Hierarchy-related settings, defined in Project Settings: Test Coverage
  2. Calculate the Test status for all the Tests individually, in version V or Test Plan TP
    1. This takes into account Test Runs in version V (as a result of Test Executions in version V) or Test Runs in Test Plan TP (within Test Executions associated with Test Plan TP)
    2. If a specific Environment is also chosen, then only Test Runs from Test Executions with this Environment will be considered. In case no Environment is specified then all Test Executions are considered (more info on Test Environments here).
  3. Calculate the "joint" status of all the previous Test statuses (i.e., by comparing together each Test status)
  4. Calculate the coverage status mapped to the previous calculated Test status
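The four steps above can be pictured as a single function. The sketch below is illustrative only; every helper it receives (tests_covering, test_status, joint_test_status, coverage_mapped_to) is hypothetical and simply stands in for the corresponding step:

```python
# Illustrative sketch only: the four steps above as a function. Every helper is
# hypothetical (passed in as an argument), not a real Xray API.
def coverage_status(issue, context,
                    tests_covering,       # step 1: direct Tests + Tests on child issues
                    test_status,          # step 2: status of one Test in the context
                    joint_test_status,    # step 3: joint status of the "virtual Test"
                    coverage_mapped_to):  # step 4: Test status -> coverage status
    tests = tests_covering(issue)
    if not tests:
        return "UNCOVERED"
    statuses = [test_status(test, context) for test in tests]
    return coverage_mapped_to(joint_test_status(statuses))
```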


What affects the calculation:

  • Indirectly, the flag "Final statuses have precedence over non-final statuses" (enabled by default)

  • The existence of Test Runs for different Test Environments, in case the analysis is made for "All Environments"

Test Coverage Hierarchy

Sometimes, you may have parent requirements and sub-requirements. In general, you may have parent issues and "child" issues, both of them being handled as coverable issues.

Xray is able to understand this hierarchical relation and takes that into account for the calculation of the coverage status of the parent issues.

When an issue has some "child" issues, the calculated status of the parent issue depends not only on its own status (calculated per se) but also on the status of each individual child.

The calculation follows the rules described in the following table.


PARENT \ CHILD | OK      | NOK | NOT RUN | UNKNOWN | UNCOVERED
-------------- | ------- | --- | ------- | ------- | ---------
OK             | OK      | NOK | NOT RUN | UNKNOWN | OK
NOK            | NOK     | NOK | NOK     | NOK     | NOK
NOT RUN        | NOT RUN | NOK | NOT RUN | UNKNOWN | NOT RUN
UNKNOWN        | UNKNOWN | NOK | UNKNOWN | UNKNOWN | UNKNOWN
UNCOVERED      | OK      | NOK | NOT RUN | UNKNOWN | UNCOVERED
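One way to read the table is as a simple lookup, folding the parent issue's own status with the status of each child in turn. This is an illustrative sketch, not Xray code:

```python
from functools import reduce

# Illustrative only: the PARENT \ CHILD combination table above as a lookup.
COMBINE = {
    "OK":        {"OK": "OK",        "NOK": "NOK", "NOT RUN": "NOT RUN", "UNKNOWN": "UNKNOWN", "UNCOVERED": "OK"},
    "NOK":       {"OK": "NOK",       "NOK": "NOK", "NOT RUN": "NOK",     "UNKNOWN": "NOK",     "UNCOVERED": "NOK"},
    "NOT RUN":   {"OK": "NOT RUN",   "NOK": "NOK", "NOT RUN": "NOT RUN", "UNKNOWN": "UNKNOWN", "UNCOVERED": "NOT RUN"},
    "UNKNOWN":   {"OK": "UNKNOWN",   "NOK": "NOK", "NOT RUN": "UNKNOWN", "UNKNOWN": "UNKNOWN", "UNCOVERED": "UNKNOWN"},
    "UNCOVERED": {"OK": "OK",        "NOK": "NOK", "NOT RUN": "NOT RUN", "UNKNOWN": "UNKNOWN", "UNCOVERED": "UNCOVERED"},
}

def parent_coverage_status(parent_own_status, child_statuses):
    """Fold the parent's own status with the status of each child issue."""
    return reduce(lambda acc, child: COMBINE[acc][child], child_statuses, parent_own_status)
```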


From another perspective, you would obtain the same value for the status of the parent issue if you considered it to be covered by all the Tests explicitly linked to it plus the Tests linked to its child issues.


Consequences:

  • The parent issue is OK if it is OK per se and the child issues are either UNCOVERED or also OK

  • The parent issue is NOK if it is NOK per se or if any of the child issues is NOK

  • The parent issue is UNCOVERED only if neither the parent issue per se nor any of its child issues is covered


Even if you are using the Test Coverage Hierarchy-related features, when you have Tests directly linked to the parent issue, Xray assumes that you are validating the parent issue directly. Thus, the fact that some child issues may be left uncovered by Tests is irrelevant.


Examples

The following table provides some examples given the Test Statuses configuration shown above in the Managing Test Statuses section.


Example # | Statuses of the related Tests (child issues, whenever present, appear as subReqX) | Calculated coverage status of the issue | Why?
--------- | ---------------------------------------------------------------------------------- | ---------------------------------------- | ----
1 | PASS, PASS, PASS | OK | All Tests are PASS (similar to having a single virtual Test that would be PASS and thus mapped to the OK status of the issue).
2 | PASS, PASS, TODO | NOT RUN | One of the Tests (3) is TODO, which has a higher ranking than PASS.
3 | PASS, PASS, FAIL | NOK | One of the Tests (3) is FAIL, which has a higher ranking than PASS.
4 | PASS; subReq1 => OK [PASS]; subReq2 => NOK [PASS, FAIL] | NOK | One of the Tests of subReq2 is FAIL, thus subReq2 is considered NOK. Since one of the child issues is NOK, the parent issue is also NOK.
5 | PASS; subReq1 => NOT RUN [TODO]; subReq2 => OK [PASS, PASS] | NOT RUN | One of the child issues (subReq1) is NOT RUN, thus the calculated status, when combined with the parent issue's own status, is NOT RUN.
6 | PASS; subReq1 => UNCOVERED [no Tests]; subReq2 => UNCOVERED [no Tests] | OK | Since all child issues are uncovered and the parent issue is covered directly by one Test (1), which is currently PASS, the calculated "OK" status is based on that Test.

Setup information for possible use cases

  1. I want to skip some Tests and proceed as if they didn't exist
    1. Create a "Test Step Status"  (e.g., "SKIP"), mapped to the Test Status "PASS"
  2. I want to fail a Test Run but I don't want to mark the requirement as being NOT OK because this failure can be discarded
    1. Create a "Test Status" (e.g., "FAIL_DISCARD") , non-final and mapped to the coverage status "UNKNOWN"; setting the status as non-final will give priority to other Test Runs you may have for that Test, If “Final Statuses have precedence over non-final” flag is enabled
    2. Create a "Test Step Status" (e.g., "IRRELEVANT_FAIL") and map it to the Test Status created in the previous step
  3. I want to always see, for a given Test, the status of Test based on the last run scheduled for it, no matter if it was completed (i.e. in a final status) or not
    1. Just uncheck the flag “Final Statuses have precedence over non-final”
  4. I want to execute some steps, set them as failed or passed, but I don't want them to reflect immediately in the status of the Test Run
    1. Create custom, non-final, Test statuses for passing and failure (e.g., "MYPASS", "MYFAIL"), mapped to the OK and NOK coverage statuses, respectively
    2. Create your own custom Test Step statuses for passing and failure (e.g., "PASS_CONTINUE" and "FAIL_CONTINUE"), mapped to the previously created Test statuses
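Put together, the custom statuses from use cases 1 and 2 could be summarized like this. The data below is purely illustrative; the status names are the hypothetical examples given above, and in practice these are configured through the Xray settings screens, not through code:

```python
# Illustrative data only, mirroring use cases 1 and 2 above; the status names
# are the hypothetical examples given there, not built-in Xray statuses.
CUSTOM_TEST_STATUSES = {
    # name:          (is_final, coverage_status_mapped_to)
    "FAIL_DISCARD": (False, "UNKNOWN"),   # use case 2: a failure that can be discarded
}

CUSTOM_TEST_STEP_STATUSES = {
    # step status:      Test status it maps to
    "SKIP":            "PASS",            # use case 1: skipped steps count as passed
    "IRRELEVANT_FAIL": "FAIL_DISCARD",    # use case 2
}
```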