...

The status of a given Test Run is an attribute that, most of the time, is calculated automatically based on the respective recorded step statuses. You can also enforce a specific status for a Test Run, which in turn may implicitly enforce specific step statuses (e.g., setting a Test Run as "FAIL" can set all steps as "FAIL").

This calculation is made by comparing the step statuses, following these rules (a minimal code sketch of this logic is shown after the Consequences list below):


  1. Obtain the Test status mapped to each reported test step status; this is important because the actual test step statuses are not compared directly
  2. Compare all the previously mapped Test statuses together:
    1. if one of these statuses (e.g., "PASS") is in turn mapped to the coverage status "OK", then the other status wins; if both are mapped to "OK", then the highest-ranked one wins
    2. if any of these statuses is "FAIL", then the Test Run status will be "FAIL"
    3. if any of these statuses is in turn mapped to the coverage status "NOK", then the Test Run status will be that one
    4. if any of these statuses is final, then it wins over the non-final ones
    5. otherwise, the status with the highest ranking wins

The order of the steps is irrelevant for the purpose of the overall Test Run status value.


Consequences:

  • if any test step status is "FAIL", then the calculated status for the Test Run will be "FAIL"
  • if any of the test steps "contributes negatively" (i.e., is mapped to a Test status associated with the "NOK" coverage), then the status of the Test Run will correspond to the mapped Test status of that step
  • the Test Run will have the status "PASS" if all the steps are marked as "PASS"
  • the calculated status for the Test Run will only be "EXECUTING" if there is at least one step in "EXECUTING" or "TODO" (or a similar custom test step status) and all the other steps are in "PASS" or equivalent (i.e., associated with the "OK" coverage status)
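
As a minimal sketch of the rules above, the code below assumes a simplified data model (a Test status with a name, a rank, a "final" flag, and an optional coverage mapping) plus a step-status-to-Test-status mapping; these names and structures are illustrative assumptions, not Xray's actual internals or API.

```python
# Illustrative sketch only: the TestStatus model, ranks and step-to-Test-status
# mapping are assumptions and do not reflect Xray's actual data model or API.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass(frozen=True)
class TestStatus:
    name: str
    rank: int                # higher rank wins when the other rules tie
    final: bool              # final statuses win over non-final ones
    coverage: Optional[str]  # "OK", "NOK", or None


def resolve(a: TestStatus, b: TestStatus) -> TestStatus:
    """Compare two mapped Test statuses, following the rules above."""
    # Rule 2.1: a status mapped to "OK" loses to the other status;
    # if both are mapped to "OK", the highest-ranked one wins.
    if a.coverage == "OK" and b.coverage == "OK":
        return a if a.rank >= b.rank else b
    if a.coverage == "OK":
        return b
    if b.coverage == "OK":
        return a
    # Rule 2.2: "FAIL" always wins.
    if "FAIL" in (a.name, b.name):
        return a if a.name == "FAIL" else b
    # Rule 2.3: a status mapped to "NOK" wins over one that is not.
    if (a.coverage == "NOK") != (b.coverage == "NOK"):
        return a if a.coverage == "NOK" else b
    # Rule 2.4: a final status wins over a non-final one.
    if a.final != b.final:
        return a if a.final else b
    # Rule 2.5: otherwise, the status with the highest ranking wins.
    return a if a.rank >= b.rank else b


def test_run_status(step_statuses: List[str],
                    step_to_test: Dict[str, TestStatus]) -> str:
    """Calculate the Test Run status from the recorded step statuses."""
    # 1. Obtain the Test status mapped to each reported step status.
    mapped = [step_to_test[s] for s in step_statuses]
    # 2. Compare all the mapped Test statuses together (order is irrelevant).
    result = mapped[0]
    for status in mapped[1:]:
        result = resolve(result, status)
    # A non-final winner means the execution is still in progress.
    return result.name if result.final else "EXECUTING"
```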

Configuration Example 1


[Image: Test Step Statuses configuration]


The following table provides some examples given the Test Step Statuses configuration shown above.


Example # | Statuses of the steps/contexts (the order of the steps/contexts is irrelevant) | Calculated value for the status of the Test Run | Why?
1 | PASS, PASS, PASS | PASS | All steps are PASS, thus the joint value is PASS.
2 | PASS, TODO, PASS | EXECUTING | At least one step status (TODO) is mapped to a non-final Test status.
3 | PASS, FAIL, PASS | FAIL | One of the step statuses (FAIL) has a higher ranking than the other ones.
4 | XPASS, FAIL, PASS | FAIL | Since one of the steps is FAIL, the run will be marked as FAIL.
5 | FAIL, XPASS, FAIL | FAIL | Since one of the steps is FAIL, the run will be marked as FAIL.
6 | XFAIL (=>MYFAIL=>NOK), XPASS2 (=>MYFAIL=>NOK), XPASS (=>FAIL=>NOK) | FAIL | All step statuses map to Test statuses that are in turn associated with "NOK"; since one of them is FAIL, the run will be marked as FAIL.
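
As a quick sanity check, the sketch above reproduces some of these rows. The statuses and ranks below are assumptions that only approximate the (not shown) Configuration Example 1 screenshot; they reuse the TestStatus class and test_run_status function from the sketch above.

```python
# Hypothetical statuses approximating Configuration Example 1; the ranks are
# assumed and only their relative order matters for these examples.
PASS = TestStatus("PASS", rank=0, final=True, coverage="OK")
TODO = TestStatus("TODO", rank=1, final=False, coverage=None)
FAIL = TestStatus("FAIL", rank=2, final=True, coverage="NOK")
MYFAIL = TestStatus("MYFAIL", rank=3, final=True, coverage="NOK")

STEP_TO_TEST = {
    "PASS": PASS, "TODO": TODO, "FAIL": FAIL,
    "XPASS": FAIL,     # XPASS => FAIL => NOK
    "XFAIL": MYFAIL,   # XFAIL => MYFAIL => NOK
    "XPASS2": MYFAIL,  # XPASS2 => MYFAIL => NOK
}

print(test_run_status(["PASS", "TODO", "PASS"], STEP_TO_TEST))      # EXECUTING (example 2)
print(test_run_status(["XPASS", "FAIL", "PASS"], STEP_TO_TEST))     # FAIL (example 4)
print(test_run_status(["XFAIL", "XPASS2", "XPASS"], STEP_TO_TEST))  # FAIL (example 6)
```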


Configuration Example 2

Let's consider the following configuration.

[Images: configuration used in this example]


Example # | Statuses of the steps/contexts (the order of the steps/contexts is irrelevant) | Calculated value for the status of the Test Run | Why?
1 | DUMMY_P2 (=>CUSTOM_PASS2=>OK), DUMMY_P1 (=>CUSTOM_PASS=>OK) | CUSTOM_PASS2 | Both steps contribute in a "positive way" (i.e., they were successful, as ultimately they are linked to a successful coverage impact). Both mapped Test statuses are associated with the "OK" coverage; as CUSTOM_PASS2 has a higher ranking than CUSTOM_PASS, the run will be marked as "CUSTOM_PASS2".
2 | DUMMY_P2 (=>CUSTOM_PASS2=>OK), DUMMY_P1 (=>CUSTOM_PASS=>OK), PASS (=>PASS=>OK) | CUSTOM_PASS2 | Similar to the previous example; any other status wins over "PASS".
3 | DUMMY_F2 (=>CUSTOM_FAIL2=>NOK), DUMMY_F1 (=>CUSTOM_FAIL=>NOK) | CUSTOM_FAIL2 | Both steps contribute in a "negative way" (i.e., they were not successful, as ultimately they are linked to an unsuccessful coverage impact). Both mapped Test statuses are associated with the "NOK" coverage; as CUSTOM_FAIL2 has a higher ranking than CUSTOM_FAIL, the run will be marked as "CUSTOM_FAIL2".
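
Continuing the same illustrative sketch (again with assumed ranks, and reusing TestStatus, test_run_status and PASS from above), these two examples exercise the "both mapped to OK" and "both mapped to NOK" branches, where the highest-ranked status wins:

```python
# Hypothetical statuses approximating Configuration Example 2; ranks are assumed.
CUSTOM_PASS = TestStatus("CUSTOM_PASS", rank=4, final=True, coverage="OK")
CUSTOM_PASS2 = TestStatus("CUSTOM_PASS2", rank=5, final=True, coverage="OK")
CUSTOM_FAIL = TestStatus("CUSTOM_FAIL", rank=6, final=True, coverage="NOK")
CUSTOM_FAIL2 = TestStatus("CUSTOM_FAIL2", rank=7, final=True, coverage="NOK")

MAPPING = {
    "DUMMY_P1": CUSTOM_PASS, "DUMMY_P2": CUSTOM_PASS2, "PASS": PASS,
    "DUMMY_F1": CUSTOM_FAIL, "DUMMY_F2": CUSTOM_FAIL2,
}

print(test_run_status(["DUMMY_P2", "DUMMY_P1", "PASS"], MAPPING))  # CUSTOM_PASS2 (example 2)
print(test_run_status(["DUMMY_F2", "DUMMY_F1"], MAPPING))          # CUSTOM_FAIL2 (example 3)
```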


Calculation of the status for a given Test

...

  1. calculate the Test status for each Test Environment, based on all the implicit Test Environments from the relevant Test Executions (i.e., Test Executions in version V or Test Executions associated with Test Plan TP)
  2. calculate the joint value for the Test status (see the sketch after this list)
    1. PASS has the lowest ranking (i.e., for the calculated value to be PASS, all the calculated statuses must be PASS in the different Test Environments)
    2. if one is FAIL, then the calculated value will be FAIL
    3. otherwise, use the ranking of the Test statuses
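
A similar minimal sketch, assuming the per-environment statuses are plain strings and using an illustrative (not real) rank table, for combining them into a single Test status:

```python
# Illustrative sketch only: the rank table is an assumption, not Xray's
# actual Test status configuration.
from typing import Dict

ASSUMED_RANK: Dict[str, int] = {"PASS": 0, "TODO": 1, "EXECUTING": 2, "FAIL": 3}


def joint_test_status(status_per_environment: Dict[str, str]) -> str:
    """Combine the Test statuses calculated for each Test Environment."""
    statuses = list(status_per_environment.values())
    # Rule 2.2: if the Test fails in any environment, the joint value is FAIL.
    if "FAIL" in statuses:
        return "FAIL"
    # Rule 2.1: PASS has the lowest ranking, so the joint value is PASS only
    # when the calculated status is PASS in every Test Environment.
    if all(s == "PASS" for s in statuses):
        return "PASS"
    # Rule 2.3: otherwise, use the ranking of the Test statuses.
    return max(statuses, key=lambda s: ASSUMED_RANK.get(s, 0))


# e.g. joint_test_status({"Android": "PASS", "iOS": "TODO"}) would yield "TODO"
# under the assumed ranks above.
```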


Examples

The following table provides some examples given the Test Statuses configuration shown above in the Managing Test Statuses section.

...

Rationale:

Even if you have sub-requirements, when you have tests that are directly linked to the parent requirement, Xray assumes that you are validating the requirement directly. Thus, it is irrelevant whether or not the sub-requirements are covered by tests.


Examples

The following table provides some examples given the Test Statuses configuration shown above in the Managing Test Statuses section.

...