Gliffy Diagram
Note: If you don't use Test Environments, then only the latest result matters for assessing the current status of the Test.
...
Scenario | Test Environment(s) of TE 1 | Test Environment(s) of TE 2 | Test run status in TE1 | Test run status in TE2 | Calculated value for the overall, consolidated status of the Test (i.e. for the "All Environments") | Other |
---|---|---|---|---|---|---|
A | Android | iOS | PASS | PASS | PASS | The test will be considered to be PASS in both Android and iOS environments. |
B | iOS | iOS | PASS | FAIL | FAIL | The test will be considered to be FAIL in iOS. |
C | iOS | iOS | FAIL | PASS | PASS | The test will be considered to be PASS in iOS, since the latest result prevails. |
D | iOS | - | FAIL | PASS | FAIL | The test will be considered to be FAIL in iOS and PASS for the empty environment. |
E | - | - | PASS | FAIL | FAIL | The test will be considered to be FAIL for the empty environment. |
F | - | - | FAIL | PASS | PASS | The test will be considered to be PASS for the empty environment. |
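The rules in the table above can be sketched as a small script. This is a simplified PASS/FAIL model for illustration (not Xray's full status algorithm): per environment, with the empty string standing for "no environment", only the latest run counts, and the consolidated status fails if any environment fails.

```python
def environment_statuses(runs):
    """runs: list of (environment, status) tuples, oldest first.
    Returns the latest status per environment ('' = no environment set)."""
    latest = {}
    for env, status in runs:
        latest[env] = status  # later runs override earlier ones
    return latest

def consolidated_status(runs):
    """'All Environments' view: FAIL if any environment's latest run failed."""
    statuses = environment_statuses(runs).values()
    return "FAIL" if "FAIL" in statuses else "PASS"

# Scenario B: both executions in "ios", PASS then FAIL -> FAIL
print(consolidated_status([("ios", "PASS"), ("ios", "FAIL")]))  # FAIL
# Scenario C: both in "ios", FAIL then PASS -> the latest result wins, PASS
print(consolidated_status([("ios", "FAIL"), ("ios", "PASS")]))  # PASS
# Scenario D: FAIL in "ios", PASS with no environment -> overall FAIL
print(consolidated_status([("ios", "FAIL"), ("", "PASS")]))     # FAIL
```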
...
Whenever creating a Test Execution, you must set the Test Environment in which the tests will be executed. You can use this field as a simple label: just add the environment or reuse a previously created one.
Please see some important Tips and Recommendations ahead.
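Test Executions with a Test Environment can also be created programmatically. The sketch below builds a payload following the shape of Xray's JSON "import execution results" format; the endpoint path, field names (`info.testEnvironments`, `testKey`) and the issue key are assumptions to verify against the REST API documentation of your Xray version.

```python
import json

def build_execution_payload(summary, environments, test_results):
    """Build an Xray-style execution-import payload (assumed format).
    test_results: list of (test issue key, status) tuples."""
    return {
        "info": {
            "summary": summary,
            "testEnvironments": environments,  # e.g. ["android"]
        },
        "tests": [
            {"testKey": key, "status": status} for key, status in test_results
        ],
    }

# Hypothetical test issue key, for illustration only
payload = build_execution_payload(
    "Regression run on Android", ["android"], [("CALC-123", "PASS")]
)
print(json.dumps(payload, indent=2))
# To submit (hypothetical endpoint; requires authentication):
# requests.post(f"{JIRA_BASE_URL}/rest/raven/1.0/import/execution", json=payload)
```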
Creating a Test Execution
...
Test Execution for the “android” Test Environment
...
...
Test Execution for the “ios” Test Environment
Tracking the results on different environments
The Test Environments column is shown in your Test Runs table so you can distinguish the executions of the Test across the different environments.
This information can be seen in the Test issue screen (see next screenshot) or in other places that show a list of Test Runs (e.g. Test Plan issue screen).
The same test has been executed in both Test Environments (a Test Execution per Test Environment).
Analyzing the impact of the results on different environments
Results obtained for Test Environments will impact coverage.
Considering the previous screenshot, the "Requirement Status" custom field for the Test issue will show NOK because the Test has failed in one of the environments. This information is independent of the environment picker below, within the "Test Coverage" section, which in turn is used to calculate the coverage on request for the selected scope, showing it on the right side along with the corresponding test results.
If you want to analyze the coverage for the requirement (e.g. "story") and show the latest results on a given environment, just use the picker in the "Test Coverage" section. As seen ahead, the outcome differs per environment because different results were obtained in each.
Please check Coverage Analysis to learn more about coverage analysis possibilities.
It is also possible to analyze testing thoroughly while taking Test Environments into account; this analysis can be done using the Traceability Report or the Overall Coverage Report, among others.
The exact behavior upon choosing a specific Test Environment depends on the report itself but, either explicitly or implicitly, Test Runs will be filtered by the selected Test Environment and reports will reflect it.
Traceability Report being used to analyze the results on the "edge" test environment.
Traceability Report being used to analyze the results on the "chrome" test environment.
Analyzing coverage of "requirements" on the "edge" test environment.
Analyzing coverage of "requirements" on the "chrome" test environment.
Using multiple environments at the same time
Sometimes a given environment can be categorized along multiple dimensions; in theory, you can think of it as being multidimensional.
Consider a very basic example: when performing web/UI-based testing, you use both a browser and an operating system, and you may want to analyze the results from a browser perspective or from an operating-system perspective.
The recommended way to deal with environments having multiple dimensions is to treat each dimension (e.g. browser name, operating system name, testing stage) individually. In other words, add the values of each dimension to the "Test Environments" field separately.
Whenever you assign "mac" and "edge" to the Test Environments of a given Test Execution, it's equivalent to saying that your Test Run is scheduled for/was run in the "mac" and also in the "edge" environment.
This approach will limit the number of environments to the total number of possible values for each dimension, as opposed to having <number_of_values_dimension_1>*<number_of_values_dimension_2>*... environments.
The drawback of this solution is that you won't be able to analyze the results for the combination of "mac" and "edge" at the same time, for example; you can only analyze results per value of a specific dimension.
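The saving can be illustrated with a quick calculation, using hypothetical dimension values: tagging each dimension separately keeps the number of distinct Test Environment labels additive, whereas flattened names ("mac_chrome", ...) grow multiplicatively.

```python
# Hypothetical dimension values, for illustration only
operating_systems = ["windows", "mac", "linux"]
browsers = ["chrome", "edge", "firefox", "safari"]

# One label per dimension value: additive growth
separate_labels = len(operating_systems) + len(browsers)  # 3 + 4 = 7

# One flattened "<os>_<browser>" name per combination: multiplicative growth
flattened_names = len(operating_systems) * len(browsers)  # 3 * 4 = 12

print(separate_labels, flattened_names)
```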
Info |
---|
One way to deal with these kinds of environments would be to flatten them and treat them as usual, i.e. you could name the environment such as “windows_edge” or “mac_chrome” but… |
How to use
Assign each environment (e.g. name of operating system, name of browser vendor) as you do for a single environment; in other words, just add the multiple environment names as multiple, distinct labels.
...
Whenever creating a Test Execution (e.g. from a Test Plan):
Whenever updating an existing Test Execution:
Example
Test executed in the context of Test Execution assigned to several environments at the same time
...
Environment | Status | Why? |
---|---|---|
windows | PASS | due to the last result obtained in "windows" environment on CALC-5262 |
mac | PASS | due to the last result obtained in the "mac" environment on CALC-5262 |
chrome | FAIL | due to the last result obtained in the "chrome" environment on CALC-5261 |
edge | PASS | due to the last result obtained in the "edge" environment on CALC-5263 |
"All Environments" (if analyzing the status of the test without identifying a specific environment) | FAIL | as the last result for one of the environments ("chrome") was FAIL (i.e. on CALC-5261) |
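The statuses above can be sketched as follows, using a simplified model consistent with this example: each Test Run contributes its result to every environment of its Test Execution, the latest result per environment wins, and "All Environments" fails if any environment fails. The assignment of environments to the issue keys below is illustrative.

```python
runs = [  # (Test Execution key, its Test Environments, run status), oldest first
    ("CALC-5261", ["windows", "chrome"], "FAIL"),
    ("CALC-5262", ["windows", "mac"], "PASS"),
    ("CALC-5263", ["edge"], "PASS"),
]

latest = {}
for key, envs, status in runs:
    for env in envs:
        latest[env] = (status, key)  # later runs override earlier ones

for env, (status, key) in sorted(latest.items()):
    print(f"{env}: {status} (from {key})")

overall = "FAIL" if any(s == "FAIL" for s, _ in latest.values()) else "PASS"
print("All Environments:", overall)
```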
Advanced
Test Environments and the TestRunStatus custom field
The "TestRunStatus" custom field is associated with Test issues and can be used to provide information about the latest status of your test; see the TestRunStatus custom field documentation for more information.
This custom field calculates the status of the test for "all environments" (i.e. the consolidated status), giving you a high-level view; it cannot be configured to show the status for a specific environment.
Internally, this field will store the status of the test for all possible scopes, which besides other things includes the information about the status in all different environments.
Info |
---|
If you start using Test Environments in your Test Executions, then it's not only your test status calculation that will change (i.e. the one stored in the TestRunStatus custom field). All custom fields that depend on it (e.g., Requirement status, Test Sets status) will change. Consequently, the requirement coverage calculation and all associated charts/gadgets are also affected. |
Tips and Recommendations
Do's
- Use Test Environments only if you want to run the same Test case in different environments and track results per each environment.
- Simplify the names of Test Environments (i.e. lowercase and shorten them)
- Example: macOS => mac
- Evaluate if you really need to assign multiple environments at the same time; using just one is preferable if you can afford that simplicity
Info | ||
---|---|---|
| ||
For advanced Test Environment management capabilities, please check our Integration with Apwide Golive. |
Don'ts
- Don't create dozens or hundreds of Test Environments, as it will make them harder to use and add performance overhead
- Don't compose environment names such as ”<os>_<browser>_<stage>”, as this pollutes the environment namespace and makes management harder
- Don't try to do data-driven testing using Test Environments; they're not tailored for that
...
Info | ||
---|---|---|
| ||
Besides other usage issues, a large number of environments (>>10) will impact the calculations that need to be done and the size of the Lucene index. Please keep a limited, well-defined list of Test Environments. |
- You may filter by Test Environment in your Test Executions panel to see how the executions are doing, per environment.