...

The status section contains other relevant fields such as:

  • Timer - the time elapsed since the timer was last started, plus the total time logged for this execution
  • Assignee - the User assigned to perform the current test execution
  • Executed By - the last User who changed the status of the current test run
  • Versions - the target release version(s) tested by the current test execution
  • Revision - the source code and documentation revision used in the current test execution
  • Started On - the date and time the execution of the current test started
  • Finished On - the date and time the execution of the current test finished

...


Timer


The timer helps the user track the time spent executing the test.
This component is only visible if the corresponding setting is active in the miscellaneous settings.
The timer itself can be started, paused, and reset manually using the corresponding buttons.
You can also edit the value on the timer by clicking on it, if the Set the value of the Time Tracker setting is active in the project settings.

The timer will also start automatically when the status of the Test Run changes to Executing, pause when it changes to a final status (e.g., Passed, Failed), and reset when the Test Run changes back to Todo.
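The automatic behavior described above can be sketched as a small state machine. This is a hypothetical model for illustration only (the class name, status strings, and fields are assumptions, not Xray's actual implementation): the timer starts on Executing, pauses on a final status, and resets on Todo.

```python
# Hypothetical model of the timer rules: start on Executing,
# pause on a final status, reset on Todo.
FINAL_STATUSES = {"PASSED", "FAILED"}

class RunTimer:
    def __init__(self):
        self.running = False
        self.elapsed = 0          # seconds tracked for the current run

    def on_status_change(self, status):
        if status == "EXECUTING":
            self.running = True   # start (or resume) the timer
        elif status in FINAL_STATUSES:
            self.running = False  # pause; elapsed time is kept
        elif status == "TODO":
            self.running = False  # reset the timer entirely
            self.elapsed = 0

timer = RunTimer()
timer.on_status_change("EXECUTING")   # timer starts running
timer.elapsed += 120                  # simulate two minutes of execution
timer.on_status_change("PASSED")      # pauses, elapsed time preserved
timer.on_status_change("TODO")        # resets elapsed back to zero
```

Note that in this sketch a final status only pauses the timer, so time already tracked survives until the run is reset to Todo, mirroring the rules above.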
Below the timer there is a component that tracks the time logged for this execution.
By clicking on it, you can log more time into it.

Time added in this dialog will be added to the work log of the respective Test Execution.

Affected Requirements

This section provides the ability to manually set the status of Requirement issues that are tested by the current Test issue. By default, a requirement's status is calculated from the latest Test Run of each Test associated with it. However, even if a Test Run's status is FAILED, not every requirement issue covered by the Test issue has necessarily failed; the tester can, for instance, explicitly set some of those requirements to PASSED. This makes it possible for a single Test issue to cover multiple requirement issues with different concerns and functionalities. Requirement statuses that are explicitly set in a Test Run are then taken into account when calculating the Requirement Status and Requirement Coverage.
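The override rule described above can be illustrated with a minimal sketch. The function name and status strings are hypothetical, not part of Xray's API: a requirement's status normally comes from the latest covering Test Run, but an explicitly set status takes precedence.

```python
# Hypothetical sketch of the override rule: an explicitly set requirement
# status takes precedence over the status calculated from the latest run.
def requirement_status(latest_run_status, explicit_status=None):
    """Return the effective status for one requirement tested by a run."""
    if explicit_status is not None:
        return explicit_status    # tester's explicit choice wins
    return latest_run_status      # otherwise, fall back to the calculated value

# The run FAILED overall, but the tester explicitly PASSED one requirement.
print(requirement_status("FAILED"))            # prints FAILED (calculated)
print(requirement_status("FAILED", "PASSED"))  # prints PASSED (explicit override)
```

This captures why a single FAILED Test Run need not fail every requirement it covers: each requirement can carry its own explicitly set result.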

...