Xray uses different issue types to implement the testing process (with the exception of Test Runs, which are not issues).

This means that you can manually log time on those issues using Jira out of the box. If you have other apps or integrations based on this field, the logged time will be available there as well.
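As an example of this "out-of-the-box" behavior, time can also be logged programmatically through Jira's standard worklog REST endpoint. Below is a minimal Python sketch; the base URL, credentials, and issue key are placeholders, and the v2 REST API is assumed:

```python
import requests

JIRA_BASE_URL = "https://your-jira-instance.example.com"  # placeholder
AUTH = ("user", "api-token-or-password")                  # placeholder credentials

def log_work(issue_key: str, time_spent: str, comment: str = "") -> None:
    """Add a worklog entry to a Jira issue (e.g., a Test Execution)."""
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue/{issue_key}/worklog",
        json={"timeSpent": time_spent, "comment": comment},
        auth=AUTH,
    )
    response.raise_for_status()

# Example: log 45 minutes of execution work on a Test Execution issue
# log_work("CALC-123", "45m", "Executed the regression Test Runs")
```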


Please note

Only some issue types have time tracking enabled by default: Test Execution and Test Plan.

However, you can easily add time tracking to the other issue types (Test, Pre-Condition, Test Set) by editing the respective view/default screens and adding the "Time Tracking" field.


The question arises: how do I log the time I spent executing Test Runs?

The time that you spend executing a Test Run is not logged in Jira automatically; you have to log it manually in the corresponding Test Execution issue.

Note that Xray automatically calculates the elapsed time of each Test Run based on its "Started on" and "Finished on" dates. Some reports (e.g., the Test Runs Report and the Test Executions Report) use this value to calculate the overall elapsed time or simply display it.

Thus, the elapsed time (which is calculated automatically) and the actual logged time are two different things. You can choose to log exactly the displayed elapsed time, report another value, or not log anything at all.
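To make the distinction concrete, here is a minimal sketch of how the elapsed time is derived from the two dates (the values are illustrative; Xray performs this calculation for you, while the logged time is whatever you record as a worklog):

```python
from datetime import datetime

# Illustrative "Started on" / "Finished on" values of a Test Run
started_on = datetime.fromisoformat("2023-05-10T09:00:00")
finished_on = datetime.fromisoformat("2023-05-10T09:37:30")

# Elapsed time: automatically derived by Xray from the two dates
elapsed = finished_on - started_on
print(f"Elapsed time: {elapsed}")  # 0:37:30

# Logged time: whatever you choose to record as a worklog; it may match
# the elapsed time, be a different value, or simply not be logged at all
logged_time_spent = "30m"
```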


Possible usage scenarios for logging time

The following table presents some possible usage scenarios for logging time in Xray issue types.


Issue type | Purpose | Notes
Test | log time related to specification | don't log time related to execution here, because Test issues are like test templates
Pre-Condition | log time related to specification |
Test Set | log time related to the creation of the Test Set, possibly including the time spent specifying the related Tests and Pre-Conditions |
(Sub)Test Execution | log time related to the execution of the related Test Runs |
Test Plan | log time related to environment setup and planning | don't log execution time twice (i.e., both in the Test Plan and in the related Test Executions)

Possible usage scenarios for using time estimates

The following table presents some possible usage scenarios for using time estimates, through the "Original Estimate" field, in Xray issue types.


Issue type | Purpose | Notes
Test | estimated execution duration for this Test, or estimated specification time for this Test | using "estimated execution time" may be more adequate; however, if you choose that approach, you may have to decide where to report the specification estimate (e.g., in a Test Set or in some specification task)
Pre-Condition | estimated execution duration for this Pre-Condition, or estimated specification time for this Pre-Condition | see the notes for the Test issue type
Test Set | estimated time for specifying all the Tests grouped within this Test Set |
(Sub)Test Execution | estimated time for running all related Test Runs |
Test Plan | estimated time related only to environment setup and planning, or estimated time for all the effort associated with validating the Tests within the plan (including environment setup and multiple iterations) | the scope of the estimate on the Test Plan should match the scope of the logged time

Comparisons

At the individual Test Run level, original estimate vs. effective elapsed time

To obtain the elapsed time of each Test Run, you may use the Test Runs Report and then export it to CSV.

However, you would also need to export the Test issues from the Issue search page, showing the "Original Estimate" field. You could then cross-reference the two results to obtain a one-to-one comparison.
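A possible way to cross the two CSV exports is to index both by the Test issue key and join them, as in the sketch below. The file and column names ("Test Key", "Issue key", "Original Estimate", "Elapsed Time") are assumptions and must be adjusted to match your actual exports:

```python
import csv

def load_csv(path: str, key_column: str) -> dict:
    """Index the rows of a CSV export by the Test issue key."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key_column]: row for row in csv.DictReader(f)}

# Assumed file and column names; adjust to match your exports
test_runs = load_csv("test_runs_report.csv", "Test Key")   # elapsed time per Test Run
tests = load_csv("test_issues_export.csv", "Issue key")    # Original Estimate per Test

for key, run in test_runs.items():
    estimate = tests.get(key, {}).get("Original Estimate", "n/a")
    elapsed = run.get("Elapsed Time", "n/a")
    print(f"{key}: estimated={estimate}, elapsed={elapsed}")
```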

At the Test Execution level, comparing the sum of the estimated times for each Test vs. the sum of the elapsed time for all the Test Runs

In this case, we compare the estimates made on the Test issues against the actual elapsed time (not the logged time/work logs on the Test Execution).

In order to obtain the "the sum of the estimated times for the Tests that are part of the Test Execution", we can either do an Issue Search based on the Tests and show the Original Estimate column, or we can use an additional add-on to build a calculated custom field.  sumUp add-on is a good solution for that.

With "sumUp" you may define a rule and then you can use a custom field which calculates the value based on it.

a) create a sumUp configuration (e.g., "Estimation")

b) create a custom field (e.g., "TestsEstimation") of type "SumUp Text" and configure it to use the "Estimation" configuration created previously; associate it with the Test Execution screens


Then, in the Issue search page, that information can be shown as a column.

  

In order to obtain the sum of the elapsed time, you may use the "Test Executions Report", which you can export to CSV if you wish to cross-reference the values.


At the Test Execution level, comparing the sum of the estimated times for each Test vs. the sum of the logged time (work logs)

In this case, we assume people log time on the Test Execution issues, and we want to compare that logged time against the estimates made on the Test issues.

In order to obtain the "the sum of the estimated times for each Test", we can follow the same approach we did in 2) for creating a calculated custom field with its value.

We can then compare the value of that custom field (e.g., "TestsEstimation") directly with the "Time Spent" field.

If you prefer to keep the estimate on the Test Execution itself, you may use "TestsEstimation" as an auxiliary field for filling in the "Original Estimate" of the Test Execution. Then you can use the standard "Original Estimate" vs. "Time Spent" comparisons.
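As an illustration of the comparison itself, the sketch below reads both values from a Test Execution through the Jira REST API; the custom field id is hypothetical, so look up the actual id of your "TestsEstimation" field:

```python
import requests

JIRA_BASE_URL = "https://your-jira-instance.example.com"  # placeholder
AUTH = ("user", "api-token-or-password")                  # placeholder credentials
TESTS_ESTIMATION_FIELD = "customfield_10100"              # hypothetical field id

def compare_estimate_vs_logged(test_execution_key: str) -> None:
    """Compare the summed Tests estimation against the time logged on the Test Execution."""
    response = requests.get(
        f"{JIRA_BASE_URL}/rest/api/2/issue/{test_execution_key}",
        params={"fields": f"timespent,{TESTS_ESTIMATION_FIELD}"},
        auth=AUTH,
    )
    response.raise_for_status()
    fields = response.json()["fields"]
    logged_seconds = fields.get("timespent") or 0
    estimation = fields.get(TESTS_ESTIMATION_FIELD)  # format depends on the sumUp configuration
    print(f"{test_execution_key}: TestsEstimation={estimation}, Time Spent={logged_seconds}s")

# compare_estimate_vs_logged("CALC-200")
```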

Recommendations

Whether to use time estimates, log time, or both depends on your use case.

General recommendations:

  • Use the "Original Estimate" on Test issues to estimate the execution duration, so that you can make some calculations based on it.
  • If you're doing testing services, using time estimates for Test Sets may be useful if you want to have an idea of the associated effort of the Tests you aim to implement related with that Test Set.
  • Log execution time in Test Executions right after you finish a given Test Run based on the elapsed time. To obtain the elapsed time, you can use the Test Runs report. A more seamless way may be implemented in the near future.