
To schedule executions of Tests, create Test Executions containing the Tests that should be run.

Internally, Test Executions contain Test Runs: instances of the Tests in the context of that Test Execution, each holding a copy of the Test specification along with the respective result.



Executing Tests in multiple environments

Sometimes you're testing against different target systems, different browsers, different devices, or different database providers.

In that case, if the Tests are exactly the same but you wish to track the results in those different environments, create a separate Test Execution for each environment and assign it the proper Test Environment.

 More info about Test Environments can be found in Working with Test Environments.
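As a minimal sketch of this approach, the snippet below builds one issue-creation payload per environment, in the shape accepted by Jira's `POST /rest/api/2/issue` endpoint. The custom field id for Xray's Test Environments field (`customfield_11805` here) is an assumption; it varies per Jira instance, so look yours up in the field configuration.

```python
# Sketch: one Test Execution per environment, as Jira issue-creation
# payloads. The Test Environments field id below is HYPOTHETICAL and
# instance-specific.

def build_test_execution_payload(project_key, summary, environment,
                                 env_field="customfield_11805"):
    """Build the JSON body for one Test Execution issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Test Execution"},
            "summary": summary,
            env_field: [environment],  # Xray's Test Environments field
        }
    }

environments = ["chrome", "firefox", "android"]
payloads = [
    build_test_execution_payload("CALC", f"Regression run - {env}", env)
    for env in environments
]
# Each payload would then be POSTed to <jira-base-url>/rest/api/2/issue,
# e.g. with requests.post(url, json=payload, auth=auth)
```

Each resulting Test Execution then tracks the same Tests against one environment, keeping the results separable per environment.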

How to manage assignment

The Test Execution provides two levels of assignment:

  • the Test Execution issue itself (since it's an issue, it has an assignee)
  • the individual Test (run) level

Test Executions per test executor

In the simplest scenario, and by default, the individual Test Runs are assigned to the same user as the Test Execution itself.

Therefore, you can create different Test Executions for different persons; each one will contain only the Tests meant to be run by that person. In this case, the Test Execution assignee is the same as the assignee of the individual runs.
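This per-tester pattern can be sketched the same way, setting the assignee at creation time. The helper and the tester-to-Tests mapping below are illustrative; note also that assigning by username is a Jira Server/Data Center convention (Jira Cloud uses account ids instead).

```python
# Sketch: one Test Execution per tester, with the Test Execution
# assignee set in the issue-creation payload. Names and keys are
# illustrative only.

def execution_for_tester(project_key, summary, username):
    """Build a Test Execution payload assigned to one tester."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Test Execution"},
            "summary": summary,
            "assignee": {"name": username},  # Server/DC style assignee
        }
    }

# Which Tests each person is meant to run (hypothetical issue keys);
# the Tests themselves would be added to each execution afterwards.
testers = {"anna": ["CALC-10", "CALC-11"], "bob": ["CALC-12"]}

payloads = [execution_for_tester("CALC", f"Sprint 12 run - {user}", user)
            for user in testers]
```

Because the runs default to the execution's assignee, each tester ends up owning exactly the runs in their own execution.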

Test Executions assigned to QA manager

Some teams have a QA manager/lead that is responsible for managing the lifecycle of the Test Execution, including tasks such as:

  • assuring the execution progress is on track
  • reviewing the results

In this case, it is also common for Test Executions to contain many Tests to be run.

In this scenario, the Test Execution assignee would be the QA manager while the individual Test Runs would be assigned to one or more different persons.

Recommendations

  • avoid cumulative testing assumptions, because they may not be trustworthy; sometimes you have a bunch of Tests that you executed once (some passed and some failed), and you then create Test Executions just for the failing tests, assuming that the other tests still have the same result. However, changes you make related to some faulty/incomplete requirement may affect other requirements; therefore, changes you make due to some failing tests may implicitly affect other tests that were supposedly already OK;
  • automate as much as possible, including your regression testing;
  • take advantage of the fact that the Test Execution is an issue type; that means you can use workflows to track the progress of the Test Execution. Xray provides some workflow possibilities for Test Executions, as mentioned in Global Preferences
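To support the automation recommendation above, results from automated runs can be pushed to Xray in its documented JSON results format (accepted by `POST /rest/raven/1.0/import/execution` on Xray for Jira Server/Data Center), which creates a Test Execution with one Test Run per entry. The sketch below only builds the document; the issue keys and build details are illustrative.

```python
# Sketch: Xray's JSON format for importing automated test results.
# Importing this document creates a Test Execution whose Test Runs
# carry the listed statuses. Keys and summaries are illustrative.
import json

results = {
    "info": {
        "summary": "Automated regression - build 1.2.3",
        "description": "Nightly run against revision abc123",
    },
    "tests": [
        {"testKey": "CALC-1", "status": "PASS"},
        {"testKey": "CALC-2", "status": "FAIL",
         "comment": "Division by zero not handled"},
    ],
}

payload = json.dumps(results)
# The document would then be POSTed, e.g.:
# curl -H "Content-Type: application/json" -X POST \
#      -d @results.json $JIRA_URL/rest/raven/1.0/import/execution
```

Running this import on every CI build gives you one Test Execution per build, which matches the per-revision history discussed in the FAQ below.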

FAQ

Do I need to create a Test Execution every time I need to run the same Tests? Can't I update an existing one?

To preserve history, you should create a Test Execution with the Tests that you want to run in that version/revision of the system. That way, you'll be able to see the results obtained in that specific iteration (i.e., in that Test Execution) against some revision of the system under test.

You can also update the same Test Execution, but you'll lose the benefit of tracking how your test results evolve over time.

How many Test Executions should I create? And when?

Or... "How many Tests should you create?" There is no "correct" or unique answer to this kind of question. You should create as many as needed to ensure the quality of the product you're working on. The quantity and the timing depend on the process and methodology you have implemented. As a general rule, you should start testing as soon as possible and avoid testing only at the end of your development cycle.

If you want to validate some components or subsets of the system, you may create specific Test Executions for them. The how and when depend on your approach.

Do I have to use Sub Test Executions?

No; they're optional. If you don't want to use them, you can just remove the issue type from your project.

Should I use Sub Test Executions? Why?

Sub Test Executions provide an easy way to schedule an execution containing all the Tests that validate a given requirement.

Besides that, they are handled as sub-tasks of the parent requirement. This means you gain some out-of-the-box features, such as:

  • ability to track the progress of the Test Execution in the Agile board, by seeing it in context with the parent requirement
  • TO DO

