
On this page, you will get a high-level overview of how to implement testing in your project.

Keeping the Test Process in mind will help you clearly identify which phase you are currently in. The different testing phases are mostly implemented as different issue types. More information on each phase is available in the corresponding sections of the User's Guide.


Learn more

Please take some time to learn about the terminology used in Xray and the relationships between its entities by looking at Terms and Concepts.


Managing versioned projects

In this use case, your project has one or more versions that you evolve as needed.

For example, you may start with some requirements for v1.0 and later create a v1.1 or v2.0 release.

How do you then implement testing in this scenario?

Mainly with manual testing

Suppose you are working on version "XPTO" and want to implement testing in it, to make sure the features you deliver are correct.

Your workflow would be more or less as follows:

  1. create "requirements" (e.g. Story, Epic or other similar issue types) and associate them with version XPTO through the FixVersion field
  2. create one or more Tests for validating each requirement; typical manual Tests can be created from the requirement issue screen, so they are automatically linked to the requirement. Cucumber automated tests can be created in the same manner, while other automated tests are written in code and either linked to the requirement directly in the code or manually after importing their respective results
  3. organize your Tests either in lists (i.e. Test Set issues) or in folders, so you can easily pick them afterwards whenever you need to create executions or plans. Test Sets can also be used as a way to indirectly validate requirements, since you can link them to requirements using the "tests" issue link 

  4. create at least one Test Plan with the Tests you want to validate in version XPTO; don't forget to assign the Test Plan to version XPTO through the FixVersion field

    Learn more

    Please see our Tips for planning tests, which explore the different planning possibilities, including Waterfall and Agile methodologies.

  5. from the Test Plan, create one or more planned Test Executions with the Tests that you want to execute. Each Test Execution is an abstraction of a "task for running some Tests" and can be assigned to specific users. Inside the Test Execution, individual Test Runs may be reassigned to other users.
  6. execute the Tests (i.e. Test Runs)
    1. for manual Tests, execute them in the scope of each Test Execution. For each Test Run, report the status of each step or the overall result if you prefer; you may need to create defects for failed Test Runs, which you can do immediately from a given step or globally at Test Run level
    2. for automated Tests, in the CI tool (e.g. Bamboo, Jenkins), run the automated tests and report them to Xray, associating them with the respective Test Plan (see the sketch after this list). In Xray, a Test Execution associated with the Test Plan will be created; it will contain the results for each automated Test. Test entities will be created automatically from the results if they do not already exist
      1. analyze the results of each Test Execution. For each failed Test Run, you may need to manually create defects, which you can do in the execution details screen of the respective Test Run 
  7. from the Test Plan, create new Test Executions containing all Tests or just the ones that are failing, for example
  8. use the immediate feedback provided by Test Plan and Test Execution issues, along with reports, to track the progress of your testing; built-in reports, such as the Traceability Report and Overall Requirement Coverage, along with custom dashboards, can be used to track relevant information such as open defects
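
As a concrete illustration of step 6.b, here is a minimal sketch of reporting automated results from a CI job, assuming Xray Server/DC and its JUnit import endpoint (Xray Cloud uses different endpoints and authentication); the base URL, credentials, report path and issue keys are placeholders.

```python
import requests

JIRA_BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("ci-bot", "secret")                 # placeholder credentials

def import_junit_results(report_path, project_key, test_plan_key):
    """POST a JUnit XML report to Xray; Xray creates a Test Execution linked
    to the given Test Plan and auto-provisions Test issues if needed."""
    url = f"{JIRA_BASE_URL}/rest/raven/1.0/import/execution/junit"
    params = {"projectKey": project_key, "testPlanKey": test_plan_key}
    with open(report_path, "rb") as report:
        response = requests.post(url, params=params, auth=AUTH,
                                 files={"file": report})
    response.raise_for_status()
    return response.json()  # includes the key of the created Test Execution

if __name__ == "__main__":
    print(import_junit_results("target/surefire-reports/TEST-results.xml",
                               "XPTO", "XPTO-123"))
```

Calling something like this at the end of each CI build keeps the Test Plan up to date without any manual work in JIRA.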


Mainly with automated testing

In this case, you will probably be implementing Continuous Integration and Continuous Delivery with the help of automated testing.

How can you adapt your process to this scenario?

Most probably you're adopting an Agile methodology, such as Scrum. If that is the case, then you have Sprints and you can use them as a basis to define scope.

Note that Scrum does not dictate that you make just one delivery at the end of each Sprint; in fact, you can make many during the lifespan of a Sprint.


Suppose you are working on version "XPTO", sprint "X", and want to implement testing in it, to make sure the features you deliver are correct.

Your workflow would be more or less as follows:

  1. create "requirements" (e.g. Story, Epic or other similar issue types) and associate them with version XPTO, through the FixVersion field, and with sprint X
  2. create one or more Tests for validating each requirement; in this case, your automated tests will be specified before the requirement is implemented, if you're following TDD, or, in the worst case, after the requirement is implemented. Cucumber automated tests can be specified in JIRA (and implemented in code), while other automated tests will be written in code and either linked to the requirement directly in the code or manually after importing their respective results

  3. create at least one Test Plan with the Tests you want to validate in version XPTO; don't forget to assign the Test Plan to version XPTO and sprint X. Having a specific Test Plan for tracking regression testing may prove useful.

    Learn more

    Please see our Tips for planning tests, which explore the different planning possibilities, including Waterfall and Agile methodologies.

  4. in the CI tool (e.g. Bamboo, Jenkins), run the automated tests and report them to Xray, associating them with the respective Test Plan (see the sketch after this list). In Xray, a Test Execution associated with the Test Plan will be created; it will contain the results for each automated Test. Test entities will be created automatically from the results if they do not already exist
  5. analyze the results of each Test Execution. For each failed Test Run, you may need to manually create defects, which you can do in the execution details screen of the respective Test Run
  6. use the immediate feedback provided by Test Plan and Test Execution issues, along with reports, to track the progress of your testing; built-in reports, such as the Traceability Report and Overall Requirement Coverage, along with custom dashboards, can be used to track relevant information such as open defects
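
As a sketch of step 4 for Cucumber results, assuming Xray Server/DC (base URL and credentials are placeholders): the endpoint below creates a Test Execution from the cucumber.json report produced by the test run. Xray also offers a multipart variant of this endpoint that accepts an extra JSON part with Test Execution fields, which is one way to set the Test Plan link and FixVersion at import time.

```python
import requests

JIRA_BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("ci-bot", "secret")                 # placeholder credentials

def import_cucumber_results(report_path):
    """POST the cucumber.json produced by the test run to Xray, which
    creates a Test Execution containing one Test Run per scenario."""
    url = f"{JIRA_BASE_URL}/rest/raven/1.0/import/execution/cucumber"
    with open(report_path, "rb") as report:
        response = requests.post(url, auth=AUTH, data=report,
                                 headers={"Content-Type": "application/json"})
    response.raise_for_status()
    return response.json()  # key of the created Test Execution

if __name__ == "__main__":
    print(import_cucumber_results("reports/cucumber.json"))
```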

Managing non-versioned projects

In this use case, your project is not using versions. This may be common in Continuous Delivery scenarios or in the case where you simply don't want to manage versions at all.

How do you then implement testing in this scenario?

Most probably you're adopting an Agile methodology, such as Scrum.

If that is the case, then you have Sprints and you can use them as a basis to define scope.


Mainly with manual testing

Suppose you are working in sprint "X" and want to implement testing in it, to make sure the features you deliver are correct.

Your workflow would be more or less as follows:

  1. create "requirements" (e.g. Story, Epic or other similar issue types) and associate them with sprint X
  2. create one or more Tests for validating each requirement; typical manual Tests can be created from the requirement issue screen, so they are automatically linked to the requirement (see the sketch after this list). Cucumber automated tests can be created in the same manner, while other automated tests are written in code and either linked to the requirement directly in the code or manually after importing their respective results
  3. organize your Tests either in lists (i.e. Test Set issues) or in folders, so you can easily pick them afterwards whenever you need to create executions or plans. Test Sets can also be used as a way to indirectly validate requirements, since you can link them to requirements using the "tests" issue link 

  4. create at least one Test Plan with the Tests you want to validate in sprint X; don't forget to assign the Test Plan to sprint X

    Learn more

    Please see our Tips for planning tests, which explore the different planning possibilities, including Waterfall and Agile methodologies.

  5. from the Test Plan, create one or more planned Test Executions with the Tests that you want to execute. Each Test Execution is an abstraction of a "task for running some Tests" and can be assigned to specific users. Inside the Test Execution, individual Test Runs may be reassigned to other users.
  6. execute the Tests (i.e. Test Runs)
    1. for manual Tests, execute them in the scope of each Test Execution. For each Test Run, report the status of each step or the overall result if you prefer; you may need to create defects for failed Test Runs, which you can do immediately from a given step or globally at Test Run level
    2. for automated Tests, in the CI tool (e.g. Bamboo, Jenkins), run the automated tests and report them to Xray, associating them with the respective Test Plan. In Xray, a Test Execution associated with the Test Plan will be created; it will contain the results for each automated Test. Test entities will be created automatically from the results if they do not already exist
      1. analyze the results of each Test Execution. For each failed Test Run, you may need to manually create defects, which you can do in the execution details screen of the respective Test Run 
  7. from the Test Plan, create new Test Executions containing all Tests or just the ones that are failing, for example
  8. use the immediate feedback provided by Test Plan and Test Execution issues, along with reports, to track the progress of your testing; built-in reports, such as the Traceability Report and Overall Requirement Coverage, along with custom dashboards, can be used to track relevant information such as open defects
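
As a sketch of step 2, here is one way to create a Test and link it to a requirement programmatically through the standard JIRA REST API, assuming Xray Server/DC and its default "Tests" issue link type (the link type name may differ in your instance); the base URL, credentials and issue keys are placeholders.

```python
import requests

JIRA_BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("qa-lead", "secret")                # placeholder credentials

def create_linked_test(project_key, summary, requirement_key):
    # 1. create the Test issue
    issue = {"fields": {"project": {"key": project_key},
                        "summary": summary,
                        "issuetype": {"name": "Test"}}}
    resp = requests.post(f"{JIRA_BASE_URL}/rest/api/2/issue",
                         json=issue, auth=AUTH)
    resp.raise_for_status()
    test_key = resp.json()["key"]

    # 2. link it to the requirement; with Xray's link type the Test sits on
    #    the outward ("tests") side and the requirement on the inward side
    link = {"type": {"name": "Tests"},
            "outwardIssue": {"key": test_key},
            "inwardIssue": {"key": requirement_key}}
    requests.post(f"{JIRA_BASE_URL}/rest/api/2/issueLink",
                  json=link, auth=AUTH).raise_for_status()
    return test_key

if __name__ == "__main__":
    print(create_linked_test("XPTO", "Validate login with invalid password",
                             "XPTO-42"))
```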

Mainly with automated testing

In this case, you will probably be implementing Continuous Integration and Continuous Delivery with the help of automated testing.

How can you adapt your process to this scenario?

Most probably you're adopting an Agile methodology, such as Scrum. If that is the case, then you have Sprints and you can use them as a basis to define scope.

Note that Scrum does not dictate that you make just one delivery at the end of each Sprint; in fact, you can make many during the lifespan of a Sprint.


Suppose you are working in sprint "X" and want to implement testing in it, to make sure the features you deliver are correct.

Your workflow would be more or less as follows:

  1. create "requirements" (e.g. Story, Epic or other similar issue types) and associate them with sprint X
  2. create one or more Tests for validating each requirement; in this case, your automated tests will be specified before the requirement is implemented, if you're following TDD, or, in the worst case, after the requirement is implemented. Cucumber automated tests can be specified in JIRA (and implemented in code), while other automated tests will be written in code and either linked to the requirement directly in the code or manually after importing their respective results

  3. create at least one Test Plan with the Tests you want to validate in sprint X; don't forget to assign the Test Plan to sprint X. Having a specific Test Plan for tracking regression testing may prove useful.

    Learn more

    Please see our Tips for planning tests, which explore the different planning possibilities, including Waterfall and Agile methodologies.

  4. in the CI tool (e.g. Bamboo, Jenkins), run the automated tests and report them to Xray, associating them with the respective Test Plan. In Xray, a Test Execution associated with the Test Plan will be created; it will contain the results for each automated Test. Test entities will be created automatically from the results if they do not already exist
  5. analyze the results of each Test Execution. For each failed Test Run, you may need to manually create defects, which you can do in the execution details screen of the respective Test Run
  6. use the immediate feedback provided by Test Plan and Test Execution issues, along with reports, to track the progress of your testing (see the sketch after this list); built-in reports, such as the Traceability Report and Overall Requirement Coverage, along with custom dashboards, can be used to track relevant information such as open defects
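
As a sketch of step 6, the snippet below pulls the Tests of a Test Plan and summarizes their latest statuses, assuming Xray Server/DC and its Test Plan REST endpoint; the base URL, credentials and Test Plan key are placeholders, and the exact response fields (such as latestStatus) may vary between Xray versions.

```python
import collections
import requests

JIRA_BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("qa-lead", "secret")                # placeholder credentials

def test_plan_progress(test_plan_key):
    """Fetch the Tests of a Test Plan and count them by latest status,
    e.g. for a nightly progress report outside of JIRA dashboards."""
    url = f"{JIRA_BASE_URL}/rest/raven/1.0/api/testplan/{test_plan_key}/test"
    resp = requests.get(url, auth=AUTH)
    resp.raise_for_status()
    counts = collections.Counter(t.get("latestStatus", "TODO")
                                 for t in resp.json())
    return dict(counts)

if __name__ == "__main__":
    print(test_plan_progress("XPTO-123"))  # e.g. {'PASS': 30, 'FAIL': 2}
```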

