...

Info: Learn more

Please take some time to learn about the terminology used in Xray and the relationships between its entities by looking at Terms and Concepts.


Table of Contents

Managing versioned projects

...

  1. create "requirements" (e.g., Story, Epic, or similar issue types) and associate them with version XPTO through the FixVersion field
  2. create one or more Tests for validating each requirement; typical manual Tests can be created from the requirement issue screen, so they are automatically linked to the requirement. Cucumber automated tests can be created in the same manner, while other automated tests will be written in code and either linked to the requirement directly in the code (see the sketch after this list) or manually after importing their respective results
  3. organize your Tests either in lists (i.e. Test Set issues) or in folders, so you can easily pick them afterwards whenever you need to create executions or plans. Test Sets can also be used as a way to indirectly validate requirements, since you can link them to requirements using the "tests" issue link 

  4. create at least one Test Plan with the Tests you want to validate in version XPTO; don't forget to assign the Test Plan to version XPTO through the FixVersion field

    Info: Learn more

    Please see our Tips for planning tests, which explore the different possibilities you have concerning planning, including in Waterfall and Agile methodologies.


  5. from the Test Plan, create one or more planned Test Executions with the Tests that you want to execute. Each Test Execution is an abstraction of a "task for running some Tests" and can be assigned to specific users. Inside the Test Execution, individual Test Runs may be reassigned to other users
  6. execute the Tests (i.e. Test Runs) in the scope of each Test Execution. For each Test Run, report the status of each step or, if you prefer, just the overall result; you may need to create defects for failed Test Runs, which you can do immediately from a given step or globally at the Test Run level
  7. from the Test Plan, create new Test Executions to validate all Tests or just the ones that are failing, for example
  8. use the immediate status feedback of Test Plan and Test Execution issues, along with reports, to track the progress of your testing; built-in reports, such as the Traceability Report, Overall Requirement Coverage and others, along with custom dashboards, can be used to track relevant information such as open defects
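
As an illustration of step 2, here is a minimal sketch of an automated test linked to its requirement directly in the code. It assumes the optional xray-junit-extensions library for JUnit 5 (the annotation package below may differ between versions), and the issue keys XPTO-123 and XPTO-456 are placeholders:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    // Assumption: annotations provided by the xray-junit-extensions library.
    import app.getxray.xray.junit.customjunitxml.annotations.Requirement;
    import app.getxray.xray.junit.customjunitxml.annotations.XrayTest;

    public class CalculatorTests {

        // Tag the automated test with the requirement it validates; when the
        // enhanced JUnit XML report is imported, Xray can link the result
        // to requirement XPTO-123 (placeholder key).
        @Requirement("XPTO-123")
        // Optional: map the result to an existing Test issue instead of
        // letting the import auto-provision one.
        @XrayTest(key = "XPTO-456")
        @Test
        public void sumTwoNumbers() {
            assertEquals(4, 2 + 2);
        }
    }

If you prefer not to link in code, you can import the results first and create the links to the requirements manually afterwards.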


Managing non-versioned projects

In this use case, your project is not using versions. This may be common in Continuous Delivery scenarios or when you simply don't want to manage versions at all.

How do you implement testing in this scenario, then?

Most probably you're adopting an Agile methodology, such as Scrum.

If this is the case, then you have Sprints and you can use them as a basis to define scope.


Mainly with manual testing

Suppose that you are working in sprint "X" and you want to implement testing in it, in order to make sure that the features you deliver are correct.

Your workflow would be more or less as follows:

  1. create "requirements" (e.g. Story, Epic or other similar issue types) and associate them with sprint X
  2. create one or more Tests for validating each requirement; typical manual Tests can be created from the requirement issue screen, so they are automatically linked to the requirement. Cucumber automated tests can be created in the same manner, while other automated tests will be written in code and either linked to the requirement directly in the code or manually after importing their respective results
  3. organize your Tests either in lists (i.e. Test Set issues) or in folders, so you can easily pick them afterwards whenever you need to create executions or plans. Test Sets can also be used as a way to indirectly validate requirements, since you can link them to requirements using the "tests" issue link 

  4. create at least one Test Plan with the Tests you want to validate in sprint X; don't forget to assign the Test Plan to sprint X

    Info: Learn more

    Please see our Tips for planning tests, which explore the different possibilities you have concerning planning, including in Waterfall and Agile methodologies.


  5. from the Test Plan, create one or more planned Test Executions with the Tests that you want to execute. Each Test Execution is an abstraction of a "task for running some Tests" and can be assigned to specific users. Inside the Test Execution, individual Test Runs may be reassigned to other users
  6. execute the Tests (i.e. Test Runs) in the scope of each Test Execution. For each Test Run, report the status of each step or, if you prefer, just the overall result; you may need to create defects for failed Test Runs, which you can do immediately from a given step or globally at the Test Run level
  7. from the Test Plan, create new Test Executions to validate all Tests or just the ones that are failing, for example (see the JQL sketch after this list)
  8. use the immediate status feedback of Test Plan and Test Execution issues, along with reports, to track the progress of your testing; built-in reports, such as the Traceability Report, Overall Requirement Coverage and others, along with custom dashboards, can be used to track relevant information such as open defects
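
As a sketch of step 7, you can first gather the failing Tests of a Test Plan with JQL and then create a new Test Execution for just those. The snippet below assumes a Jira Server/Data Center instance where Xray provides the testPlanTests() JQL function; the base URL, token and Test Plan key XPTO-100 are placeholders:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class FailingTestsQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder instance URL; adjust to your Jira server.
            String baseUrl = "https://jira.example.com";
            // Xray's testPlanTests() JQL function filters the Tests of a
            // Test Plan by their latest status in that plan.
            String jql = "issue in testPlanTests(\"XPTO-100\", \"FAIL\")";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/rest/api/2/search?jql="
                            + URLEncoder.encode(jql, StandardCharsets.UTF_8)))
                    .header("Authorization", "Bearer <API_TOKEN>") // placeholder token
                    .GET()
                    .build();

            // The response is a JSON page of the matching (failing) Test issues.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }

The same JQL can, of course, be used directly in Jira's issue search when picking the Tests for the new Test Execution.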


Mainly with automated testing

In this case, you will probably be implementing Continuous Integration and Continuous Delivery with the help of automated testing.

How can you adapt your process to this scenario?

Most probably you're adopting an Agile methodology, such as Scrum. If this is the case, then you have Sprints and you can use them as a basis to define scope.

Note that Scrum does not dictate that you make just one delivery at the end of each Sprint; in fact, you can make many during the lifespan of a Sprint.


Suppose that you are working in sprint "X" and you want to implement testing in it, in order to make sure that the features you deliver are correct.

Your workflow would be more or less as follows:

  1. create "requirements" (e.g. Story, Epic or other similar issue types) and associate them with sprint X
  2. create one or more Tests for validating each requirement; in this case, your automated tests will be specified before the actual implementation of the requirement (if you're following TDD) or, in the worst case, after the requirement is implemented. Cucumber automated tests can be created in the same manner, while other automated tests will be written in code and either linked to the requirement directly in the code or manually after importing their respective results (see the import sketch after this list)
  3. organize your Tests either in lists (i.e. Test Set issues) or in folders, so you can easily pick them afterwards whenever you need to create executions or plans. Test Sets can also be used as a way to indirectly validate requirements, since you can link them to requirements using the "tests" issue link 

  4. create at least one Test Plan with the Tests you want to validate in sprint X; don't forget to assign the Test Plan to sprint X

    Info: Learn more

    Please see our Tips for planning tests, which explore the different possibilities you have concerning planning, including in Waterfall and Agile methodologies.


  5. from the Test Plan, create one or more planned Test Executions with the Tests that you want to execute. Each Test Execution is an abstraction of a "task for running some Tests" and can be assigned to specific users. Inside the Test Execution, individual Test Runs may be reassigned to other users
  6. execute the Tests (i.e. Test Runs) in the scope of each Test Execution. For each Test Run, report the status of each step or, if you prefer, just the overall result; you may need to create defects for failed Test Runs, which you can do immediately from a given step or globally at the Test Run level
  7. from the Test Plan, create new Test Executions to validate all Tests or just the ones that are failing, for example
  8. use the immediate status feedback of Test Plan and Test Execution issues, along with reports, to track the progress of your testing; built-in reports, such as the Traceability Report, Overall Requirement Coverage and others, along with custom dashboards, can be used to track relevant information such as open defects
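
To close the loop in a Continuous Integration pipeline, the results of the automated Tests can be pushed back to Xray so that the Test Runs of steps 5 and 6 are created automatically. Below is a minimal sketch that assumes Xray Server/DC's REST endpoint for importing execution results in Xray JSON format; the URL, token and issue keys are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ImportExecutionResults {
        public static void main(String[] args) throws Exception {
            // Assumed endpoint for Xray Server/DC; Xray Cloud uses a different URL.
            String url = "https://jira.example.com/rest/raven/1.0/import/execution";

            // Minimal Xray JSON payload: report one Test result and link the
            // resulting Test Execution to a Test Plan (placeholder keys).
            String payload = """
                    {
                      "info": { "summary": "CI run for sprint X",
                                "testPlanKey": "XPTO-100" },
                      "tests": [ { "testKey": "XPTO-456", "status": "PASS" } ]
                    }""";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Bearer <API_TOKEN>") // placeholder token
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // On success, Xray replies with the key of the created/updated Test Execution.
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Running this as the last step of each build keeps the Test Plan's status up to date after every delivery.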

...