
You have specified your Tests and organized them using Test Sets or the Test Repository, and now you want to properly address the planning phase of your testing, right?

In fact, you may also start by creating a Test Plan first and then create the Tests and add them to that Test Plan.
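If you manage Tests programmatically, this association can also be done through the REST API. Below is a minimal sketch, assuming an Xray server instance and its endpoint for adding Tests to a Test Plan; the base URL, credentials, and issue keys are placeholders to adapt to your own instance.

```python
import requests

JIRA_BASE_URL = "https://yourjira.example.com"  # placeholder
AUTH = ("some_user", "some_password")           # placeholder

def add_tests_to_test_plan(test_plan_key, test_keys):
    """Associate existing Tests with a Test Plan (Xray server REST API)."""
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/raven/1.0/api/testplan/{test_plan_key}/test",
        json={"add": test_keys},
        auth=AUTH,
    )
    response.raise_for_status()

# Example: add two Tests to the Test Plan CALC-10 (placeholder keys).
add_tests_to_test_plan("CALC-10", ["CALC-33", "CALC-34"])
```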

Planning your testing is also dependent on the methodology and process you're adopting.

Some usage examples follow next; feel free to adapt them to your own internal process, in the way that best fits your team.


Planning, from an "aim" perspective

Instead of creating one Test Plan for your "release", you may create multiple Test Plans to track different groups of Tests.

This may be useful if you wish to have clear visibility of how certain groups of Tests are progressing.

Some examples of different Test Plans you might create (see the sketch after this list):

  • for Tests related to new features
  • for regression testing (i.e. Tests related to features implemented in previous versions)
  • for security-related Tests
  • for performance-related Tests
  • for conformance-related Tests
  • for non-functional Tests
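Since Test Plans are regular Jira issues, one way to bootstrap this structure is to create one Test Plan per concern through Jira's standard issue-creation endpoint. This is a sketch assuming the default "Test Plan" issue type name; the project key, base URL, and credentials are placeholders.

```python
import requests

JIRA_BASE_URL = "https://yourjira.example.com"  # placeholder
AUTH = ("some_user", "some_password")           # placeholder

def create_test_plan(project_key, summary):
    """Create a Test Plan issue using Jira's standard REST API."""
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue",
        json={
            "fields": {
                "project": {"key": project_key},
                "summary": summary,
                "issuetype": {"name": "Test Plan"},
            }
        },
        auth=AUTH,
    )
    response.raise_for_status()
    return response.json()["key"]

# One Test Plan per concern, for release 3.0 of a hypothetical CALC project.
for concern in ["New features", "Regression", "Security", "Performance"]:
    key = create_test_plan("CALC", f"{concern} testing - v3.0")
    print(concern, "->", key)
```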


Please note

This approach is independent of the methodology being used. This means you can follow the tips described next for the different methodologies and combine them with this one, in order to gain additional visibility over certain groups of Tests.


Planning, from a "methodology" perspective

Agile

Scrum

If you're following Agile, namely Scrum, you'll have sprints of one or more weeks, where each one represents an iteration at the end of which you have a shippable product. Testing will hopefully occur throughout the sprint, or else closer to its end. In any case, you can have a Test Plan per sprint to track the results of the Tests you want to execute, including the ones that validate the features implemented in that sprint. Besides this, you may also want to have a Test Plan related to regression testing, to ensure you're not breaking anything in the sprint.
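When test automation runs in CI during the sprint, results can be reported against the sprint's Test Plan. The sketch below assumes Xray server's JSON results-import endpoint, where the testPlanKey field in the info object links the generated Test Execution to that plan; all keys and credentials are placeholders.

```python
import requests

JIRA_BASE_URL = "https://yourjira.example.com"  # placeholder
AUTH = ("some_user", "some_password")           # placeholder

# Xray JSON results: the Test Execution created by the import is linked
# to the sprint's Test Plan via info.testPlanKey.
results = {
    "info": {
        "summary": "Sprint 12 - automated run",
        "testPlanKey": "CALC-10",  # the sprint's Test Plan (placeholder)
    },
    "tests": [
        {"testKey": "CALC-33", "status": "PASS"},
        {"testKey": "CALC-34", "status": "FAIL"},
    ],
}

response = requests.post(
    f"{JIRA_BASE_URL}/rest/raven/1.0/import/execution",
    json=results,
    auth=AUTH,
)
response.raise_for_status()
```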

Waterfall

In this scenario, testing is done as a separate phase at the end of development. Testing may uncover bugs, which leads to going back to requirements analysis/development, waiting for the changes/fixes to be made, and then testing once again.

Having a Test Plan assigned to the version currently being developed, as a way to group the results from multiple testing cycles, may be enough.
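Each testing cycle can be represented by its own Test Execution and then associated with the version's Test Plan. A minimal sketch, assuming the Xray server endpoint for associating Test Executions with a Test Plan (base URL, credentials, and keys are placeholders):

```python
import requests

JIRA_BASE_URL = "https://yourjira.example.com"  # placeholder
AUTH = ("some_user", "some_password")           # placeholder

def add_executions_to_test_plan(test_plan_key, execution_keys):
    """Associate existing Test Executions with a Test Plan (Xray server REST API)."""
    response = requests.post(
        f"{JIRA_BASE_URL}/rest/raven/1.0/api/testplan/{test_plan_key}/testexecution",
        json={"add": execution_keys},
        auth=AUTH,
    )
    response.raise_for_status()

# Group the Test Executions from three testing cycles under the version's Test Plan.
add_executions_to_test_plan("CALC-10", ["CALC-101", "CALC-102", "CALC-103"])
```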

Iterative Waterfall

In this approach, a release is split into internal mini-releases, each with a subset of the features implemented; some of those features may not yet be complete/stable/finished.

Each intermediate release may have one specific Test Plan as a way to track the Tests, and their results, in the context of that intermediate release. The Test Executions made in the context of each of these Test Plans can also be associated with a broader Test Plan, so their results also get reflected there.

In sum, you can simply create one Test Plan for tracking all the Tests you wish to validate in that version, along with additional Test Plans, one per iteration/intermediate release. Then, you need to ensure that all Test Executions created in the context of your intermediate release's Test Plan are also reflected in the "global" Test Plan.
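To keep the "global" Test Plan in sync, each new Test Execution can be linked to both the iteration's Test Plan and the global one. A sketch reusing the same assumed Xray server association endpoint (all keys and credentials are placeholders):

```python
import requests

JIRA_BASE_URL = "https://yourjira.example.com"  # placeholder
AUTH = ("some_user", "some_password")           # placeholder

def link_execution_to_plans(execution_key, test_plan_keys):
    """Associate one Test Execution with several Test Plans (Xray server REST API)."""
    for plan_key in test_plan_keys:
        response = requests.post(
            f"{JIRA_BASE_URL}/rest/raven/1.0/api/testplan/{plan_key}/testexecution",
            json={"add": [execution_key]},
            auth=AUTH,
        )
        response.raise_for_status()

# Reflect the iteration's execution in both the iteration plan and the global plan.
link_execution_to_plans("CALC-120", ["CALC-11", "CALC-10"])  # iteration, global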

