
Xray's roadmap is continuously reviewed and redefined. We update it often, based on the feedback we receive from our clients and internal stakeholders.

Our release plan is available in our Jira issue tracker. Feel free to view and to vote on the issues that you would like to see implemented (account registration required).

Here you can find a list of features that define our main goals for future releases. This doesn't mean that other, potentially smaller, features won't be implemented as well.

Shipped


V1.10


The Test Steps component received a major facelift, providing a wider, more usable UI with the ability to edit all step-related fields at once in a grid or column-based layout.

Editing, reading and navigating through Test steps are now much easier with this new UI.

Along with the Test steps UI revamp, Xray now also provides the ability to define custom step fields for manual test cases that complement the standard ones (Action/Step, Data, Expected Result). The standard fields can also be hidden if desired. All of this can be configured in the project settings.

V1.12


You can now define additional custom fields for Test Runs. These fields are useful for adding extra information to Test Runs that is usually only available during or after executing Tests.

Test Run custom fields can be configured by project and by Test Type. Therefore, these settings will not affect other projects within your Jira instance. For example, it is possible to have custom fields just for Manual Tests within a project.


Reporting

The Test Runs List report and gadget can already display Test Run Custom Field values for each Test Run.

Within a Test Execution issue, it is also possible to display Test Run Custom Field columns and to filter Test Runs by Test Run Custom Field values.


Learn more about Test Run Custom Fields here.


V2.0


Test parameterization is a powerful practice that allows the same test to be executed multiple times with different parameters. Parameters are similar to input values (variables) that can change with each execution.

Parameterized tests in Xray are defined just like any other test, with the addition of parameter names within the specification using the following notation: ${PARAMETER_NAME}. This notation is used to reference parameters within the test steps.
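For illustration only (the parameter names below are hypothetical), a manual test step referencing two parameters could look like this:

    Action:          Open the login page and sign in with ${username} and ${password}
    Expected Result: The user ${username} is logged in and the dashboard is shown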

Precondition issues can also be parameterized by including parameter names in the precondition specification.

The parameters, along with their values, are defined within a dataset. A dataset is a collection of data represented with a tabular view where every column of the table represents a particular variable (or parameter), and each row corresponds to a given record (or iteration) of the dataset. The number of rows in the dataset determines the number of iterations to execute.
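As an illustration for the step above (parameter names and values are hypothetical), a dataset could be:

    username | password
    ---------+----------
    admin    | s3cr3t
    guest    | guest123

With two rows, this dataset would produce two iterations when the test is executed.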

A dataset can be defined in the following entities/scopes:

  1. Test (default dataset)
  2. Test Plan - Test
  3. Test Execution - Test (Test Run)

The dataset closest to the test run will be the one used to generate the iterations, effectively overriding any dataset defined at higher levels.



All iterations for a given test are executed within the context of the same test run. Each iteration can be expanded, and its steps executed individually. The step parameters will be replaced by the corresponding iteration values. The step statuses affect the iteration status, which, in turn, affects the overall test run status.
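Continuing the hypothetical example above, iteration 1 would present the step with its parameters replaced by the dataset values:

    Action:          Open the login page and sign in with admin and s3cr3t
    Expected Result: The user admin is logged in and the dashboard is shown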


Learn more about parameterized tests here.

V3.0


Modular test design is a way of promoting test case reusability and composition across a large test repository. To design modular tests, you create a test where some of the steps call or include other test cases. This prevents testers from having to write the same steps over and over again in different high-level tests. Using a modular design approach, any test can become a building block of more extensive test scenarios, while still being executable individually if needed.

A called test can, in turn, also call other tests. You can compose a test scenario with up to five levels of depth.

Modular tests can also be parameterized. When calling a test, you can provide new parameter values based on the parent test's data.

Upon execution, Xray will unfold all called test steps in the test run. This is transparent to testers, as they only have to follow and execute the steps on the execution, even though the steps might come from different Test issues.

A common use case for modular tests is end-to-end testing. End-to-end tests often need to pass through the same area or component of the application before asserting the final result. With modular test design, you can reuse the tests for these common areas or components.
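As a hypothetical sketch (issue keys and step text are illustrative only), a checkout test could reuse a login test as its first step:

    Test CALC-100 "Checkout an item"
      Step 1: Call Test CALC-10 "Login" (passing username/password parameters)
      Step 2: Add an item to the shopping cart
      Step 3: Complete the checkout and verify the order confirmation

At execution time, step 1 would be unfolded into the individual steps of CALC-10, so testers follow a single flat list of steps.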



Learn more about modular tests here.

V3.1


The BDD Step Library is a project-level library containing all the Gherkin steps used by the Tests/Preconditions belonging to the project. It provides an overview of all the Gherkin steps used in the context of each project, allowing users to easily manage and refactor them.

You can also configure heuristics to recognize variables within the steps. This way, Xray is able to identify a single step from multiple steps within Tests/Preconditions where only the parameter values vary.
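For example (step text and placeholder syntax are illustrative only), such heuristics could treat the following two steps, coming from different scenarios, as the same library step with a variable:

    Given I have 2 apples in my basket
    Given I have 5 apples in my basket

    Recognized library step: Given I have <number> apples in my basket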

It is also possible to create static steps. Static steps can be created manually within the BDD library and are not deleted even if there are no Tests/Preconditions using these steps.

Besides project libraries, it is also possible to configure global BDD libraries that can be accessed from multiple projects. Hence, if you have multiple Jira projects using the same Gherkin steps, you can use the same BDD library in Xray.

When creating BDD tests or preconditions, you can now use auto-complete to reuse a step that you already have in the library.



Find out more about this feature here.

Xray now provides a time tracking module within the execution page. This module allows users to record the time spent executing a test by controlling a stopwatch. 

The stopwatch is also synchronized with the Test Run status. When you start executing a step, the stopwatch starts counting. When you set a final status on the Test Run, the stopwatch stops.


You can also log work directly from the execution screen, just like you do within Jira issues. Any logged time will be added automatically to the Test Execution issue log work.




Learn more here.


In the works


The goal behind the testing board is to provide a centralized hub for accessing Xray entities and activities in order to improve navigation and discoverability. For now, this board features the Test repository, the Test Plan board, and project reports. 

The testing board will be improved to include other Xray entities such as Preconditions, Test Sets, and Test Executions. The Test Run and execution screen will have the testing board in context so that users can navigate to any other test activity.


Wish list for future versions


This feature, Test natures, will cover a common scenario where manual Test cases evolve into automated Tests. Currently, the only option is to create separate Test issues to cover all the natures of a Test.

With Test natures, multiple Test definitions can coexist within the same Test case. Users will also be able to set the current definition so that Test Runs are always created using the "current" definition.

This feature will allow test engineers to create or generate environments with variables such as Browser, Operating System, Database, etc. These environments (or configurations) can be managed and assigned to Test Plans and Test Plan folders. Test Executions can then be created automatically for each environment/configuration.

Xray will provide a report to track the progress of Test Runs by environment in the context of a specific Test Plan.

The Xray Connector app for Bamboo will be updated in order to support Xray cloud APIs and connectivity.

Currently, a Test Plan is composed of a static list of Test cases. This means you must explicitly add the Tests to the Test Plan. If the Tests are all known and well defined when you start your Test Plan, this works well. However, if you are working in an agile context where a Test Plan is created for a specific sprint, Tests will only be specified during the sprint and must later be added to the Test Plan. This process is not ideal because users might forget to add the Tests to the Test Plan.

Dynamic Test Plans can be defined with a JQL query, which will be the source of Test cases. Considering the use case described above, we can define a JQL query that gets all Tests covering the requirements of a specific sprint.
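As a purely hypothetical sketch of such a query (the project key and the requirementTests() function below do not exist and are only meant to convey the idea of selecting Tests that cover the requirements planned for a given sprint), it might resemble:

    project = CALC AND issuetype = Test AND issue in requirementTests("Sprint 12")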

This report will feature a list of Test Execution issues along with metrics. Possible usage scenarios:

  • analyze both the progress of the Test Execution and the success rate (i.e., the % of Tests contributing to the requirement's OK status)
  • see the number of manual Tests vs. others in the Test Execution
  • see the overall execution status (i.e., the current status of the Test Runs)
  • see the number of opened/closed linked defects, in the context of the Test Execution

Refer to the Test Executions Report on Xray server.

This report will show a daily historical view of requirements coverage. Users will be able to analyze the evolution of the requirements coverage status over time for a particular analysis scope: Latest, Version, or Test Plan (for each Environment). This way, they can estimate whether they will keep the planned release date according to the coverage status trend.


