...

  1.  "Unified" development process: define a process that can be applied to all teams in the way how to manage the STLC. Every team is different and has its own needs, therefore your process should not be too strict but it should provide some guidance on how development life cycle should be addressed, covering requirement management, bug management and test management. Having teams working completely in different ways, hardens communication and leads to unproper and unoptimized tool usage. If you have a well-defined process that can be used organization wide, better; this is the key to ensure an optimal usage and have the best performance.
  2.  Are you adopting Agile and Scrum? Check out Using Xray in an Agile context for tips on how you can take advantage of Xray in such scenarios. The Agile software development page provides a high-level overview of Agile and Agile Testing; besides background information, it also provides some useful tips so your team can be more Agile and avoid doing unnecessary work.
  3.  Each Xray entity has a purpose/fit that you should try to take advantage of. You're not obliged to use all of them; you can choose to use some instead of others. To make optimal usage of Xray, we recommend first understanding the purpose of each entity.


    Entity / Issue Type

    Purpose

    Test

    For writing the specification of a test; a test case template

    Pre-Condition

    For abstracting an initial condition that one or more tests must ensure; reusable, i.e. it can be linked to one or more test cases

    Test Set

    For creating lists of test cases, so you can easily pick those test cases afterwards in case you need them

    Test Execution

    For scheduling an execution of a set of test cases against some version/revision of the SUT;

    a Test Execution contains several Test Runs, one for each "linked" Test

    Sub Test Execution

    Similar to a Test Execution; the difference between them is that the Sub-Test Execution is a sub-task and can be created within the context of a requirement.

    Creating a Test Execution as a sub-task of the requirement issue provides you the ability to track executions in the Agile board.

    Test Plan

    For grouping multiple Test Executions and presenting a consolidated overview of them; tracks the results of some Tests in some version/sprint of the SUT

    Test Run

    An instance of a Test in the context of some Test Execution; contains a copy of the original Test specification along with the recorded results. It’s not an issue type.

    Test Repository

    A per-project, hierarchical organization of Test cases using folders; an alternative approach to Test Sets for organizing test cases.

    Test Plan Board

    A per-Test Plan, hierarchical organization of Test cases, at the planning phase, using folders.

    It is used for...

    • grouping and organizing the tests in the context of the Test Plan, so you can easily track the results of the Tests grouped in some folder;
    • easily changing the ranking of the Tests, so you can create Test Executions for them afterwards.

Specification

  1.  Avoid having many sub-requirements (>100) per requirement (e.g. Stories per Epic), as it can impact the calculation of their statuses
    1. normally this is a signal that the requirement needs to be further decomposed. Besides making analysis and management harder, it will also require additional resources to compute the requirement's status whenever any of the related sub-requirements changes, which in turn is affected by the status of the related Tests.
  2.  Avoid requirements being covered by many (>>100) Tests
    1. normally this is a signal that the requirement needs to be further decomposed. Besides making analysis and management harder, it will also require additional resources to compute the requirement's status, which in turn is affected by the status of the related Tests.

...

  1.   Using unoptimized JQL queries can degrade performance substantially
    1. most of the time this happens because users don't understand how JQL works; JQL is not like SQL (see Understanding JQL Performance). For instance, filtering issues by project with a "project = <xxx>" clause is not the same as passing the project as an argument to the JQL function itself.
      1. Example: 
        1. Use

          issue in requirements('OK','CALC')

          ...instead of...

          project = 'CALC' and issue in requirements('OK')

  2.  Some JQL functions, such as the ones dealing with requirement coverage, may be more intensive than others, since Xray may have to, for example, load all the related Test Runs in order to obtain the relevant data. The following JQL functions deserve special attention (see the examples after this list):
    1. testPlanTests() - whenever filtering by tests in a given status; the current workaround is to search using the "TestRunStatus" custom field.

      Please note

      When searching for Tests with a certain status inside a Test Plan, we recommend using the custom field search instead.


      Xray has created a new way of searching, with big improvements when filtering by test status, using the custom fields:

      issuetype = Test and TestRunStatus = "DEMO-10 - TODO"

      issuetype = Test and TestRunStatus = "DEMO-10 - TODO environment:IOS"

    2. requirements() - whenever filtering by dates
    3. testExecutionTests() - whenever filtering by tests/requirements in a given status; the cost will depend on the number of Tests in the Test Execution
    4. parentRequirements() - depending on the number of requirements and sub-requirements you have, it can take a while to complete and require some resources
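
    For illustration, consider queries like the following two sketches; the issue keys and statuses are placeholders, and the exact arguments these functions accept may vary across Xray versions:

      issue in testPlanTests('DEMO-10', 'TODO')

      issue in testExecutionTests('DEMO-20', 'FAIL')

    The first is the kind of status search inside a Test Plan that the TestRunStatus custom field shown above replaces; the second can be slow for Test Executions containing many Tests.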

...

  1.  One way of doing reporting is by using gadgets. Gadgets are great for sharing information between team members and even between different teams; however, if not used carefully, they can degrade Jira performance: if all users have the same report on their dashboard, they will probably generate multiple requests whenever users access the dashboard. Thus, use the most intensive gadgets carefully, such as "Historical Daily Requirement Coverage", "Tests Evolution" and others that do aggregations (e.g. the "Test Runs Summary" gadget). Gadgets that just "list" entities should not affect performance significantly.
  2.  Limit the target issues for the reports/gadgets, e.g. generate the Overall Requirement Coverage (report/gadget) just for the issues you really need and not for all Jira requirements or projects; the setting "Max number of requirements per report or gadget results" (available under Miscellaneous) acts as a maximum limit for some reports/gadgets. A sketch of a restrictive filter is shown below.
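
    For instance, a filter like the following sketch (the project key, issue types and version are hypothetical placeholders) restricts a coverage report/gadget to the requirements of a single project and version instead of every requirement in Jira:

      project = CALC and issuetype in (Story, Epic) and fixVersion = "3.0"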

Dashboards

  1.  Choose the filters you use for each gadget properly, in order to restrict the number of issues that will be processed (see the example filter above)
  2.  Don't use short (i.e. aggressive) refresh intervals, as they will add some overhead to the Jira instance
  3.  Use shareable dashboards for the high-level overview and the things that matter, but avoid creating highly complex dashboards
  4.  Try to normalize dashboards and make them a standard organization-wide; it will facilitate communication and avoid "wrong"/unoptimized usage

...