Overview
JQL (Jira Query Language) functions in Xray provide querying capabilities that help you retrieve Test-related information efficiently. These functions enable you to explore relationships between Xray Issue Types, such as Tests, Test Sets, Test Executions, Requirements, and more.
By leveraging these functions, teams can filter and analyze data for better Test management and reporting.
JQL functions are currently in Beta. A feature toggle will be available so that you can enable or disable them as needed; if you find the current limitations too restrictive, you can turn the functions off.
Additionally, this feature is currently part of a closed Beta program and is not generally available.
If you are interested in participating in this program, please submit your access request here. Our team will review and provide further guidance.
How It Works
Xray extends Jira's native JQL functionality by introducing custom functions that interact with Test-related Issues. These functions can be used within Jira's Issue Search page to refine queries and extract meaningful insights.
Each function accepts specific parameters, such as Issue keys, project identifiers, saved filters, Test statuses, and environments, and returns the matching Issues, allowing teams to track Test coverage, identify gaps, and monitor execution results.
The JQL functions in Xray follow a structured format where users can input key parameters, such as:
Test Issue keys.
Test Set Issue keys.
Requirement Issue keys.
Test Execution statuses.
Fix Versions.
Environments.
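For example, a typical query combines a standard JQL clause with one of these functions. The sketch below (using a hypothetical issue key) returns the Tests that failed in a given Test Execution:

```
issuetype = 'Test' and issue in testExecutionTests('DEMO-9', 'FAIL')
```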
Impact
Using these JQL functions enhances Test management operations by:
Improving traceability: enables linking Tests, Requirements, and Test Executions for comprehensive tracking.
Enhancing efficiency: reduces manual efforts in identifying relationships between different Test entities.
Supporting advanced reporting: helps generate precise Test execution and Requirement coverage reports.
Ensuring quality assurance: identifies gaps in Test coverage and monitors defect trends effectively.
JQL Functions
The following JQL functions are available to query Xray Issues on the Issue Search page.
They enable you to query the relationships between Xray Issue Types.
JQL Function | Parameters | Description | Example |
---|---|---|---|
testTestSet | P1 - Test Issue Key | Returns a list of Test Set Issues associated with the input Test issue key | issuetype = 'Test Set' and key in testTestSet('DEMO-1') |
testSetTests | P1 - Test Set Issue Key or Filter Name/ID of Test Sets | Returns a list of Test Issues associated with the input Test Set Issue key | (1) issuetype = 'Test' and key in testSetTests('DEMO-5') (2) issuetype = 'Test' and key in testSetTests('Test sets saved filter') |
testsWithNoTestSet | P1 - Saved filter Name/ID | Returns a list of Test Issues not associated with a Test Set | (1) issue in testsWithNoTestSet() (2) issue in testsWithNoTestSet("saved_filter") |
testPreConditions | P1 - Test Issue Key | Returns the Pre-Condition Issues associated with the input Test issue key | issuetype = 'Precondition' and key in testPreConditions('DEMO-1') |
preConditionTests | P1 - Precondition Issue Key | Returns the Test Issues associated with the input Pre-Condition Issue key | issuetype = 'Test' and key in preConditionTests('DEMO-1') |
testRequirements | P1 - Test Issue Key or Filter Name/ID of Tests | Returns a list of Requirement Issues associated with the input Test Issue key or saved filter of Tests | (1) key in testRequirements('DEMO-1') (2) issuetype = 'Feature' and key in testRequirements('Tests saved filter') |
requirementTests | P1 - Requirement Issue Key or Filter Name/ID of Requirement Issues | Returns a list of Test Issues associated with the input Requirement issue key or saved filter with Requirements | (1) issuetype = 'Test' and key in requirementTests('DEMO-10') (2) issuetype = 'Test' and key in requirementTests('Requirements saved filter') |
testsWithReqVersion | P1 - Project Name/Key/Id P2 - Fix Version P3 - Fix Version (Optional) ... Pn - Fix Version (Optional) | Returns a list of Test Issues associated with the Requirement issues of the input Fix Versions of the specified project | issuetype = 'Test' and issue in testsWithReqVersion('DEMO', 'v1.0', 'v1.1') |
testExecutionTests | P1 - Test Execution Issue Key or Filter Name/ID P2 - Test Run Status list separated by "|" (pipe) (Optional) P3 - User assigned to execute the Test Run (Optional) P4 - Defects Flag, "true" or "false" (Optional) P5 - User who executed the Test Run (Optional) P6 - Existence of comments (Optional) P7 - Existence of evidence (Optional) P8 - Started from (a leading > is read as "on or after", < as "on or before", and = or no sign as "exactly") (Optional) P9 - Finished on (same sign convention as P8) (Optional) | Returns a list of Test Issues associated with the input Test Execution Issues from P1, optionally filtered by the current Test Run status of each Test Issue. P1 can be either a single Test Execution Issue key or a saved filter containing multiple Test Execution Issues. Possible Test Run status values are PASS, FAIL, EXECUTING, ABORTED, TODO, and all custom statuses. P3 is the user assigned to execute the Test Run, while P5 is the user who actually executed it. To analyze the joint values of all Test Run assignees, use ""; to take into account Test Runs without any assignee, use "__NULL__". If you pass "true" as the value of P4, the query returns all Tests from the given Test Executions where no Defects were created. | (1) issuetype = 'Test' and issue in testExecutionTests('DEMO-9') (2) issuetype = 'Test' and issue in testExecutionTests('DEMO-9', 'PASS') (3) issuetype = 'Test' and issue in testExecutionTests('DEMO-9', 'PASS', 'user A') (4) issuetype = 'Test' and issue in testExecutionTests('Saved Test Execution Filter', 'PASS') (5) issuetype = 'Test' and issue in testExecutionTests('Saved Test Execution Filter', '', 'user A') (6) issue in testExecutionTests('Saved Test Execution Filter', '', 'user A', 'true') (7) issue in testExecutionTests('Saved Test Execution Filter', '', '', 'false', 'admin') (8) issuetype = 'Test' and issue in testExecutionTests('CALC-397', '', '', 'false', '', 'true', 'false', '>2016-05-31', '<2016-06-30') |
testsWithoutTestExecution | P1 - Saved filter Name/ID | Returns a list of Tests that are not associated with a Test Execution to be executed | (1) issuetype = Test and issue in testsWithoutTestExecution() (2) issuetype = Test and issue in testsWithoutTestExecution("saved_filter") |
requirements | P1 - Status list separated by "|" (pipe) P2 - Project (Optional) P3 - Version to calculate Requirement status (Optional) P4 - Test Environment (Optional) P5 - Flat (Optional) P6 - ToDate (Optional) P7 - Saved Filter (Optional) | Returns a list of Requirement Issues with the provided coverage status. Provide the Project parameter (P2) to restrict the Requirements to the specified project. When analyzing a specific version, both the Project and Version parameters must be filled. Optional filters include: Test Environment, to take into account only the Test Executions made for that environment (use "" to analyze the joint values of all environments, and "__NULL__" to take into account Test Executions without any Test Environment assigned); Flat, which indicates whether all Requirements (not only parents) should be searched (if not provided, the default value is 'false'); ToDate, which considers only Requirement executions before a specific date/time (the date literal must follow the ISO 8601 format); and Saved Filter, which considers only Requirements from that specific filter. | (1) issue in requirements('OK','Calculator') - although optional, it is highly recommended to specify the Project parameter to define the project containing the Requirements and thus reduce the number of Issues that will be processed and returned; otherwise, Requirements from all Jira projects will be processed, which is probably not what you want. (2) priority = Major and fixVersion <= 'v3.0' and issue in requirements('NOK', 'Calculator', 'V4.0') (3) issue in requirements('NOK', '', '', '', '', '2014-01-01') (4) issue in requirements('OK', 'Calculator', 'v1.0', 'chrome', 'false', '2014-08-30') (5) issue in requirements('NOK', 'Calculator', 'v2.0', 'true') |
requirementsWithStatusByTestPlan | P1 - Status list separated by "|" (pipe) P2 - Test Plan Issue Key P3 - Test Environment (Optional) P4 - Flat (Optional) P5 - ToDate (Optional) P6 - Project (Optional) P7 - Saved Filter (Optional) | Returns a list of Requirement Issues with the coverage status calculated for the given Test Plan Issue. Optional filters include: Test Environment, to take into account only the Test Executions made for that environment (use "" to analyze the joint values of all environments, and "__NULL__" to take into account Test Executions without any Test Environment assigned); Flat, which indicates whether all Requirements (not only parents) should be searched (if not provided, the default value is 'false'); ToDate, which considers only Requirement executions before a specific date/time (the date literal must follow the ISO 8601 format); and Project and Saved Filter, which consider only Requirements from that specific project or filter. | (1) issue in requirementsWithStatusByTestPlan('OK', 'TP-123') (2) issue in requirementsWithStatusByTestPlan('NOK', 'TP-123', '', 'true') (3) issue in requirementsWithStatusByTestPlan('NOK', 'TP-123', 'Android', 'false', '2014-01-01') |
defectsCreatedDuringTesting | P1 - Test Issue Key or Filter Name/ID of Test Issues (Optional) | Returns a list of Defects created during the execution of the specified Tests | (1) issue in defectsCreatedDuringTesting() (2) issue in defectsCreatedDuringTesting("TEST-123") (3) issue in defectsCreatedDuringTesting("saved_filter") |
defectsCreatedDuringTestExecution | P1 - Test Execution issue Key or Filter Name/ID of Test Executions P2 - List of users separated by "|" (pipe) (Optional) | Returns a list of Defects created during the execution of the specified Test Executions; can optionally be filtered by the Defect Issue Assignee username | (1) issue in defectsCreatedDuringTestExecution(TEST-123) (2) issue in defectsCreatedDuringTestExecution(saved_filter) (3) issue in defectsCreatedDuringTestExecution(saved_filter, 'user1|user2') (4) issue in defectsCreatedDuringTestExecution(TEST-123, 'user1|user2') |
defectsCreatedForRequirement | P1 - Requirement key or Filter Name/ID of Requirement Issues | Returns a list of defects created during the execution of Tests covering the specified Requirements | (1) issue in defectsCreatedForRequirement("REQ-123") (2) issue in defectsCreatedForRequirement("saved_filter") |
manualTestsWithoutSteps | P1 - Filter Name/ID | Returns a list of Manual Tests that have no Test Steps | (1) issue in manualTestsWithoutSteps() (2) issue in manualTestsWithoutSteps("saved_filter") |
testTestExecutions | P1 - Test Issue Key or Filter Name/ID of Test issues P2 - Test Run Status list separated by "|"(pipe) (Optional) | Returns a list of Test Executions associated with the input Test Issues from P1 optionally filtered by the current Test status in each Test Execution Issue. Parameter P1 can either be a single Test Issue key or a saved filter name or ID containing multiple Test Issues. Possible Test Run Status values are: PASS, FAIL, EXECUTING, ABORTED, TODO and all custom statuses. | (1) issuetype = 'Test Execution' and issue in testTestExecutions('DEMO-9') (2) issuetype = 'Test Execution' and issue in testTestExecutions('DEMO-9', 'PASS') (3) issuetype = 'Test Execution' and issue in testTestExecutions( 'Saved Test Filter', 'PASS') |
testExecWithTestRunsAssignedToUser | P1 - Username (Optional) P2 - Status (Optional; Username is required when this parameter is used) | Returns a list of Test Executions in which a user has at least one Test Run assigned to them. You can optionally specify a user with P1; if the user is omitted, the current user is used. Note that if you are not logged in to Jira, a user must be specified. If you use the Status parameter, then the user is required. | (1) issuetype = 'Test Execution' and issue in testExecWithTestRunsAssignedToUser() (2) issuetype = 'Test Execution' and issue in testExecWithTestRunsAssignedToUser('userDPC') (3) issuetype = 'Test Execution' and issue in testExecWithTestRunsAssignedToUser('userDPC', 'FAIL') |
testSetPartiallyIn | P1 - Test Execution Issue Key or Test Plan Issue Key or Filter Name/ID of Test Executions or Test Plans | Returns a list of Test Sets that have at least one of their Tests in P1 | (1) issuetype = 'Test Set' and issue in testSetPartiallyIn('DEMO-15') (2) issuetype = 'Test Set' and issue in testSetPartiallyIn('testExecList') (3) issuetype = 'Test Set' and issue in testSetPartiallyIn('testPlanList') |
testSetFullyIn | P1 - Test Execution Issue Key or Test Plan Issue Key or Filter Name/ID of Test Executions or Test Plans | Returns a list of Test Sets that have all of their Tests in P1 | (1) issuetype = 'Test Set' and issue in testSetFullyIn('DEMO-15') (2) issuetype = 'Test Set' and issue in testSetFullyIn('testExecList') (3) issuetype = 'Test Set' and issue in testSetFullyIn('testPlanList') |
testPlanTests | P1 - Test Plan Key or Filter Name/ID of Test Plans P2 - Status (Optional) P3 - Environment (Optional) | Returns a list of Tests that are associated with the Test Plan. The optional "status" parameter allows you to filter the Test Issues in a specific Plan by the specified execution status. If the "status" parameter is present, users may also pass the "environment" parameter; if it is filled, Xray returns all Tests in the Test Plan that are in the specified "status" for the specified "environment". | (1) issue in testPlanTests("DEMO-10") (2) issue in testPlanTests("Test Plans saved filter","TODO") (3) issue in testPlanTests("DEMO-10","TODO") (4) issue in testPlanTests("DEMO-10","TODO","IOS") |
testPlanTestExecutions | P1 - Test Plan Key or Filter Name/ID of Test Plans | Returns a list of Test Executions that are associated with a Test Plan or a saved filter of Test Plans | (1) issue in testPlanTestExecutions("DEMO-10") (2) issue in testPlanTestExecutions("Test Plans saved filter") |
testPlanRequirements | P1 - Test Plan Key or Filter Name/ID of Test Plans | Returns the Requirement Issues that are indirectly associated, through Test Issues, with a Test Plan or a saved filter of Test Plans | (1) issue in testPlanRequirements("DEMO-20") (2) issue in testPlanRequirements("Test Plans saved filter") |
testTestPlan | P1 - Test Issue Key | Returns a List of Test Plan Issues associated with the input Test issue key | issuetype = 'Test Plan' and key in testTestPlan('DEMO-1') |
testRepositoryFolderTests | P1 - Project Key P2 - Folder Path P3 - Flatten (Optional) | Returns the list of Tests contained in a folder (P2) of the Test Repository of a Project (P1). May optionally include the Tests in sub-folders by setting Flatten (P3) to "true". | (1) issue in testRepositoryFolderTests("CALC", 'Parent/Child') (2) issue in testRepositoryFolderTests("CALC", 'Parent/Child', "true") |
testPlanFolderTests | P1 - Test Plan Key P2 - Folder Path P3 - Flatten (Optional) P4 - Test Run Status (Optional) P5 - Test Environment (Optional) | Returns the list of Tests contained in a folder (P2) of a Test Plan (P1). May optionally include the Tests in sub-folders by setting Flatten (P3) to "true". Can also filter by Tests Run Status (P4) for a given Test Environment (P5). To analyze the joint values of all Test Environments, "" should be used. To analyze the Test Executions without any Test Environment assigned, then "__NULL__" should be used. | (1) issue in testPlanFolderTests(CALC-10, 'Parent/Child') (2) issue in testPlanFolderTests(CALC-10, 'Parent/Child', "true") (3) issue in testPlanFolderTests(CALC-10, 'Parent/Child', "true", "TODO|FAIL", "windows") |
projectParentRequirements | P1 - Project Key | Returns the list of Requirement Issues, from a given Project, which are not Sub-requirements | (1) issue in projectParentRequirements("CALC") |
testExecutionsWithCompletedTestRunsSince | P1 - Date P2 - Filter ID/Name (Optional) | Returns the list of Test Executions (belonging to the given filter) that have Test Runs finished since the given date | (1) issue in testExecutionsWithCompletedTestRunsSince(2022-01-01) (2) issue in testExecutionsWithCompletedTestRunsSince(2022-01-01 12:00, "Current Sprint TestExecs") (3) issue in testExecutionsWithCompletedTestRunsSince(-3d, 10101) |
testPlansWithCompletedTestRunsSince | P1 - Date P2 - Filter ID/Name (Optional) | Returns the list of Test Plans (belonging to the given filter) that have Test Runs finished since the given date | (1) issue in testPlansWithCompletedTestRunsSince(2022-01-01) (2) issue in testPlansWithCompletedTestRunsSince(2022-01-01 12:00, "Current Sprint TestPlans") (3) issue in testPlansWithCompletedTestRunsSince(-3d, 10101) |
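Because each function returns a list of Issues, function calls can be combined with each other and with standard JQL clauses in a single query. As a sketch (the issue key and saved filter name below are hypothetical), the following finds the Tests of a Test Plan that have no Test Execution yet:

```
issuetype = 'Test' and issue in testPlanTests("DEMO-10") and issue in testsWithoutTestExecution("saved_filter")
```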
Custom Fields and Entity Properties
Xray provides several custom fields and entity properties.
Jira's entity properties provide the means to store key/value pairs on Jira entities, such as Issues. Xray uses some entity properties that can be queried using JQL.
Entity Properties
Entity Property | Issue Type | Description | Example(s) |
---|---|---|---|
testType | Test | The Test Type that characterizes the nature of the Test | (1) project = DEMO and issuetype = Test and testType = Manual (2) project = DEMO and testType = Manual |
testEnvironments | Test Execution | The Test Environment(s) assigned to the Test Execution | (1) project = DEMO and issuetype = "Test Execution" and testEnvironments = Chrome (2) project = DEMO and testEnvironments = Chrome |
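Entity properties behave like ordinary JQL fields, so they can be freely combined with other clauses and with the functions above. A sketch (the project key and saved filter name are hypothetical):

```
project = DEMO and testType = Manual and issue in testsWithoutTestExecution("saved_filter")
```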
Limitations
Atlassian Caching Mechanism
Overview
All custom JQL functions on Jira Cloud are required to use Atlassian’s caching mechanism. This improves performance and reduces the time it takes for users to see query results.
How it works
Whenever a JQL function with a specific set of parameters is executed, Atlassian caches the results for seven days. If the exact same function is executed again with the same parameters during that period, Jira retrieves the results from the cache rather than from live data and displays them instantly.
Limitations
The cache is updated by a continuously running background job. However, this process may take several hours to complete, depending on the environment.
As a result, if users are querying for information such as requirement coverage status, and the cached data is from two hours ago, any updates made to the relevant Issues in that time will not be reflected in the query results.
Xray JQL functions use function-level caching based on the exact arguments passed to the function - not per user.
This means that when a function is called with a specific set of arguments, the result of that call is cached for seven days, regardless of who made the request. During this period, the cache is refreshed periodically. Any user who calls the function with the same arguments will receive the same cached result until the cache is updated.
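In practice, the two queries below are cached independently because their function arguments differ, while any user re-running either query within the cache window receives the same stored result (issue keys hypothetical):

```
issuetype = 'Test' and key in requirementTests('DEMO-10')
issuetype = 'Test' and key in requirementTests('DEMO-11')
```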
Timeout Period
Atlassian enforces a 25-second timeout for all JQL queries.
If a query takes longer than 25 seconds to run, it will return an error. To resolve this, you may need to reduce the scope or complexity of the query.
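One way to reduce scope is to use a function's Project parameter where available. For example, per the `requirements` function's own documentation above, restricting the search to a single project (project name hypothetical) limits the number of Issues processed and makes a timeout less likely:

```
issue in requirements('NOK', 'Calculator')
```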
1000 Results Limit
JQL queries in Jira Cloud are limited to 1,000 results.
If a query returns more than 1,000 Issues, it produces an error instead of returning a truncated list. This prevents misleading or incomplete results.
Example:
The following function will return an error if more than 1,000 Issues match:
issue in testsWithoutTestExecution()
Most Xray functions accept a saved filter as an argument to work around Jira's 1,000-result limit. The saved filters used in Xray JQL functions must not be private: they need to be shared with the Xray user or configured with a scope that includes Xray. They do not need to be completely public, though that is also a valid option.
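As a sketch of this workaround, the hypothetical saved filter "DEMO manual tests" below might itself be defined as project = DEMO and issuetype = Test and testType = Manual, so the function only evaluates the Tests matched by that filter:

```
issue in testsWithoutTestExecution("DEMO manual tests")
```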
Limit Applies Per Argument
Each argument in a JQL query is evaluated independently before the combined result is calculated. This means the 1,000-Issue limit is applied per argument, not to the final combined result.
Example:
project = CALC AND issue in testsWithoutTestExecution()
If there are more than 1,000 Tests without Executions across the instance, the enhanced JQL will return an error - due to the limitation explained above.
To mitigate this, please use a filter within the Xray JQL function to reduce the scope of the search.
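Applied to the example above, the mitigation moves the restriction inside the function: passing a hypothetical saved filter scoped to the CALC project keeps that argument's result under the 1,000-Issue limit, which the outer project = CALC clause alone cannot do:

```
project = CALC AND issue in testsWithoutTestExecution("CALC tests")
```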
The Xray team is actively investigating improvements to address these three limitations during the Beta period.