This article provides background information on Risk-Based Testing (RBT) and its foundations on Risk Management concepts, which will be briefly presented and clarified.
RBT is an approach used by testing teams whose focus is on assessing risks and how to handle them, from a testing perspective.
Since Xray v3.4, you may take advantage of some built-in capabilities in order to leverage RBT in your teams.
Xray can be used to perform Risk-Based Testing, giving you all the flexibility to tailor it to your team's needs and implement RBT successfully.
Xray is not dependent on any risk management app, nor does it itself provide risk management capabilities. In other words, it's up to you to define risk and how to manage it.
The first step is to decide where to set the risk (i.e. on requirements, Tests or at project level); that will affect how you will implement the related fields and how you will use Xray during RBT.
For risk management, you need to choose whether you're going to use a specific app (e.g. Risk Management for Jira, Risk Register) or not; using Jira or a generic customization app (e.g. Jira Misc Custom Fields, ScriptRunner, etc.) can be enough for risk identification and analysis.
After you have decided where and how to set the risk, and configured it properly, you may perform RBT and use Xray to assist you.
If you're new to Risk Management and RBT, please check the Risk Management article.
As long as you are able to define a sortable value for the risk as a custom field, either in coverable issues (e.g. "requirements"/stories) or in Tests themselves, you will be able to select and rank Tests for execution based on risk level. If you have identified global risks at project level as issues, and those risks can somehow be mitigated by testing, then you can also use RBT with them.
Thus, you may define risks (and the corresponding risk level, as a sortable custom field on the related issue) at:
Each approach has its own usage scenarios and pros/cons. Throughout this article, most of the examples will be based on risks defined at Test level.
Depending on your approach to Risk Management and to RBT, you will need to configure risk related fields as explained in the configuration section ahead.
The whole RBT process has many activities and Xray can be used along the way.
This section provides a brief overview of some of them giving special focus on sorting and ranking Tests using the risk level as input; however, there's a lot more than that.
For a given identified risk, during Risk Analysis you need to define impact and probability, as a means to calculate the risk level.
Thus, impact and probability are mostly temporary values used to calculate the risk; they can be assessed early and updated if needed during Risk Monitoring and Reviewing.
Where and how you define impact and probability fields depends on your approach, including how your fields will be implemented.
The following screenshots show risk being defined at different levels, while also using different tools to implement them.
Whenever adding Tests to an existing Test Execution/Test Plan/Test Set, you may choose to show the impact, probability and risk level related fields.
Tests can be ranked on descending order of risk level, so the ones related with higher risks can be addressed first; you may also sort Tests by impact or probability if you want.
To sort a column (i.e. a custom field) in descending/ascending order, just click on the column name.
On the left side, you may use filters on the impact, probability and risk level fields. Filtering by risk level is essential, but filtering by other related fields may also prove useful.
The way filters are shown depends on their type (i.e. the type of the custom field); therefore, it's important to setup these fields properly.
If risk is defined at requirement or project level then we need to first sort these entities by risk level; only then we may obtain related Tests based on the previous sorting.
How exactly can this be done?
A Test Execution contains a list of tests to be run according to their ranking (i.e. which you may see in the first column, named "Rank").
Tests may be visually ordered ascending/descending by clicking on the column name; this won't affect their ranking, which is the one used as the effective order to run the tests.
If you want to make this order permanent, by setting the rank accordingly, then you can choose "Apply Rank."
Note that you may also sort and re-rank Tests by impact or probability, if you want.
If you have a custom field at Test level that "inherits" (e.g. copies or computes) the risk level from the parent issue (e.g. requirement/story or project level risk issue) then you can follow the previous instructions described for risk defined at Test level.
Otherwise, there is not yet a straightforward solution.
As of v3.4, Xray does not yet provide a way to sort and re-rank Tests on Test Plans or Test Sets based on a specific custom field.
However, if you add Tests to an existing Test Plan/Test Set and you order them by risk level in the dialog before they're actually added, the Tests ranking in the destination entity will take into account that order.
Therefore, one workaround/"hack", for now, would be:
These instructions assume that Risk is defined at Test level; if it's defined at requirement or project level, then it would need to be adapted accordingly depending on your implementation.
If you have assessed risks at requirement or project level, then Xray offers you the possibility of analyzing coverage by risk level (and even by impact and probability). This is supported as long as those fields are "Select List (single choice)" based; number or text based custom fields are not supported for grouping purposes.
What can you use this for?
The Overall Requirement Coverage Report allows grouping requirements visually by custom fields; thus you may easily group by the risk level related field, for example.
The report also allows you to drill-down on the bar and thus see exactly which requirements have that risk level, for example, and that are on that specific coverage status (e.g. "OK"). This can be quite handy to evaluate the completeness and the failed tests, especially if the requirement is NOK.
If using the Overall Requirement Coverage gadget, then you need to pick a project (and not a saved filter) as the data source; only then Group By will allow you to select specific custom fields to group your requirements, such as the Risk Level related custom field.
The Requirements List gadget may be used to show just the coverage status, completeness and information about the passed/failed tests of requirements having a certain risk level. Thus, it is a great way to track the actual consolidated progress of your requirements/stories.
As this gadget does not provide a grouping mechanism, you need to instantiate it multiple times configuring each instance with a different filter for picking only the requirements having a certain risk level.
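For instance, each gadget instance could be backed by a saved filter such as the following sketch (the project key, custom field name, and option value are assumptions; adapt them to your own configuration):

```
project = CALC AND issuetype = Story AND "Risk Level" = "L3: High"
```

Each additional gadget instance would use the same query with a different "Risk Level" option.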
None of the following gadgets provide a grouping mechanism, thus you need to create multiple instances of them configuring each one with a different filter for picking only the Tests having a certain risk level.
Whenever analyzing the consolidated results, you may evaluate how your Tests fare for a specific project version or just present their latest results (no version specified in the "Analysis Version" configuration field).
Coverage calculation in Xray takes the testing results into account. Therefore, it's possible to analyze risk indirectly by looking at the requirements and their coverage status.
The legacy Tests report, available on the project tab, can be configured to show additional sortable columns. Thus, you may add the risk level related field (or others) and sort by it.
Note: this report is still a bit limited, so no advanced/flexible filtering mechanisms are available.
As your testing progresses, you may find certain defects along the way that you have to manage. You should classify your defects or even deal with them as risks.
Anyway, you'll need to find defects related with certain assessed risks, no matter at what level they were identified, so you can decide how to proceed or treat them.
Xray provides several JQL functions that can be used to obtain these defects. Remember to complement your JQL query so you find only the defects you want (e.g. the ones assigned to a certain AffectsVersion).
Examples of JQL queries:
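As a sketch, assuming you track defects as "Bug" issues linked to the risk-bearing issues (the filter name, version, and link semantics are assumptions, and the linkedIssuesOf function requires the ScriptRunner app):

```
issuetype = Bug AND affectedVersion = "3.0" AND issueFunction in linkedIssuesOf("filter = RR_risks_high")
```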
In order to avoid unnecessary risks, or at least mitigate them, you may enforce reviews of requirements and even of test cases. Since they're issue type based, you may apply Jira workflows to them and/or set them read-only. This can be extended to Test Plans and any other issue types used by Xray.
Usage examples:
Note: all changes are tracked, directly on the respective issues; in the case of Test Runs, changes also get tracked on the related Activity section.
Xray gives you the ability to create one or more Test Plans if you wish to manage and track the testing progress for different tests independently.
Therefore you may create, for example, one Test Plan containing all the Tests related with the highest risk level items and distinct ones for the other risk levels; on each Test Plan you may further prioritize Tests.
If you have multiple Test Plans, and since they're issues, you may prioritize them in the current Sprint or Kanban boards. You can also "label" these Test Plans somehow to distinguish their relative relevance; you may use Jira's native Priority field for this, as it comes by default and fulfills this purpose.
In order to start with RBT, you have to configure the risk management related fields. You may use Jira's built-in capabilities, a risk management app, or even a customization app instead.
Either way, later on, we recommend using saved filters as a way to quickly obtain the Tests or the issues where the risk is defined.
If you're adopting risk at Test level, you may create several saved filters (e.g. "tests_risk_low", "tests_risk_medium", "tests_risk_high", "tests_risk_very_high", "tests_risk_severe") as this can later be quite useful, namely for using within gadgets or reports.
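Assuming risk is stored at Test level in a "Risk Level" custom field (the field name and option value below are assumptions), each saved filter would be a simple query such as:

```
issuetype = Test AND "Risk Level" = "L4: Very High"
```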
If you're adopting risk at requirement/story or at project level instead, you may follow a similar approach to group those issues by risk level.
Create a set of filters, each one related with a different risk level. Using a nomenclature for the saved filters can help you out. Example: "RR_risks_low", "RR_risks_medium", "RR_risks_high", "RR_risks_extreme".
To make our life easier, we may also use saved filters to group Tests by the indirectly associated risk level, using JQL and the requirementTests() function.
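As a sketch, a saved filter like "tests_for_high_risk_reqs" could combine requirementTests() calls for the relevant risk-assessed requirements (the issue keys below are hypothetical):

```
issue in requirementTests('CALC-101') OR issue in requirementTests('CALC-102')
```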
You won't need these pre-created filters to pick tests sorted by risk level whenever adding Tests to a Test Execution/Test Plan, as you may write a JQL query inline; however, they will be required if you aim to filter Tests in gadgets and reports by the related risk level.
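For example, when adding Tests to a Test Execution, an inline JQL query like the following could pick Tests ordered by risk (the project key and custom field names are assumptions):

```
project = CALC AND issuetype = Test ORDER BY "Risk Level" DESC, "Impact" DESC
```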
The first thing you have to do is to configure risk (level), probability and impact. Xray does not provide any of these fields; thus, you have to either configure them by yourself or use an app to assist you with that.
The only thing that you'll need in the end is the risk level related field, because that's the one you will use to pick tests sorted by risk and to execute them in that order.
Risk, probability and impact may be configured as "Number," "Text Field (single line)" or "Select List (single choice)" based custom fields.
Always have in mind how sorting works for that field as this is critical for RBT.
"Number" based custom fields are naturally sortable and thus are the easiest ones to implement; however, they're not very user-friendly. If you use "Text Field (single line)", note that the sorting will be based on text/string comparison (thus, for example, the text "Medium" will be greater than "High.") In this case, you may add a prefix in order to guarantee the correct order (e.g. "L2: Medium", "L3: High"). Using "text fields", namely for the risk level, may be tricky as it will make it harder to search for them; thus, we don't recommend their usage. On the other hand, "Select List (single choice)" based fields have options (i.e. well-defined possible values) along with a respective order; therefore, this order will be the one used whenever ordering issues by this type of fields. "Select List (single choice)" fields are also easier to use during filtering, as you don't have to type their values by hand, besides giving you the ability to filter by multiple values at the same time. However, if you decide to configure your risk level as a "Select List (single choice)" field, you'll need to do some additional work to calculate it based on the probability and impact fields. Note that an empty value (i.e. None) for these fields will have a higher ranking than all other options; thus, you may set an additional option (e.g. N/A) as the first/lowest ranked one and set it as the default value. |
All configuration examples provided herein using apps from other vendors, including related source code, are merely informative and need to be evaluated properly, considering performance impacts among other aspects. We don't provide support for these apps nor for their configuration.
Jira can be used to define risk, probability and impact; however, notice that Jira does not provide an out-of-the-box solution for calculating the value of a field (i.e. risk level) based on the values of other fields. You'll need to either make a custom development for that or use an app to help you out.
Whenever creating fields, please have a look at the previous note on their types. We advise you to set this first in a testing environment to make sure it works as expected and covers the needs of the teams adopting RBT.
After choosing the type of fields you'll need, adding/creating them is straightforward; this can be done right from the issue screen using Admin > Add field.
It can also be done from Jira's administration, under Issues > Custom fields > Add custom field.
You may create a custom field "Select List (single choice)" based to store and manage impact and probability related fields. You may also use "Number" based fields instead.
If you use a "Select List" then you need to make sure their option values are sorted in the right order and that their values are also alphabetically sorted in the same way.
In the previous example, 0 is redundant with having an empty value; thus, you may either set the default to be 0 (instead of "none") or remove the "0" option.
You may create a custom field, either "Number" or "Select List (single choice)" based, to store and manage your risk. A "Select List" may provide additional benefits.
No matter your choice, the risk level field won't be automatically calculated based on the probability/impact fields; you'll need to use an app or a custom development for that, as shown in the next sections.
Risk Management for Jira app provides a simple, yet flexible approach for risk management, allowing you to assess risks in your existing issues.
Integration with Xray is straightforward as this app uses custom fields for the probability and the impact (i.e "consequence"). The risk level (i.e. "risk number") is also a custom field calculated automatically based on the previous ones.
To configure Risk Management for Jira, go to Add-ons > Risk Management > Required > Custom Fields.
In the previous screen you can choose to reuse existing fields for abstracting the probability and the impact (they should be "Select List" based).
It can also create these fields for you, along with the risk level. Choose "Add Custom Fields" and the names for the impact (i.e. consequence), probability and risk level (i.e. risk index) fields; please check the "RiskIndexNumberField" flag.
If you already had values for probability and impact/consequence, indexing will not recalculate the risk level, as the app uses a listener to update it upon changes.
On issues having the impact/consequence and probability related fields, risk level ("Risk Score" on the following example) will be updated upon update on any of these fields.
Within the Risk Matrix, issues can easily be found/searched as they're mapped to a cell and to a specific color, depending on their probability, impact/consequence and calculated risk level.
In Xray, whenever adding Tests to a Test Execution for example, the risk level field (e.g "Risk Score") can later be used to sort Tests ascending/descending; it can also be used as a filter.
Risk Register app provides risk management at project level, by allowing you to create and manage Risks as issues.
Although it may be possible to assess risks in your existing issues by adding the fields to them, this seems to be a feature unsupported by Risk Register; therefore, we don't recommend doing it unless stated otherwise by the app vendor.
Risk Register app uses a set of custom fields (e.g. Impact, Probability, Exposure, etc) that will be created during its setup.
Some of the identified project level risks may be mitigated with tests while other ones may not.
For those that can, you may start by defining the "Risk assessment" related fields, including probability and impact, in the Risk issues (i.e. the ones whose issue type is configured to be handled as risk).
Later on, we'll need to find these Tests based on the "exposure" of the associated/covered risk. To make our life easier, we may use saved filters to group Tests by the indirectly associated risk exposure.
Create a set of filters, each one related with a different "Exposure" (i.e risk level). Using a nomenclature for the saved filters can help you out. Example: "RR_risks_low", "RR_risks_medium", "RR_risks_high", "RR_risks_extreme".
To obtain the related Tests, you can use JQL and the requirementTests() function.
Thus, let's say you want to obtain all Tests related with Risks having a "High" exposure, in order to add them to an existing Test Execution; you can do as follows.
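A sketch of such a query, assuming you have first identified the "High"-exposure Risk issues (the keys below are hypothetical), e.g. through the "RR_risks_high" saved filter:

```
issue in requirementTests('RISK-12') OR issue in requirementTests('RISK-34')
```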
The returned Tests may have different relevance; therefore, if you want, you may use a complementary field (e.g. "Priority") to sort them out.
If you wish, you may also assess Risk at Test level and define probability, impact and risk level on the Test issues. Then you may sort them by their individual risk level.
Jira Misc Custom Fields (JMCF) is a popular app for implementing calculated custom fields very easily. JMCF provides a set of types that can be used, for example, to implement Select List based fields (i.e. "dropdown").
If you already have JMCF, you may use it to perform the calculation of risk level.
Impact and probability may be created as standard Jira custom fields, using for example the standard "Number" or even "Select List (single choice)" based fields; for those you won't need this app.
For the calculation of risk, you may use a "Calculated Single-select" field (recommended) or a "Calculated Number" field from JMCF.
After creating the calculated field, you'll need to perform re-indexing if your project already has values for probability and impact.
In this approach, we're using a Calculated Single-select field to present the risk level.
The first thing we need to do is fill out the possible options (e.g. "L1: Low", "L2: Medium", etc). Then we can specify the groovy script used to calculate the value, which must be one of the predefined options.
Using a single-select field will ease the search of issues later on.
impact = issue.get("Impact")
probability = issue.get("Probability")

def risk
def level

if ((impact) && (probability)) {
    risk = Double.parseDouble(impact) * Double.parseDouble(probability)
} else {
    risk = null
}

if ((risk > 0) && (risk < 4)) {
    level = "L1: Low"
} else if ((risk >= 4) && (risk < 8)) {
    level = "L2: Medium"
} else if ((risk >= 8) && (risk < 12)) {
    level = "L3: High"
} else if ((risk >= 12) && (risk < 16)) {
    level = "L4: Very High"
} else if (risk == 16) {
    level = "L5: Severe"
} else {
    level = null
}

return level
After creating the previous field, you may use it, for example, in the dialog for adding Tests to an existing Test Execution; you can also easily filter issues by risk level.
In this approach, risk is calculated and stored in a Calculated Number field; we just need to define the groovy script that will be used to calculate it.
impact = issue.get("Impact")
probability = issue.get("Probability")

def risk

if ((impact) && (probability)) {
    risk = Double.parseDouble(impact) * Double.parseDouble(probability)
} else {
    risk = null
}

return risk
Although it could seem feasible to implement a calculated single-select field that would compute the risk level based on the impact and probability defined on the requirement/project level risk, that would not be enough.
Why? First, you need to make sure the calculation happens upon changes to the parent impact and probability values; second, you need to make sure the issue and the calculated value are indexed afterwards.
As JMCF does not provide a way to implement event listeners, there is currently no viable solution for this. As an alternative, please see the possible implementation using ScriptRunner.
If you already have the ScriptRunner app, you may use it to perform the calculation of risk level.
Impact and probability may be created as standard Jira custom fields though, using for example "Number" or even "Select List (single choice)" based fields; for those you won't need ScriptRunner.
You may adopt different implementation approaches for calculating and storing the risk level; the following sections depict different scenarios. We recommend using a "Select List (single choice)" based field to persist the risk, as shown in the first scenario, as it will be easier to find/filter later on.
In this example, we're calculating the risk level based on the changes made on issues (i.e. by listening on the "issue created" and "issue updated" Jira events). You can use a Script Field for this.
For optimization purposes, only issues of a certain issue type on a certain project will be thoroughly evaluated; from those, the script will only proceed if changes were made to the probability and impact related custom fields (their name is defined within the script itself). Upon success, the script will update an existing custom field where the risk level will be stored in.
What you'll need to do:
import com.atlassian.jira.issue.fields.CustomField
import com.atlassian.jira.issue.CustomFieldManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.issue.IssueEvent
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.history.ChangeItemBean
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.customfields.option.Option
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.index.IssueIndexingService
import com.atlassian.jira.util.ImportUtils
import org.apache.log4j.Logger
import org.apache.log4j.Level

def log = Logger.getLogger("com.onresolve.jira.groovy")
log.setLevel(Level.DEBUG)

MutableIssue issue = event.issue
def issueType = issue.issueTypeObject.name

// Only process Test or Story based issues (adapt it to your needs)
if (issueType != "Test" && issueType != "Story") {
    return
}

String cfImpactName = "Impact"
String cfProbabilityName = "Probability"
String cfRiskLevelName = "Risk Level"

def changeManager = ComponentAccessor.getChangeHistoryManager()
def changeLog = event.getChangeLog()
def impactChanged = false, probabilityChanged = false

// check the change log to see if the impact or the probability were modified
if (changeLog != null) {
    def changeId = changeLog.getLong("id")
    if (changeId != null) {
        def change = changeManager.getChangeHistoryById(changeId)
        for (ChangeItemBean bean : change.getChangeItemBeans()) {
            if (bean.getField().equals(cfImpactName) && ChangeItemBean.CUSTOM_FIELD.equals(bean.getFieldType())) {
                log.debug(cfImpactName + " changed")
                impactChanged = true
            }
            if (bean.getField().equals(cfProbabilityName) && ChangeItemBean.CUSTOM_FIELD.equals(bean.getFieldType())) {
                log.debug(cfProbabilityName + " changed")
                probabilityChanged = true
            }
        }
    }
}

if (impactChanged || probabilityChanged) {
    def issueManager = ComponentAccessor.issueManager
    def customFieldManager = ComponentAccessor.customFieldManager
    def cfImpact = customFieldManager.getCustomFieldObjectByName(cfImpactName)
    def cfProbability = customFieldManager.getCustomFieldObjectByName(cfProbabilityName)

    def option
    def impact
    def probability
    option = (Option) issue.getCustomFieldValue(cfImpact)
    if (option != null) impact = option.getValue()
    option = (Option) issue.getCustomFieldValue(cfProbability)
    if (option != null) probability = option.getValue()

    def risk
    def level
    if ((impact) && (probability)) {
        risk = Double.parseDouble(impact) * Double.parseDouble(probability)
    } else {
        risk = null
    }

    if ((risk > 0) && (risk < 4)) {
        level = "L1: Low"
    } else if ((risk >= 4) && (risk < 8)) {
        level = "L2: Medium"
    } else if ((risk >= 8) && (risk < 12)) {
        level = "L3: High"
    } else if ((risk >= 12) && (risk < 16)) {
        level = "L4: Very High"
    } else if (risk == 16) {
        level = "L5: Severe"
    } else {
        level = null
    }

    def cfRiskLevel = customFieldManager.getCustomFieldObjectByName(cfRiskLevelName)
    def fieldConfig = cfRiskLevel.getRelevantConfig(issue)
    def value = ComponentAccessor.optionsManager.getOptions(fieldConfig)?.find { it.toString() == level }
    issue.setCustomFieldValue(cfRiskLevel, value)

    def currentUser = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
    issueManager.updateIssue(currentUser, issue, EventDispatchOption.DO_NOT_DISPATCH, false)

    // re-index the issue so the updated risk level becomes searchable
    def issueIndexingService = ComponentAccessor.getComponent(IssueIndexingService)
    boolean wasIndexing = ImportUtils.isIndexIssues()
    ImportUtils.setIndexIssues(true)
    issueIndexingService.reIndex(issueManager.getIssueObject(issue.id))
    ImportUtils.setIndexIssues(wasIndexing)

    log.debug(value)
}
Upon creation, you'll be warned about configuring the context and screens, i.e. choosing which issue types it's applicable to and the screens where you want it visible.
As an example, you may use the previous field as a way to sort and filter Tests in the "Add Tests" dialog, whenever adding them from within an existing Test Execution.
This example assumes that we're evaluating the probability and impact (and consequently the risk level) at requirement or at project level (in some specific issue type for that purpose).
As changes are made to probability or impact on the parent issue (e.g. requirement, project risk), a listener will handle the issue updated & created events and will update a "Select List (single choice)" based custom field defined for Test issues. That field needs to be created and configured beforehand with all the possible options.
This approach, and the code of the following script, assumes that a Test covers just one issue. If, for example, a given requirement is covered by a Test that also covers another requirement, then editing the impact on one of the requirements will update/overwrite the calculated risk level on the custom field of the Test issue. Thus, you should consider whether you want to follow this approach, as it has some constraints.
import com.atlassian.jira.issue.fields.CustomField
import com.atlassian.jira.issue.CustomFieldManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.issue.IssueEvent
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.history.ChangeItemBean
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.customfields.option.Option
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.index.IssueIndexingService
import com.atlassian.jira.util.ImportUtils
import com.atlassian.jira.bc.issue.search.SearchService
import com.atlassian.jira.issue.search.SearchProvider
import com.atlassian.jira.issue.search.SearchResults
import com.atlassian.jira.web.bean.PagerFilter
import org.apache.log4j.Logger
import org.apache.log4j.Level

serviceAccount = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()

// run a JQL query and return the matching issues
Object getIssues(jqlQuery) {
    List<Issue> searchResults = null
    def searchService = ComponentAccessor.getComponent(SearchService.class)
    SearchService.ParseResult parseResult = searchService.parseQuery(serviceAccount, jqlQuery)
    if (parseResult.isValid()) {
        // throws SearchException
        SearchResults results = searchService.search(serviceAccount, parseResult.getQuery(), PagerFilter.getUnlimitedFilter())
        searchResults = results.getIssues()
        return searchResults
    }
    return []
}

def log = Logger.getLogger("com.onresolve.jira.groovy")
log.setLevel(Level.DEBUG)

MutableIssue issue = event.issue
def issueType = issue.issueTypeObject.name

// Only process Story and Risk issues
if (issueType != "Story" && issueType != "Risk") {
    return
}

String cfImpactName = "Impact"
String cfProbabilityName = "Probability"
String cfRiskLevelName = "Parent Risk Level"

def changeManager = ComponentAccessor.getChangeHistoryManager()
def changeLog = event.getChangeLog()
def impactChanged = false, probabilityChanged = false

// check the change log to see if the impact or the probability were modified
if (changeLog != null) {
    def changeId = changeLog.getLong("id")
    if (changeId != null) {
        def change = changeManager.getChangeHistoryById(changeId)
        for (ChangeItemBean bean : change.getChangeItemBeans()) {
            if (bean.getField().equals(cfImpactName) && ChangeItemBean.CUSTOM_FIELD.equals(bean.getFieldType())) {
                log.debug(cfImpactName + " changed")
                impactChanged = true
            }
            if (bean.getField().equals(cfProbabilityName) && ChangeItemBean.CUSTOM_FIELD.equals(bean.getFieldType())) {
                log.debug(cfProbabilityName + " changed")
                probabilityChanged = true
            }
        }
    }
}

if (impactChanged || probabilityChanged) {
    def issueManager = ComponentAccessor.issueManager
    def customFieldManager = ComponentAccessor.customFieldManager
    def cfImpact = customFieldManager.getCustomFieldObjectByName(cfImpactName)
    def cfProbability = customFieldManager.getCustomFieldObjectByName(cfProbabilityName)

    def option
    def impact
    def probability
    option = (Option) issue.getCustomFieldValue(cfImpact)
    if (option != null) impact = option.getValue()
    option = (Option) issue.getCustomFieldValue(cfProbability)
    if (option != null) probability = option.getValue()

    def risk
    def level
    if ((impact) && (probability)) {
        risk = Double.parseDouble(impact) * Double.parseDouble(probability)
    } else {
        risk = null
    }

    if ((risk > 0) && (risk < 4)) {
        level = "L1: Low"
    } else if ((risk >= 4) && (risk < 8)) {
        level = "L2: Medium"
    } else if ((risk >= 8) && (risk < 12)) {
        level = "L3: High"
    } else if ((risk >= 12) && (risk < 16)) {
        level = "L4: Very High"
    } else if (risk == 16) {
        level = "L5: Severe"
    } else {
        level = null
    }

    def cfRiskLevel = customFieldManager.getCustomFieldObjectByName(cfRiskLevelName)

    // update the calculated risk level on all Tests covering this issue
    def jql = "issue in requirementTests('${issue.key}')"
    def testIssues = getIssues(jql)
    testIssues.each { relatedTest ->
        MutableIssue testIssue = issueManager.getIssueObject(relatedTest.key)
        def fieldConfig = cfRiskLevel.getRelevantConfig(testIssue)
        def value = ComponentAccessor.optionsManager.getOptions(fieldConfig)?.find { it.toString() == level }
        testIssue.setCustomFieldValue(cfRiskLevel, value)
        issueManager.updateIssue(serviceAccount, testIssue, EventDispatchOption.DO_NOT_DISPATCH, false)

        // re-index the Test so the updated risk level becomes searchable
        def issueIndexingService = ComponentAccessor.getComponent(IssueIndexingService)
        boolean wasIndexing = ImportUtils.isIndexIssues()
        ImportUtils.setIndexIssues(true)
        issueIndexingService.reIndex(issueManager.getIssueObject(testIssue.id))
        ImportUtils.setIndexIssues(wasIndexing)
    }
    log.debug(level)
}
These configuration approaches have several drawbacks, especially in terms of usability; thus, they are not recommended. They are provided here for reference, though, in case you still want to implement the fields this way.
In this example, we're calculating the risk as a number using a custom Script Field (Add-ons > ScriptRunner > Script Fields > Add New Item > Custom Script Field).
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.component.ComponentAccessor
import org.apache.log4j.Logger
import org.apache.log4j.Level

def log = Logger.getLogger("com.onresolve.jira.groovy")
log.setLevel(Level.DEBUG)

def issueManager = ComponentAccessor.issueManager
def impact = getCustomFieldValue("Impact").toString()
def probability = getCustomFieldValue("Probability").toString()
def risk

// note: an unset field yields the string "null" here, hence the explicit check
if ((impact != "null") && (probability != "null")) {
    risk = Double.parseDouble(impact) * Double.parseDouble(probability)
} else {
    risk = null
}

return risk
Even if the risk value related custom field is created using the "Number Field" template, as shown earlier, sorting won't work as expected, as it will be done by text comparison (i.e. alphabetically). You will need to guarantee that Jira uses the "Number Searcher" by editing the custom field details. If you change this, don't forget to re-index.
In this example, we're calculating the risk level and presenting it in a friendly/textual way. For this, we'll use a custom Script Field (Add-ons > ScriptRunner > Script Fields > Add New Item > Custom Script Field).
This approach has one side effect, though: it makes searching issues by value harder, as you have to type the value to search for, whereas a Select List based field can easily be searched by picking the possible values you want to filter by.
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.component.ComponentAccessor
import org.apache.log4j.Logger
import org.apache.log4j.Level

def log = Logger.getLogger("com.onresolve.jira.groovy")
log.setLevel(Level.DEBUG)

def issueManager = ComponentAccessor.issueManager
def impact = getCustomFieldValue("Impact").toString()
def probability = getCustomFieldValue("Probability").toString()
def risk
def level

// note: an unset field yields the string "null" here, hence the explicit check
if ((impact != "null") && (probability != "null")) {
    risk = Double.parseDouble(impact) * Double.parseDouble(probability)
} else {
    risk = null
}

if ((risk > 0) && (risk < 4)) {
    level = "L1: Low"
} else if ((risk >= 4) && (risk < 8)) {
    level = "L2: Medium"
} else if ((risk >= 8) && (risk < 12)) {
    level = "L3: High"
} else if ((risk >= 12) && (risk < 16)) {
    level = "L4: Very High"
} else if (risk == 16) {
    level = "L5: Severe"
} else {
    level = null
}

return level