...

  1.  Review the "Max results per request" setting in the Miscellaneous administration settings, as it controls pagination of REST API responses. The default value is usually adequate.
  2.  Limit API calls (to Jira and Xray-related endpoints) using a reverse proxy
    1. Evaluate what REST API calls are being used, discuss their real need with users
      1. Make sure that pagination is being used on the REST API calls
    2. Restrict access to REST API calls
      1. Limit access to well-known hosts/applications
  3.  The export-results endpoint (i.e. /rest/raven/1.0/testruns) allows you to include custom fields from the Test issues in the response, using the includeTestFields parameter; choose carefully which fields to include, as some of them are calculated and thus add overhead to the request.
  4.  Whenever searching for issues using Jira's REST API (i.e. api/2/search), explicitly choose which fields to return using the fields parameter; that avoids including unnecessary fields (e.g. "Requirement Status", "Test Count", "Test Set Status", "Test Execution Defects", "Test Plan Status") that are included by default and add overhead to the request. The problem is aggravated if this endpoint is called automatically by an integration with an external application. This is relevant for "requirement"-like issues, Tests, Test Sets, Test Executions, and Test Plans.
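The recommendations above (paginate, and request an explicit field list) can be sketched as follows. This is a minimal illustration, not Xray's or Atlassian's own client code: the host, JQL, and field list are placeholders, and authentication is omitted.

```python
# Sketch of a paginated Jira search that requests only the fields it needs.
# BASE_URL, the JQL, and the field list are placeholders (assumptions), and
# authentication is omitted; adapt them to your own instance.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://jira.example.com"  # placeholder host
PAGE_SIZE = 100  # keep at or below the "Max results per request" setting

def build_search_params(jql, start_at, fields=("key", "summary", "status")):
    """Query parameters for GET /rest/api/2/search with an explicit field list."""
    return {
        "jql": jql,
        "startAt": start_at,
        "maxResults": PAGE_SIZE,
        # Naming the fields keeps calculated ones (e.g. "Test Set Status",
        # "Test Count") out of the response, avoiding their overhead.
        "fields": ",".join(fields),
    }

def search_all(jql):
    """Iterate through every matching issue one page at a time."""
    start_at = 0
    while True:
        query = urllib.parse.urlencode(build_search_params(jql, start_at))
        with urllib.request.urlopen(f"{BASE_URL}/rest/api/2/search?{query}") as resp:
            page = json.loads(resp.read())
        issues = page.get("issues", [])
        yield from issues
        start_at += len(issues)
        if not issues or start_at >= page.get("total", 0):
            break
```

Keeping PAGE_SIZE at or below the administration setting means each call stays within the server-enforced limit instead of silently being truncated.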

...

Xray calculated custom fields

  1.   Xray provides some specific custom fields whose values are calculated on the fly. Keep this in mind, especially if you include them in tables, issue listings, or gadgets.
    1. The most intensive custom field is "Test Set Status". "Test Count" performs an aggregation, so it is also intensive when used across multiple issues.

...

  1.  Review Atlassian's best practices for Data Center and the Node sizing overview for Atlassian Data Center article.
  2.  Configure your load balancer properly to use dedicated nodes for REST API calls, which can reduce the impact on other application nodes.
  3. Integration with other apps
    1.  Please see eazyBI's Data Center related recommendations here.
  4. Cluster locks prune - Jira uses the Beehive library to manage locks in Data Center. Locks are held internally in the JVM and are also backed by the clusterlockstatus database table, which is shared between nodes.
    Locks are acquired and released through the lock() / unlock() methods, following the expected semantics of a standard lock API.

    1. The current cluster lock mechanism was designed to work with statically-named locks. The implementation stores each lock in the database permanently. Therefore using dynamically generated lock names, such as "lock_for_task_" + taskId, causes rows to pile up in large numbers in the clusterlockstatus table, and there is no mechanism to prune them.
    2. Workaround

      Warning: always have a backup of your database before making any changes to it.

      In general, we don't anticipate performance problems, even with a million old locks. If for some reason you do need to purge the clusterlockstatus table, you can:

      Workaround #1

      1. Shut down the whole cluster (all nodes).
      2. Remove all the locks: 
        delete from clusterlockstatus;
        Note there's no where clause in the query above. Cluster locks do not survive cluster shutdown, so all rows can be safely removed when the cluster is down.
      3. Start nodes one by one (as usual).

      Workaround #2

      You can prune the clusterlockstatus table without downtime, too.

      1. Remove only the unlocked locks:
        delete from clusterlockstatus where locked_by_node is NULL;
      2. At this point these pruned locks cannot be acquired again.
      3. Therefore, do a rolling restart of all nodes.
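The effect of the Workaround #2 query can be sketched locally with sqlite3. The table layout below is a simplified assumption for illustration only; the real clusterlockstatus table in Jira's database has more columns.

```python
# Local sqlite3 sketch of the Workaround #2 pruning query.
# The two-column layout is an assumption for illustration; Jira's real
# clusterlockstatus table has additional columns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table clusterlockstatus (lock_name text, locked_by_node text)")
rows = [
    ("lock_for_task_1", None),    # stale, unlocked -> prunable
    ("lock_for_task_2", None),    # stale, unlocked -> prunable
    ("index-lock", "node-1"),     # currently held by a node -> must be kept
]
conn.executemany("insert into clusterlockstatus values (?, ?)", rows)

# Workaround #2: remove only the unlocked locks (no downtime needed).
conn.execute("delete from clusterlockstatus where locked_by_node is NULL")

remaining = conn.execute("select lock_name from clusterlockstatus").fetchall()
print(remaining)  # only the held lock survives the prune
```

The `where locked_by_node is NULL` clause is what makes this safe to run while the cluster is up: rows for locks a node currently holds are left untouched.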

      Workaround #3

      You can prune the clusterlockstatus table using the Xray Integrity Checker.

      1. Enable the following dark feature: com.xpandit.raven.clearClusterLockOnIntegrityChecker
      2. In the Xray Integrity Checker, go to the Cluster Lock section and run check and fix for "Check for Xray cluster locks".
      3. At this point these pruned locks cannot be acquired again.
      4. Therefore, do a rolling restart of all nodes.

References