...

  1.  Review Atlassian's best practices for Data Center and the "Node sizing overview for Atlassian Data Center" article.
  2.  Configure your load balancer to route REST API calls to dedicated nodes, which can reduce the impact on the other application nodes.
  3. Integration with other apps
    1.  Please see eazyBI's Data Center-related recommendations here.
  4. Cluster lock pruning - Jira uses the Beehive library to manage locks in Data Center. Locks are held internally in the JVM and are also backed by the database table clusterlockstatus, which is shared between nodes.
    Locks are acquired and released through the lock() / unlock() methods, following the expected semantics of the standard Lock API.

    1. The current cluster lock mechanism was designed to work with statically named locks. The implementation stores each lock in the database permanently. Therefore, using dynamically generated lock names, such as "lock_for_task_" + taskId, causes rows to pile up in large numbers in the clusterlockstatus table, and there is no mechanism to prune them. (A query to gauge this build-up is sketched after the workarounds below.)
    2. Workarounds

      Warning: Always back up your database before making any changes to it.

      In general, we don't anticipate performance problems, even with a million old locks. If for some reason you do need to purge the clusterlockstatus table, you can use one of the following workarounds:

      Workaround #1

      1. Shut down the whole cluster (all nodes).
      2. Remove all the locks: 
        delete from clusterlockstatus;
        Note that there is no WHERE clause in the query above. Cluster locks do not survive a cluster shutdown, so all rows can be safely removed while the cluster is down.
      3. Start nodes one by one (as usual).
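
      If you want to confirm that step 2 removed everything before starting the nodes in step 3, a quick sanity check such as the following can be run (a sketch; run it with whatever SQL client you use against the Jira database):

        -- should return 0, since all rows were removed in step 2
        select count(*) from clusterlockstatus;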

      Workaround #2

      You can prune the clusterlockstatus table without downtime, too.

      1. Remove only the unlocked locks:
        delete from clusterlockstatus where locked_by_node is NULL;
      2. At this point, the pruned locks cannot be acquired again. Therefore you need to...
      3. Do a rolling restart of all nodes.
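
      If you want to see which locks survived step 1 (i.e. locks still held by a node and therefore not pruned), a query along the following lines can help. The lock_name column is assumed from the standard clusterlockstatus schema, so verify it against your Jira version:

        -- list the locks still held by a node, which were therefore not removed
        select lock_name, locked_by_node from clusterlockstatus where locked_by_node is not NULL;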

      Workaround #3

      You can also prune the clusterlockstatus table using the Xray Integrity Checker.

      1. Enable the following dark feature: com.xpandit.raven.clearClusterLockOnIntegrityChecker
      2. In the Xray Integrity Checker, go to the Cluster Lock section and run "check and fix" for the "Check for Xray cluster locks" check
      3. At this point, the pruned locks cannot be acquired again. Therefore you need to...
      4. Do a rolling restart of all nodes.
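
    Before applying any of the workarounds above, you can gauge the build-up described in point 1 above by counting the rows in clusterlockstatus. This is only a diagnostic sketch: the lock_name column is assumed from the standard clusterlockstatus schema, and the 'lock_for_task_%' pattern simply reuses the illustrative name from point 1, so substitute whatever prefix the offending app actually generates:

      -- total number of lock rows accumulated in the table
      select count(*) from clusterlockstatus;

      -- rows produced by one dynamically generated name pattern (illustrative prefix)
      select count(*) from clusterlockstatus where lock_name like 'lock_for_task_%';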

References