Kubernetes Lease


This module is currently marked as May Change, meaning that the API, configuration and behavior might change based on feedback from initial usage.

This module is an implementation of an Akka Coordination Lease backed by a Custom Resource Definition (CRD) in Kubernetes. Kubernetes resources offer the concurrency control and consistency that have been used to build a distributed lease/lock.


This feature is included in a subscription to Lightbend Platform, which includes other technology enhancements, monitoring and telemetry, and one-to-one support from the expert engineers behind Akka.

A lease can be used for:

  • Split Brain Resolver (SBR). An additional safety measure so that only one SBR instance can make the decision to remain up.
  • Cluster Singleton. A singleton manager can be configured to acquire a lease before creating the singleton.
  • Cluster Sharding. Each Shard can be configured to acquire a lease before creating entity actors.

In all cases the use of the lease increases the consistency of the feature. However, as the Kubernetes API server and its backing etcd cluster can also be subject to failure and network issues, any use of this lease can reduce availability.
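For Cluster Singleton and Cluster Sharding the lease is enabled through their own configuration; a minimal sketch, assuming the `use-lease` setting from the open-source Akka cluster modules:

```hocon
# Assumed settings from Akka's cluster modules: point use-lease at the
# Kubernetes lease config path to guard singletons and shards.
akka.cluster.singleton.use-lease = "akka.lease.kubernetes"
akka.cluster.sharding.use-lease = "akka.lease.kubernetes"
```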

Lease Instances

  • With Split Brain Resolver (SBR) there will be one lease per Akka Cluster
  • With multiple Akka Clusters using SBR in the same namespace, e.g. multiple Lagom applications, you must ensure they have different ActorSystem names, because each cluster needs a separate lease. For different cluster names set play.akka.actor-system = <some-unique-name> on each service.
  • With Cluster Sharding and Cluster Singleton there will be additional leases: one for each singleton and one for each shard



sbt:

libraryDependencies += "com.lightbend.akka" %% "akka-lease-kubernetes" % "1.1.16"

Gradle:

dependencies {
  compile group: 'com.lightbend.akka', name: 'akka-lease-kubernetes_2.11', version: '1.1.16'
}

To use the lease with Split Brain Resolver (SBR), also add the SBR dependency.
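As a sketch, the SBR dependency can be added alongside the lease dependency (the akka-split-brain-resolver artifact name and matching version below are assumptions; verify the exact coordinates against your Lightbend Platform documentation):

```scala
// sbt; the artifact name and version are assumed here, not confirmed
// by this page -- check the SBR documentation for the real coordinates.
libraryDependencies += "com.lightbend.akka" %% "akka-split-brain-resolver" % "1.1.16"
```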

Creating the Custom Resource Definition for the lease

This requires admin privileges to your Kubernetes/OpenShift cluster, but only needs to be done once.


kubectl apply -f lease.yml

OpenShift

oc apply -f lease.yml

Where lease.yml contains:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: leases.akka.io
spec:
  group: akka.io
  version: v1
  scope: Namespaced
  names:
    plural: leases
    singular: lease
    kind: Lease
    shortNames:
    - le

Role based access control

Each pod needs permission to read, create and update lease resources. They only need access to the namespace they are in.

An example RBAC that can be used:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lease-access
rules:
  - apiGroups: ["akka.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lease-access
subjects:
  - kind: User
    name: system:serviceaccount:<YOUR NAMESPACE>:default
roleRef:
  kind: Role
  name: lease-access
  apiGroup: rbac.authorization.k8s.io

This defines a Role that is allowed to get, create and update lease objects, and a RoleBinding that gives the default service account this role in <YOUR NAMESPACE>.

Future versions may also require delete access for cleaning up old resources. Current uses within Akka only create a single lease so cleanup is not an issue.

To avoid giving an application access to create new leases, an empty lease can be created in the same namespace as the application with:


kubectl create -f sbr-lease.yml -n <YOUR_NAMESPACE>

OpenShift (from your project):

oc create -f sbr-lease.yml

Where sbr-lease.yml contains:

apiVersion: "akka.io/v1"
kind: Lease
metadata:
  name: <YOUR_ACTORSYSTEM_NAME>-akka-sbr
spec:
  owner: ""
  time: 0

Enable in SBR

To enable the lease for use within SBR:

akka {
  cluster {
    downing-provider-class = "com.lightbend.akka.sbr.SplitBrainResolverProvider"
    split-brain-resolver {
      active-strategy = "lease-majority"
      lease-majority {
        lease-implementation = "akka.lease.kubernetes"
      }
    }
  }
}

Full configuration options

akka.lease.kubernetes {

    lease-class = "akka.lease.kubernetes.KubernetesLease"

    api-ca-path = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    api-token-path = "/var/run/secrets/kubernetes.io/serviceaccount/token"

    # Host for the Kubernetes API server. Typically this will be set via an environment
    # variable that is set when running inside Kubernetes
    api-service-host = "localhost"
    api-service-host = ${?KUBERNETES_SERVICE_HOST}

    # Port for the Kubernetes API server. Typically this will be set via an environment
    # variable that is set when running inside Kubernetes
    api-service-port = 8080
    api-service-port = ${?KUBERNETES_SERVICE_PORT}

    # Path of the file containing the namespace to create the lock in. Can be overridden by "namespace".
    # If this path doesn't exist, the namespace will default to "default".
    namespace-path = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

    # Namespace to create the lock in. If set to something other than "<namespace>" then overrides any value
    # in "namespace-path"
    namespace = "<namespace>"

    # How often to write the time into the CRD so that if the holder crashes
    # another node can take the lease after a given timeout. If left blank then the default is
    # max(5s, heartbeat-timeout / 10)
    heartbeat-interval = ""
    #heartbeat-interval = 12s

    # How long a lease must not be updated before another node can assume
    # the holder has crashed.
    # If the lease holder hasn't crashed its next heart beat will fail due to the version
    # having been updated
    heartbeat-timeout = 120s

    # The individual timeout for each HTTP request. Defaults to 2/5 of the lease-operation-timeout.
    # Can't be greater than the lease-operation-timeout
    api-server-request-timeout = ""
    #api-server-request-timeout = 2s

    # Use TLS & auth token for communication with the API server
    # set to false for plain text with no auth
    secure-api-server = true

    # The amount of time to wait for a lease to be acquired or released. This includes all requests to the API
    # server that are required. If this timeout is hit then the lease *may* be taken due to the response being lost
    # on the way back from the API server but will be reported as not taken and can be safely retried.
    lease-operation-timeout = 5s
}
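The heartbeat-timeout comment above notes that a stale holder's next heartbeat fails because the version has moved on. A toy Scala model of that optimistic-concurrency check (illustrative only: LeaseRecord and LeaseStore are invented here; the real module relies on Kubernetes resourceVersion conflicts from the API server):

```scala
import java.util.concurrent.atomic.AtomicReference

// Toy model of the lease resource: owner, last heartbeat time, and a
// version that the store bumps on every successful update.
final case class LeaseRecord(owner: String, time: Long, version: Long)

final class LeaseStore {
  private val record = new AtomicReference(LeaseRecord(owner = "", time = 0L, version = 0L))

  def read(): LeaseRecord = record.get()

  // Succeeds only when expectedVersion matches the stored version, mirroring
  // the API server rejecting updates made against a stale resourceVersion.
  def update(expectedVersion: Long, owner: String, time: Long): Either[LeaseRecord, LeaseRecord] = {
    val current = record.get()
    if (current.version != expectedVersion) Left(current)
    else {
      val next = LeaseRecord(owner, time, current.version + 1)
      if (record.compareAndSet(current, next)) Right(next) else Left(record.get())
    }
  }
}
```

If nodeB takes the lease after nodeA's TTL expires, nodeA's next heartbeat carries nodeA's old version and is rejected, so nodeA learns it no longer holds the lease.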


Q. What happens if the node that holds the lease crashes?

A. Each lease has a Time To Live (TTL) set by akka.lease.kubernetes.heartbeat-timeout, which defaults to 120s. A lease holder updates the lease every heartbeat-interval (by default 1/10 of the timeout) to keep the lease. If the TTL passes without the lease being updated, another node is allowed to take the lease. For ultimate safety this timeout can be set very high, but then an operator would need to clear the lease manually if a lease owner crashes while holding it.
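The interval described above can be sketched as a small function (an illustration of the documented max(5s, heartbeat-timeout / 10) rule from the configuration, not the module's actual code; effectiveHeartbeatInterval is a name invented here):

```scala
import scala.concurrent.duration._

// Illustrates the documented default: when heartbeat-interval is left
// blank, fall back to max(5s, heartbeat-timeout / 10).
def effectiveHeartbeatInterval(
    configured: Option[FiniteDuration],
    heartbeatTimeout: FiniteDuration): FiniteDuration =
  configured.getOrElse {
    val tenth = heartbeatTimeout / 10
    if (tenth < 5.seconds) 5.seconds else tenth
  }
```

With the default heartbeat-timeout of 120s this gives a 12s heartbeat interval; the 5s floor only matters for timeouts under 50s.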