Caching

Akka HTTP’s caching support provides lightweight and fast in-memory caching functionality based on futures. The primary use case is the “wrapping” of an expensive operation with a caching layer that, based on a certain key of type K, runs the wrapped operation only once and returns the cached value for all future accesses with the same key (as long as the respective entry has not expired).
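To illustrate the idea outside of any route, here is a minimal Scala sketch: a hypothetical expensive lookup keyed by a user id is wrapped with a cache so that, per key, it only runs once while the corresponding entry is alive. The lookup function, value types and actor system name are made up for illustration; the cache creation and the getOrLoad call use the scaladsl Cache API shown further below.

import akka.actor.ActorSystem
import akka.http.caching.LfuCache
import akka.http.caching.scaladsl.{ Cache, CachingSettings }
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("caching-example")
import system.dispatcher

// hypothetical expensive operation, e.g. a slow database or HTTP call
def expensiveLookup(userId: String): Future[String] =
  Future {
    s"profile-of-$userId"
  }

// a cache with default settings (see the configuration section below)
val cache: Cache[String, String] = LfuCache(CachingSettings(system))

// runs expensiveLookup at most once per userId while the entry is alive;
// later calls with the same key receive the cached (future) value
def cachedLookup(userId: String): Future[String] =
  cache.getOrLoad(userId, expensiveLookup)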

Akka HTTP comes with one implementation of the Cache API built on Caffeine, featuring frequency-biased cache eviction semantics with support for time-based entry expiration.

Dependency

The Akka dependencies are available from Akka’s library repository. To access them, you need to configure the URL for this repository.

sbt
resolvers += "Akka library repository".at("https://repo.akka.io/maven")
Gradle
repositories {
    mavenCentral()
    maven {
        url "https://repo.akka.io/maven"
    }
}
Maven
<project>
  ...
  <repositories>
    <repository>
      <id>akka-repository</id>
      <name>Akka library repository</name>
      <url>https://repo.akka.io/maven</url>
    </repository>
  </repositories>
</project>

To use Akka HTTP Caching, add the module to your project:

sbt
val AkkaHttpVersion = "10.6.1"
libraryDependencies += "com.typesafe.akka" %% "akka-http-caching" % AkkaHttpVersion
Gradle
def versions = [
  ScalaBinary: "2.13"
]
dependencies {
  implementation platform("com.typesafe.akka:akka-http-bom_${versions.ScalaBinary}:10.6.1")

  implementation "com.typesafe.akka:akka-http-caching_${versions.ScalaBinary}"
}
Maven
<properties>
  <scala.binary.version>2.13</scala.binary.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.typesafe.akka</groupId>
      <artifactId>akka-http-bom_${scala.binary.version}</artifactId>
      <version>10.6.1</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-http-caching_${scala.binary.version}</artifactId>
  </dependency>
</dependencies>

Basic design

The central idea of the cache API is to not store the actual values of type T themselves in the cache but rather the corresponding futures, i.e. instances of type Future[T] (or CompletableFuture<T> in Java). This approach takes care of the thundering herd problem, where many requests to a particular cache key (e.g. a resource URI) arrive before the first one has completed. Normally (without special guarding techniques, like so-called “cowboy” entries) this can cause many requests to compete for system resources while trying to compute the same result, thereby greatly reducing overall system performance. With an Akka HTTP cache, the very first request that arrives for a certain cache key causes a future to be put into the cache, which all later requests then “hook into”. As soon as the first request completes, all other ones complete as well. This minimizes processing time and server load for all requests.
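A small sketch of that behavior (the counter, key and sleep are made up for illustration): two calls for the same key that arrive before the first future has completed share a single computation.

import akka.actor.ActorSystem
import akka.http.caching.LfuCache
import akka.http.caching.scaladsl.{ Cache, CachingSettings }
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("herd-example")
import system.dispatcher

val cache: Cache[String, String] = LfuCache(CachingSettings(system))

// hypothetical counter to observe how often the expensive operation really runs
val computations = new AtomicInteger(0)

def compute(key: String): Future[String] = Future {
  computations.incrementAndGet()
  Thread.sleep(100) // stand-in for an expensive computation
  s"result-for-$key"
}

// the second call arrives before the first future completes and "hooks into"
// the future already stored for the key, so compute is only invoked once
val first  = cache.getOrLoad("some-key", compute)
val second = cache.getOrLoad("some-key", compute)
// once both are complete, computations.get() is expected to be 1 and both
// futures yield the same value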

All Akka HTTP cache implementations adhere to the Cache interface, which allows you to interact with the cache.
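Besides wrapping computations, the cache can also be inspected and manipulated directly. A brief sketch (method names as provided by the scaladsl Cache; the key and value types are made up):

import akka.actor.ActorSystem
import akka.http.caching.LfuCache
import akka.http.caching.scaladsl.{ Cache, CachingSettings }
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("cache-api-example")
import system.dispatcher

val cache: Cache[String, Int] = LfuCache(CachingSettings(system))

// store an already computed value explicitly
cache.put("answer", Future.successful(42))

cache.get("answer")    // Some(Future(42)): peek without triggering a computation
cache.remove("answer") // evict a single entry
cache.clear()          // drop all entries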

Along with the cache API, the routing DSL provides several caching directives to use caching in your routes.
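For example (the paths and names below are illustrative), the cache directive wraps an inner route with a given cache and lets requests carrying Cache-Control: no-cache bypass it, while alwaysCache serves the cached result regardless of such headers:

import akka.actor.ActorSystem
import akka.http.caching.LfuCache
import akka.http.caching.scaladsl.CachingSettings
import akka.http.scaladsl.model.Uri
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{ RequestContext, RouteResult }
import akka.http.scaladsl.server.directives.CachingDirectives._

implicit val system: ActorSystem = ActorSystem("directives-example")

// keyed by the request URI, as in the full example further below
val keyer: PartialFunction[RequestContext, Uri] = {
  case ctx: RequestContext => ctx.request.uri
}

val myCache = LfuCache[Uri, RouteResult](CachingSettings(system))

val routes =
  concat(
    path("cached") {
      cache(myCache, keyer) {        // honours Cache-Control: no-cache
        complete("expensive result")
      }
    },
    path("always-cached") {
      alwaysCache(myCache, keyer) {  // caches regardless of request headers
        complete("expensive result")
      }
    }
  )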

Frequency-biased LFU cache

The frequency-biased LFU cache implementation has a defined maximum number of entries it can store. After the maximum capacity is reached the cache will evict entries that are less likely to be used again. For example, the cache may evict an entry because it hasn’t been used recently or very often.

Time-based entry expiration is enabled when time-to-live and/or time-to-idle expirations are set to a finite duration. The former provides an upper limit to the time period an entry is allowed to remain in the cache, while the latter limits the maximum time an entry is kept without having been accessed, i.e. either read or updated. If both values are finite, the time-to-live has to be greater than or equal to the time-to-idle.

Note

Expired entries are only evicted upon next access (or by being thrown out by the capacity constraint), so they might prevent garbage collection of their values for longer than expected.

For simple cases, configure the capacity and expiration settings in your application.conf file via the settings under akka.http.caching and use LfuCache.create() (Java) or LfuCache.apply() (Scala) to create the cache. For more advanced usage you can create an LfuCache with settings specialized for your use case:

Java
import akka.http.caching.javadsl.Cache;
import akka.http.caching.javadsl.CachingSettings;
import akka.http.caching.javadsl.LfuCacheSettings;
import akka.http.caching.LfuCache;
import akka.http.javadsl.model.Uri;
import akka.http.javadsl.server.RequestContext;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.RouteResult;
import akka.japi.JavaPartialFunction;
import static akka.http.javadsl.server.directives.CachingDirectives.*;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;

// Use the request's URI as the cache's key
final JavaPartialFunction<RequestContext, Uri> keyerFunction = new JavaPartialFunction<RequestContext, Uri>() {
  public Uri apply(RequestContext in, boolean isCheck) {
    return in.getRequest().getUri();
  }
};
final CachingSettings defaultCachingSettings = CachingSettings.create(system());
final LfuCacheSettings lfuCacheSettings = defaultCachingSettings.lfuCacheSettings()
  .withInitialCapacity(25)
  .withMaxCapacity(50)
  .withTimeToLive(Duration.create(20, TimeUnit.SECONDS))
  .withTimeToIdle(Duration.create(10, TimeUnit.SECONDS));
final CachingSettings cachingSettings = defaultCachingSettings.withLfuCacheSettings(lfuCacheSettings);
final Cache<Uri, RouteResult> lfuCache = LfuCache.create(cachingSettings);

// Create the route
final Route route = cache(lfuCache, keyerFunction, () -> innerRoute);
Scala
import akka.http.caching.scaladsl.Cache
import akka.http.caching.scaladsl.CachingSettings
import akka.http.caching.LfuCache
import akka.http.scaladsl.server.RequestContext
import akka.http.scaladsl.server.RouteResult
import akka.http.scaladsl.model.Uri
import akka.http.scaladsl.server.directives.CachingDirectives._
import scala.concurrent.duration._

// Use the request's URI as the cache's key
val keyerFunction: PartialFunction[RequestContext, Uri] = {
  case r: RequestContext => r.request.uri
}
val defaultCachingSettings = CachingSettings(system)
val lfuCacheSettings =
  defaultCachingSettings.lfuCacheSettings
    .withInitialCapacity(25)
    .withMaxCapacity(50)
    .withTimeToLive(20.seconds)
    .withTimeToIdle(10.seconds)
val cachingSettings =
  defaultCachingSettings.withLfuCacheSettings(lfuCacheSettings)
val lfuCache: Cache[Uri, RouteResult] = LfuCache(cachingSettings)

// Create the route
val route = cache(lfuCache, keyerFunction)(innerRoute)
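For the configuration-driven approach mentioned above, the relevant settings live under akka.http.caching in application.conf. A sketch with example values; the key names follow the module's reference configuration, so consult the reference.conf of akka-http-caching for the authoritative names and defaults:

akka.http.caching {
  lfu-cache {
    max-capacity = 512
    initial-capacity = 16
    time-to-live = 20 seconds
    time-to-idle = 10 seconds
  }
}

With such settings in place, LfuCache.apply() (Scala) or LfuCache.create() (Java) with the default CachingSettings of the actor system picks them up without any programmatic configuration.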