Deployments at the edge of the cloud may need to minimize resource usage and be capable of running in resource-constrained environments. Akka Edge applications can be configured and built to run with low resource usage and adapt to changing resource needs.
Some approaches to running lightweight deployments for Akka Edge applications include:
- Using lightweight Kubernetes distributions
- Using cloud-optimized JVMs
- Building GraalVM Native Image executables
- Configuring adaptive resource usage with multidimensional autoscaling
These approaches are useful when running applications in edge environments that are on-premises or in 5G edge computing infrastructure, including cloud provider products such as AWS Wavelength, AWS Outposts, Google Distributed Cloud Edge, and Azure Stack Edge.
Kubernetes has become the standard orchestration tool for deploying containers and there are lightweight Kubernetes distributions that are specifically designed for edge computing environments. K3s and MicroK8s are lightweight Kubernetes distributions that are suitable for deploying containerized Akka Edge applications.
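As a sketch, a single-node cluster can be set up with either distribution following its quick-start instructions (run on the edge host; see each project's documentation for production configuration):

```shell
# Install k3s as a single-node cluster (from the k3s quick-start)
curl -sfL https://get.k3s.io | sh -
k3s kubectl get nodes

# Or install MicroK8s via snap (from the MicroK8s quick-start)
sudo snap install microk8s --classic
microk8s kubectl get nodes
```

Both distributions bundle a `kubectl`-compatible CLI, so containerized Akka Edge applications are deployed with the same manifests as on a full Kubernetes cluster.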
The Java Virtual Machine (JVM) can be configured to run with lower resource usage. OpenJDK’s Project Leyden has not yet been released, but aims to improve the startup time, time to peak performance, and footprint of Java programs.
OpenJ9 is a JVM that is optimized for running in cloud environments and can be configured for lower resource usage. Options include tuning for virtualized environments, class data sharing, and ahead-of-time (AOT) compilation for faster startup and warmup.
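For example, an OpenJ9-based container can enable these options on the command line (the jar name and cache directory are placeholders; see the OpenJ9 documentation for the full option reference):

```shell
# -Xtune:virtualized  tunes the VM for constrained, virtualized/cloud environments
# -Xshareclasses      enables class data sharing; AOT-compiled code is stored in the same cache,
#                     so later starts skip both class loading and JIT warmup for cached methods
java \
  -Xtune:virtualized \
  -Xshareclasses:cacheDir=/opt/shareclasses \
  -jar my-akka-edge-service.jar
```

Mounting the shared class cache directory on a persistent volume lets restarts of the container benefit from earlier runs.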
GraalVM Native Image compiles Java or Scala code ahead-of-time to a native executable. A native image executable provides lower resource usage compared with the JVM, smaller deployments, faster starts, and immediate peak performance — making it ideal for Akka Edge deployments in resource-constrained environments and for responsiveness under autoscaling.
Native Image builds can be integrated into your application using build tool plugins:
- Maven plugin for GraalVM Native Image
- Gradle plugin for GraalVM Native Image
- sbt plugin for GraalVM Native Image
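As an illustration for sbt, using the sbt-native-image plugin (the plugin version and main class are placeholders; the Maven and Gradle plugins are configured analogously in their own build files):

```scala
// project/plugins.sbt — check the plugin's releases for a current version
addSbtPlugin("org.scalameta" % "sbt-native-image" % "0.3.4")

// build.sbt
enablePlugins(NativeImagePlugin)
Compile / mainClass := Some("local.drones.Main") // placeholder main class
// fail the build rather than fall back to a JVM-dependent image
nativeImageOptions += "--no-fallback"
```

The plugin's `nativeImage` task then builds the native executable.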
Native Image builds require configuration. See the build tool plugins and the Native Image build configuration reference documentation for details. An important part of this configuration is the reachability metadata, which covers dynamic features that are used at runtime and cannot be discovered statically at build time.
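Reachability metadata is provided as JSON files under `META-INF/native-image` on the class path. A minimal sketch of a reflection entry (the class name is illustrative):

```json
[
  {
    "name": "com.example.drones.DeviceRegistered",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

Placed in, for example, `src/main/resources/META-INF/native-image/<group>/<artifact>/reflect-config.json`, this tells the Native Image builder to keep the class's constructors, methods, and fields available for reflective access at runtime.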
GraalVM provides a tracing agent to automatically gather metadata and create configuration files. The tracing agent tracks usage of dynamic features during regular running of the application in a JVM, and outputs Native Image configuration based on the code paths that were exercised. The build tool plugins for Native Image provide ways to run locally with the tracing agent enabled. It can also be useful to deploy your application to a testing environment with the GraalVM tracing agent enabled, to capture usage in an actual deployment environment while exercising all the Akka Edge features that are used.
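For example, a local run with the agent writing configuration to the conventional location picked up by Native Image builds (the jar name is a placeholder):

```shell
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
  -jar my-akka-edge-service.jar
```

For repeated runs, the agent's `config-merge-dir` option can be used instead of `config-output-dir`, so that newly observed usage is merged into the existing configuration files rather than overwriting them.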
The GraalVM Native Image tracing agent can only generate configuration for code paths that were observed during the running of an application. When using this approach for generating configuration, make sure that tests exercise all possible code paths. In particular, check dynamic serialization of classes used for persistence or cross-node communication.
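As an example of such dynamic serialization, Akka applications commonly bind a marker trait to a serializer in configuration; every class bound this way is instantiated reflectively and therefore needs reachability metadata (the marker trait name below follows the convention used in Akka samples):

```hocon
akka.actor {
  serialization-bindings {
    # all events and states marked with this trait are serialized with Jackson CBOR
    "com.example.drones.CborSerializable" = jackson-cbor
  }
}
```

Tests that persist every event and state type, and that exercise cross-node messages, ensure the tracing agent observes each of these classes.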
The Local Drone Control service from the Akka Edge guide has been configured for GraalVM Native Image as an example:
- Native Image build for Local Drone Control service in Java
- Native Image build for Local Drone Control service in Scala
An application using Akka Edge features, such as event sourcing and projections over gRPC, cannot scale to zero when idle. It’s possible, however, for the application to be scaled to and from “near zero” — scaling down to a state of minimal resource usage when idle, scaling up and out when load increases. Multidimensional autoscaling combines vertical scaling (lower or higher resource allocation) with horizontal scaling (fewer or more instances), and can be used to align resource usage with actual demand under dynamic workloads.
In Kubernetes, the horizontal pod autoscaler (HPA) and the vertical pod autoscaler (VPA) can be combined, so that when the service is idle it is both scaled down with minimal resource requests, and scaled in to a minimal number of pods.
A multidimensional autoscaling configuration for an Akka Edge application in Kubernetes can be set up with:
- Custom VPA recommender for vertical autoscaling configured to respond quickly, to “activate” the application. The default vertical pod autoscaler bases its recommendations for resource requests over long time frames (over days). A custom VPA recommender is needed to go from minimal resource allocation to higher requests more quickly.
- HPA configured to horizontally autoscale based on custom metrics — such as the number of active event sourced entities in an Akka Cluster. Custom metrics need to be exposed by the application and configured for the Kubernetes custom metrics API with an “adapter”, such as the Prometheus adapter.
- Application availability ensured by having a minimum of 2 replicas, and configuring a pod disruption budget (PDB) so that no more than one pod is unavailable at a time. When the vertical autoscaler makes changes, pods are evicted and restarted with updated resource requests. In-place changes are not currently supported by Kubernetes.
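A sketch of these three pieces as Kubernetes resources (all names, the custom recommender, the metric name, and the thresholds are illustrative; the VPA `spec.recommenders` field is an alpha feature):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: local-drone-control
spec:
  recommenders:
    - name: fast-recommender        # custom recommender deployed separately
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: local-drone-control
  updatePolicy:
    updateMode: Auto                # evicts pods to apply new resource requests
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: local-drone-control
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: local-drone-control
  minReplicas: 2                    # keep two replicas for availability
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: active_entities     # custom metric exposed via e.g. the Prometheus adapter
        target:
          type: AverageValue
          averageValue: "100"
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: local-drone-control
spec:
  maxUnavailable: 1                 # at most one pod down during VPA-driven evictions
  selector:
    matchLabels:
      app: local-drone-control
```

The PDB caps disruption from the VPA's evictions, while the HPA keeps at least two replicas so the service stays available as resource requests change.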
The Kubernetes horizontal and vertical pod autoscalers should not be triggered using the same metrics. As the default vertical autoscaler is currently designed for resource metrics (CPU and memory), the horizontal autoscaler should be configured to use custom metrics.
The Local Drone Control service from the Akka Edge guide has been configured for multidimensional autoscaling. The example uses GraalVM Native Image builds for low resource usage, combines the vertical and horizontal pod autoscalers, and runs in k3s (lightweight Kubernetes).