It is now recommended to use ByteString.emptyByteString() instead of ByteString.empty() when using Java, because ByteString.empty() is no longer available as a static method in the artifacts built for Scala 2.13.
If you are still using Scala 2.11 you must upgrade to 2.12 or 2.13.
After being deprecated since 2.5.0, the following have been removed in Akka 2.6.
- akka-camel module
- As an alternative we recommend Alpakka.
- This is of course not a drop-in replacement. If there is community interest we are open to setting up akka-camel as a separate community-maintained repository.
- akka-agent module
- If there is interest it may be moved to a separate, community-maintained repository.
- akka-contrib module
- To migrate, take the components you are using from Akka 2.5 and include them in your own project or library under your own package name.
- Actor DSL
- Actor DSL is a rarely used feature. Use plain system.actorOf instead of the DSL to create Actors if you have been using it.
- Timed operations for streams
- If you need them you can now find them in akka.stream.contrib.Timed from Akka Stream Contrib.
- Netty UDP (Classic remoting over UDP)
- To continue to use UDP configure Artery UDP or migrate to Artery TCP.
- A full cluster restart is required to change to Artery.
After being deprecated since 2.2, the following have been removed in Akka 2.6.
UntypedActor has been removed; use AbstractActor instead.
ActorMaterializerSettings.withAutoFusing has been removed; disabling fusing is no longer possible.
Migration guide to Persistence Typed is in the PersistentFSM documentation.
akka.actor.TypedActor has been deprecated as of 2.6 in favor of the
akka.actor.typed API which should be used instead.
There are several reasons for phasing out the old TypedActor. The primary reason is that it relies on transparent remoting, which is not our recommended way of implementing and interacting with actors. Transparent remoting is when you try to make remote method invocations look like local calls. In contrast, we believe in location transparency with explicit messaging between actors (the same type of messaging for both local and remote actors). TypedActor also has limited functionality compared to ordinary actors, and worse performance.
To summarize the fallacy of transparent remoting:
- It was used in CORBA, RMI, and DCOM, and all of them failed. These problems were noted by Waldo et al. as early as 1994.
- Partial failure is a major problem. Remote calls introduce uncertainty about whether the function was invoked or not. This is typically handled with timeouts, but the client can't always know the result of the call.
- The latency of calls over a network is several orders of magnitude higher than the latency of local calls, which can be surprising when the call is encoded as an innocent-looking local method invocation.
- Remote invocations have much lower throughput due to the need to serialize the data, and you can't just pass huge datasets in the same way.
Therefore explicit message passing is preferred. It looks different from local method calls (actorRef ! message in Scala, actorRef.tell(message) in Java) and there is no misconception that sending a message will result in it being processed instantaneously. The goal of location transparency is to unify message passing for both local and remote interactions, versus attempting to make remote interactions look like local method calls.
These shortcomings of TypedActor have been mentioned in the documentation for many years.
Artery TCP is now the default remoting implementation. Classic remoting has been deprecated and will be removed in a future release.
Artery has the same functionality as classic remoting and you should normally only have to change the configuration to switch. To switch, a full cluster restart is required and any overrides for classic remoting need to be ported to Artery configuration.
Artery defaults to TCP (see selected transport) which is a good start when migrating from classic remoting.
The protocol part of the Akka Address, for example "akka.tcp://sys@host:2552/user/actorName", has changed from akka.tcp to akka. If you have configured or hardcoded any such addresses you have to change them to use akka. Note that akka is used also when TLS is enabled. One typical place where such addresses are used is in the seed-nodes configuration.
The default port is 25520 instead of 2552 to avoid connections between Artery and classic remoting due to misconfiguration. You can run Artery on 2552 if you prefer that (e.g. existing firewall rules) and then you have to configure the port with:
akka.remote.artery.canonical.port = 2552
For more details on rolling updates with this migration see the shutdown and startup section.
Configuration that is likely required to be ported:
If using SSL then tcp-tls needs to be enabled and set up. See the Artery docs for SSL for how to do this.
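As a sketch (the hostname and port values here are placeholders, not recommendations), porting a minimal classic remoting configuration to Artery TCP could look like this:

```hocon
# Before: classic remoting (Akka 2.5 style)
akka.remote.netty.tcp {
  hostname = "127.0.0.1"
  port = 2552
}

# After: Artery TCP (the Akka 2.6 default)
akka.remote.artery {
  transport = tcp
  canonical.hostname = "127.0.0.1"
  canonical.port = 25520
}
```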
The following events that are published to the
eventStream have changed:
The following defaults have changed:
- akka.remote.artery.transport default has changed from aeron-udp to tcp
The following properties have moved. If you don’t adjust these from their defaults no changes are required:
Classic remoting is deprecated but can still be used in 2.6. Explicitly disable Artery by setting akka.remote.artery.enabled to false. Further, any configuration under akka.remote that is specific to classic remoting needs to be moved to akka.remote.classic. To see which configuration options are specific to classic remoting, search for them in the reference.conf of the akka-remote module.
akka-protobuf was never intended to be used by end users, but perhaps this was not well documented. Applications should use the standard Protobuf dependency instead of akka-protobuf. The artifact is still published, but the transitive dependency on akka-protobuf has been removed.
Akka is now using Protobuf version 3.9.0 for serialization of messages defined by Akka.
Java serialization is known to be slow and prone to attacks of various kinds; it was never designed for high-throughput messaging after all. One may think that network bandwidth and latency limit the performance of remote messaging, but serialization is a more typical bottleneck.
From Akka 2.6.0, Java serialization is disabled by default and Akka itself doesn't use Java serialization for any of its internal messages.
For compatibility with older systems that rely on Java serialization it can be enabled with the following configuration:
akka.actor.allow-java-serialization = on
Akka will still log a warning when Java serialization is used, and to silence that you may add:
akka.actor.warn-about-java-serializer-usage = off
Please see the rolling update procedure from Java serialization to Jackson.
When using a consistent hashing router, keys that are not bytes or a String are serialized. You might have to add a serializer for your hash keys if none of the default serializers handle that type and it was previously "accidentally" serialized with Java serialization.
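A sketch of registering such a serializer, where MyHashKey and MyHashKeySerializer are hypothetical application classes, not part of Akka:

```hocon
akka.actor {
  serializers {
    # Hypothetical serializer implementation for the hash key type
    hash-key = "com.example.MyHashKeySerializer"
  }
  serialization-bindings {
    # Hypothetical hash key type used with the consistent hashing router
    "com.example.MyHashKey" = hash-key
  }
}
```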
The following documents configuration changes and behavior changes where no action is required. In some cases the old behavior can be restored via configuration.
By default, these remoting features are disabled when not using Akka Cluster:
- Remote Deployment: falls back to creating a local actor
- Remote Watch: ignores the watch and unwatch request, and Terminated will not be delivered when the remote actor is stopped or if a remote node crashes
Watching an actor on a node outside the cluster may have unexpected consequences, such as quarantining, so it has been disabled by default in Akka 2.6. This applies if cluster is not used at all (only plain remoting) or when watching an actor outside of the cluster.
On the other hand, failure detection between nodes of the same cluster does not have that shortcoming. Thus, when remote watching or deployment is used within the same cluster, they work the same in 2.6 as before, except that a remote watch attempt before a node has joined will log a warning and be ignored; it must be done after the node has joined.
To optionally enable a watch without Akka Cluster, or across a boundary between Cluster and non-Cluster nodes, knowing the consequences, all watchers (cluster as well as remote) need to set
akka.remote.use-unsafe-remote-features-outside-cluster = on.
- An initial warning is logged on startup of the ActorSystem when these unsafe features are enabled.
- A warning will be logged on remote watch attempts, which you can suppress by setting akka.remote.warn-unsafe-watch-outside-cluster = off.
The Scheduler.schedule method has been deprecated in favor of selecting scheduleWithFixedDelay or scheduleAtFixedRate explicitly.
The Scheduler documentation describes the difference between fixed-delay and fixed-rate scheduling. If you are uncertain which one to use, you should pick scheduleWithFixedDelay.
The deprecated schedule method had the same semantics as scheduleAtFixedRate, but since that can result in bursts of scheduled tasks or messages after long garbage collection pauses, and in the worst case cause undesired load on the system, scheduleWithFixedDelay is often preferred.
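The same fixed-delay versus fixed-rate distinction exists in the JDK's ScheduledExecutorService, which can illustrate the two semantics without any Akka dependency. This is only an illustrative sketch; the class name SchedulingDemo and the 50 ms period are arbitrary choices:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SchedulingDemo {
    // Runs both scheduling styles for ~500 ms and returns the run counts.
    public static int[] runDemo() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        AtomicInteger fixedDelayRuns = new AtomicInteger();
        AtomicInteger fixedRateRuns = new AtomicInteger();

        // Fixed delay: the next execution is scheduled relative to the *end*
        // of the previous one, so a long pause never produces a burst.
        scheduler.scheduleWithFixedDelay(
            fixedDelayRuns::incrementAndGet, 0, 50, TimeUnit.MILLISECONDS);

        // Fixed rate: the executor keeps the average frequency, so executions
        // delayed by a pause are run in quick succession to catch up.
        scheduler.scheduleAtFixedRate(
            fixedRateRuns::incrementAndGet, 0, 50, TimeUnit.MILLISECONDS);

        Thread.sleep(500);
        scheduler.shutdown();
        return new int[] { fixedDelayRuns.get(), fixedRateRuns.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] runs = runDemo();
        System.out.println("fixed-delay runs: " + runs[0]);
        System.out.println("fixed-rate runs: " + runs[1]);
    }
}
```

Under normal load the two behave alike; the difference only shows up after a long pause, which is exactly why scheduleWithFixedDelay is the safer default.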
For the same reason the following methods have also been deprecated:
TimerScheduler.startPeriodicTimer, replaced by startTimerWithFixedDelay or startTimerAtFixedRate
FSM.setTimer, replaced by startSingleTimer, startTimerWithFixedDelay or startTimerAtFixedRate
PersistentFSM.setTimer, replaced by startSingleTimer, startTimerWithFixedDelay or startTimerAtFixedRate
To protect the Akka internals against starvation when user code blocks the default dispatcher (for example by accidental use of blocking APIs from actors) a new internal dispatcher has been added. All of Akka’s internal, non-blocking actors now run on the internal dispatcher by default.
The dispatcher can be configured through akka.actor.internal-dispatcher.
For maximum performance, you might want to use a single shared dispatcher for all non-blocking, asynchronous actors, user actors and Akka internal actors alike. In that case you can configure akka.actor.internal-dispatcher with a string value of akka.actor.default-dispatcher. This reinstates the behavior from previous Akka versions but also removes the isolation between user actors and Akka internals. So, use at your own risk!
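A minimal sketch of that alias configuration:

```hocon
# Make Akka internals share the default dispatcher.
# This removes the isolation between user and Akka internal actors;
# use at your own risk.
akka.actor.internal-dispatcher = akka.actor.default-dispatcher
```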
use-dispatcher configuration settings that previously accepted an empty value to fall back to the default dispatcher now have an explicit value of akka.actor.internal-dispatcher and no longer accept an empty string as value. If such an empty value is used in your application.conf, the same result is achieved by simply removing that entry completely and letting the default apply.
For more details about configuring dispatchers, see the Dispatchers documentation.
Previously the parallelism factor for the default dispatcher was set a bit high (3.0) to give some extra threads in case of accidental blocking and to protect somewhat against starving the internal actors. Since the internal actors are now on a separate dispatcher, the default dispatcher has been adjusted down to 1.0, which means the number of threads will be one per core, but at least 8 and at most 64. This can be tuned using the individual settings in akka.actor.default-dispatcher.fork-join-executor.
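For reference, a sketch of the relevant settings with the new defaults, as described above, spelled out explicitly:

```hocon
akka.actor.default-dispatcher.fork-join-executor {
  # threads = ceil(available cores * factor), bounded by min/max
  parallelism-factor = 1.0
  parallelism-min = 8
  parallelism-max = 64
}
```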
The akka.cluster.sharding.waiting-for-state-timeout has been reduced to speed up ShardCoordinator initialization in smaller clusters. The read from ddata is a ReadMajority. For small clusters (< majority-min-cap) every node needs to respond, so it is more likely to time out if nodes are restarting, for example during a rolling re-deploy.
akka.cluster.sharding.passivate-idle-entity-after is now enabled by default. Sharding will passivate entities when they have not received any messages within this duration. To disable passivation you can use configuration:
akka.cluster.sharding.passivate-idle-entity-after = off
It is always disabled if Remembering Entities is enabled.
A new field has been added to the response of a
ShardRegion.GetClusterShardingStats command for any shards per region that may have failed or not responded within the new configurable
akka.cluster.sharding.shard-region-query-timeout. This is described further in inspecting sharding state.
The default values of the configuration properties controlling the size of DeltaPropagation messages in Distributed Data have been reduced. The previous defaults sometimes resulted in messages exceeding the maximum payload size for remote actor messages.
The new configuration properties are:
akka.cluster.distributed-data.max-delta-elements = 500
akka.cluster.distributed-data.delta-crdt.max-delta-size = 50
No migration is needed but it is mentioned here because it is a change in behavior.
When ActorSystem.terminate() is called, CoordinatedShutdown will be run in Akka 2.6.x, which wasn't the case in 2.5.x. For example, if using Akka Cluster this means that the member will attempt to leave the cluster gracefully.
If this is not desired behavior, for example in tests, you can disable this feature with the following configuration and then it will behave as in Akka 2.5.x:
akka.coordinated-shutdown.run-by-actor-system-terminate = off
Previously, when the ActorSystem was shutting down and the Scheduler was closed, all outstanding scheduled tasks were run. This was needed for some internals in Akka but was surprising behavior for end users. Therefore this behavior has changed in Akka 2.6.x and outstanding tasks are not run when the system is terminated.
CoordinatedShutdown can be used for running such tasks when the system is shutting down.
StreamConverters.fromOutputStream now always fails the materialized value in case of failure. It is no longer required to check both the materialized value and the Try[Done] inside the IOResult. In case of an IO failure the exception will be IOOperationIncompleteException instead of AbruptIOTerminationException.
Additionally when downstream of the IO-sources cancels with a failure, the materialized value is failed with that failure rather than completed successfully.
Previously, Akka contained a shaded copy of the ForkJoinPool. In benchmarks, we could not find significant benefits of keeping our own copy, so from Akka 2.6 on, the default FJP from the JDK will be used. The Akka FJP copy was removed.
When the number of dead letters reached the configured akka.log-dead-letters value, no more dead letters were logged in Akka 2.5. In Akka 2.6 the count is reset after the configured akka.log-dead-letters-suspend-duration.
The akka.log-dead-letters-during-shutdown default configuration changed from on to off.
The default number of nodes that each node observes for failure detection has increased from 5 to 9. The reason is to have better coverage and unreachability information for downing decisions.
akka.cluster.monitored-by-nr-of-members = 9
expectNoMessage() without timeout parameter is now using a new configuration property
akka.test.expect-no-message-default (short timeout) instead of
remainingOrDefault (long timeout).
The materialized value for StreamRefs.sourceRef is no longer wrapped in a Future (CompletionStage in Java). It can be sent as a reply to sender() immediately, without using the pipe pattern. This change was possible because StreamRefs was marked as may change.
Needing a way to distinguish the new APIs in code and docs from the original, Akka used the naming convention untyped for the original APIs. All such references have now been changed to classic. Referring to the new APIs as typed is going away as they become the primary APIs.
The receptionist had a name clash with the default Cluster Client Receptionist at /system/receptionist and will now instead run under /system/localReceptionist or /system/clusterReceptionist.
The path change means that the receptionist information will not be disseminated between 2.5 and 2.6 nodes during a rolling update from 2.5 to 2.6 if you use Akka Typed. See rolling updates with typed Receptionist
In 2.5 the Cluster Receptionist was using the shared Distributed Data extension, but that could result in undesired configuration changes if the application was also using that extension and had changed its configuration.
In 2.6 the Cluster Receptionist is using its own independent instance of Distributed Data.
This means that the receptionist information will not be disseminated between 2.5 and 2.6 nodes during a rolling update from 2.5 to 2.6 if you use Akka Typed. See rolling updates with typed Cluster Receptionist
Akka Typed APIs are still marked as may change and a few changes were made before finalizing the APIs. Compared to Akka 2.5.x the source incompatible changes are:
Behaviors.intercept now takes a factory function for the interceptor.
- Factory method Entity.ofPersistentEntity is renamed to Entity.ofEventSourcedEntity in the Java API for Akka Cluster Sharding Typed.
- New abstract class EventSourcedEntityWithEnforcedReplies in the Java API for Akka Cluster Sharding Typed, and a corresponding factory method Entity.ofEventSourcedEntityWithEnforcedReplies, to ease the creation of EventSourcedBehavior with enforced replies.
- New method EventSourcedEntity.withEnforcedReplies added to the Scala API to ease the creation of EventSourcedBehavior with enforced replies.
- ActorSystem.scheduler previously gave access to the classic akka.actor.Scheduler but now returns a typed-specific akka.actor.typed.Scheduler. The schedule method has been replaced by scheduleWithFixedDelay and scheduleAtFixedRate. Actors that need to schedule tasks should prefer TimerScheduler.
- TimerScheduler.startPeriodicTimer, replaced by startTimerWithFixedDelay or startTimerAtFixedRate
- Routers.pool now takes a factory function rather than a Behavior to protect against accidentally sharing the same behavior instance and state across routees.
- The request parameter in Distributed Data commands was removed in favor of using ask.
- Behavior factories such as Behavior.ignore were removed since they were redundant with the corresponding scaladsl.Behaviors.x and javadsl.Behaviors.x factories.
- The ActorContext parameter was removed in javadsl.ReceiveBuilder for the functional style in Java. Use Behaviors.setup to get the ActorContext, and use an enclosing class to hold initialization parameters and the ActorContext.
- Java EntityRef ask timeout now takes a java.time.Duration rather than a Timeout.
- Changed method signature for EventAdapter.fromJournal and support for a ClassTag parameter (probably source compatible).
BehaviorInterceptor is replaced with this
Behavior.orElse has been removed because it wasn't safe together with narrow.
StashBuffers are now created with Behaviors.withStash rather than being instantiated directly.
- To align with the Akka Typed style guide, SpawnProtocol is now created through SpawnProtocol.create(). The special Spawn message factories have been removed and the top level of the actor protocol is now SpawnProtocol.Command.
toUntyped has been renamed to toClassic.
- Akka Typed is now using SLF4J as the logging API. The logger provided by the ActorContext is an org.slf4j.Logger. MDC has been changed to only support String values.
ActorContext has been renamed to
PartialFunctionhas been replaced in the Java API with a variant more suitable to be called by Java.
- Factories for creating a materializer from an akka.actor.typed.ActorSystem have been removed. A stream can be run with an akka.actor.typed.ActorSystem in implicit scope, or passed as a parameter, and therefore the need for creating a materializer has been reduced.
A default materializer is now provided out of the box. For the Java API just pass
system when running streams, for Scala an implicit materializer is provided if there is an implicit
ActorSystem available. This avoids leaking materializers and simplifies most stream use cases somewhat.
The ActorMaterializer factories have been deprecated and replaced with a few corresponding factories in akka.stream.Materializer. New factories with per-materializer settings have not been provided; such settings should instead be applied globally through config or per stream, see below for more details.
Having a default materializer available means that most, if not all, usages of Java
ActorMaterializer.create() and Scala
implicit val materializer = ActorMaterializer() should be removed.
Details about the stream materializer can be found in Actor Materializer Lifecycle
When using streams from typed the same factories and methods for creating materializers and running streams as from classic can now be used with typed. The
akka.stream.typed.javadsl.ActorMaterializerFactory that previously existed in the
akka-stream-typed module has been removed.
The ActorMaterializerSettings class has been deprecated.
All materializer settings are available as configuration to change the system default or through attributes that can be used for individual streams when they are materialized.
| Materializer setting | Corresponding attribute | Setting |
| --- | --- | --- |
| autoFusing | no longer used (since 2.5.0) | n/a |
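For example, the input buffer sizes previously set via ActorMaterializerSettings can now be changed globally through configuration (the values here are illustrative, not recommendations):

```hocon
akka.stream.materializer {
  initial-input-buffer-size = 4
  max-input-buffer-size = 16
}
```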
Setting attributes on individual streams can be done like so:
```java
RunnableGraph<CompletionStage<Done>> stream =
    Source.range(1, 10)
        .map(Object::toString)
        .toMat(Sink.foreach(System.out::println), Keep.right())
        .withAttributes(
            Attributes.inputBuffer(4, 4)
                .and(ActorAttributes.dispatcher("my-stream-dispatcher"))
                .and(TcpAttributes.tcpWriteBufferSize(2048)));
stream.run(system);
```
Previously, when an Akka Streams stage or operator failed, it was impossible to discern this from the stage just cancelling. This has been improved so that when a stream stage fails the cause is propagated upstream.
The following operators have a slight change in behavior because of this:
- StreamConverters.fromInputStream will fail the materialized future with an IOOperationIncompleteException when downstream fails
- .watchTermination will fail the materialized CompletionStage rather than completing it when downstream fails
- SourceRef will cancel with a failure when the receiving node is downed
This also means that custom GraphStage implementations should be changed to pass on the cancellation cause when downstream cancels, by implementing the OutHandler.onDownstreamFinish signature that takes a cause parameter and calling cancelStage(cause) to pass the cause upstream. The old zero-argument onDownstreamFinish method has been deprecated.