package sharding
Type Members
- class ClusterSharding extends Extension
- final class ClusterShardingHealthCheck extends () => Future[Boolean]
INTERNAL API (constructor)
- final class ClusterShardingHealthCheckSettings extends AnyRef
- trait ClusterShardingSerializable extends Serializable
Marker trait for remote messages and persistent events/snapshots with special serializer.
- final class ClusterShardingSettings extends NoSerializationVerificationNeeded
- class ConsistentHashingShardAllocationStrategy extends ActorSystemDependentAllocationStrategy with ClusterShardAllocationMixin
akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy that is using consistent hashing. This can be useful when shards with the same shard id for different entity types should be best effort colocated to the same nodes.
When adding or removing nodes it will rebalance according to the new consistent hashing, but that means that only a few shards will be rebalanced and others remain on the same location.
A good explanation of Consistent Hashing: https://tom-e-white.com/2007/11/consistent-hashing.html
Create a new instance of this for each entity type, i.e. a ConsistentHashingShardAllocationStrategy instance must not be shared between different entity types.
Not intended for public inheritance/implementation
- Annotations
- @DoNotInherit()
- final class JoinConfigCompatCheckSharding extends JoinConfigCompatChecker
INTERNAL API
- Annotations
- @InternalApi()
- class RemoveInternalClusterShardingData extends Actor with ActorLogging
- abstract class ShardCoordinator extends Actor with Timers
Singleton coordinator that decides where to allocate shards.
Deprecated Type Members
- class PersistentShardCoordinator extends ShardCoordinator with PersistentActor
Singleton coordinator that decides where to allocate shards.
Users can migrate to using ddata to store the coordinator state, and then either event sourcing or ddata to store the remembered entities.
- Annotations
- @deprecated
- Deprecated
(Since version 2.6.0) Use ddata mode, persistence mode is deprecated.
Value Members
- object ClusterSharding extends ExtensionId[ClusterSharding] with ExtensionIdProvider
This extension provides sharding functionality of actors in a cluster. The typical use case is when you have many stateful actors that together consume more resources (e.g. memory) than fit on one machine.
- Distribution: You need to distribute them across several nodes in the cluster
- Location Transparency: You need to interact with them using their logical identifier, without having to care about their physical location in the cluster, which can change over time.
Entities: It could for example be actors representing Aggregate Roots in Domain-Driven Design terminology. Here we call these actors "entities" which typically have persistent (durable) state, but this feature is not limited to persistent state actors.
Sharding: In this context sharding means that actors with an identifier, or entities, can be automatically distributed across multiple nodes in the cluster.
ShardRegion: Each entity actor runs only at one place, and messages can be sent to the entity without requiring the sender to know the location of the destination actor. This is achieved by sending the messages via a ShardRegion actor, provided by this extension. The ShardRegion knows the shard mappings and routes inbound messages to the entity with the entity id. Messages to the entities are always sent via the local ShardRegion. The ShardRegion actor is started on each node in the cluster, or group of nodes tagged with a specific role. The ShardRegion is created with two application specific functions to extract the entity identifier and the shard identifier from incoming messages.
Typical usage of this extension:
- At system startup on each cluster node by registering the supported entity types with the ClusterSharding#start method
- Retrieve the ShardRegion actor for a named entity type with ClusterSharding#shardRegion
Settings can be configured as described in the akka.cluster.sharding section of the reference.conf.
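A minimal sketch of this typical usage, assuming a hypothetical Counter entity actor, an illustrative EntityEnvelope message wrapper, and an arbitrary shard count (none of these names are part of the API):

```scala
import akka.actor.{ Actor, ActorRef, ActorSystem, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }

// Illustrative message wrapper carrying the entity id.
final case class EntityEnvelope(entityId: Long, payload: Any)

// Illustrative entity actor.
class Counter extends Actor {
  private var count = 0
  def receive: Receive = {
    case "increment" => count += 1
    case "get"       => sender() ! count
  }
}

object ShardingUsage extends App {
  val system = ActorSystem("ClusterSystem")

  // Application specific functions to extract the entity id and the shard id
  // from incoming messages.
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case EntityEnvelope(id, payload) => (id.toString, payload)
  }

  val numberOfShards = 100
  val extractShardId: ShardRegion.ExtractShardId = {
    case EntityEnvelope(id, _) => (math.abs(id) % numberOfShards).toString
  }

  // At system startup: register the supported entity type.
  val counterRegion: ActorRef = ClusterSharding(system).start(
    typeName = "Counter",
    entityProps = Props[Counter](),
    settings = ClusterShardingSettings(system),
    extractEntityId = extractEntityId,
    extractShardId = extractShardId)

  // Later: retrieve the ShardRegion for the entity type and send messages via it.
  val region: ActorRef = ClusterSharding(system).shardRegion("Counter")
  region ! EntityEnvelope(42L, "increment")
}
```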
Shard and ShardCoordinator: A shard is a group of entities that will be managed together. For the first message in a specific shard the ShardRegion requests the location of the shard from a central ShardCoordinator. The ShardCoordinator decides which ShardRegion owns the shard. The ShardRegion receives the decided home of the shard and if that is the ShardRegion instance itself it will create a local child actor representing the entity and direct all messages for that entity to it. If the shard home is another ShardRegion instance, messages will be forwarded to that ShardRegion instance instead. While resolving the location of a shard, incoming messages for that shard are buffered and later delivered when the shard location is known. Subsequent messages to the resolved shard can be delivered to the target destination immediately without involving the ShardCoordinator. To make sure that at most one instance of a specific entity actor is running somewhere in the cluster it is important that all nodes have the same view of where the shards are located. Therefore the shard allocation decisions are taken by the central ShardCoordinator, a cluster singleton, i.e. one instance on the oldest member among all cluster nodes or a group of nodes tagged with a specific role. The oldest member can be determined by akka.cluster.Member#isOlderThan.
Shard Rebalancing: To be able to use newly added members in the cluster the coordinator facilitates rebalancing of shards, migrating entities from one node to another. In the rebalance process the coordinator first notifies all ShardRegion actors that a handoff for a shard has begun. ShardRegion actors will start buffering incoming messages for that shard, as they do when the shard location is unknown. During the rebalance process the coordinator will not answer any requests for the location of shards that are being rebalanced, i.e. local buffering will continue until the handoff is complete. The ShardRegion responsible for the rebalanced shard will stop all entities in that shard by sending them a PoisonPill. When all entities have been terminated the ShardRegion owning the entities will acknowledge to the coordinator that the handoff has completed. Thereafter the coordinator will reply to requests for the location of the shard, allocate a new home for the shard, and then buffered messages in the ShardRegion actors are delivered to the new location. This means that the state of the entities is not transferred or migrated. If the state of the entities is of importance it should be persistent (durable), e.g. with akka-persistence, so that it can be recovered at the new location.
Shard Allocation: The logic deciding which shards to rebalance is defined in a pluggable shard allocation strategy. The default implementation LeastShardAllocationStrategy picks shards for handoff from the ShardRegion with the highest number of previously allocated shards. They will then be allocated to the ShardRegion with the lowest number of previously allocated shards, i.e. new members in the cluster. This strategy can be replaced by an application specific implementation, as sketched below.
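A rough sketch of an application specific strategy: the class name and its never-rebalance policy are made up for illustration, while the allocateShard and rebalance methods are those of the ShardCoordinator.ShardAllocationStrategy trait.

```scala
import scala.collection.immutable
import scala.concurrent.Future
import akka.actor.ActorRef
import akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy
import akka.cluster.sharding.ShardRegion.ShardId

// Illustrative strategy: allocate new shards to the region with the fewest
// shards, and never trigger rebalancing.
class NoRebalanceAllocationStrategy extends ShardAllocationStrategy {

  override def allocateShard(
      requester: ActorRef,
      shardId: ShardId,
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[ShardId]]): Future[ActorRef] = {
    val (regionWithFewestShards, _) =
      currentShardAllocations.minBy { case (_, shards) => shards.size }
    Future.successful(regionWithFewestShards)
  }

  override def rebalance(
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[ShardId]],
      rebalanceInProgress: Set[ShardId]): Future[Set[ShardId]] =
    Future.successful(Set.empty)
}
```

Such a strategy instance can then be plugged in via the overload of ClusterSharding#start that takes an allocation strategy.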
Recovery: The state of shard locations in the ShardCoordinator is stored with akka-distributed-data or akka-persistence to survive failures. When a crashed or unreachable coordinator node has been removed (via down) from the cluster a new ShardCoordinator singleton actor will take over and the state is recovered. During such a failure period shards with known location are still available, while messages for new (unknown) shards are buffered until the new ShardCoordinator becomes available.
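Which store is used for the coordinator state is selected via the akka.cluster.sharding.state-store-mode setting; a minimal sketch of overriding it programmatically:

```scala
import com.typesafe.config.ConfigFactory

object ShardingConfig {
  // Sketch: select how the ShardCoordinator state is stored. "ddata" is the
  // default; "persistence" is deprecated.
  val config = ConfigFactory
    .parseString("akka.cluster.sharding.state-store-mode = ddata")
    .withFallback(ConfigFactory.load())
}
```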
Delivery Semantics: As long as a sender uses the same ShardRegion actor to deliver messages to an entity actor the order of the messages is preserved. As long as the buffer limit is not reached, messages are delivered on a best effort basis, with at-most-once delivery semantics, in the same way as ordinary message sending. Reliable end-to-end messaging, with at-least-once semantics, can be added by using AtLeastOnceDelivery in akka-persistence.
Some additional latency is introduced for messages targeted to new or previously unused shards due to the round-trip to the coordinator. Rebalancing of shards may also add latency. This should be considered when designing the application specific shard resolution, e.g. to avoid too fine grained shards.
The ShardRegion actor can also be started in proxy only mode, i.e. it will not host any entities itself, but knows how to delegate messages to the right location.
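A sketch of starting such a proxy, assuming the hypothetical "Counter" type and extractor functions from the earlier sketch, and an illustrative "backend" role:

```scala
import akka.actor.{ ActorRef, ActorSystem }
import akka.cluster.sharding.{ ClusterSharding, ShardRegion }

object ProxyUsage {
  // Sketch: start a ShardRegion in proxy only mode. It hosts no entities, but
  // routes messages for the "Counter" entity type to the nodes with the given
  // role. The extractor functions are the same ones used when the type was started.
  def startCounterProxy(
      system: ActorSystem,
      extractEntityId: ShardRegion.ExtractEntityId,
      extractShardId: ShardRegion.ExtractShardId): ActorRef =
    ClusterSharding(system).startProxy(
      "Counter",        // typeName (hypothetical, from the earlier sketch)
      Some("backend"),  // role of the nodes hosting the entities (illustrative)
      extractEntityId,
      extractShardId)
}
```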
If the state of the entities is persistent you may stop entities that are not used to reduce memory consumption. This is done by the application specific implementation of the entity actors, for example by defining a receive timeout (context.setReceiveTimeout). If a message is already enqueued to the entity when it stops itself the enqueued message in the mailbox will be dropped. To support graceful passivation without losing such messages the entity actor can send ShardRegion.Passivate to its parent ShardRegion. The specified wrapped message in Passivate will be sent back to the entity, which is then supposed to stop itself. Incoming messages will be buffered by the ShardRegion between reception of Passivate and termination of the entity. Such buffered messages are thereafter delivered to a new incarnation of the entity.
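A minimal sketch of receive-timeout driven passivation, revisiting the hypothetical Counter entity from the earlier sketch (the two-minute timeout is illustrative):

```scala
import scala.concurrent.duration._
import akka.actor.{ Actor, PoisonPill, ReceiveTimeout }
import akka.cluster.sharding.ShardRegion

class Counter extends Actor {
  // Passivate after 2 minutes of inactivity (illustrative timeout).
  context.setReceiveTimeout(120.seconds)

  private var count = 0

  def receive: Receive = {
    case "increment" =>
      count += 1
    case ReceiveTimeout =>
      // Ask the parent ShardRegion to passivate this entity; the wrapped
      // PoisonPill is sent back and stops the actor, while the ShardRegion
      // buffers incoming messages until the actor has terminated.
      context.parent ! ShardRegion.Passivate(stopMessage = PoisonPill)
  }
}
```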
- object ClusterShardingSettings
- object ConsistentHashingShardAllocationStrategy
- object RemoveInternalClusterShardingData
Utility program that removes the internal data stored with Akka Persistence by the Cluster ShardCoordinator. The data contains the locations of the shards using Akka Persistence and it can safely be removed when restarting the whole Akka Cluster. Note that this is not application data.
Never use this program while there is a running Akka Cluster that is using Cluster Sharding. Stop all Cluster nodes before using this program.
It can be needed to remove the data if the Cluster ShardCoordinator cannot start up because of corrupt data, which may happen if accidentally two clusters were running at the same time, e.g. caused by using auto-down and there was a network partition.
Use this program as a standalone Java main program:
java -classpath <jar files, including akka-cluster-sharding> akka.cluster.sharding.RemoveInternalClusterShardingData -2.3 entityType1 entityType2 entityType3
The program is included in the akka-cluster-sharding jar file. It is easiest to run it with the same classpath and configuration as your ordinary application. It can be run from sbt or Maven in a similar way.
Specify the entity type names (same as you use in the start method of ClusterSharding) as program arguments.
If you specify -2.3 as the first program argument it will also try to remove data that was stored by Cluster Sharding in Akka 2.3.x using a different persistenceId.
- object ShardCoordinator
- object ShardRegion
- object ShardingFlightRecorder extends ExtensionId[ShardingFlightRecorder] with ExtensionIdProvider
INTERNAL API
- Annotations
- @InternalApi()
- object ShardingLogMarker
This is public with the purpose to document the used markers and properties of log events. No guarantee that it will remain binary compatible, but the marker names and properties are considered public API and will not be changed without notice.