Class ClusterSharding
- java.lang.Object
  - akka.cluster.sharding.typed.javadsl.ClusterSharding
public abstract class ClusterSharding extends java.lang.Object
This extension provides sharding functionality of actors in a cluster. The typical use case is when you have many stateful actors that together consume more resources (e.g. memory) than fit on one machine. You need to distribute them across several nodes in the cluster and you want to be able to interact with them using their logical identifier, but without having to care about their physical location in the cluster, which might also change over time. It could for example be actors representing Aggregate Roots in Domain-Driven Design terminology. Here we call these actors "entities". These actors typically have persistent (durable) state, but this feature is not limited to actors with persistent state.

In this context sharding means that actors with an identifier, so called entities, can be automatically distributed across multiple nodes in the cluster. Each entity actor runs only at one place, and messages can be sent to the entity without requiring the sender to know the location of the destination actor. This is achieved by sending the messages via a ShardRegion actor provided by this extension, which knows how to route the message with the entity id to the final destination.

This extension is used by first registering the supported entity types with the init(akka.cluster.sharding.typed.javadsl.Entity<M, E>) method, typically at system startup on each node in the cluster. init returns the ShardRegion actor reference for a named entity type. Messages to the entities are always sent via that ActorRef, i.e. the local ShardRegion. Messages can also be sent via the EntityRef retrieved with entityRefFor(akka.cluster.sharding.typed.javadsl.EntityTypeKey<M>, java.lang.String), which will also send via the local ShardRegion.
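As a sketch of that flow, assuming a hypothetical Counter entity behavior with an Increment command (neither Counter nor Increment is part of this API), registration and message sending could look like this:

import akka.actor.typed.ActorRef;
import akka.actor.typed.ActorSystem;
import akka.cluster.sharding.typed.ShardingEnvelope;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;
import akka.cluster.sharding.typed.javadsl.Entity;
import akka.cluster.sharding.typed.javadsl.EntityRef;
import akka.cluster.sharding.typed.javadsl.EntityTypeKey;

public class CounterShardingInit {

  // Hypothetical entity type key for a Counter behavior defined elsewhere.
  public static final EntityTypeKey<Counter.Command> TYPE_KEY =
      EntityTypeKey.create(Counter.Command.class, "Counter");

  public static void initSharding(ActorSystem<?> system) {
    ClusterSharding sharding = ClusterSharding.get(system);

    // Register the entity type; returns the ActorRef of the local ShardRegion.
    ActorRef<ShardingEnvelope<Counter.Command>> shardRegion =
        sharding.init(Entity.of(TYPE_KEY, ctx -> Counter.create(ctx.getEntityId())));

    // Send via the region using the default envelope ...
    shardRegion.tell(new ShardingEnvelope<>("counter-1", new Counter.Increment()));

    // ... or via an EntityRef for a specific entity id.
    EntityRef<Counter.Command> counterOne = sharding.entityRefFor(TYPE_KEY, "counter-1");
    counterOne.tell(new Counter.Increment());
  }
}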
Some settings can be configured as described in the akka.cluster.sharding section of the reference.conf.
The ShardRegion actor is started on each node in the cluster, or group of nodes tagged with a specific role. The ShardRegion is created with a ShardingMessageExtractor to extract the entity identifier and the shard identifier from incoming messages. A shard is a group of entities that will be managed together. For the first message in a specific shard the ShardRegion requests the location of the shard from a central coordinator, the ShardCoordinator. The ShardCoordinator decides which ShardRegion owns the shard. The ShardRegion receives the decided home of the shard and if that is the ShardRegion instance itself it will create a local child actor representing the entity and direct all messages for that entity to it. If the shard home is another ShardRegion instance messages will be forwarded to that ShardRegion instance instead. While resolving the location of a shard incoming messages for that shard are buffered and later delivered when the shard location is known. Subsequent messages to the resolved shard can be delivered to the target destination immediately without involving the ShardCoordinator.
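For illustration, a custom ShardingMessageExtractor could look like the following sketch. It assumes the hypothetical Counter.Command type from the earlier example, a getEntityId() accessor on the command (not part of this API), and a fixed number of 100 shards; how entity and shard ids are derived is application specific. Such an extractor can typically be attached when registering the entity type (via Entity's withMessageExtractor).

import akka.cluster.sharding.typed.ShardingMessageExtractor;

// Sketch: the entity id is embedded in the message itself, so no envelope is
// used and unwrapMessage simply returns the message unchanged.
public class CounterMessageExtractor
    extends ShardingMessageExtractor<Counter.Command, Counter.Command> {

  private static final int NUMBER_OF_SHARDS = 100;

  @Override
  public String entityId(Counter.Command message) {
    return message.getEntityId(); // hypothetical accessor on the command
  }

  @Override
  public String shardId(String entityId) {
    // stable mapping from entity id to one of a fixed number of shards
    return String.valueOf(Math.floorMod(entityId.hashCode(), NUMBER_OF_SHARDS));
  }

  @Override
  public Counter.Command unwrapMessage(Counter.Command message) {
    return message;
  }
}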
To make sure that at most one instance of a specific entity actor is running somewhere in the cluster it is important that all nodes have the same view of where the shards are located. Therefore the shard allocation decisions are taken by the central ShardCoordinator, which is running as a cluster singleton, i.e. one instance on the oldest member among all cluster nodes or a group of nodes tagged with a specific role. The oldest member can be determined by Member.isOlderThan(akka.cluster.Member).
To be able to use newly added members in the cluster the coordinator facilitates rebalancing of shards, i.e. migrating entities from one node to another. In the rebalance process the coordinator first notifies all ShardRegion actors that a handoff for a shard has started. That means they will start buffering incoming messages for that shard, in the same way as if the shard location is unknown. During the rebalance process the coordinator will not answer any requests for the location of shards that are being rebalanced, i.e. local buffering will continue until the handoff is completed. The ShardRegion responsible for the rebalanced shard will stop all entities in that shard by sending the handOffMessage to them. When all entities have been terminated the ShardRegion owning the entities will acknowledge the handoff as completed to the coordinator. Thereafter the coordinator will reply to requests for the location of the shard, thereby allocating a new home for the shard, and the buffered messages in the ShardRegion actors are then delivered to the new location. This means that the state of the entities is not transferred or migrated. If the state of the entities is of importance it should be persistent (durable), e.g. with akka-persistence, so that it can be recovered at the new location.

The logic that decides which shards to rebalance is defined in a pluggable shard allocation strategy. The default implementation LeastShardAllocationStrategy picks shards for handoff from the ShardRegion with the most previously allocated shards. They will then be allocated to the ShardRegion with the fewest previously allocated shards, i.e. new members in the cluster. This strategy can be replaced by an application specific implementation.
The state of shard locations in the ShardCoordinator is stored with akka-distributed-data or akka-persistence to survive failures. When a crashed or unreachable coordinator node has been removed (via down) from the cluster a new ShardCoordinator singleton actor will take over and the state is recovered. During such a failure period shards with known location are still available, while messages for new (unknown) shards are buffered until the new ShardCoordinator becomes available.

As long as a sender uses the same ShardRegion actor to deliver messages to an entity actor the order of the messages is preserved. As long as the buffer limit is not reached messages are delivered on a best effort basis, with at-most-once delivery semantics, in the same way as ordinary message sending. Reliable end-to-end messaging, with at-least-once semantics, can be added by using AtLeastOnceDelivery in akka-persistence.

Some additional latency is introduced for messages targeted to new or previously unused shards due to the round-trip to the coordinator. Rebalancing of shards may also add latency. This should be considered when designing the application specific shard resolution, e.g. to avoid too fine-grained shards.
The ShardRegion actor can also be started in proxy only mode, i.e. it will not host any entities itself, but knows how to delegate messages to the right location.

If the state of the entities is persistent you may stop entities that are not used to reduce memory consumption. This is done by the application specific implementation of the entity actors, for example by defining a receive timeout (context.setReceiveTimeout). If a message is already enqueued to the entity when it stops itself the enqueued message in the mailbox will be dropped. To support graceful passivation without losing such messages the entity actor can send ClusterSharding#Passivate to the ActorRef[ShardCommand] that was passed in to the factory method when creating the entity. The specified stopMessage message will be sent back to the entity, which is then supposed to stop itself. Incoming messages will be buffered by the ShardRegion between reception of Passivate and termination of the entity. Such buffered messages are thereafter delivered to a new incarnation of the entity.

This class is not intended for user extension other than for test purposes (e.g. stub implementation). More methods may be added in the future and that may break such implementations.
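As a sketch of this passivation flow, assuming the hypothetical Counter behavior from above with Idle and GoodByeCounter messages (GoodByeCounter being registered as the stopMessage via Entity's withStopMessage), the entity could look like this:

import java.time.Duration;
import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.Behaviors;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;
import akka.cluster.sharding.typed.javadsl.EntityContext;

public class CounterPassivation {

  // Sketch of an entity behavior that asks to be passivated after 30 seconds
  // of inactivity. Counter.Command, Counter.Idle and Counter.GoodByeCounter
  // are hypothetical message types.
  public static Behavior<Counter.Command> create(EntityContext<Counter.Command> entityContext) {
    return Behaviors.setup(
        ctx -> {
          ctx.setReceiveTimeout(Duration.ofSeconds(30), new Counter.Idle());
          return Behaviors.receive(Counter.Command.class)
              .onMessage(
                  Counter.Idle.class,
                  idle -> {
                    // request passivation from the shard that hosts this entity
                    entityContext.getShard().tell(new ClusterSharding.Passivate<>(ctx.getSelf()));
                    return Behaviors.same();
                  })
              .onMessage(
                  Counter.GoodByeCounter.class,
                  stop -> Behaviors.stopped()) // the stopMessage sent back by sharding
              .build();
        });
  }
}

Registration would then use something like Entity.of(TYPE_KEY, CounterPassivation::create).withStopMessage(new Counter.GoodByeCounter()).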
-
-
Nested Class Summary
- static class ClusterSharding.Passivate<M>
  The entity can request passivation by sending the ClusterSharding.Passivate message to the ActorRef[ShardCommand] that was passed in to the factory method when creating the entity.
- static class ClusterSharding.Passivate$
- static interface ClusterSharding.ShardCommand
  When an entity is created an ActorRef[ShardCommand] is passed to the factory method.
-
Constructor Summary
- ClusterSharding()
-
Method Summary
- abstract ShardCoordinator.ShardAllocationStrategy defaultShardAllocationStrategy(ClusterShardingSettings settings)
  The default ShardAllocationStrategy is configured by least-shard-allocation-strategy properties.
- abstract <M> EntityRef<M> entityRefFor(EntityTypeKey<M> typeKey, java.lang.String entityId)
  Create an ActorRef-like reference to a specific sharded entity.
- abstract <M> EntityRef<M> entityRefFor(EntityTypeKey<M> typeKey, java.lang.String entityId, java.lang.String dataCenter)
  Deprecated. Use Akka Distributed Cluster instead.
- static ClusterSharding get(ActorSystem<?> system)
- abstract <M,E> ActorRef<E> init(Entity<M,E> entity)
  Initialize sharding for the given entity factory settings.
- abstract ActorRef<ClusterSharding.ShardCommand> shard(EntityTypeKey<?> typeKey)
  Access to the ActorRef to send ShardCommand for a given entity type.
- abstract ActorRef<ClusterShardingQuery> shardState()
  Actor for querying Cluster Sharding state
-
Method Detail
-
get
public static ClusterSharding get(ActorSystem<?> system)
-
init
public abstract <M,E> ActorRef<E> init(Entity<M,E> entity)
Initialize sharding for the given entity factory settings.

It will start a shard region or a proxy depending on whether the settings require a role and whether this node has that role.
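For example, restricting an entity type to nodes with a given role (a sketch reusing the hypothetical Counter behavior and TYPE_KEY from the class description) means that nodes without that role end up with a proxy instead of a hosting shard region:

import akka.actor.typed.ActorSystem;
import akka.cluster.sharding.typed.ClusterShardingSettings;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;
import akka.cluster.sharding.typed.javadsl.Entity;

// Sketch: only nodes with the "counters" role host Counter entities; calling
// init on nodes without that role starts the ShardRegion in proxy-only mode.
public static void initCountersOnRole(ActorSystem<?> system) {
  ClusterShardingSettings settings = ClusterShardingSettings.create(system).withRole("counters");
  ClusterSharding.get(system)
      .init(
          Entity.of(TYPE_KEY, ctx -> Counter.create(ctx.getEntityId()))
              .withSettings(settings));
}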
-
entityRefFor
public abstract <M> EntityRef<M> entityRefFor(EntityTypeKey<M> typeKey, java.lang.String entityId)
Create an ActorRef-like reference to a specific sharded entity.

You have to correctly specify the type of messages the target can handle via the typeKey.

Messages sent through this EntityRef will be wrapped in a ShardingEnvelope including the entityId provided here.

This can only be used if the default ShardingEnvelope is used; when using custom envelopes or entity ids embedded in the message you will need to use the ActorRef<E> returned by sharding init for messaging with the sharded actors.

For in-depth documentation of its semantics, see EntityRef.
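A small sketch of request-response through such a reference, assuming a hypothetical Counter.GetValue command carrying a replyTo ActorRef and a Counter.Value reply, with TYPE_KEY as in the earlier examples:

import java.time.Duration;
import java.util.concurrent.CompletionStage;
import akka.actor.typed.ActorSystem;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;
import akka.cluster.sharding.typed.javadsl.EntityRef;

// Sketch: ask a specific entity through its EntityRef.
public static CompletionStage<Counter.Value> currentValue(ActorSystem<?> system, String counterId) {
  EntityRef<Counter.Command> counter =
      ClusterSharding.get(system).entityRefFor(TYPE_KEY, counterId);
  return counter.ask(replyTo -> new Counter.GetValue(replyTo), Duration.ofSeconds(3));
}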
-
entityRefFor
public abstract <M> EntityRef<M> entityRefFor(EntityTypeKey<M> typeKey, java.lang.String entityId, java.lang.String dataCenter)
Deprecated. Use Akka Distributed Cluster instead. Since 2.10.0.

Create an ActorRef-like reference to a specific sharded entity running in another data center.

You have to correctly specify the type of messages the target can handle via the typeKey.

Messages sent through this EntityRef will be wrapped in a ShardingEnvelope including the provided entityId.

This can only be used if the default ShardingEnvelope is used; when using custom envelopes or entity ids embedded in the message you will need to use the ActorRef<E> returned by sharding init for messaging with the sharded actors.

For in-depth documentation of its semantics, see EntityRef.
-
shardState
public abstract ActorRef<ClusterShardingQuery> shardState()
Actor for querying Cluster Sharding state
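For example, the shards and entities currently hosted by the local region can be asked for with a GetShardRegionState query; a sketch, reusing the hypothetical TYPE_KEY from the earlier examples:

import java.time.Duration;
import java.util.concurrent.CompletionStage;
import akka.actor.typed.ActorSystem;
import akka.actor.typed.javadsl.AskPattern;
import akka.cluster.sharding.ShardRegion.CurrentShardRegionState;
import akka.cluster.sharding.typed.GetShardRegionState;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;

// Sketch: query which shards and entities the local ShardRegion currently hosts.
public static CompletionStage<CurrentShardRegionState> regionState(ActorSystem<?> system) {
  return AskPattern.ask(
      ClusterSharding.get(system).shardState(),
      replyTo -> new GetShardRegionState(TYPE_KEY, replyTo),
      Duration.ofSeconds(3),
      system.scheduler());
}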
-
shard
public abstract ActorRef<ClusterSharding.ShardCommand> shard(EntityTypeKey<?> typeKey)
Access to the ActorRef to send ShardCommand for a given entity type. For example ClusterSharding.Passivate can be sent to this ActorRef. Note that this ActorRef is also available in the EntityContext. The entity type must first be initialized with the ClusterSharding.init method.
-
defaultShardAllocationStrategy
public abstract ShardCoordinator.ShardAllocationStrategy defaultShardAllocationStrategy(ClusterShardingSettings settings)
The default ShardAllocationStrategy is configured by least-shard-allocation-strategy properties.
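As a sketch, the default strategy could be obtained and passed explicitly when registering an entity type (equivalent to what happens by default; an application specific ShardAllocationStrategy could be passed in the same place). TYPE_KEY and Counter are the hypothetical examples used above:

import akka.actor.typed.ActorSystem;
import akka.cluster.sharding.ShardCoordinator;
import akka.cluster.sharding.typed.ClusterShardingSettings;
import akka.cluster.sharding.typed.javadsl.ClusterSharding;
import akka.cluster.sharding.typed.javadsl.Entity;

// Sketch: pass the default least-shard allocation strategy explicitly.
public static void initWithExplicitStrategy(ActorSystem<?> system) {
  ClusterSharding sharding = ClusterSharding.get(system);
  ClusterShardingSettings settings = ClusterShardingSettings.create(system);
  ShardCoordinator.ShardAllocationStrategy strategy =
      sharding.defaultShardAllocationStrategy(settings);

  sharding.init(
      Entity.of(TYPE_KEY, ctx -> Counter.create(ctx.getEntityId()))
          .withAllocationStrategy(strategy));
}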