public interface AsyncWriteProxy extends AsyncWriteJournal, Stash, ActorLogging
A journal that delegates actual storage to a target actor. For testing only.
Modifier and Type | Interface and Description |
---|---|
static class | AsyncWriteProxy.InitTimeout$ |
static class | AsyncWriteProxy.SetStore |
static class | AsyncWriteProxy.SetStore$ |
Nested classes/interfaces inherited from interface AsyncWriteJournal: AsyncWriteJournal.Desequenced, AsyncWriteJournal.Desequenced$, AsyncWriteJournal.Resequencer
Nested classes/interfaces inherited from interface Actor: Actor.emptyBehavior$, Actor.ignoringBehavior$
Modifier and Type | Method and Description |
---|---|
void | aroundPreStart() Can be overridden to intercept calls to preStart. |
void | aroundReceive(scala.PartialFunction<java.lang.Object,scala.runtime.BoxedUnit> receive, java.lang.Object msg) INTERNAL API. |
scala.concurrent.Future<scala.runtime.BoxedUnit> | asyncDeleteMessagesTo(java.lang.String persistenceId, long toSequenceNr) Plugin API: asynchronously deletes all persistent messages up to toSequenceNr (inclusive). |
scala.concurrent.Future<java.lang.Object> | asyncReadHighestSequenceNr(java.lang.String persistenceId, long fromSequenceNr) Plugin API: asynchronously reads the highest stored sequence number for the given persistenceId. |
scala.concurrent.Future<scala.runtime.BoxedUnit> | asyncReplayMessages(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr, long max, scala.Function1<PersistentRepr,scala.runtime.BoxedUnit> replayCallback) Plugin API: asynchronously replays persistent messages. |
scala.concurrent.Future<scala.collection.immutable.Seq<scala.util.Try<scala.runtime.BoxedUnit>>> | asyncWriteMessages(scala.collection.immutable.Seq<AtomicWrite> messages) Plugin API: asynchronously writes a batch (Seq) of persistent messages to the journal. |
scala.Option<ActorRef> | store() |
Timeout | timeout() |
Methods inherited from interface AsyncWriteJournal: isReplayFilterEnabled, receive, receivePluginInternal, receiveWriteJournal
Methods inherited from interface WriteJournalBase: adaptFromJournal, adaptToJournal, persistence, preparePersistentBatch
Methods inherited from interface UnrestrictedStash: postStop, preRestart
Methods inherited from interface Actor: aroundPostRestart, aroundPostStop, aroundPreRestart, context, postRestart, preStart, self, sender, supervisorStrategy, unhandled
Methods inherited from interface StashSupport: actorCell, clearStash, context, enqueueFirst, mailbox, prepend, self, stash, unstash, unstashAll
Methods inherited from interface ActorLogging: log
scala.Option<ActorRef> store()
void aroundPreStart()
Description copied from interface: Actor
Can be overridden to intercept calls to preStart. Calls preStart by default.
Specified by: aroundPreStart in interface Actor
void aroundReceive(scala.PartialFunction<java.lang.Object,scala.runtime.BoxedUnit> receive, java.lang.Object msg)
Description copied from interface: Actor
INTERNAL API.
Can be overridden to intercept calls to this actor's current behavior.
Specified by: aroundReceive in interface Actor
Parameters:
receive - current behavior.
msg - current message.

Timeout timeout()
scala.concurrent.Future<scala.collection.immutable.Seq<scala.util.Try<scala.runtime.BoxedUnit>>> asyncWriteMessages(scala.collection.immutable.Seq<AtomicWrite> messages)
Description copied from interface: AsyncWriteJournal
Plugin API: asynchronously writes a batch (Seq) of persistent messages to the journal.
The batch is only for performance reasons, i.e. all messages don't have to be written
atomically. Higher throughput can typically be achieved by using batch inserts of many
records compared to inserting records one-by-one, but this aspect depends on the
underlying data store and a journal implementation can implement it as efficiently as
possible. Journals should aim to persist events in order for a given persistenceId, as
otherwise, in case of a failure, the persistent state may end up being inconsistent.
Each AtomicWrite message contains the single PersistentRepr that corresponds to the
event that was passed to the persist method of the PersistentActor, or it contains
several PersistentRepr that correspond to the events that were passed to the
persistAll method of the PersistentActor. All PersistentRepr of the AtomicWrite must
be written to the data store atomically, i.e. all or none must be stored. If the
journal (data store) cannot support atomic writes of multiple events it should reject
such writes with a Try Failure with an UnsupportedOperationException describing the
issue. This limitation should also be documented by the journal plugin.
If there are failures when storing any of the messages in the batch, the returned
Future must be completed with failure. The Future must only be completed with success
when all messages in the batch have been confirmed to be stored successfully, i.e.
they will be readable, and visible, in a subsequent replay. If there is uncertainty
about whether the messages were stored or not, the Future must be completed with
failure.
Data store connection problems must be signaled by completing the Future with failure.
The journal can also signal that it rejects individual messages (AtomicWrite) by the
returned immutable.Seq[Try[Unit]]. It is possible but not mandatory to reduce the
number of allocations by returning Future.successful(Nil) for the happy path, i.e.
when no messages are rejected. Otherwise the returned Seq must have as many elements
as the input messages Seq. Each Try element signals whether the corresponding
AtomicWrite is rejected or not, with an exception describing the problem. Rejecting a
message means it was not stored, i.e. it must not be included in a later replay.
Rejecting a message is typically done before attempting to store it, e.g. because of
a serialization error.
Data store connection problems must not be signaled as rejections.
Calls to this method are serialized by the enclosing journal actor. If you spawn work
in asynchronous tasks it is alright that they complete the futures in any order, but
the actual writes for a specific persistenceId should be serialized to avoid issues
such as events of a later write becoming visible to consumers (query side, or replay)
before the events of an earlier write are visible. A PersistentActor will not send a
new WriteMessages request before the previous one has been completed.
Please note that the sender field of the contained PersistentRepr objects has been
nulled out (i.e. set to ActorRef.noSender) in order to not use space in the journal
for a sender reference that will likely be obsolete during replay.
Please also note that requests for the highest sequence number may be made
concurrently to this call executing for the same persistenceId, in particular it is
possible that a restarting actor tries to recover before its outstanding writes have
completed. In the latter case it is highly desirable to defer reading the highest
sequence number until all outstanding writes have completed, otherwise the
PersistentActor may reuse sequence numbers.
This call is protected with a circuit-breaker.
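The rejection-versus-failure semantics above can be modeled without Akka. The following stdlib-only sketch is hypothetical (ModelJournal, AtomicWriteModel, and replayed are invented names, not the Akka API): Optional.empty() stands in for a Try success per AtomicWrite, an empty result list is the allocation-free happy path, and a multi-event write that cannot be stored atomically is rejected with an UnsupportedOperationException before anything is written.

```java
import java.util.*;
import java.util.concurrent.CompletableFuture;

// Hypothetical model of the asyncWriteMessages contract; NOT the real Akka API.
class ModelJournal {
    // Stand-in for AtomicWrite: all events belong to one persistenceId.
    record AtomicWriteModel(String persistenceId, List<String> events) {}

    private final Map<String, List<String>> store = new HashMap<>();
    private final boolean supportsAtomicBatches;

    ModelJournal(boolean supportsAtomicBatches) {
        this.supportsAtomicBatches = supportsAtomicBatches;
    }

    // One result slot per AtomicWrite: Optional.empty() = stored,
    // Optional.of(e) = rejected (and therefore not stored).
    // A failed future would instead signal a data-store problem for the whole batch.
    CompletableFuture<List<Optional<Exception>>> asyncWriteMessages(List<AtomicWriteModel> messages) {
        List<Optional<Exception>> results = new ArrayList<>();
        boolean anyRejected = false;
        for (AtomicWriteModel w : messages) {
            if (w.events().size() > 1 && !supportsAtomicBatches) {
                // Reject before storing: all-or-none cannot be guaranteed here.
                results.add(Optional.of(new UnsupportedOperationException(
                        "atomic batches are not supported by this store")));
                anyRejected = true;
            } else {
                store.computeIfAbsent(w.persistenceId(), k -> new ArrayList<>())
                     .addAll(w.events());
                results.add(Optional.empty());
            }
        }
        // Happy path: an empty Seq stands for "nothing rejected" (fewer allocations).
        return CompletableFuture.completedFuture(
                anyRejected ? results : List.<Optional<Exception>>of());
    }

    List<String> replayed(String persistenceId) {
        return store.getOrDefault(persistenceId, List.of());
    }
}
```

Note the design point this mirrors: a rejected AtomicWrite leaves the store untouched, so it can never appear in a later replay.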
Specified by: asyncWriteMessages in interface AsyncWriteJournal
Parameters:
messages - (undocumented)

scala.concurrent.Future<scala.runtime.BoxedUnit> asyncDeleteMessagesTo(java.lang.String persistenceId, long toSequenceNr)
Description copied from interface: AsyncWriteJournal
Plugin API: asynchronously deletes all persistent messages up to toSequenceNr
(inclusive).
This call is protected with a circuit-breaker. Message deletion doesn't affect the
highest sequence number of messages; the journal must maintain the highest sequence
number and never decrease it.
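The invariant above (deletion never lowers the highest sequence number) can be sketched with a hypothetical stdlib-only model; DeletionModel and its methods are invented names, not the Akka API:

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical sketch (not Akka API): deletion removes events, but the highest
// sequence number is tracked separately and must never decrease.
class DeletionModel {
    private final ConcurrentSkipListMap<Long, String> events = new ConcurrentSkipListMap<>();
    private long highestSequenceNr = 0L;

    void write(long sequenceNr, String event) {
        events.put(sequenceNr, event);
        highestSequenceNr = Math.max(highestSequenceNr, sequenceNr);
    }

    // Mirrors asyncDeleteMessagesTo: drops all events up to toSequenceNr (inclusive).
    void deleteMessagesTo(long toSequenceNr) {
        events.headMap(toSequenceNr, true).clear();
        // Deliberately leaves highestSequenceNr untouched.
    }

    long readHighestSequenceNr() { return highestSequenceNr; }

    int storedCount() { return events.size(); }
}
```

Keeping the high-water mark outside the event store is what lets a recovering actor continue numbering from where it left off even after all its events were deleted.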
Specified by: asyncDeleteMessagesTo in interface AsyncWriteJournal
Parameters:
persistenceId - (undocumented)
toSequenceNr - (undocumented)

scala.concurrent.Future<scala.runtime.BoxedUnit> asyncReplayMessages(java.lang.String persistenceId, long fromSequenceNr, long toSequenceNr, long max, scala.Function1<PersistentRepr,scala.runtime.BoxedUnit> replayCallback)
Description copied from interface: AsyncRecovery
Plugin API: asynchronously replays persistent messages. Implementations replay a
message by calling replayCallback. The returned future must be completed when all
messages (matching the sequence number bounds) have been replayed. The future must be
completed with a failure if any of the persistent messages could not be replayed.
The replayCallback must also be called with messages that have been marked as
deleted. In this case a replayed message's deleted method must return true.
The toSequenceNr is the lowest of what was returned by
AsyncRecovery.asyncReadHighestSequenceNr(java.lang.String, long) and what the user
specified as the Recovery parameter. This does imply that this call is always
preceded by reading the highest sequence number for the given persistenceId.
This call is NOT protected with a circuit-breaker because it may take long time to replay all events. The plugin implementation itself must protect against an unresponsive backend store and make sure that the returned Future is completed with success or failure within reasonable time. It is not allowed to ignore completing the future.
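The replay contract (inclusive bounds, a max cap, and deleted messages still delivered) can be sketched with a hypothetical stdlib-only model; ReplayModel and Repr are invented names, not the Akka API:

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical sketch (not Akka API) of the replay contract: inclusive sequence
// number bounds, at most `max` messages, deleted messages delivered as well.
class ReplayModel {
    // Stand-in for PersistentRepr, including its deleted flag.
    record Repr(long sequenceNr, String payload, boolean deleted) {}

    private final List<Repr> log = new ArrayList<>();

    void append(Repr r) { log.add(r); }

    void replayMessages(long fromSequenceNr, long toSequenceNr, long max,
                        Consumer<Repr> replayCallback) {
        long replayed = 0;
        for (Repr r : log) {            // log is already in sequence-number order
            if (replayed >= max) break; // honor the max cap
            if (r.sequenceNr() >= fromSequenceNr && r.sequenceNr() <= toSequenceNr) {
                replayCallback.accept(r); // deleted messages are delivered too
                replayed++;
            }
        }
    }
}
```

The callback-per-message shape mirrors why the real call may run for a long time and is therefore not behind the circuit-breaker.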
Specified by: asyncReplayMessages in interface AsyncRecovery
Parameters:
persistenceId - persistent actor id.
fromSequenceNr - sequence number where replay should start (inclusive).
toSequenceNr - sequence number where replay should end (inclusive).
max - maximum number of messages to be replayed.
replayCallback - called to replay a single message. Can be called from any thread.
See Also: AsyncWriteJournal
scala.concurrent.Future<java.lang.Object> asyncReadHighestSequenceNr(java.lang.String persistenceId, long fromSequenceNr)
Description copied from interface: AsyncRecovery
Plugin API: asynchronously reads the highest stored sequence number for the given
persistenceId. The persistent actor will use the highest sequence number after
recovery as the starting point when persisting new events.
This sequence number is also used as toSequenceNr in a subsequent call to
AsyncRecovery.asyncReplayMessages(java.lang.String, long, long, long, scala.Function1<akka.persistence.PersistentRepr, scala.runtime.BoxedUnit>)
unless the user has specified a lower toSequenceNr.
Journal must maintain the highest sequence number and never decrease it.
This call is protected with a circuit-breaker.
Please also note that requests for the highest sequence number may be made
concurrently to writes executing for the same persistenceId, in particular it is
possible that a restarting actor tries to recover before its outstanding writes have
completed.
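The role of fromSequenceNr as a search hint can be sketched with a hypothetical stdlib-only model; HighestSeqNrModel and persisted are invented names, not the Akka API:

```java
import java.util.*;

// Hypothetical sketch (not Akka API): fromSequenceNr is only a hint for where
// to start searching, e.g. the snapshot's sequence number, or 0L with no snapshot.
class HighestSeqNrModel {
    private final NavigableSet<Long> sequenceNrs = new TreeSet<>();

    void persisted(long sequenceNr) { sequenceNrs.add(sequenceNr); }

    long readHighestSequenceNr(long fromSequenceNr) {
        // Scan only from the hint upward; in a consistent store the hint never
        // exceeds the true highest, so the tail's maximum is the answer.
        NavigableSet<Long> tail = sequenceNrs.tailSet(fromSequenceNr, true);
        return tail.isEmpty() ? fromSequenceNr : tail.last();
    }
}
```

Starting the scan at the snapshot's sequence number is purely an optimization; the result must be the same as scanning from 0L.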
Specified by: asyncReadHighestSequenceNr in interface AsyncRecovery
Parameters:
persistenceId - persistent actor id.
fromSequenceNr - hint where to start searching for the highest sequence number. When
a persistent actor is recovering, this fromSequenceNr will be the sequence number of
the used snapshot, or 0L if no snapshot is used.