final class GrpcReadJournal extends ReadJournal with EventsBySliceQuery with EventTimestampQuery with LoadEventQuery with CanTriggerReplay
- Source
 - GrpcReadJournal.scala

Linear Supertypes
 - CanTriggerReplay
 - LoadEventQuery
 - EventTimestampQuery
 - EventsBySliceQuery
 - ReadJournal
 - AnyRef
 - Any

Instance Constructors
-  new GrpcReadJournal(system: ExtendedActorSystem, config: Config, cfgPath: String)
 
Value Members
-   final  def !=(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
 -   final  def ##: Int
- Definition Classes
 - AnyRef → Any
 
 -   final  def ==(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
 -   final  def asInstanceOf[T0]: T0
- Definition Classes
 - Any
 
 -    def clone(): AnyRef
- Attributes
 - protected[lang]
 - Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
 
 -    def close(): Future[Done]
Close the gRPC client. It will be automatically closed when the ActorSystem is terminated, so invoking this is only needed when the resource must be closed before that. After closing, the GrpcReadJournal instance cannot be used again.
 -  lazy val consumerFilter: ConsumerFilter
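
Since the journal is closed automatically on ActorSystem termination, an explicit call is only needed for early shutdown. A minimal sketch (assuming an existing GrpcReadJournal instance):

```scala
import scala.concurrent.Future
import akka.Done
import akka.projection.grpc.consumer.scaladsl.GrpcReadJournal

// Close eagerly when this consumer no longer needs the stream;
// after this the instance must not be used again.
def shutdownJournal(readJournal: GrpcReadJournal): Future[Done] =
  readJournal.close()
```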
 -   final  def eq(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef
 
 -    def equals(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef → Any
 
 -    def eventsBySlices[Evt](streamId: String, minSlice: Int, maxSlice: Int, offset: Offset): Source[EventEnvelope[Evt], NotUsed]
Query events for given slices. A slice is deterministically defined based on the persistence id. The purpose is to evenly distribute all persistence ids over the slices.
The consumer can keep track of its current position in the event stream by storing the offset and restart the query from a given offset after a crash/restart. The supported offsets are TimestampOffset and Offset.noOffset.
The timestamp is based on the database transaction_timestamp() when the event was stored. transaction_timestamp() is the time when the transaction started, not when it was committed. This means that a "later" event may be visible first, and when retrieving events after the previously seen timestamp we may miss some events. In distributed SQL databases there can also be clock skews for the database timestamps. For that reason additional backtracking queries are performed to catch missed events. Events from backtracking will typically be duplicates of previously emitted events. It is the responsibility of the consumer to filter duplicates and make sure that events are processed in exact sequence number order for each persistence id. Such deduplication is provided by the R2DBC Projection.
Events emitted by the backtracking don't contain the event payload (EventBySliceEnvelope.event is None) and the consumer can load the full EventBySliceEnvelope with GrpcReadJournal.loadEnvelope.
The events will be emitted in timestamp order, with the caveat of duplicate events as described above. Events with the same timestamp are ordered by sequence number.
The stream is not completed when it reaches the end of the currently stored events, but it continues to push new events as they are persisted.
- Definition Classes
 - GrpcReadJournal → EventsBySliceQuery
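
A minimal sketch of consuming such a stream. It assumes a running typed ActorSystem already configured for the gRPC consumer; `MyEvent` and `eventSource` are hypothetical names standing in for the application's own event type and wiring:

```scala
import akka.NotUsed
import akka.actor.typed.ActorSystem
import akka.persistence.query.{ Offset, PersistenceQuery }
import akka.persistence.query.typed.EventEnvelope
import akka.projection.grpc.consumer.scaladsl.GrpcReadJournal
import akka.stream.scaladsl.Source

// Placeholder for the application's event type (hypothetical).
trait MyEvent

def eventSource(system: ActorSystem[_]): Source[EventEnvelope[MyEvent], NotUsed] = {
  val readJournal =
    PersistenceQuery(system).readJournalFor[GrpcReadJournal](GrpcReadJournal.Identifier)
  // Consume all slices (0 to 1023) from the beginning of the stream.
  // After a restart, a stored TimestampOffset would be passed instead of noOffset.
  readJournal.eventsBySlices[MyEvent](readJournal.streamId, 0, 1023, Offset.noOffset)
}
```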
 
 -   final  def getClass(): Class[_ <: AnyRef]
- Definition Classes
 - AnyRef → Any
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -    def hashCode(): Int
- Definition Classes
 - AnyRef → Any
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -   final  def isInstanceOf[T0]: Boolean
- Definition Classes
 - Any
 
 -    def loadEnvelope[Evt](persistenceId: String, sequenceNr: Long): Future[EventEnvelope[Evt]]
- Definition Classes
 - GrpcReadJournal → LoadEventQuery
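
As described for eventsBySlices, envelopes emitted by backtracking carry no payload; loadEnvelope retrieves the full envelope on demand. A sketch, with `MyEvent` and `withPayload` as hypothetical names:

```scala
import scala.concurrent.Future
import akka.persistence.query.typed.EventEnvelope
import akka.projection.grpc.consumer.scaladsl.GrpcReadJournal

// Placeholder for the application's event type (hypothetical).
trait MyEvent

// If the envelope came from backtracking its payload is empty; load the
// full envelope in that case, otherwise return it as-is.
def withPayload(
    readJournal: GrpcReadJournal,
    env: EventEnvelope[MyEvent]): Future[EventEnvelope[MyEvent]] =
  if (env.eventOption.isDefined) Future.successful(env)
  else readJournal.loadEnvelope[MyEvent](env.persistenceId, env.sequenceNr)
```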
 
 -   final  def ne(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef
 
 -   final  def notify(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -   final  def notifyAll(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -    val replayCorrelationId: UUID
Correlation id to be used with ConsumerFilter.ReplayWithFilter. Such a replay request will trigger replay in all eventsBySlices queries with the same streamId running from this instance of the GrpcReadJournal. Create separate instances of the GrpcReadJournal to have separation between replay requests for the same streamId.
 -    def sliceForPersistenceId(persistenceId: String): Int
- Definition Classes
 - GrpcReadJournal → EventsBySliceQuery
 
 -    def sliceRanges(numberOfRanges: Int): Seq[Range]
- Definition Classes
 - GrpcReadJournal → EventsBySliceQuery
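
sliceRanges divides the full slice range into equally sized sub-ranges, typically one per projection instance, so that each instance consumes a disjoint part of the stream. A sketch (assuming an existing GrpcReadJournal instance; `MyEvent` is a hypothetical event type):

```scala
import akka.persistence.query.Offset
import akka.projection.grpc.consumer.scaladsl.GrpcReadJournal

// Split the slices over four projection instances.
def rangesFor(readJournal: GrpcReadJournal): Seq[Range] =
  readJournal.sliceRanges(numberOfRanges = 4)

// Instance i would then run its own query over its sub-range, e.g.:
//   val r = rangesFor(readJournal)(i)
//   readJournal.eventsBySlices[MyEvent](readJournal.streamId, r.min, r.max, Offset.noOffset)
```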
 
 -    def streamId: String
The identifier of the stream to consume, which is exposed by the producing/publishing side. It is defined in the GrpcQuerySettings.
 -   final  def synchronized[T0](arg0: => T0): T0
- Definition Classes
 - AnyRef
 
 -    def timestampOf(persistenceId: String, sequenceNr: Long): Future[Option[Instant]]
- Definition Classes
 - GrpcReadJournal → EventTimestampQuery
 
 -    def toString(): String
- Definition Classes
 - AnyRef → Any
 
 -   final  def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.InterruptedException])
 
 -   final  def wait(arg0: Long): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.InterruptedException]) @native()
 
 -   final  def wait(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.InterruptedException])