object Consumer
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def atMostOnceSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]
  Convenience for "at-most once delivery" semantics. The offset of each message is committed to Kafka before the message is emitted downstream.
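  A minimal sketch of at-most-once consumption, written against the Scala DSL (akka.kafka.scaladsl.Consumer, which mirrors this API) and assuming Akka 2.6+; the broker address, group id, and topic name are placeholder assumptions:

  ```scala
  import akka.actor.ActorSystem
  import akka.kafka.{ConsumerSettings, Subscriptions}
  import akka.kafka.scaladsl.Consumer
  import akka.stream.scaladsl.Sink
  import org.apache.kafka.common.serialization.StringDeserializer

  implicit val system: ActorSystem = ActorSystem("consumer-example")

  // Placeholder connection settings; adjust for your deployment.
  val settings = ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("group1")

  // Each offset is committed before the record is emitted downstream,
  // so a failure mid-processing can skip messages but never redeliver them.
  Consumer
    .atMostOnceSource(settings, Subscriptions.topics("topic1"))
    .runWith(Sink.foreach(record => println(record.value)))
  ```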
- def clone(): AnyRef
  - Attributes: protected[java.lang]
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate() @throws( ... )
- def commitWithMetadataPartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, metadataFromRecord: Function[ConsumerRecord[K, V], String]): Source[Pair[TopicPartition, Source[CommittableMessage[K, V], NotUsed]], Control]
  The same as #plainPartitionedSource, but with support for committing offsets with metadata.
- def commitWithMetadataSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription, metadataFromRecord: Function[ConsumerRecord[K, V], String]): Source[CommittableMessage[K, V], Control]
  The commitWithMetadataSource makes it possible to add additional metadata (in the form of a string), derived from the record, when an offset is committed. This can be useful, for example, to store information about which node made the commit, when the commit was made, the timestamp of the record, etc.
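  For illustration, a hedged sketch of such a metadata function in the Scala DSL; `settings` is assumed to be a ConsumerSettings value configured elsewhere, and the metadata format here is purely hypothetical:

  ```scala
  import akka.kafka.Subscriptions
  import akka.kafka.scaladsl.Consumer
  import org.apache.kafka.clients.consumer.ConsumerRecord

  // Hypothetical metadata: the committing host and the record's timestamp.
  def metadataFromRecord(record: ConsumerRecord[String, String]): String =
    s"host=${java.net.InetAddress.getLocalHost.getHostName};recordTs=${record.timestamp}"

  val source =
    Consumer.commitWithMetadataSource(settings, Subscriptions.topics("topic1"), metadataFromRecord)
  ```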
- def committableExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription, groupId: String, commitTimeout: FiniteDuration): Source[CommittableMessage[K, V], Control]
  The same as #plainExternalSource, but with offset commit support.
- def committablePartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription): Source[Pair[TopicPartition, Source[CommittableMessage[K, V], NotUsed]], Control]
  The same as #plainPartitionedSource, but with offset commit support.
- def committableSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[CommittableMessage[K, V], Control]
  The committableSource makes it possible to commit offset positions to Kafka. This is useful when "at-least once delivery" is desired: each message will usually be delivered one time, but in failure cases it could be duplicated. If you commit the offset before processing the message you get "at-most once delivery" semantics; for that, there is #atMostOnceSource. Compared to auto-commit, this gives exact control over when a message is considered consumed. If you need to store offsets in anything other than Kafka, #plainSource should be used instead of this API.
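  The usual at-least-once pattern processes each message first and commits its offset afterwards. A sketch in the Scala DSL, assuming an implicit ActorSystem `system`, a `settings` value configured elsewhere, and a hypothetical `storeResult` function standing in for the business logic:

  ```scala
  import akka.kafka.{CommitterSettings, Subscriptions}
  import akka.kafka.scaladsl.{Committer, Consumer}
  import scala.concurrent.Future

  // Hypothetical business logic: persist the record's value somewhere.
  def storeResult(value: String): Future[Unit] = Future.successful(())

  // Process first, commit after: a crash before the commit means the
  // message is redelivered, never silently lost.
  Consumer
    .committableSource(settings, Subscriptions.topics("topic1"))
    .mapAsync(parallelism = 1) { msg =>
      storeResult(msg.record.value).map(_ => msg.committableOffset)(system.dispatcher)
    }
    .runWith(Committer.sink(CommitterSettings(system)))
  ```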
- def createDrainingControl[T](pair: Pair[Control, CompletionStage[T]]): DrainingControl[T]
  Combines a Control and a stream-completion signal into one materialized value, so that the stream can be stopped in a controlled way without losing commits.
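  A sketch of how the combined control is typically materialized, using the Scala DSL where the same combinator is available as DrainingControl.apply; `settings` is assumed to be configured elsewhere:

  ```scala
  import akka.Done
  import akka.kafka.Subscriptions
  import akka.kafka.scaladsl.Consumer
  import akka.kafka.scaladsl.Consumer.DrainingControl
  import akka.stream.scaladsl.{Keep, Sink}

  // Keep both the Control and the sink's completion signal, then fuse them.
  val control: DrainingControl[Done] =
    Consumer
      .plainSource(settings, Subscriptions.topics("topic1"))
      .toMat(Sink.ignore)(Keep.both)
      .mapMaterializedValue(DrainingControl.apply)
      .run()

  // Later: stop polling, drain in-flight messages, and complete the stream.
  control.drainAndShutdown()(system.dispatcher)
  ```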
- def createNoopControl(): Control
  An implementation of Control to be used as an empty value; all of its methods return a failed CompletionStage.
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def plainExternalSource[K, V](consumer: ActorRef, subscription: ManualSubscription): Source[ConsumerRecord[K, V], Control]
  Special source that can use an external KafkaAsyncConsumer. This is useful when you have many manually assigned topic-partitions and want to keep only one Kafka consumer.
- def plainPartitionedManualOffsetSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, getOffsetsOnAssign: Function[Set[TopicPartition], CompletionStage[Map[TopicPartition, Long]]], onRevoke: java.util.function.Consumer[Set[TopicPartition]]): Source[Pair[TopicPartition, Source[ConsumerRecord[K, V], NotUsed]], Control]
  The plainPartitionedManualOffsetSource is similar to #plainPartitionedSource, but allows the use of an offset store outside of Kafka while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function is called to retrieve the offset, followed by a seek to the correct position in the partition. The onRevoke function gives the consumer a chance to store any uncommitted offsets and to do any other cleanup that is required; it also gives the user access to the onPartitionsRevoked hook, which is useful for cleaning up any partition-specific resources used by the consumer.
- def plainPartitionedManualOffsetSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription, getOffsetsOnAssign: Function[Set[TopicPartition], CompletionStage[Map[TopicPartition, Long]]]): Source[Pair[TopicPartition, Source[ConsumerRecord[K, V], NotUsed]], Control]
  The plainPartitionedManualOffsetSource is similar to #plainPartitionedSource, but allows the use of an offset store outside of Kafka while retaining the automatic partition assignment. When a topic-partition is assigned to a consumer, the getOffsetsOnAssign function is called to retrieve the offset, followed by a seek to the correct position in the partition.
- def plainPartitionedSource[K, V](settings: ConsumerSettings[K, V], subscription: AutoSubscription): Source[Pair[TopicPartition, Source[ConsumerRecord[K, V], NotUsed]], Control]
  The plainPartitionedSource is a way to track the automatic partition assignment from Kafka. When a topic-partition is assigned to a consumer, this source emits a pair of the assigned topic-partition and a corresponding source; when a topic-partition is revoked, the corresponding source completes.
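  A sketch of per-partition processing in the Scala DSL, where each element is a (TopicPartition, Source) tuple; `settings` and the parallelism bound are assumptions:

  ```scala
  import akka.kafka.Subscriptions
  import akka.kafka.scaladsl.Consumer
  import akka.stream.scaladsl.Sink

  Consumer
    .plainPartitionedSource(settings, Subscriptions.topics("topic1"))
    .mapAsyncUnordered(parallelism = 8) { case (topicPartition, partitionSource) =>
      // Each inner source emits only this partition's records and
      // completes when the partition is revoked.
      partitionSource.runWith(Sink.foreach(r => println(s"$topicPartition: ${r.value}")))
    }
    .runWith(Sink.ignore)
  ```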
- def plainSource[K, V](settings: ConsumerSettings[K, V], subscription: Subscription): Source[ConsumerRecord[K, V], Control]
  The plainSource emits ConsumerRecord elements (as received from the underlying KafkaConsumer). It has no support for committing offsets to Kafka, so it can be used when the offset is stored externally or with auto-commit (note that auto-commit is disabled by default). The consumer application doesn't need to use Kafka's built-in offset storage; it can store offsets in a store of its own choosing. The primary use case for this is to let the application store both the offset and the results of the consumption in the same system, so that both are stored atomically. This is not always possible, but when it is, it makes the consumption fully atomic and gives "exactly once" semantics that are stronger than the "at-least once" semantics you get with Kafka's offset commit functionality.
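  A sketch of resuming from an externally stored offset, in the Scala DSL; `loadOffset` and `saveResultAndOffset` are hypothetical stand-ins for the application's own atomic store, and `settings` is assumed to be configured elsewhere:

  ```scala
  import akka.kafka.Subscriptions
  import akka.kafka.scaladsl.Consumer
  import akka.stream.scaladsl.Sink
  import org.apache.kafka.common.TopicPartition
  import scala.concurrent.Future

  // Hypothetical external offset store.
  def loadOffset(): Long = 0L
  def saveResultAndOffset(value: String, offset: Long): Future[Unit] =
    Future.successful(()) // in practice: one transaction for result + offset

  val partition = new TopicPartition("topic1", 0)

  Consumer
    .plainSource(settings, Subscriptions.assignmentWithOffset(partition, loadOffset()))
    .mapAsync(parallelism = 1)(record => saveResultAndOffset(record.value, record.offset))
    .runWith(Sink.ignore)
  ```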
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @throws( ... )
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )