public class StreamConverters
extends java.lang.Object

Converters for interacting with the blocking java.io streams APIs and Java 8 Streams.

| Constructor and Description |
| --- |
| `StreamConverters()` |
| Modifier and Type | Method and Description |
| --- | --- |
| `static Sink<ByteString,java.io.InputStream>` | `asInputStream(scala.concurrent.duration.FiniteDuration readTimeout)` Creates a Sink which when materialized will return an InputStream from which it is possible to read the values produced by the stream this Sink is attached to. |
| `static <T> Sink<T,java.util.stream.Stream<T>>` | `asJavaStream()` Creates a sink which materializes into a Java 8 Stream that can be run to trigger demand through the sink. |
| `static Source<ByteString,java.io.OutputStream>` | `asOutputStream(scala.concurrent.duration.FiniteDuration writeTimeout)` Creates a Source which when materialized will return an OutputStream through which it is possible to write ByteStrings to the stream this Source is attached to. |
| `static Source<ByteString,scala.concurrent.Future<IOResult>>` | `fromInputStream(scala.Function0<java.io.InputStream> in, int chunkSize)` Creates a Source from an InputStream created by the given function. |
| `static <T,S extends java.util.stream.BaseStream<T,S>> Source<T,NotUsed>` | `fromJavaStream(scala.Function0<java.util.stream.BaseStream<T,S>> stream)` Creates a source that wraps a Java 8 Stream. |
| `static Sink<ByteString,scala.concurrent.Future<IOResult>>` | `fromOutputStream(scala.Function0<java.io.OutputStream> out, boolean autoFlush)` Creates a Sink which writes incoming ByteStrings to an OutputStream created by the given function. |
| `static <T,R> Sink<T,scala.concurrent.Future<R>>` | `javaCollector(scala.Function0<java.util.stream.Collector<T,?,R>> collectorFactory)` Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations. |
| `static <T,R> Sink<T,scala.concurrent.Future<R>>` | `javaCollectorParallelUnordered(int parallelism, scala.Function0<java.util.stream.Collector<T,?,R>> collectorFactory)` Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations, performed in parallel. |
public static Source<ByteString,scala.concurrent.Future<IOResult>> fromInputStream(scala.Function0<java.io.InputStream> in, int chunkSize)

Creates a Source from an InputStream created by the given function. Emitted elements are chunkSize-sized ByteString elements, except the final element, which will be up to chunkSize in size.
You can configure the default dispatcher for this Source by changing the akka.stream.blocking-io-dispatcher setting, or set it for a given Source by using ActorAttributes.
It materializes a Future of IOResult containing the number of bytes read from the source file upon completion, and a possible exception if the IO operation was not completed successfully. The created InputStream will be closed when the Source is cancelled.
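A minimal usage sketch (the file name is hypothetical, and an implicit Materializer, or implicit ActorSystem in newer Akka versions, is assumed to be in scope):

```scala
import java.io.FileInputStream
import akka.stream.scaladsl.{ Sink, StreamConverters }

// Assumes an implicit Materializer is in scope and "data.bin" exists.
val totalBytes = StreamConverters
  .fromInputStream(() => new FileInputStream("data.bin"), chunkSize = 8192)
  .map(_.size)                  // each element is a ByteString of up to chunkSize bytes
  .runWith(Sink.fold(0)(_ + _)) // Future[Int]: total number of bytes read
```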
Parameters:
in - a function which creates the InputStream to read from
chunkSize - the size of each read operation, defaults to 8192

public static Source<ByteString,java.io.OutputStream> asOutputStream(scala.concurrent.duration.FiniteDuration writeTimeout)
Creates a Source which when materialized will return an OutputStream through which it is possible to write ByteStrings to the stream this Source is attached to.
This Source is intended for inter-operation with legacy APIs since it is inherently blocking.
You can configure the default dispatcher for this Source by changing the akka.stream.blocking-io-dispatcher setting, or set it for a given Source by using ActorAttributes.
The created OutputStream will be closed when the Source is cancelled, and closing the OutputStream will complete this Source.
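As a sketch, materializing both the OutputStream and the sink's completion (an implicit Materializer is assumed):

```scala
import akka.stream.scaladsl.{ Keep, Sink, StreamConverters }
import scala.concurrent.duration._

// Assumes an implicit Materializer is in scope.
val (os, done) = StreamConverters
  .asOutputStream(writeTimeout = 5.seconds)
  .toMat(Sink.foreach(bs => println(bs.utf8String)))(Keep.both)
  .run()

os.write("hello".getBytes("UTF-8")) // blocks for at most writeTimeout if there is no demand
os.close()                          // closing the OutputStream completes the Source
```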
Parameters:
writeTimeout - the max time the write operation on the materialized OutputStream should block, defaults to 5 seconds

public static Sink<ByteString,scala.concurrent.Future<IOResult>> fromOutputStream(scala.Function0<java.io.OutputStream> out, boolean autoFlush)
Creates a Sink which writes incoming ByteStrings to an OutputStream created by the given function.
Materializes a Future of IOResult that will be completed with the size of the file (in bytes) at the stream's completion, and a possible exception if the IO operation was not completed successfully.
You can configure the default dispatcher for this Sink by changing the akka.stream.blocking-io-dispatcher setting, or set it for a given Sink by using ActorAttributes.
If autoFlush is true, the OutputStream will be flushed whenever a byte array is written; defaults to false.
The OutputStream will be closed when the stream flowing into this Sink is completed. The Sink will cancel the stream when the OutputStream is no longer writable.
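A minimal sketch (the file name is hypothetical, and an implicit Materializer is assumed):

```scala
import java.io.FileOutputStream
import akka.stream.scaladsl.{ Source, StreamConverters }
import akka.util.ByteString

// Assumes an implicit Materializer is in scope; "out.bin" is a hypothetical file name.
val ioResult = Source(List(ByteString("hello "), ByteString("world")))
  .runWith(StreamConverters.fromOutputStream(() => new FileOutputStream("out.bin"), autoFlush = true))
// ioResult: Future[IOResult], completed with the number of bytes written
```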
Parameters:
out - a function which creates the OutputStream to write to
autoFlush - if true, the OutputStream is flushed after each written byte array; defaults to false

public static Sink<ByteString,java.io.InputStream> asInputStream(scala.concurrent.duration.FiniteDuration readTimeout)
Creates a Sink which when materialized will return an InputStream from which it is possible to read the values produced by the stream this Sink is attached to.
This Sink is intended for inter-operation with legacy APIs since it is inherently blocking.
You can configure the default dispatcher for this Sink by changing the akka.stream.blocking-io-dispatcher setting, or set it for a given Sink by using ActorAttributes.
The InputStream will be closed when the stream flowing into this Sink completes, and closing the InputStream will cancel this Sink.
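A minimal sketch of the blocking read side (an implicit Materializer is assumed):

```scala
import akka.stream.scaladsl.{ Source, StreamConverters }
import akka.util.ByteString
import scala.concurrent.duration._

// Assumes an implicit Materializer is in scope.
val inputStream = Source.single(ByteString("hello"))
  .runWith(StreamConverters.asInputStream(readTimeout = 3.seconds))

val buffer = new Array[Byte](5)
inputStream.read(buffer) // blocks up to readTimeout waiting for the next element
inputStream.close()      // closing the InputStream cancels the Sink
```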
Parameters:
readTimeout - the max time the read operation on the materialized InputStream should block

public static <T,R> Sink<T,scala.concurrent.Future<R>> javaCollector(scala.Function0<java.util.stream.Collector<T,?,R>> collectorFactory)
Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations. This allows usage of Java 8 stream transformations for reactive streams. The Collector will trigger demand downstream. Elements emitted through the stream will be accumulated into a mutable result container, optionally transformed into a final representation after all input elements have been processed. The Collector can also do reduction at the end. Reduction processing is performed sequentially.

Note that a flow can be materialized multiple times, so the function producing the Collector must be able to handle multiple invocations.
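As a sketch using a standard library Collector (an implicit Materializer is assumed):

```scala
import java.util.stream.Collectors
import akka.stream.scaladsl.{ Source, StreamConverters }

// Assumes an implicit Materializer is in scope.
val joined = Source(List("a", "b", "c"))
  .runWith(StreamConverters.javaCollector(() => Collectors.joining(",")))
// joined: Future[String], completed with "a,b,c"
```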
Parameters:
collectorFactory - a function which creates the Collector to use

public static <T,R> Sink<T,scala.concurrent.Future<R>> javaCollectorParallelUnordered(int parallelism, scala.Function0<java.util.stream.Collector<T,?,R>> collectorFactory)
Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations. This allows usage of Java 8 stream transformations for reactive streams. The Collector will trigger demand downstream. Elements emitted through the stream will be accumulated into a mutable result container, optionally transformed into a final representation after all input elements have been processed. The Collector can also do reduction at the end. Reduction processing is performed in parallel based on graph Balance.

Note that a flow can be materialized multiple times, so the function producing the Collector must be able to handle multiple invocations.
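A sketch of the parallel variant (an implicit Materializer is assumed; the curried call shape below reflects the scaladsl signature, which the flattened Javadoc above shows as a single parameter list):

```scala
import java.util.stream.Collectors
import akka.stream.scaladsl.{ Source, StreamConverters }

// Assumes an implicit Materializer is in scope. Accumulation runs in four
// parallel stages, so element order is not preserved; an order-insensitive
// Collector such as toSet is a natural fit.
val distinct = Source(1 to 100)
  .runWith(StreamConverters.javaCollectorParallelUnordered(4)(() => Collectors.toSet[Int]))
// distinct: Future[java.util.Set[Int]]
```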
Parameters:
parallelism - the number of parallel reduction stages to use
collectorFactory - a function which creates the Collector to use

public static <T> Sink<T,java.util.stream.Stream<T>> asJavaStream()
Creates a sink which materializes into a Java 8 Stream that can be run to trigger demand through the sink. Elements emitted through the stream will be available for reading through the Java 8 Stream.

The Java 8 Stream will be ended when the stream flowing into this Sink completes, and closing the Java Stream will cancel the inflow of this Sink.

The Java 8 Stream throws an exception in case the reactive stream failed.

Be aware that the Java Stream blocks the current thread while waiting on the next element from downstream. As it is interacting with a blocking API, the implementation runs on a separate dispatcher configured through akka.stream.blocking-io-dispatcher.
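A minimal sketch (an implicit Materializer is assumed):

```scala
import akka.stream.scaladsl.{ Source, StreamConverters }

// Assumes an implicit Materializer is in scope. Reading from the returned
// Java 8 Stream blocks the calling thread while it waits for elements.
val javaStream: java.util.stream.Stream[Int] = Source(1 to 3)
  .runWith(StreamConverters.asJavaStream())

javaStream.forEach(n => println(n)) // pulls 1, 2, 3, then the stream ends
```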
public static <T,S extends java.util.stream.BaseStream<T,S>> Source<T,NotUsed> fromJavaStream(scala.Function0<java.util.stream.BaseStream<T,S>> stream)

Creates a source that wraps a Java 8 Stream. The Source uses a stream iterator to get all its elements and sends them downstream on demand.

Example usage: Source.fromJavaStream(() ⇒ IntStream.rangeClosed(1, 10))
You can use Source.async to create asynchronous boundaries between a synchronous Java Stream and the rest of the flow.
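A runnable sketch of the example above (an implicit Materializer is assumed):

```scala
import java.util.stream.IntStream
import akka.stream.scaladsl.{ Sink, StreamConverters }

// Assumes an implicit Materializer is in scope. IntStream extends
// BaseStream[Integer, IntStream], so the element type is inferred as Integer.
val numbers = StreamConverters
  .fromJavaStream(() => IntStream.rangeClosed(1, 10))
  .runWith(Sink.seq) // Future[Seq[Integer]] containing 1 to 10
```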
Parameters:
stream - a function which creates the Java Stream to wrap