Class StreamConverters$


  • public class StreamConverters$
    extends java.lang.Object
    Converters for interacting with the blocking java.io streams APIs and Java 8 Streams.
    • Field Summary

      Fields 
      Modifier and Type Field Description
      static StreamConverters$ MODULE$
      Static reference to the singleton instance of this Scala object.
    • Method Summary

      Methods 
      Modifier and Type Method Description
      Sink<ByteString, java.io.InputStream> asInputStream​(scala.concurrent.duration.FiniteDuration readTimeout)
      Creates a Sink which when materialized will return an InputStream from which the values produced by the stream this Sink is attached to can be read.
      scala.concurrent.duration.FiniteDuration asInputStream$default$1()
      <T> Sink<T, java.util.stream.Stream<T>> asJavaStream()
      Creates a sink which materializes into a Java 8 Stream that can be run to trigger demand through the sink.
      Source<ByteString, java.io.OutputStream> asOutputStream​(scala.concurrent.duration.FiniteDuration writeTimeout)
      Creates a Source which when materialized will return an OutputStream; ByteStrings written to it are emitted by the stream this Source is attached to.
      scala.concurrent.duration.FiniteDuration asOutputStream$default$1()
      Source<ByteString, scala.concurrent.Future<IOResult>> fromInputStream​(scala.Function0<java.io.InputStream> in, int chunkSize)
      Creates a Source from an InputStream created by the given function.
      int fromInputStream$default$2()
      <T, S extends java.util.stream.BaseStream<T, S>> Source<T, NotUsed> fromJavaStream​(scala.Function0<java.util.stream.BaseStream<T, S>> stream)
      Creates a source that wraps a Java 8 Stream.
      Sink<ByteString, scala.concurrent.Future<IOResult>> fromOutputStream​(scala.Function0<java.io.OutputStream> out, boolean autoFlush)
      Creates a Sink which writes incoming ByteStrings to an OutputStream created by the given function.
      boolean fromOutputStream$default$2()
      <T, R> Sink<T, scala.concurrent.Future<R>> javaCollector​(scala.Function0<java.util.stream.Collector<T, ?, R>> collectorFactory)
      Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations.
      <T, R> Sink<T, scala.concurrent.Future<R>> javaCollectorParallelUnordered​(int parallelism, scala.Function0<java.util.stream.Collector<T, ?, R>> collectorFactory)
      Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations.
      • Methods inherited from class java.lang.Object

        clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
    • Field Detail

      • MODULE$

        public static final StreamConverters$ MODULE$
        Static reference to the singleton instance of this Scala object.
    • Constructor Detail

      • StreamConverters$

        public StreamConverters$()
    • Method Detail

      • fromInputStream

        public Source<ByteString, scala.concurrent.Future<IOResult>> fromInputStream​(scala.Function0<java.io.InputStream> in, int chunkSize)
        Creates a Source from an InputStream created by the given function. Emitted elements are ByteString chunks of up to chunkSize bytes. The actual size of the emitted elements depends on how much data the underlying InputStream returns on each read invocation; chunks will never be larger than chunkSize.

        You can configure the default dispatcher for this Source by changing the akka.stream.materializer.blocking-io-dispatcher setting, or set it for a given Source by using ActorAttributes.

        It materializes a Future of IOResult containing the number of bytes read upon completion, and a possible exception if the IO operation was not completed successfully. Note that bytes having been read by the source does not guarantee that the bytes were seen by downstream stages.

        The created InputStream will be closed when the Source is cancelled.

        Parameters:
        in - a function which creates the InputStream to read from
        chunkSize - the size of each read operation, defaults to 8192
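
        A minimal sketch in the Scala DSL (the ActorSystem setup and the file path are illustrative, not part of this API):

          import java.io.FileInputStream
          import scala.concurrent.Future
          import akka.actor.ActorSystem
          import akka.stream.IOResult
          import akka.stream.scaladsl.{ Sink, StreamConverters }

          implicit val system: ActorSystem = ActorSystem("from-input-stream-example")

          // Read the file through the blocking InputStream API as a stream of ByteStrings
          // and ignore the data; the materialized Future[IOResult] reports the bytes read.
          val bytesRead: Future[IOResult] =
            StreamConverters
              .fromInputStream(() => new FileInputStream("/tmp/input.dat"), chunkSize = 8192)
              .to(Sink.ignore)
              .run()
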
      • fromInputStream$default$2

        public int fromInputStream$default$2()
      • asOutputStream

        public Source<ByteString, java.io.OutputStream> asOutputStream​(scala.concurrent.duration.FiniteDuration writeTimeout)
        Creates a Source which when materialized will return an OutputStream; bytes written to that OutputStream are emitted as ByteStrings by the stream this Source is attached to.

        This Source is intended for inter-operation with legacy APIs since it is inherently blocking.

        You can configure the internal buffer size by using ActorAttributes.

        The created OutputStream will be closed when the Source is cancelled, and closing the OutputStream will complete this Source.

        Parameters:
        writeTimeout - the max time the write operation on the materialized OutputStream should block, defaults to 5 seconds
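
        A minimal sketch in the Scala DSL (the ActorSystem setup and the printing Sink are illustrative):

          import java.io.OutputStream
          import scala.concurrent.duration._
          import akka.actor.ActorSystem
          import akka.stream.scaladsl.{ Keep, Sink, StreamConverters }
          import akka.util.ByteString

          implicit val system: ActorSystem = ActorSystem("as-output-stream-example")

          // Materialize the OutputStream together with the downstream Sink's completion.
          val (os: OutputStream, done) =
            StreamConverters
              .asOutputStream(writeTimeout = 5.seconds)
              .toMat(Sink.foreach[ByteString](bytes => println(s"received ${bytes.size} bytes")))(Keep.both)
              .run()

          os.write("hello".getBytes("UTF-8"))
          os.close() // closing the OutputStream completes the Source
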
      • asOutputStream$default$1

        public scala.concurrent.duration.FiniteDuration asOutputStream$default$1()
      • fromOutputStream

        public Sink<ByteString, scala.concurrent.Future<IOResult>> fromOutputStream​(scala.Function0<java.io.OutputStream> out, boolean autoFlush)
        Creates a Sink which writes incoming ByteStrings to an OutputStream created by the given function.

        Materializes a Future of IOResult that will be completed with the number of bytes written at the stream's completion, and a possible exception if the IO operation was not completed successfully.

        You can configure the default dispatcher for this Sink by changing the akka.stream.materializer.blocking-io-dispatcher setting, or set it for a given Sink by using ActorAttributes. If autoFlush is true, the OutputStream is flushed whenever a byte array is written; it defaults to false.

        The OutputStream will be closed when the stream flowing into this Sink is completed. The Sink will cancel the stream when the OutputStream is no longer writable.
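
        A minimal sketch in the Scala DSL (the ActorSystem setup and the file path are illustrative):

          import java.io.FileOutputStream
          import scala.concurrent.Future
          import akka.actor.ActorSystem
          import akka.stream.IOResult
          import akka.stream.scaladsl.{ Source, StreamConverters }
          import akka.util.ByteString

          implicit val system: ActorSystem = ActorSystem("from-output-stream-example")

          // Write two ByteStrings to a file through the blocking OutputStream API, flushing
          // after each write; the materialized Future[IOResult] reports the bytes written.
          val written: Future[IOResult] =
            Source(List(ByteString("line 1\n"), ByteString("line 2\n")))
              .runWith(StreamConverters.fromOutputStream(() => new FileOutputStream("/tmp/out.txt"), autoFlush = true))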

      • fromOutputStream$default$2

        public boolean fromOutputStream$default$2()
      • asInputStream

        public Sink<ByteString, java.io.InputStream> asInputStream​(scala.concurrent.duration.FiniteDuration readTimeout)
        Creates a Sink which when materialized will return an InputStream from which it is possible to read the values produced by the stream this Sink is attached to.

        This Sink is intended for inter-operation with legacy APIs since it is inherently blocking.

        You can configure the internal buffer size by using ActorAttributes.

        The InputStream will be closed when the stream flowing into this Sink completes, and closing the InputStream will cancel this Sink.

        Parameters:
        readTimeout - the max time the read operation on the materialized InputStream should block
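
        A minimal sketch in the Scala DSL (the ActorSystem setup and the upstream elements are illustrative):

          import java.io.InputStream
          import scala.concurrent.duration._
          import akka.actor.ActorSystem
          import akka.stream.scaladsl.{ Source, StreamConverters }
          import akka.util.ByteString

          implicit val system: ActorSystem = ActorSystem("as-input-stream-example")

          val is: InputStream =
            Source(List(ByteString("hello "), ByteString("world")))
              .runWith(StreamConverters.asInputStream(readTimeout = 5.seconds))

          val buffer = new Array[Byte](16)
          val read = is.read(buffer) // blocks until data is available or readTimeout expires
          is.close()                 // closing the InputStream cancels this Sink
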
      • asInputStream$default$1

        public scala.concurrent.duration.FiniteDuration asInputStream$default$1()
      • javaCollector

        public <T, R> Sink<T, scala.concurrent.Future<R>> javaCollector​(scala.Function0<java.util.stream.Collector<T, ?, R>> collectorFactory)
        Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations. This allows usage of Java 8 stream transformations for reactive streams. The Collector will trigger demand downstream. Elements emitted through the stream will be accumulated into a mutable result container, optionally transformed into a final representation after all input elements have been processed. The Collector can also do reduction at the end. Reduction processing is performed sequentially.

        Note that a flow can be materialized multiple times, so the function producing the Collector must be able to handle multiple invocations.
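
        A minimal sketch in the Scala DSL (the ActorSystem setup and the chosen Collector are illustrative):

          import java.util.stream.Collectors
          import scala.concurrent.Future
          import akka.actor.ActorSystem
          import akka.stream.scaladsl.{ Source, StreamConverters }

          implicit val system: ActorSystem = ActorSystem("java-collector-example")

          // Accumulate the stream's elements into a java.util.List via a Java 8 Collector.
          val result: Future[java.util.List[Int]] =
            Source(1 to 5)
              .runWith(StreamConverters.javaCollector(() => Collectors.toList[Int]()))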

      • javaCollectorParallelUnordered

        public <T, R> Sink<T, scala.concurrent.Future<R>> javaCollectorParallelUnordered​(int parallelism, scala.Function0<java.util.stream.Collector<T, ?, R>> collectorFactory)
        Creates a sink which materializes into a Future which will be completed with the result of the Java 8 Collector transformation and reduction operations. This allows usage of Java 8 stream transformations for reactive streams. The Collector will trigger demand downstream. Elements emitted through the stream will be accumulated into a mutable result container, optionally transformed into a final representation after all input elements have been processed. The Collector can also do reduction at the end. Reduction processing is performed in parallel, based on the graph Balance stage.

        Note that a flow can be materialized multiple times, so the function producing the Collector must be able to handle multiple invocations.
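
        A sketch in the Scala DSL, assuming the Scala API takes the parallelism and the collector factory as two separate (curried) parameter lists; the flattened signature above comes from the Javadoc rendering, and the ActorSystem setup and chosen Collector are illustrative:

          import java.util.stream.Collectors
          import scala.concurrent.Future
          import akka.actor.ActorSystem
          import akka.stream.scaladsl.{ Source, StreamConverters }

          implicit val system: ActorSystem = ActorSystem("java-collector-parallel-example")

          // Accumulate into a java.util.Set using up to 4 collector containers in parallel;
          // the order in which elements reach the containers is not guaranteed.
          val result: Future[java.util.Set[Int]] =
            Source(1 to 100)
              .runWith(StreamConverters.javaCollectorParallelUnordered(4)(() => Collectors.toSet[Int]()))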

      • asJavaStream

        public <T> Sink<T, java.util.stream.Stream<T>> asJavaStream()
        Creates a sink which materializes into a Java 8 Stream that can be run to trigger demand through the sink. Elements emitted through the stream will be available for reading through the Java 8 Stream.

        The Java 8 Stream will be ended when the stream flowing into this Sink completes, and closing the Java Stream will cancel the inflow of this Sink.

        If the Java 8 Stream throws an exception, the Akka stream is cancelled.

        Be aware that the Java Stream blocks the current thread while waiting on the next element from downstream. As it interacts with a blocking API, the implementation runs on a separate dispatcher configured through akka.stream.blocking-io-dispatcher.
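
        A minimal sketch in the Scala DSL (the ActorSystem setup and the element values are illustrative):

          import akka.actor.ActorSystem
          import akka.stream.scaladsl.{ Source, StreamConverters }

          implicit val system: ActorSystem = ActorSystem("as-java-stream-example")

          // Materialize the Sink into a Java 8 Stream; running a terminal operation on it
          // pulls elements through the Sink and blocks the calling thread while doing so.
          val javaStream: java.util.stream.Stream[String] =
            Source(List("a", "b", "c")).runWith(StreamConverters.asJavaStream[String]())

          val count: Long = javaStream.count() // terminal operation; drives demand through the sink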

      • fromJavaStream

        public <T, S extends java.util.stream.BaseStream<T, S>> Source<T, NotUsed> fromJavaStream​(scala.Function0<java.util.stream.BaseStream<T, S>> stream)
        Creates a source that wraps a Java 8 Stream. The Source uses the stream's iterator to get all elements and sends them downstream on demand.

        Example usage: StreamConverters.fromJavaStream(() => IntStream.rangeClosed(1, 10))

        You can use Source.async to create asynchronous boundaries between the synchronous Java Stream and the rest of the flow.
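
        A slightly fuller sketch of the same example usage in the Scala DSL, run against a downstream consumer (the ActorSystem setup is illustrative):

          import java.util.stream.IntStream
          import akka.actor.ActorSystem
          import akka.stream.scaladsl.StreamConverters

          implicit val system: ActorSystem = ActorSystem("from-java-stream-example")

          // Wrap the IntStream as a Source; elements are pulled from the stream's iterator on demand.
          StreamConverters
            .fromJavaStream(() => IntStream.rangeClosed(1, 10))
            .runForeach(n => println(n))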