File

The File connectors provide additional connectors for filesystems, complementing the sources and sinks for files already included in core Akka Streams (see the Akka Streams File IO documentation).

Project Info: Alpakka File
Artifact: com.lightbend.akka : akka-stream-alpakka-file : 2.0.2
JDK versions: Adopt OpenJDK 8, Adopt OpenJDK 11
Scala versions: 2.12.11, 2.11.12, 2.13.3
JPMS module name: akka.stream.alpakka.file
Readiness level: Since 0.1, 2016-11-16
Home page: https://doc.akka.io/docs/alpakka/current
Release notes: In the documentation
Issues: Github issues
Sources: https://github.com/akka/alpakka

Artifacts

sbt
val AkkaVersion = "2.5.31"
libraryDependencies ++= Seq(
  "com.lightbend.akka" %% "akka-stream-alpakka-file" % "2.0.2",
  "com.typesafe.akka" %% "akka-stream" % AkkaVersion
)
Maven
<properties>
  <akka.version>2.5.31</akka.version>
  <scala.binary.version>2.12</scala.binary.version>
</properties>
<dependency>
  <groupId>com.lightbend.akka</groupId>
  <artifactId>akka-stream-alpakka-file_${scala.binary.version}</artifactId>
  <version>2.0.2</version>
</dependency>
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-stream_${scala.binary.version}</artifactId>
  <version>${akka.version}</version>
</dependency>
Gradle
versions += [
  AkkaVersion: "2.5.31",
  ScalaBinary: "2.12"
]
dependencies {
  compile group: 'com.lightbend.akka', name: "akka-stream-alpakka-file_${versions.ScalaBinary}", version: '2.0.2'
  compile group: 'com.typesafe.akka', name: "akka-stream_${versions.ScalaBinary}", version: versions.AkkaVersion
}

The table below shows the direct dependencies of this module and the tree below it lists all libraries it depends on transitively.

Direct dependencies
Organization         Artifact            Version
com.typesafe.akka    akka-stream_2.12    2.5.31
org.scala-lang       scala-library       2.12.11
Dependency tree
com.typesafe.akka    akka-stream_2.12    2.5.31
    com.typesafe.akka    akka-actor_2.12    2.5.31
        com.typesafe    config    1.3.3
        org.scala-lang.modules    scala-java8-compat_2.12    0.8.0
            org.scala-lang    scala-library    2.12.11
        org.scala-lang    scala-library    2.12.11
    com.typesafe.akka    akka-protobuf_2.12    2.5.31
        org.scala-lang    scala-library    2.12.11
    com.typesafe    ssl-config-core_2.12    0.3.8
        com.typesafe    config    1.3.3
        org.scala-lang.modules    scala-parser-combinators_2.12    1.1.2
            org.scala-lang    scala-library    2.12.11
        org.scala-lang    scala-library    2.12.11
    org.reactivestreams    reactive-streams    1.0.2
    org.scala-lang    scala-library    2.12.11
org.scala-lang    scala-library    2.12.11

Writing to and reading from files

Use the FileIO class to create streams reading from or writing to files. It is part of Akka Streams.

Akka Streaming File IO documentation
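
As a quick, hedged sketch of that Akka Streams API (assuming an implicit ActorSystem/Materializer in scope, as in the other snippets on this page; the file name example.txt is just an illustration):

Scala
import java.nio.file.Paths

import akka.Done
import akka.stream.IOResult
import akka.stream.scaladsl.{FileIO, Framing, Sink, Source}
import akka.util.ByteString

import scala.concurrent.Future

// write some lines to a file
val written: Future[IOResult] =
  Source(List("line1", "line2", "line3"))
    .map(line => ByteString(line + "\n"))
    .runWith(FileIO.toPath(Paths.get("example.txt")))

// read the file back, splitting the bytes into lines again
val printed: Future[Done] =
  FileIO
    .fromPath(Paths.get("example.txt"))
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 8192, allowTruncation = true))
    .map(_.utf8String)
    .runWith(Sink.foreach(println))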

Tailing a file into a stream

The FileTailSource starts at a given offset in a file and emits chunks of bytes until reaching the end of the file; it will then poll the file for changes and emit new data as it is written to the file (unless there is backpressure).
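
A minimal sketch of the raw byte-chunk variant follows; it assumes the scaladsl FileTailSource.apply factory with path, maxChunkSize, startingPosition and pollingInterval parameters, and a path string in scope as in the lines example below:

Scala
import java.nio.file.FileSystems

import akka.NotUsed
import akka.stream.alpakka.file.scaladsl.FileTailSource
import akka.stream.scaladsl.Source
import akka.util.ByteString

import scala.concurrent.duration._

val fs = FileSystems.getDefault
// start at the beginning of the file and poll for new data every 250 ms
val chunks: Source[ByteString, NotUsed] = FileTailSource(
  path = fs.getPath(path),
  maxChunkSize = 8192,
  startingPosition = 0,
  pollingInterval = 250.millis
)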

A very common use case is combining reading bytes with parsing the bytes into lines; therefore FileTailSource contains a few factory methods to create a source that parses the bytes into lines and emits those.

In this sample we simply tail the lines of a file and print them to standard out:

Scala
import java.nio.file.FileSystems

import akka.NotUsed
import akka.stream.alpakka.file.scaladsl.FileTailSource
import akka.stream.scaladsl.Source

import scala.concurrent.duration._

val fs = FileSystems.getDefault
val lines: Source[String, NotUsed] = FileTailSource.lines(
  path = fs.getPath(path),
  maxLineSize = 8192,
  pollingInterval = 250.millis
)

lines.runForeach(line => System.out.println(line))
Java
final FileSystem fs = FileSystems.getDefault();
final Duration pollingInterval = Duration.ofMillis(250);
final int maxLineSize = 8192;

final Source<String, NotUsed> lines =
    akka.stream.alpakka.file.javadsl.FileTailSource.createLines(
        fs.getPath(path), maxLineSize, pollingInterval);

lines.runForeach((line) -> System.out.println(line), materializer);

Shutdown stream when file is deleted

The FileTailSource stream will not shut down or throw an error when the file it is tailing is deleted from the filesystem. If you would like to shut down the stream, or throw an error, you can do so by merging in a DirectoryChangesSource that listens to filesystem events in the directory that contains the file.

In the following example, a DirectoryChangesSource is used to watch for events in a directory. If a delete event is observed for the file we are tailing, then we shut down the stream gracefully by using Flow.recoverWithRetries to switch to a Source.empty, which will immediately send an OnComplete signal and shut down the stream.

Scala

val checkInterval = 1.second
val fileCheckSource = DirectoryChangesSource(path.getParent, checkInterval, 8192)
  .collect {
    case (p, DirectoryChange.Deletion) if path == p =>
      throw new FileNotFoundException(path.toString)
  }
  .recoverWithRetries(1, {
    case _: FileNotFoundException => Source.empty
  })

val stream = FileTailSource
  .lines(path = path, maxLineSize = 8192, pollingInterval = 250.millis)
  .merge(fileCheckSource, eagerComplete = true)
Java

final Duration checkInterval = Duration.ofSeconds(1);
final Source<String, NotUsed> fileCheckSource =
    akka.stream.alpakka.file.javadsl.DirectoryChangesSource.create(
            path.getParent(), checkInterval, 8192)
        .mapConcat(
            pair -> {
              if (pair.first().equals(path) && pair.second() == DirectoryChange.Deletion) {
                throw new FileNotFoundException();
              }
              return Collections.<String>emptyList();
            })
        .recoverWith(
            new PFBuilder<Throwable, Source<String, NotUsed>>()
                .match(FileNotFoundException.class, t -> Source.empty())
                .build());

final Source<String, NotUsed> source =
    akka.stream.alpakka.file.javadsl.FileTailSource.createLines(
            path,
            8192, // chunk size
            Duration.ofMillis(250))
        .merge(fileCheckSource, true);
Stream Shutdown Race Condition

Since the DirectoryChangesSource and the FileTailSource operate asynchronously as separate sources, there is the possibility that the stream could be shut down prematurely. If the file is detected as deleted and the stream is shut down before the last element is emitted from the FileTailSource, that data will never be available to downstream user stages.
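
One possible mitigation, sketched below under the assumption that a short grace period after the deletion event is acceptable (the 1-second value is purely illustrative), is to hold back the completion of the recovery source so that FileTailSource has time to emit any remaining lines:

Scala
import java.io.FileNotFoundException

import akka.stream.alpakka.file.DirectoryChange
import akka.stream.alpakka.file.scaladsl.{DirectoryChangesSource, FileTailSource}
import akka.stream.scaladsl.Source

import scala.concurrent.duration._

val gracefulFileCheckSource = DirectoryChangesSource(path.getParent, 1.second, 8192)
  .collect {
    case (p, DirectoryChange.Deletion) if path == p =>
      throw new FileNotFoundException(path.toString)
  }
  .recoverWithRetries(1, {
    case _: FileNotFoundException =>
      // emit nothing, but delay completion for a grace period instead of
      // completing immediately the way Source.empty would
      Source.single(()).delay(1.second).mapConcat(_ => Nil)
  })

val saferStream = FileTailSource
  .lines(path = path, maxLineSize = 8192, pollingInterval = 250.millis)
  .merge(gracefulFileCheckSource, eagerComplete = true)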

Shutdown stream after an idle timeout

It may be useful to shut down the stream when no new data has been added for a while to a file being tailed by FileTailSource. In the following example, a Flow.idleTimeout operator is used to trigger a TimeoutException that can be recovered with Flow.recoverWithRetries and a Source.empty to successfully shut down the stream.

Scala

val stream = FileTailSource
  .lines(path = path, maxLineSize = 8192, pollingInterval = 250.millis)
  .idleTimeout(5.seconds)
  .recoverWithRetries(1, {
    case _: TimeoutException => Source.empty
  })
Java

Source<String, NotUsed> stream =
    akka.stream.alpakka.file.javadsl.FileTailSource.createLines(
            path,
            8192, // chunk size
            Duration.ofMillis(250))
        .idleTimeout(Duration.ofSeconds(5))
        .recoverWith(
            new PFBuilder<Throwable, Source<String, NotUsed>>()
                .match(TimeoutException.class, t -> Source.empty())
                .build());

Creating directories

Directory.mkdirs() and Directory.mkdirsWithContext() create directories for Path elements in the stream. The withContext variant allows the user to pass through additional information with every path.

Scala
import akka.stream.alpakka.file.scaladsl.Directory

val flow: Flow[Path, Path, NotUsed] = Directory.mkdirs()

val created: Future[immutable.Seq[Path]] =
  Source(immutable.Seq(dir.resolve("dirA"), dir.resolve("dirB")))
    .via(flow)
    .runWith(Sink.seq)

val flowWithContext: FlowWithContext[Path, SomeContext, Path, SomeContext, NotUsed] =
  Directory.mkdirsWithContext[SomeContext]()
Java
Flow<Path, Path, NotUsed> flow = Directory.mkdirs();

CompletionStage<List<Path>> created =
    Source.from(Arrays.asList(dir.resolve("dirA"), dir.resolve("dirB")))
        .via(flow)
        .runWith(Sink.seq(), materializer);

FlowWithContext<Path, SomeContext, Path, SomeContext, NotUsed> flowWithContext =
    Directory.mkdirsWithContext();

Listing directory contents

Directory.ls(path) lists all files and directories directly in a given directory:

Scala
import akka.stream.alpakka.file.scaladsl.Directory

val source: Source[Path, NotUsed] = Directory.ls(dir)
Java
import akka.stream.alpakka.file.javadsl.Directory;

final Source<Path, NotUsed> source = Directory.ls(dir);

Directory.walk(path) traverses all subdirectories and lists files and directories depth first:

Scala
import akka.stream.alpakka.file.scaladsl.Directory
import java.nio.file.FileVisitOption

val files: Source[Path, NotUsed] = Directory.walk(root)

val files2: Source[Path, NotUsed] = Directory.walk(root, maxDepth = Some(1), List(FileVisitOption.FOLLOW_LINKS))
Java
import akka.stream.alpakka.file.javadsl.Directory;
import java.nio.file.FileVisitOption;

final Source<Path, NotUsed> source = Directory.walk(root);

final Source<Path, NotUsed> source2 = Directory.walk(root, 1, FileVisitOption.FOLLOW_LINKS);

Listening to changes in a directory

The DirectoryChangesSource will emit an element every time there is a change to a watched directory in the local filesystem. The emitted change consists of the path that was changed and an enumeration describing what kind of change it was.

In this sample we simply print each change to the directory to standard output:

Scala
import akka.stream.alpakka.file.scaladsl.DirectoryChangesSource

val fs = FileSystems.getDefault
val changes = DirectoryChangesSource(fs.getPath(path), pollInterval = 1.second, maxBufferSize = 1000)
changes.runForeach {
  case (path, change) => println("Path: " + path + ", Change: " + change)
}
Java
import akka.stream.alpakka.file.javadsl.DirectoryChangesSource;

final FileSystem fs = FileSystems.getDefault();
final Duration pollingInterval = Duration.ofSeconds(1);
final int maxBufferSize = 1000;
final Source<Pair<Path, DirectoryChange>, NotUsed> changes =
    DirectoryChangesSource.create(fs.getPath(path), pollingInterval, maxBufferSize);

changes.runForeach(
    (Pair<Path, DirectoryChange> pair) -> {
      final Path changedPath = pair.first();
      final DirectoryChange change = pair.second();
      System.out.println("Path: " + changedPath + ", Change: " + change);
    },
    materializer);

Rotating the file to stream into

The LogRotatorSink will create and write to multiple files.
This sink takes a creator as parameter which returns a ByteString => Option[Path] function (Function<ByteString, Optional<Path>> in Java). If the generated function returns a path, the sink will rotate the file output to this new path and the triggering ByteString will be written to the new file as well. With this approach the user can define a custom stateful file generation implementation.

This example usage shows the built-in target file creation and a custom sink factory, which is required to use Compression for the target files.

Scala
val triggerFunctionCreator: () => ByteString => Option[Path] = ???

val completion = Source(immutable.Seq("test1", "test2", "test3", "test4", "test5", "test6"))
  .map(ByteString(_))
  .runWith(LogRotatorSink(triggerFunctionCreator))

// GZip compressing the data written
val compressedCompletion =
  source
    .runWith(
      LogRotatorSink.withSinkFactory(
        triggerFunctionCreator,
        (path: Path) =>
          Flow[ByteString]
            .via(Compression.gzip)
            .toMat(FileIO.toPath(path))(Keep.right)
      )
    )
Java
import akka.stream.alpakka.file.javadsl.LogRotatorSink;

Creator<Function<ByteString, Optional<Path>>> triggerFunctionCreator = ...;

CompletionStage<Done> completion =
    Source.from(Arrays.asList("test1", "test2", "test3", "test4", "test5", "test6"))
        .map(ByteString::fromString)
        .runWith(LogRotatorSink.createFromFunction(triggerFunctionCreator), materializer);

// GZip compressing the data written
CompletionStage<Done> compressedCompletion =
    source.runWith(
        LogRotatorSink.withSinkFactory(
            triggerFunctionCreator,
            path ->
                Flow.of(ByteString.class)
                    .via(Compression.gzip())
                    .toMat(FileIO.toPath(path), Keep.right())),
        materializer);

Example: size-based rotation

Scala
import akka.stream.alpakka.file.scaladsl.LogRotatorSink

val fileSizeTriggerCreator: () => ByteString => Option[Path] = () => {
  val max = 10 * 1024 * 1024
  var size: Long = max
  element: ByteString =>
    if (size + element.size > max) {
      val path = Files.createTempFile("out-", ".log")
      size = element.size
      Some(path)
    } else {
      size += element.size
      None
    }
}

val sizeRotatorSink: Sink[ByteString, Future[Done]] =
  LogRotatorSink(fileSizeTriggerCreator)
Java
Creator<Function<ByteString, Optional<Path>>> sizeBasedTriggerCreator =
    () -> {
      long max = 10 * 1024 * 1024;
      final long[] size = new long[] {max};
      return (element) -> {
        if (size[0] + element.size() > max) {
          Path path = Files.createTempFile("out-", ".log");
          size[0] = element.size();
          return Optional.of(path);
        } else {
          size[0] += element.size();
          return Optional.empty();
        }
      };
    };

Sink<ByteString, CompletionStage<Done>> sizeRotatorSink =
    LogRotatorSink.createFromFunction(sizeBasedTriggerCreator);

Example: time-based rotation

Scala
val destinationDir = FileSystems.getDefault.getPath("/tmp")
val formatter = DateTimeFormatter.ofPattern("'stream-'yyyy-MM-dd_HH'.log'")

val timeBasedTriggerCreator: () => ByteString => Option[Path] = () => {
  var currentFilename: Option[String] = None
  (_: ByteString) => {
    val newName = LocalDateTime.now().format(formatter)
    if (currentFilename.contains(newName)) {
      None
    } else {
      currentFilename = Some(newName)
      Some(destinationDir.resolve(newName))
    }
  }
}

val timeBasedSink: Sink[ByteString, Future[Done]] =
  LogRotatorSink(timeBasedTriggerCreator)
Java
final Path destinationDir = FileSystems.getDefault().getPath("/tmp");
final DateTimeFormatter formatter = DateTimeFormatter.ofPattern("'stream-'yyyy-MM-dd_HH'.log'");

Creator<Function<ByteString, Optional<Path>>> timeBasedTriggerCreator =
    () -> {
      final String[] currentFileName = new String[] {null};
      return (element) -> {
        String newName = LocalDateTime.now().format(formatter);
        if (newName.equals(currentFileName[0])) {
          return Optional.empty();
        } else {
          currentFileName[0] = newName;
          return Optional.of(destinationDir.resolve(newName));
        }
      };
    };

Sink<ByteString, CompletionStage<Done>> timeBasedSink =
    LogRotatorSink.createFromFunction(timeBasedTriggerCreator);

Example: content-based rotation with compression to SFTP file

This example can be found in the self-contained example documentation section.
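
That complete example covers rotation driven by the stream's content, gzip compression, and upload via the SFTP connector. As a rough, local-only sketch of just the content-based trigger idea, assuming a hypothetical record-boundary marker at the start of elements:

Scala
import java.nio.file.{Files, Path}

import akka.stream.alpakka.file.scaladsl.LogRotatorSink
import akka.util.ByteString

// Hypothetical convention: an element starting with "SESSION:" begins a new
// logical record, so we rotate to a fresh file when we see one. The first
// element always opens a file, since no target path exists yet.
val contentBasedTriggerCreator: () => ByteString => Option[Path] = () => {
  var fileOpen = false
  (element: ByteString) =>
    if (!fileOpen || element.utf8String.startsWith("SESSION:")) {
      fileOpen = true
      Some(Files.createTempFile("session-", ".log"))
    } else {
      None
    }
}

val contentBasedSink = LogRotatorSink(contentBasedTriggerCreator)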

ZIP Archive

The Archive API contains a flow for compressing multiple files into one ZIP file.

The result of the flow can be sent to a sink even before the whole ZIP file is created, so the size of the resulting ZIP archive is not limited by available memory.

This example usage shows compressing files from disk.

Scala
val fileStream1: Source[ByteString, Any] = ...
val fileStream2: Source[ByteString, Any] = ...

val filesStream = Source(
  List(
    (ArchiveMetadata("akka_full_color.svg"), fileStream1),
    (ArchiveMetadata("akka_icon_reverse.svg"), fileStream2)
  )
)

val result = filesStream
  .via(Archive.zip())
  .runWith(FileIO.toPath(Paths.get("result.zip")))
Java
Source<ByteString, NotUsed> source1 = ...
Source<ByteString, NotUsed> source2 = ...

Pair<ArchiveMetadata, Source<ByteString, NotUsed>> pair1 =
    Pair.create(ArchiveMetadata.create("akka_full_color.svg"), source1);
Pair<ArchiveMetadata, Source<ByteString, NotUsed>> pair2 =
    Pair.create(ArchiveMetadata.create("akka_icon_reverse.svg"), source2);

Source<Pair<ArchiveMetadata, Source<ByteString, NotUsed>>, NotUsed> source =
    Source.from(Arrays.asList(pair1, pair2));

Sink<ByteString, CompletionStage<IOResult>> fileSink = FileIO.toPath(Paths.get("logo.zip"));
CompletionStage<IOResult> ioResult = source.via(Archive.zip()).runWith(fileSink, mat);

TAR Archive

Writing TAR archives

Archive.tar() creates a flow for packaging multiple files into one TAR archive.

The result of the flow can be sent to a sink even before the whole TAR file is created, so the size of the resulting TAR archive is not limited by available memory.

This example usage shows packaging directories and files from disk.

Scala
val fileStream1: Source[ByteString, Any] = ...
val fileStream2: Source[ByteString, Any] = ...
val fileSize1: Long = ...
val fileSize2: Long = ...

val filesStream = Source(
  List(
    (TarArchiveMetadata.directory("subdir", lastModification), Source.empty),
    (TarArchiveMetadata("subdir", "akka_full_color.svg", fileSize1, lastModification), fileStream1),
    (TarArchiveMetadata("akka_icon_reverse.svg", fileSize2, lastModification), fileStream2)
  )
)

val result = filesStream
  .via(Archive.tar())
  .runWith(FileIO.toPath(Paths.get("result.tar")))
Java
Source<ByteString, NotUsed> source1 = ...
Source<ByteString, NotUsed> source2 = ...
Long size1 = ...
Long size2 = ...

Pair<TarArchiveMetadata, Source<ByteString, NotUsed>> dir =
    Pair.create(TarArchiveMetadata.directory("subdir", lastModification), Source.empty());

Pair<TarArchiveMetadata, Source<ByteString, NotUsed>> pair1 =
    Pair.create(
        TarArchiveMetadata.create("subdir", "akka_full_color.svg", size1, lastModification),
        source1);
Pair<TarArchiveMetadata, Source<ByteString, NotUsed>> pair2 =
    Pair.create(
        TarArchiveMetadata.create("akka_icon_reverse.svg", size2, lastModification), source2);

Source<Pair<TarArchiveMetadata, Source<ByteString, NotUsed>>, NotUsed> source =
    Source.from(Arrays.asList(dir, pair1, pair2));

Sink<ByteString, CompletionStage<IOResult>> fileSink = FileIO.toPath(Paths.get("logo.tar"));
CompletionStage<IOResult> ioResult = source.via(Archive.tar()).runWith(fileSink, mat);

To produce a gzipped TAR file see the following example.

Scala
val resultGz = filesStream
  .via(Archive.tar().via(akka.stream.scaladsl.Compression.gzip))
  .runWith(FileIO.toPath(Paths.get("result.tar.gz")))
Java
Sink<ByteString, CompletionStage<IOResult>> fileSinkGz =
    FileIO.toPath(Paths.get("logo.tar.gz"));
CompletionStage<IOResult> ioResultGz =
    source
        .via(Archive.tar().via(akka.stream.javadsl.Compression.gzip()))
        .runWith(fileSinkGz, mat);

Reading TAR archives

Archive.tarReader() reads a stream of ByteStrings in TAR format, emitting a metadata entry and a Source for every file in the stream. It is essential to request all of the emitted source's data; otherwise the stream will not advance to the next file entry.

The example below reads the incoming stream, creates directories and stores all files in the local file system.

Scala
val bytesSource: Source[ByteString, NotUsed] = // ???
val tar =
  bytesSource
    .via(Archive.tarReader())
    .mapAsync(1) {
      case (metadata, source) =>
        val targetFile = target.resolve(metadata.filePath)
        if (metadata.isDirectory) {
          Source
            .single(targetFile)
            .via(Directory.mkdirs())
            .runWith(Sink.ignore)
        } else {
          // create the target directory
          Source
            .single(targetFile.getParent)
            .via(Directory.mkdirs())
            .runWith(Sink.ignore)
            .flatMap { _ =>
              // stream the file contents to a local file
              source.runWith(FileIO.toPath(targetFile))
            }
        }
    }
    .runWith(Sink.ignore)
Java
Source<ByteString, NotUsed> bytesSource = // ???
CompletionStage<Done> tar =
    bytesSource
        .via(Archive.tarReader())
        .mapAsync(
            1,
            pair -> {
              TarArchiveMetadata metadata = pair.first();
              Path targetFile = target.resolve(metadata.filePath());
              if (metadata.isDirectory()) {
                return Source.single(targetFile)
                    .via(Directory.mkdirs())
                    .runWith(Sink.ignore(), mat);
              } else {
                Source<ByteString, NotUsed> source = pair.second();
                // create the target directory
                return Source.single(targetFile.getParent())
                    .via(Directory.mkdirs())
                    .runWith(Sink.ignore(), mat)
                    .thenCompose(
                        done ->
                            // stream the file contents to a local file
                            source
                                .runWith(FileIO.toPath(targetFile), mat)
                                .thenApply(io -> Done.done()));
              }
            })
        .runWith(Sink.ignore(), mat);