Working with streaming IO

Akka Streams provides a way of handling TCP connections with Streams. While the general approach is very similar to the Actor based TCP handling using Akka IO, by using Akka Streams you are freed of having to manually react to back-pressure signals, as the library does it transparently for you.

Accepting connections: Echo Server

In order to implement a simple echo server we bind to a given address, which returns a Source[IncomingConnection] that will emit an IncomingConnection element for each new connection the server should handle:

val localhost = new InetSocketAddress("127.0.0.1", 8888)
val binding = StreamTcp().bind(localhost)

Next, we simply handle each incoming connection using a Flow which will be used as the processing stage to handle and emit ByteStrings from and to the TCP Socket. Since one ByteString does not have to necessarily correspond to exactly one line of text (the client might be sending the line in chunks) we use the parseLines recipe from the Parsing lines from a stream of ByteStrings Akka Streams Cookbook recipe to chunk the inputs up into actual lines of text. In this example we simply add exclamation marks to each incoming text message and push it through the flow:

val connections: Source[IncomingConnection] = binding.connections

connections runForeach { connection =>
  println(s"New connection from: ${connection.remoteAddress}")

  val echo = Flow[ByteString]
    .transform(() => RecipeParseLines.parseLines("\n", maximumLineBytes = 256))
    .map(_ + "!!!\n")
    .map(ByteString(_))

  connection.handleWith(echo)
}

Notice that while most building blocks in Akka Streams are reusable and freely shareable, this is not the case for the incoming connection Flow: since it directly corresponds to an existing, already accepted connection, its handling can only ever be materialized once.

Closing connections is possible by cancelling the incoming connection Flow from your server logic (e.g. by connecting its downstream to a CancelledSink and its upstream to a completed Source). It is also possible to shut down the server's socket by cancelling the connections Source[IncomingConnection].

We can then test the TCP server by sending data to the TCP Socket using netcat:

$ echo -n "Hello World" | netcat 127.0.0.1 8888
Hello World!!!
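The same round-trip can also be reproduced programmatically. The following is a blocking sketch using plain JDK sockets, independent of Akka Streams, that stands in for both netcat and the echo server (an in-process stand-in, not the streams-based server above):

```scala
import java.net.{ServerSocket, Socket}
import java.io.{BufferedReader, InputStreamReader, PrintWriter}

// A minimal blocking echo round-trip mirroring the netcat test above:
// the server reads one line and echoes it back with "!!!" appended.
// Plain JDK sockets, no back-pressure handling.
val server = new ServerSocket(0) // port 0: let the OS pick a free port
val serverThread = new Thread(() => {
  val conn = server.accept()
  val in   = new BufferedReader(new InputStreamReader(conn.getInputStream))
  val out  = new PrintWriter(conn.getOutputStream, true)
  out.println(in.readLine() + "!!!") // echo the line back
  conn.close()
})
serverThread.start()

val client = new Socket("127.0.0.1", server.getLocalPort)
new PrintWriter(client.getOutputStream, true).println("Hello World")
val reply =
  new BufferedReader(new InputStreamReader(client.getInputStream)).readLine()
println(reply) // Hello World!!!
client.close(); server.close()
```

Unlike the streams-based server, this version handles only a single line on a single connection and blocks a thread per read.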

Connecting: REPL Client

In this example we implement a rather naive Read-Eval-Print Loop (REPL) client over TCP. Let's say we know a server has exposed a simple command line interface over TCP, and we would like to interact with it using Akka Streams. To open an outgoing connection socket we use the outgoingConnection method:

val connection: OutgoingConnection = StreamTcp().outgoingConnection(localhost)

val replParser = new PushStage[String, ByteString] {
  override def onPush(elem: String, ctx: Context[ByteString]): Directive = {
    elem match {
      case "q" => ctx.pushAndFinish(ByteString("BYE\n"))
      case _   => ctx.push(ByteString(s"$elem\n"))
    }
  }
}

val repl = Flow[ByteString]
  .transform(() => RecipeParseLines.parseLines("\n", maximumLineBytes = 256))
  .map(text => println("Server: " + text))
  .map(_ => readLine("> "))
  .transform(() => replParser)

connection.handleWith(repl)


The repl flow we use to handle the server interaction first prints the server's response, then awaits input from the command line (this blocking call is used here just for the sake of simplicity) and converts it to a ByteString, which is then sent over the wire to the server. We then connect the TCP pipeline to this processing stage; at that point it will be materialized and start processing data once the server responds with an initial message.

A resilient REPL client would be more sophisticated than this: for example, it should split the input reading out into a separate mapAsync step and allow the server to write more than one ByteString chunk at any given time. These improvements, however, are left as an exercise for the reader.

Avoiding deadlocks and liveness issues in back-pressured cycles

When writing such end-to-end back-pressured systems you may sometimes end up in a loop in which either side is waiting for the other one to start the conversation. One does not need to look far to find examples of such back-pressure loops: in the two examples shown previously, we always assumed that the side we are connecting to would start the conversation, which effectively means both sides are back-pressured and cannot get the conversation started. There are multiple ways of dealing with this, explained in depth in Graph cycles, liveness and deadlocks; in client-server scenarios, however, it is often simplest to make one side send an initial message.
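The deadlock can be illustrated with a toy model in plain Scala, independent of Akka Streams: two parties that each only reply after receiving something make no progress until one message is seeded (all names here are illustrative):

```scala
import scala.collection.mutable

// Toy model of a request/reply cycle: each side only emits after it has
// received something. Without a seeded "conversation starter" neither
// queue ever receives a message and the log stays empty.
def converse(seed: Option[String], steps: Int): List[String] = {
  val toServer = mutable.Queue.empty[String]
  val toClient = mutable.Queue.empty[String]
  seed.foreach(toClient.enqueue(_)) // the server's initial "hello", if any
  val log = mutable.ListBuffer.empty[String]
  for (_ <- 1 to steps) {
    // each side replies only once it has heard from the other
    if (toClient.nonEmpty) { log += s"client got: ${toClient.dequeue()}"; toServer.enqueue("ping") }
    if (toServer.nonEmpty) { log += s"server got: ${toServer.dequeue()}"; toClient.enqueue("pong") }
  }
  log.toList
}

println(converse(seed = None, steps = 3))          // List() — the cycle never starts
println(converse(seed = Some("hello"), steps = 2)) // conversation proceeds
```

With a seed message the log alternates between the two sides; without one it stays empty no matter how many steps run, which is exactly the mutual-waiting situation described above.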


In the case of back-pressured cycles (which can occur even between different systems) you sometimes have to decide which side starts the conversation in order to kick it off. This can often be done by injecting an initial message from one of the sides: a conversation starter.

To break this back-pressure cycle we need to inject some initial message, a "conversation starter". First, we need to decide which side of the connection should remain passive and which should be active. Thankfully, in most situations finding the right spot to start the conversation is rather simple, as it is often inherent to the protocol we are implementing with Streams. In chat-like applications, which our examples resemble, it makes sense to make the server initiate the conversation by emitting a "hello" message:

binding.connections runForeach { connection =>

  val serverLogic = Flow() { implicit b =>
    import FlowGraphImplicits._

    // to be filled in by StreamTCP
    val in = UndefinedSource[ByteString]
    val out = UndefinedSink[ByteString]

    // server logic, parses incoming commands
    val commandParser = new PushStage[String, String] {
      override def onPush(elem: String, ctx: Context[String]): Directive = {
        elem match {
          case "BYE" => ctx.finish()
          case _     => ctx.push(elem + "!")
        }
      }
    }

    import connection._
    val welcomeMsg = s"Welcome to: $localAddress, you are: $remoteAddress!\n"

    val welcome = Source.single(ByteString(welcomeMsg))
    val echo = Flow[ByteString]
      .transform(() => RecipeParseLines.parseLines("\n", maximumLineBytes = 256))
      .transform(() => commandParser)
      .map(_ + "\n")
      .map(ByteString(_))

    val concat = Concat[ByteString]
    // first we emit the welcome message,
    welcome ~> concat.first
    // then we continue using the echo-logic Flow
    in ~> echo ~> concat.second

    concat.out ~> out
    (in, out)
  }

  connection.handleWith(serverLogic)
}


The way we constructed a Flow using a PartialFlowGraph is explained in detail in Constructing Sources, Sinks and Flows from Partial Graphs; the basic concept, however, is rather simple: we can encapsulate arbitrarily complex logic within a Flow as long as it exposes the same interface, which means exposing exactly one UndefinedSink and exactly one UndefinedSource, which will be connected to the TCP pipeline. In this example we use a Concat graph processing stage to inject the initial message, then continue handling all incoming data with the echo handler. Use this pattern of encapsulating complex logic in Flows and attaching them to StreamIO in order to implement your own, possibly sophisticated, TCP servers.

In this example both client and server may need to close the stream based on a parsed command: BYE in the case of the server, and q in the case of the client. This is implemented using a custom PushStage (see Using PushPullStage) which completes the stream once it encounters such a command.
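The completion behaviour of such a stage can be modeled in plain Scala as consuming commands up to, but excluding, the terminator (an illustrative stand-in for the stream-driven commandParser, not Akka code):

```scala
// Plain-Scala model of the server-side commandParser: echo each command
// with "!" appended, and complete the stream when "BYE" arrives.
// Illustrative; the real PushStage is driven by the stream, not a list.
def runCommands(commands: List[String]): List[String] =
  commands.takeWhile(_ != "BYE").map(_ + "!")

println(runCommands(List("hello", "world", "BYE", "ignored")))
// List(hello!, world!)
```

As with ctx.finish(), everything after the terminator is never processed: the stream completes the moment BYE is seen.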