Using TCP
Warning
The IO implementation is marked as “experimental” as of its introduction in Akka 2.2.0. We will continue to improve this API based on our users’ feedback, which implies that while we try to keep incompatible changes to a minimum the binary compatibility guarantee for maintenance releases does not apply to the contents of the akka.io package.
The code snippets throughout this section assume the following imports:
import java.net.InetSocketAddress;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.io.Tcp;
import akka.io.Tcp.Bound;
import akka.io.Tcp.CommandFailed;
import akka.io.Tcp.Connected;
import akka.io.Tcp.ConnectionClosed;
import akka.io.Tcp.Received;
import akka.io.TcpMessage;
import akka.japi.Procedure;
import akka.util.ByteString;
All of the Akka I/O APIs are accessed through manager objects. When using an I/O API, the first step is to acquire a reference to the appropriate manager. The code below shows how to acquire a reference to the Tcp manager.
final ActorRef tcpManager = Tcp.get(getContext().system()).manager();
The manager is an actor that handles the underlying low-level I/O resources (selectors, channels) and instantiates workers for specific tasks, such as listening to incoming connections.
Connecting
public class Client extends UntypedActor {
final InetSocketAddress remote;
final ActorRef listener;
public Client(InetSocketAddress remote, ActorRef listener) {
this.remote = remote;
this.listener = listener;
final ActorRef tcp = Tcp.get(getContext().system()).manager();
tcp.tell(TcpMessage.connect(remote), getSelf());
}
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof CommandFailed) {
listener.tell("failed", getSelf());
getContext().stop(getSelf());
} else if (msg instanceof Connected) {
listener.tell(msg, getSelf());
getSender().tell(TcpMessage.register(getSelf()), getSelf());
getContext().become(connected(getSender()));
}
}
private Procedure<Object> connected(final ActorRef connection) {
return new Procedure<Object>() {
@Override
public void apply(Object msg) throws Exception {
if (msg instanceof ByteString) {
connection.tell(TcpMessage.write((ByteString) msg), getSelf());
} else if (msg instanceof CommandFailed) {
// OS kernel socket buffer was full
} else if (msg instanceof Received) {
listener.tell(((Received) msg).data(), getSelf());
} else if (msg.equals("close")) {
connection.tell(TcpMessage.close(), getSelf());
} else if (msg instanceof ConnectionClosed) {
getContext().stop(getSelf());
}
}
};
}
}
The first step of connecting to a remote address is sending a Connect message to the TCP manager; in addition to the simplest form shown above, it is also possible to specify a local InetSocketAddress to bind to and a list of socket options to apply.
Note
The SO_NODELAY (TCP_NODELAY on Windows) socket option defaults to true in Akka, independently of the OS default settings. This setting disables Nagle's algorithm, considerably improving latency for most applications. It can be overridden by passing SO.TcpNoDelay(false) in the list of socket options of the Connect message.
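As a sketch of the non-minimal Connect form, the snippet below binds the outgoing socket to a local address and passes socket options, including SO.TcpNoDelay(false) as mentioned in the note above. It assumes the three-argument TcpMessage.connect overload and the akka.io.TcpSO Java helpers for socket options (check your Akka version), and that remote holds the target InetSocketAddress as in the Client example:
// additionally assumed imports: java.util.Arrays, java.util.List,
// akka.io.Inet, akka.io.TcpSO
final ActorRef tcp = Tcp.get(getContext().system()).manager();
final InetSocketAddress local = new InetSocketAddress("0.0.0.0", 0); // any free local port
final List<Inet.SocketOption> options = Arrays.<Inet.SocketOption>asList(
    TcpSO.keepAlive(true),     // enable TCP keep-alive probes
    TcpSO.tcpNoDelay(false));  // re-enable Nagle's algorithm, overriding Akka's default
tcp.tell(TcpMessage.connect(remote, local, options), getSelf());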
The TCP manager will then reply either with a CommandFailed or it will spawn an internal actor representing the new connection. This new actor will then send a Connected message to the original sender of the Connect message.
In order to activate the new connection, a Register message must be sent to the connection actor, informing it which actor shall receive data from the socket. Before this step is done the connection cannot be used, and there is an internal timeout after which the connection actor will shut itself down if no Register message is received.
The connection actor watches the registered handler and closes the connection when that one terminates, thereby cleaning up all internal resources associated with that connection.
The actor in the example above uses become to switch from unconnected to connected operation, demonstrating the commands and events which are observed in that state. For a discussion on CommandFailed see Throttling Reads and Writes below. ConnectionClosed is a trait, which marks the different connection close events. The last line handles all connection close events in the same way. It is possible to listen for more fine-grained connection close events, see Closing Connections below.
Accepting connections
public class Server extends UntypedActor {
final ActorRef manager;
public Server(ActorRef manager) {
this.manager = manager;
}
@Override
public void preStart() throws Exception {
final ActorRef tcp = Tcp.get(getContext().system()).manager();
tcp.tell(TcpMessage.bind(getSelf(),
new InetSocketAddress("localhost", 0), 100), getSelf());
}
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof Bound) {
manager.tell(msg, getSelf());
} else if (msg instanceof CommandFailed) {
getContext().stop(getSelf());
} else if (msg instanceof Connected) {
final Connected conn = (Connected) msg;
manager.tell(conn, getSelf());
final ActorRef handler = getContext().actorOf(
Props.create(SimplisticHandler.class));
getSender().tell(TcpMessage.register(handler), getSelf());
}
}
}
To create a TCP server and listen for inbound connections, a Bind command has to be sent to the TCP manager. This will instruct the TCP manager to listen for TCP connections on a particular InetSocketAddress; the port may be specified as 0 in order to bind to a random port.
The actor sending the Bind message will receive a Bound message signalling that the server is ready to accept incoming connections; this message also contains the InetSocketAddress to which the socket was actually bound (i.e. resolved IP address and correct port number).
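When binding to port 0 the actual port is only known from this Bound message; as a small sketch, the Bound branch of the Server actor shown above could extract it like this:
} else if (msg instanceof Bound) {
  // the resolved address, including the kernel-assigned port when 0 was requested
  final InetSocketAddress boundTo = ((Bound) msg).localAddress();
  manager.tell(boundTo, getSelf());
}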
From this point forward the process of handling connections is the same as for outgoing connections. The example demonstrates that handling the reads from a certain connection can be delegated to another actor by naming it as the handler when sending the Register message. Writes can be sent from any actor in the system to the connection actor (i.e. the actor which sent the Connected message). The simplistic handler is defined as:
public class SimplisticHandler extends UntypedActor {
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof Received) {
final ByteString data = ((Received) msg).data();
System.out.println(data);
getSender().tell(TcpMessage.write(data), getSelf());
} else if (msg instanceof ConnectionClosed) {
getContext().stop(getSelf());
}
}
}
For a more complete sample which also takes into account the possibility of failures when sending please see Throttling Reads and Writes below.
The only difference to outgoing connections is that the internal actor managing the listen port—the sender of the Bound message—watches the actor which was named as the recipient for Connected messages in the Bind message. When that actor terminates the listen port will be closed and all resources associated with it will be released; existing connections will not be terminated at this point.
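Besides letting the Bind handler terminate, listening can also be stopped explicitly by sending an Unbind command to the actor that sent the Bound message; existing connections keep working. A minimal sketch, where listenHandle is assumed to hold a reference to the sender of the Bound message:
// stop accepting new connections; established connections are not affected
listenHandle.tell(TcpMessage.unbind(), getSelf());
// the listener confirms with a Tcp.Unbound message once the socket is closed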
Closing connections
A connection can be closed by sending one of the commands Close, ConfirmedClose or Abort to the connection actor.
Close will close the connection by sending a FIN message, but without waiting for confirmation from the remote endpoint. Pending writes will be flushed. If the close is successful, the listener will be notified with Closed.
ConfirmedClose will close the sending direction of the connection by sending a FIN message, but data will continue to be received until the remote endpoint closes the connection, too. Pending writes will be flushed. If the close is successful, the listener will be notified with ConfirmedClosed.
Abort will immediately terminate the connection by sending a RST message to the remote endpoint. Pending writes will not be flushed. If the close is successful, the listener will be notified with Aborted.
PeerClosed will be sent to the listener if the connection has been closed by the remote endpoint. By default, the connection will then automatically be closed from this endpoint as well. To support half-closed connections set the keepOpenOnPeerClosed member of the Register message to true, in which case the connection stays open until it receives one of the above close commands.
ErrorClosed will be sent to the listener whenever an error happened that forced the connection to be closed.
All close notifications are sub-types of ConnectionClosed so listeners who do not need fine-grained close events may handle all close events in the same way.
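For illustration, distinguishing these close events from Java can be done via the predicate methods on Tcp.ConnectionClosed (isPeerClosed, isErrorClosed, isConfirmed, isAborted, getErrorCause); a rough sketch of a handler branch:
} else if (msg instanceof ConnectionClosed) {
  final ConnectionClosed closed = (ConnectionClosed) msg;
  if (closed.isErrorClosed()) {
    System.out.println("connection broke: " + closed.getErrorCause());
  } else if (closed.isPeerClosed()) {
    // remote endpoint closed the connection (PeerClosed)
  } else if (closed.isConfirmed()) {
    // our ConfirmedClose completed (ConfirmedClosed)
  } else if (closed.isAborted()) {
    // our Abort completed (Aborted)
  } else {
    // plain Closed after a Close command
  }
  getContext().stop(getSelf());
}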
Writing to a connection
Once a connection has been established data can be sent to it from any actor in the form of a Tcp.WriteCommand. Tcp.WriteCommand is an abstract class with three concrete implementations:
- Tcp.Write
- The simplest WriteCommand implementation which wraps a ByteString instance and an "ack" event. A ByteString (as explained in this section) models one or more chunks of immutable in-memory data with a maximum (total) size of 2 GB (2^31 bytes).
- Tcp.WriteFile
- If you want to send "raw" data from a file you can do so efficiently with the Tcp.WriteFile command. This allows you to designate a (contiguous) chunk of on-disk bytes for sending across the connection without the need to first load them into JVM memory. As such Tcp.WriteFile can "hold" more than 2 GB of data and an "ack" event if required (see the sketch after this list).
- Tcp.CompoundWrite
- Sometimes you might want to group (or interleave) several Tcp.Write and/or Tcp.WriteFile commands into one atomic write command which gets written to the connection in one go. The Tcp.CompoundWrite allows you to do just that and offers three benefits:
- As explained in the following section the TCP connection actor can only handle one single write command at a time. By combining several writes into one CompoundWrite you can have them be sent across the connection with minimum overhead and without the need to spoon feed them to the connection actor via an ACK-based message protocol.
- Because a WriteCommand is atomic you can be sure that no other actor can "inject" other writes into your series of writes if you combine them into one single CompoundWrite. In scenarios where several actors write to the same connection this can be an important feature which can be somewhat hard to achieve otherwise.
- The "sub writes" of a CompoundWrite are regular Write or WriteFile commands that themselves can request "ack" events. These ACKs are sent out as soon as the respective "sub write" has been completed. This allows you to attach more than one ACK to a Write or WriteFile (by combining it with an empty write that itself requests an ACK) or to have the connection actor acknowledge the progress of transmitting the CompoundWrite by sending out intermediate ACKs at arbitrary points.
Throttling Reads and Writes
The basic model of the TCP connection actor is that it has no internal buffering (i.e. it can only process one write at a time, meaning it can buffer one write until it has been passed on to the O/S kernel in full). Congestion needs to be handled at the user level, for which there are three modes of operation:
- ACK-based: every Write command carries an arbitrary object, and if this object is not Tcp.NoAck then it will be returned to the sender of the Write upon successfully writing all contained data to the socket. If no other write is initiated before having received this acknowledgement then no failures can happen due to buffer overrun.
- NACK-based: every write which arrives while a previous write is not yet completed will be replied to with a CommandFailed message containing the failed write. Just relying on this mechanism requires the implemented protocol to tolerate skipping writes (e.g. if each write is a valid message on its own and it is not required that all are delivered). This mode is enabled by setting the useResumeWriting flag to false within the Register message during connection activation (a minimal registration sketch is shown below).
- NACK-based with write suspending: this mode is very similar to the NACK-based one, but once a single write has failed no further writes will succeed until a ResumeWriting message is received. This message will be answered with a WritingResumed message once the last accepted write has completed. If the actor driving the connection implements buffering and resends the NACK’ed messages after having awaited the WritingResumed signal then every message is delivered exactly once to the network socket.
These models (with the exception of the second, which is rather specialised) are demonstrated in complete examples below. The full and contiguous source is available on GitHub.
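Since the plain NACK-based mode is not part of the complete examples, here is a minimal sketch of how it could be enabled; it differs from the Register message used further below only in the value of the useResumeWriting flag:
// plain NACK-based mode: any Write arriving while another write is still
// pending is answered with CommandFailed, and writing is NOT suspended
connection.tell(TcpMessage.register(handler,
    false,   // keepOpenOnPeerClosed
    false),  // useResumeWriting
  getSelf());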
Note
It should be obvious that all these flow control schemes only work between one writer and one connection actor; as soon as multiple actors send write commands to a single connection no consistent result can be achieved.
ACK-Based Back-Pressure
For proper function of the following example it is important to configure the connection to remain half-open when the remote side closes its writing end: this allows the example EchoHandler to write all outstanding data back to the client before fully closing the connection. This is enabled using a flag upon connection activation (observe the Register message):
connection.tell(TcpMessage.register(handler,
true, // <-- keepOpenOnPeerClosed flag
true), getSelf());
With this preparation let us dive into the handler itself:
public class SimpleEchoHandler extends UntypedActor {
final LoggingAdapter log = Logging
.getLogger(getContext().system(), getSelf());
final ActorRef connection;
final InetSocketAddress remote;
public static final long maxStored = 100000000;
public static final long highWatermark = maxStored * 5 / 10;
public static final long lowWatermark = maxStored * 2 / 10;
public SimpleEchoHandler(ActorRef connection, InetSocketAddress remote) {
this.connection = connection;
this.remote = remote;
// sign death pact: this actor stops when the connection is closed
getContext().watch(connection);
}
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof Received) {
final ByteString data = ((Received) msg).data();
buffer(data);
connection.tell(TcpMessage.write(data, ACK), getSelf());
// now switch behavior to “waiting for acknowledgement”
getContext().become(buffering, false);
} else if (msg instanceof ConnectionClosed) {
getContext().stop(getSelf());
}
}
private final Procedure<Object> buffering = new Procedure<Object>() {
@Override
public void apply(Object msg) throws Exception {
if (msg instanceof Received) {
buffer(((Received) msg).data());
} else if (msg == ACK) {
acknowledge();
} else if (msg instanceof ConnectionClosed) {
if (((ConnectionClosed) msg).isPeerClosed()) {
closing = true;
} else {
// could also be ErrorClosed, in which case we just give up
getContext().stop(getSelf());
}
}
}
};
// storage omitted ...
}
The principle is simple: after writing a chunk, always wait for the Ack to come back before sending the next chunk. While waiting we switch behavior such that new incoming data are buffered. The helper functions used are a bit lengthy but not complicated:
protected void buffer(ByteString data) {
storage.add(data);
stored += data.size();
if (stored > maxStored) {
log.warning("drop connection to [{}] (buffer overrun)", remote);
getContext().stop(getSelf());
} else if (stored > highWatermark) {
log.debug("suspending reading");
connection.tell(TcpMessage.suspendReading(), getSelf());
suspended = true;
}
}
protected void acknowledge() {
final ByteString acked = storage.remove();
stored -= acked.size();
transferred += acked.size();
if (suspended && stored < lowWatermark) {
log.debug("resuming reading");
connection.tell(TcpMessage.resumeReading(), getSelf());
suspended = false;
}
if (storage.isEmpty()) {
if (closing) {
getContext().stop(getSelf());
} else {
getContext().unbecome();
}
} else {
connection.tell(TcpMessage.write(storage.peek(), ACK), getSelf());
}
}
The most interesting part is probably the last: an Ack removes the oldest data chunk from the buffer, and if that was the last chunk then we either close the connection (if the peer closed its half already) or return to the idle behavior; otherwise we just send the next buffered chunk and stay waiting for the next Ack.
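The ACK object and the storage bookkeeping are elided in the listing above ("storage omitted"); as a rough idea of their shape (a sketch only, the authoritative version is in the full sample on GitHub), they could look like this:
// the ack event attached to every write; any Tcp.Event marker will do
private static class AckMarker implements Tcp.Event {}
private static final AckMarker ACK = new AckMarker();

// buffered, not-yet-acknowledged chunks plus a few counters and flags
private final java.util.LinkedList<ByteString> storage =
    new java.util.LinkedList<ByteString>();
private long stored = 0;            // bytes currently buffered
private long transferred = 0;       // bytes successfully written so far
private boolean suspended = false;  // reading currently suspended?
private boolean closing = false;    // peer already closed its half?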
Back-pressure can also be propagated across the reading side to the writer on the other end of the connection by sending the SuspendReading command to the connection actor. This stops further data from being read from the socket (although only after a delay, because it takes some time until the connection actor processes this command, hence appropriate head-room in the buffer should be present). As a consequence the O/S kernel buffer on our end fills up, the TCP window mechanism stops the remote side from writing, its write buffer fills up, and finally the writer on the other side cannot push any more data into the socket. This is how end-to-end back-pressure is realized across a TCP connection.
NACK-Based Back-Pressure with Write Suspending
public class EchoHandler extends UntypedActor {
final LoggingAdapter log = Logging
.getLogger(getContext().system(), getSelf());
final ActorRef connection;
final InetSocketAddress remote;
public static final long MAX_STORED = 100000000;
public static final long HIGH_WATERMARK = MAX_STORED * 5 / 10;
public static final long LOW_WATERMARK = MAX_STORED * 2 / 10;
private static class Ack implements Event {
public final int ack;
public Ack(int ack) {
this.ack = ack;
}
}
public EchoHandler(ActorRef connection, InetSocketAddress remote) {
this.connection = connection;
this.remote = remote;
// sign death pact: this actor stops when the connection is closed
getContext().watch(connection);
// start out in optimistic write-through mode
getContext().become(writing);
}
private final Procedure<Object> writing = new Procedure<Object>() {
@Override
public void apply(Object msg) throws Exception {
if (msg instanceof Received) {
final ByteString data = ((Received) msg).data();
connection.tell(TcpMessage.write(data, new Ack(currentOffset())), getSelf());
buffer(data);
} else if (msg instanceof Ack) {
acknowledge(((Ack) msg).ack);
} else if (msg instanceof CommandFailed) {
final Write w = (Write) ((CommandFailed) msg).cmd();
connection.tell(TcpMessage.resumeWriting(), getSelf());
getContext().become(buffering((Ack) w.ack()));
} else if (msg instanceof ConnectionClosed) {
final ConnectionClosed cl = (ConnectionClosed) msg;
if (cl.isPeerClosed()) {
if (storage.isEmpty()) {
getContext().stop(getSelf());
} else {
getContext().become(closing);
}
}
}
}
};
// buffering ...
// closing ...
// storage omitted ...
}
The principle here is to keep writing until a CommandFailed is received, using acknowledgements only to prune the resend buffer. When such a failure is received, the actor transitions into a different state that handles the buffering and takes care of resending all queued data:
protected Procedure<Object> buffering(final Ack nack) {
return new Procedure<Object>() {
private int toAck = 10;
private boolean peerClosed = false;
@Override
public void apply(Object msg) throws Exception {
if (msg instanceof Received) {
buffer(((Received) msg).data());
} else if (msg instanceof WritingResumed) {
writeFirst();
} else if (msg instanceof ConnectionClosed) {
if (((ConnectionClosed) msg).isPeerClosed())
peerClosed = true;
else
getContext().stop(getSelf());
} else if (msg instanceof Ack) {
final int ack = ((Ack) msg).ack;
acknowledge(ack);
if (ack >= nack.ack) {
// otherwise it was the ack of the last successful write
if (storage.isEmpty()) {
if (peerClosed)
getContext().stop(getSelf());
else
getContext().become(writing);
} else {
if (toAck > 0) {
// stay in ACK-based mode for a short while
writeFirst();
--toAck;
} else {
// then return to NACK-based again
writeAll();
if (peerClosed)
getContext().become(closing);
else
getContext().become(writing);
}
}
}
}
}
};
}
It should be noted that all writes which are currently buffered have also been sent to the connection actor upon entering this state, which means that the ResumeWriting message is enqueued after those writes, leading to the reception of all outstanding CommandFailed messages (which are ignored in this state) before receiving the WritingResumed signal. The latter message is sent by the connection actor only once the internally queued write has been fully completed, meaning that a subsequent write will not fail. This is exploited by the EchoHandler to switch to an ACK-based approach for the first ten writes after a failure before resuming the optimistic write-through behavior.
protected Procedure<Object> closing = new Procedure<Object>() {
@Override
public void apply(Object msg) throws Exception {
if (msg instanceof CommandFailed) {
// the command can only have been a Write
connection.tell(TcpMessage.resumeWriting(), getSelf());
getContext().become(closeResend, false);
} else if (msg instanceof Ack) {
acknowledge(((Ack) msg).ack);
if (storage.isEmpty())
getContext().stop(getSelf());
}
}
};
protected Procedure<Object> closeResend = new Procedure<Object>() {
@Override
public void apply(Object msg) throws Exception {
if (msg instanceof WritingResumed) {
writeAll();
getContext().unbecome();
} else if (msg instanceof Ack) {
acknowledge(((Ack) msg).ack);
}
}
};
Closing the connection while still sending all data is a bit more involved than in the ACK-based approach: the idea is to always send all outstanding messages and acknowledge all successful writes, and if a failure happens then switch behavior to await the WritingResumed event and start over.
The helper functions are very similar to the ACK-based case:
protected void buffer(ByteString data) {
storage.add(data);
stored += data.size();
if (stored > MAX_STORED) {
log.warning("drop connection to [{}] (buffer overrun)", remote);
getContext().stop(getSelf());
} else if (stored > HIGH_WATERMARK) {
log.debug("suspending reading at {}", currentOffset());
connection.tell(TcpMessage.suspendReading(), getSelf());
suspended = true;
}
}
protected void acknowledge(int ack) {
assert ack == storageOffset;
assert !storage.isEmpty();
final ByteString acked = storage.remove();
stored -= acked.size();
transferred += acked.size();
storageOffset += 1;
if (suspended && stored < LOW_WATERMARK) {
log.debug("resuming reading");
connection.tell(TcpMessage.resumeReading(), getSelf());
suspended = false;
}
}
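The writeFirst and writeAll methods used in the buffering and closeResend states are also part of the omitted storage code; conceptually they resend either the oldest buffered chunk or all of them. A sketch, assuming storage is a LinkedList<ByteString> and storageOffset numbers the oldest buffered chunk (the authoritative version is in the GitHub sample):
// resend only the oldest buffered chunk and wait for its Ack (ACK-based phase)
protected void writeFirst() {
  connection.tell(TcpMessage.write(storage.peek(), new Ack(storageOffset)), getSelf());
}

// resend every buffered chunk in one go (optimistic write-through phase)
protected void writeAll() {
  int i = 0;
  for (ByteString data : storage) {
    connection.tell(TcpMessage.write(data, new Ack(storageOffset + i++)), getSelf());
  }
}

// the sequence number that the next freshly received chunk will be written with
protected int currentOffset() {
  return storageOffset + storage.size();
}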
Usage Example: TcpPipelineHandler and SSL
This example shows the different parts described above working together. Let us first look at the SSL server:
public class SslServer extends UntypedActor {
final SSLContext sslContext;
final ActorRef listener;
final LoggingAdapter log = Logging
.getLogger(getContext().system(), getSelf());
public SslServer(SSLContext sslContext, ActorRef listener) {
this.sslContext = sslContext;
this.listener = listener;
// bind to a socket, registering ourselves as incoming connection handler
Tcp.get(getContext().system()).getManager().tell(
TcpMessage.bind(getSelf(), new InetSocketAddress("localhost", 0), 100),
getSelf());
}
// this will hold the pipeline handler’s context
Init<WithinActorContext, String, String> init = null;
@Override
public void onReceive(Object msg) {
if (msg instanceof CommandFailed) {
getContext().stop(getSelf());
} else if (msg instanceof Bound) {
listener.tell(msg, getSelf());
} else if (msg instanceof Connected) {
// create a javax.net.ssl.SSLEngine for our peer in server mode
final InetSocketAddress remote = ((Connected) msg).remoteAddress();
final SSLEngine engine = sslContext.createSSLEngine(
remote.getHostName(), remote.getPort());
engine.setUseClientMode(false);
// build pipeline and set up context for communicating with TcpPipelineHandler
init = TcpPipelineHandler.withLogger(log, sequence(sequence(sequence(sequence(
new StringByteStringAdapter("utf-8"),
new DelimiterFraming(1024, ByteString.fromString("\n"), true)),
new TcpReadWriteAdapter()),
new SslTlsSupport(engine)),
new BackpressureBuffer(1000, 10000, 1000000)));
// create handler for pipeline, setting ourselves as payload recipient
final ActorRef handler = getContext().actorOf(
TcpPipelineHandler.props(init, getSender(), getSelf()));
// register the SSL handler with the connection
getSender().tell(TcpMessage.register(handler), getSelf());
} else if (msg instanceof Init.Event) {
// unwrap TcpPipelineHandler’s event wrapper to get the decrypted String payload
final String recv = init.event(msg);
// inform someone of the received message
listener.tell(recv, getSelf());
// and reply (sender is the SSL handler created above)
getSender().tell(init.command("world\n"), getSelf());
}
}
}
Please refer to the source code to see all imports.
The actor above binds to a local port and registers itself as the handler for new connections. When a new connection comes in it will create a javax.net.ssl.SSLEngine (details not shown here since they vary widely for different setups, please refer to the JDK documentation) and wrap that in an SslTlsSupport pipeline stage (which is included in akka-actor).
This sample demonstrates a few more things: below the SSL pipeline stage we have inserted a backpressure buffer which generates a HighWatermarkReached event to tell the upper stages to suspend writing (generated at 10000 buffered bytes) and a LowWatermarkReached event when they can resume writing (when the buffer empties below 1000 bytes); the buffer has a maximum capacity of 1 MB. The implementation is very similar to the NACK-based backpressure approach presented above; please refer to the API documentation for details about its usage. Above the SSL stage comes an adapter which extracts only the payload data from the TCP commands and events, i.e. it speaks ByteString above. The resulting byte streams are broken into frames by a DelimiterFraming stage which chops them up on newline characters. The top-most stage then converts between String and UTF-8 encoded ByteString.
As a result the pipeline will accept simple String commands, encode them using UTF-8, delimit them with newlines (which are expected to be already present in the sending direction), transform them into TCP commands and events, encrypt them and send them off to the connection actor while buffering writes.
This pipeline is driven by a TcpPipelineHandler actor which is also included in akka-actor. In order to capture the generic command and event types consumed and emitted by that actor we need to create a wrapper—the nested Init class—which also provides the pipeline context needed by the supplied pipeline; in this case we use the withLogger convenience method which supplies a context that implements HasLogger and HasActorContext and should be sufficient for typical pipelines. With those things bundled up, all that remains is creating a TcpPipelineHandler and registering it as the recipient of inbound traffic from the TCP connection.
Since we instructed that handler actor to send any events which are emitted by the SSL pipeline to ourselves, we can then just wait for the reception of the decrypted payload messages, compute a response—just "world\n" in this case—and reply by sending back an Init.Command. It should be noted that communication with the handler wraps commands and events in the inner types of the init object in order to keep things well separated. To ease handling of such path-dependent types there exist two helper methods, namely Init.command for creating a command and Init.event for unwrapping an event.
Looking at the client side we see that not much needs to be changed:
public class SslClient extends UntypedActor {
final InetSocketAddress remote;
final SSLContext sslContext;
final ActorRef listener;
final LoggingAdapter log = Logging
.getLogger(getContext().system(), getSelf());
public SslClient(InetSocketAddress remote, SSLContext sslContext,
ActorRef listener) {
this.remote = remote;
this.sslContext = sslContext;
this.listener = listener;
// open a connection to the remote TCP port
Tcp.get(getContext().system()).getManager()
.tell(TcpMessage.connect(remote), getSelf());
}
// this will hold the pipeline handler’s context
Init<WithinActorContext, String, String> init = null;
@Override
public void onReceive(Object msg) {
if (msg instanceof CommandFailed) {
getContext().stop(getSelf());
} else if (msg instanceof Connected) {
// create a javax.net.ssl.SSLEngine for our peer in client mode
final SSLEngine engine = sslContext.createSSLEngine(
remote.getHostName(), remote.getPort());
engine.setUseClientMode(true);
// build pipeline and set up context for communicating with TcpPipelineHandler
init = TcpPipelineHandler.withLogger(log, sequence(sequence(sequence(sequence(
new StringByteStringAdapter("utf-8"),
new DelimiterFraming(1024, ByteString.fromString("\n"), true)),
new TcpReadWriteAdapter()),
new SslTlsSupport(engine)),
new BackpressureBuffer(1000, 10000, 1000000)));
// create handler for pipeline, setting ourselves as payload recipient
final ActorRef handler = getContext().actorOf(
TcpPipelineHandler.props(init, getSender(), getSelf()));
// register the SSL handler with the connection
getSender().tell(TcpMessage.register(handler), getSelf());
// and send a message across the SSL channel
handler.tell(init.command("hello\n"), getSelf());
} else if (msg instanceof Init.Event) {
// unwrap TcpPipelineHandler’s event wrapper to get the decrypted String payload
final String recv = init.event(msg);
// and inform someone of the received payload
listener.tell(recv, getSelf());
}
}
}
Once the connection is established we again create a TcpPipelineHandler wrapping an SslTlsSupport (in client mode) and register that as the recipient of inbound traffic and ourselves as recipient for the decrypted payload data. Then we send a greeting to the server and forward any replies to some listener actor.
Warning
The SslTlsSupport currently does not support using a Tcp.WriteCommand other than Tcp.Write, like for example Tcp.WriteFile. It also doesn't support messages that are larger than the size of the send buffer on the socket. Trying to send such a message will result in a CommandFailed. If you need to send large messages over SSL, then they have to be sent in chunks.
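One way to stay within this limit is to split a large payload into smaller pieces before handing it to the TcpPipelineHandler, so that every resulting Tcp.Write stays well below the socket send buffer size. A rough sketch (the chunk size, the helper name and the naive substring split are assumptions for illustration only):
// hypothetical helper inside one of the actors above
private static final int CHUNK_SIZE = 16384; // assumption: well below the send buffer

private void sendChunked(ActorRef handler,
    Init<WithinActorContext, String, String> init, String payload) {
  for (int i = 0; i < payload.length(); i += CHUNK_SIZE) {
    final String chunk =
        payload.substring(i, Math.min(payload.length(), i + CHUNK_SIZE));
    // each chunk becomes its own small write below the SSL stage
    handler.tell(init.command(chunk), getSelf());
  }
}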