Using AWS MSK as Kafka service
Akka connects to AWS MSK clusters via TLS, authenticating using SASL (Simple Authentication and Security Layer) SCRAM.
Prerequisites not covered in detail by this guide:
- The MSK instance must be provisioned; serverless MSK does not support SASL.
- The MSK cluster must be set up with TLS for client-broker connections and SASL/SCRAM authentication, with a user and password to use for authenticating your Akka service:
  - The user and password are stored in an AWS Secrets Manager secret.
  - The secret must be encrypted with a specific key; MSK cannot use the default KMS encryption key.
- The provisioned cluster must be set up for public access.
- Creating the relevant ACLs for the user to access the topics in your MSK cluster (see the sketch after this list):
  - Disabling allow.everyone.if.no.acl.found in the MSK cluster config.
- Creating the topics used by your Akka service.
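The ACL and topic prerequisites are typically handled with the standard Kafka admin tools (or whatever infrastructure-as-code you prefer). Below is a minimal sketch only: the topic name, partition count, and the admin.properties file (a Kafka client config with credentials authorized to administer the cluster) are placeholders, and disabling allow.everyone.if.no.acl.found is done in the MSK cluster configuration itself, not with these tools.

# Create a topic used by the Akka service (partition count is an example value).
kafka-topics.sh --bootstrap-server <bootstrap brokers> \
  --command-config admin.properties \
  --create --topic <topic> --partitions 6

# Allow the SASL/SCRAM user to read from and write to the topic.
kafka-acls.sh --bootstrap-server <bootstrap brokers> \
  --command-config admin.properties \
  --add --allow-principal "User:<sasl username>" \
  --operation Read --operation Write --topic <topic>

# Consumers also need Read access to their consumer group.
kafka-acls.sh --bootstrap-server <bootstrap brokers> \
  --command-config admin.properties \
  --add --allow-principal "User:<sasl username>" \
  --operation Read --group '*'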
Steps to connect to an AWS Kafka broker
Take the following steps to configure access to your AWS Kafka broker for your Akka project.
- Ensure you are on the correct Akka project:
  akka config get-project
- Store the password for your user in an Akka secret:
  akka secret create generic aws-msk-secret --literal pwd=<sasl user password>
- Get the bootstrap brokers for your cluster. In the AWS console they can be found by selecting the cluster and clicking "View client information"; the copy button at the top of "Public endpoint" copies a correctly formatted string with the bootstrap brokers. See the AWS docs, or the AWS CLI sketch after these steps, for other ways to inspect the bootstrap brokers.
- Use akka projects config to set the broker details, passing the MSK SASL username you have prepared and the bootstrap servers:
  akka projects config set broker \
    --broker-service kafka \
    --broker-auth scram-sha-512 \
    --broker-user <sasl username> \
    --broker-password-secret aws-msk-secret/pwd \
    --broker-bootstrap-servers <bootstrap brokers>
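If you prefer the AWS CLI over the console for looking up the bootstrap brokers (step 3), a sketch of the lookup is shown below; the cluster ARN is a placeholder, and the exact output field depends on how your cluster exposes access:

aws kafka get-bootstrap-brokers --cluster-arn <cluster arn>
# For a provisioned cluster with public SASL/SCRAM access the relevant field is
# typically BootstrapBrokerStringPublicSaslScram.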
The --broker-password-secret parameter refers to the name of the Akka secret created earlier, not the actual password string. An optional description can be added with the --description parameter to provide additional notes about the broker.
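For example, the optional description could be set together with the other broker settings (the description text here is only an illustration):

akka projects config set broker \
  --broker-service kafka \
  --broker-auth scram-sha-512 \
  --broker-user <sasl username> \
  --broker-password-secret aws-msk-secret/pwd \
  --broker-bootstrap-servers <bootstrap brokers> \
  --description "AWS MSK cluster for this project"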
The broker config can be inspected using:
akka projects config get broker
Custom key pair
If you are using a custom key pair for TLS connections to your MSK cluster instead of the default AWS-provided key pair, you will need to define a secret containing the CA certificate:
akka secret create tls-ca kafka-ca-cert --cert ./ca.pem
Then pass the name of that secret with --broker-ca-cert-secret when setting up the broker:
akka projects config set broker \
  --broker-service kafka \
  --broker-auth scram-sha-512 \
  --broker-user <sasl username> \
  --broker-password-secret aws-msk-secret/pwd \
  --broker-ca-cert-secret kafka-ca-cert \
  --broker-bootstrap-servers <bootstrap brokers>
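To sanity-check that the certificate in ca.pem matches what the brokers actually present, one option is to inspect the TLS handshake directly with openssl; the host and port below are placeholders for one of your bootstrap brokers:

openssl s_client -connect <broker host>:<broker port> -CAfile ./ca.pem </dev/null
# Look for "Verify return code: 0 (ok)" in the output.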
Delivery characteristics
When your application consumes messages from Kafka, Akka tries to deliver them to your service in an 'at-least-once' fashion while preserving order.
Kafka partitions are consumed independently. When passing messages to a certain entity, or using them to update a view row, by specifying the id as the Cloud Event ce-subject attribute on the message, the same id must be used to partition the topic to guarantee that the messages are processed in order in the entity or view. Ordering is not guaranteed for messages arriving on different Kafka partitions.
Correct partitioning is especially important for topics that stream directly into views and transform the updates: when messages for the same subject id are spread over different transactions, they may read stale data and lose updates.
To achieve at-least-once delivery, messages that are not acknowledged will be redelivered. This means redeliveries of 'older' messages may arrive behind fresh deliveries of 'newer' messages. The first delivery of each message is always in-order, though.
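As an illustration of the partitioning requirement, the sketch below uses the standard kafka-console-producer tool to publish records keyed by the entity id, so that all messages for one id land on the same partition. The topic name and the client-sasl.properties file (a Kafka client config with your SASL/SCRAM and TLS settings) are assumptions, and each message would also need the matching ce-subject attribute for Akka to route it:

# client-sasl.properties is assumed to contain, for example:
#   security.protocol=SASL_SSL
#   sasl.mechanism=SCRAM-SHA-512
#   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
#     username="<sasl username>" password="<sasl user password>";

kafka-console-producer.sh \
  --bootstrap-server <bootstrap brokers> \
  --topic <topic> \
  --producer.config client-sasl.properties \
  --property parse.key=true \
  --property key.separator=:
# Then enter lines such as:
#   entity-1:{"value": "first"}
#   entity-1:{"value": "second"}
# Records with the same key are assigned to the same partition by the default partitioner.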
When publishing messages to Kafka from Akka, the ce-subject attribute, if present, is used as the Kafka partition key for the message.
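To spot-check that messages published from Akka carry the expected partition key, the topic can be read back with the standard console consumer, printing the record keys; the same hypothetical client-sasl.properties as above is assumed:

kafka-console-consumer.sh \
  --bootstrap-server <bootstrap brokers> \
  --topic <topic> \
  --consumer.config client-sasl.properties \
  --property print.key=true \
  --from-beginning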