sbt Example: Lagom Scala
These instructions show how to add telemetry to a sample Lagom application. You will add Cinnamon to the build, and a Coda Hale console reporter will be used to print telemetry output to the terminal window.
See a comprehensive sample application using Lagom, Maven, and Lightbend Telemetry here: https://github.com/lightbend/telemetry-samples/tree/master/lagom/shopping-cart-scala.
Prerequisites
The following must be installed for these instructions to work:
- Java
- sbt
- Commercial credentials
Commercial credentials
To gain access to Lightbend Telemetry you must have a Lightbend subscription and Lightbend account.
Once you have logged in, the Lightbend platform credentials guide contains instructions for configuring your sbt, Maven, or Gradle project to access Lightbend's commercial dependencies, including Telemetry. If you have previously used a username and password to configure your commercial credentials, the guide explains how to update to the new scheme.
The URLs generated as part of your Lightbend platform credentials should not be committed to public source control repositories.
If you do not have a Lightbend subscription, please contact us to request an evaluation.
Sample application
We will use a stripped-down version of the Hello Service from the Scala with sbt version of the Lagom getting started guide. To demonstrate circuit breaker metrics, we will create a second service call that does an outgoing service call to the Hello Service.
Start off by creating a folder `lagom-scala-example` with the following files (the content of each file will be added later):
- build.sbt
- project/build.properties
- project/plugins.sbt
- hello-service-api/src/main/scala/cinnamon/lagom/api/HelloService.scala
- hello-service-impl/src/main/resources/application.conf
- hello-service-impl/src/main/scala/cinnamon/lagom/impl/HelloServiceImpl.scala
- hello-service-impl/src/main/scala/cinnamon/lagom/impl/HelloLoader.scala
The next step is to add content to the files you just created.
Add to `build.sbt`:

```scala
ThisBuild / organization := "cinnamon.lagom.example"
ThisBuild / version := "0.1-SNAPSHOT"

// The Scala version that will be used for cross-compiled libraries
ThisBuild / scalaVersion := "2.12.16"

// This sample is not using Cassandra or Kafka
ThisBuild / lagomCassandraEnabled := false
ThisBuild / lagomKafkaEnabled := false

// Generate your Lightbend commercial sbt resolvers at:
// https://www.lightbend.com/account/lightbend-platform/credentials

lazy val helloServiceApi = project
  .in(file("hello-service-api"))
  .settings(
    libraryDependencies ++= Seq(
      lagomScaladslApi
    )
  )

lazy val helloServiceImpl = project
  .in(file("hello-service-impl"))
  .enablePlugins(LagomScala, Cinnamon)
  .settings(
    // Enable Cinnamon during tests
    test / cinnamon := true,
    // Add a Play secret to the Test / run javaOptions, so we can run Lagom forked
    Test / run / javaOptions += "-Dplay.http.secret.key=DO_NOT_USE_ME_IN_PRODUCTION_123",
    libraryDependencies ++= Seq(
      "com.softwaremill.macwire" %% "macros" % "2.2.5",
      // Use Coda Hale Metrics and Lagom instrumentation
      Cinnamon.library.cinnamonCHMetrics,
      Cinnamon.library.cinnamonLagom
    )
  )
  .dependsOn(helloServiceApi)
```
Add to `project/build.properties`:

```
sbt.version=1.6.2
```
Add to `project/plugins.sbt`:

```scala
// The Lagom plugin
addSbtPlugin("com.lightbend.lagom" % "lagom-sbt-plugin" % "1.6.5")

// Needed for importing the project into Eclipse
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")

// The Cinnamon Telemetry plugin
addSbtPlugin("com.lightbend.cinnamon" % "sbt-cinnamon" % "2.17.5")
```
Add to `hello-service-api/src/main/scala/cinnamon/lagom/api/HelloService.scala`:

```scala
package cinnamon.lagom.api

import akka.NotUsed
import com.lightbend.lagom.scaladsl.api.{ Service, ServiceCall }

trait HelloService extends Service {

  /**
   * Example: curl http://localhost:9000/api/hello/Alice
   */
  def hello(id: String): ServiceCall[NotUsed, String]

  /**
   * Example: curl http://localhost:9000/api/hello-proxy/Alice
   */
  def helloProxy(id: String): ServiceCall[NotUsed, String]

  final override def descriptor = {
    import Service._
    named("hello").withCalls(
      pathCall("/api/hello/:id", hello _),
      pathCall("/api/hello-proxy/:id", helloProxy _)
    ).withAutoAcl(true)
  }
}
```
Add to `hello-service-impl/src/main/scala/cinnamon/lagom/impl/HelloServiceImpl.scala`:

```scala
package cinnamon.lagom.impl

import akka.NotUsed
import cinnamon.lagom.api.HelloService
import com.lightbend.lagom.scaladsl.api.ServiceCall
import scala.concurrent.ExecutionContext
import scala.concurrent.Future

class HelloServiceImpl(helloService: HelloService)(implicit ec: ExecutionContext) extends HelloService {

  override def hello(id: String) = ServiceCall { _ =>
    Future.successful(s"Hello $id!")
  }

  override def helloProxy(id: String) = ServiceCall { _ =>
    helloService.hello(id).invoke().map { answer =>
      s"Hello service said: $answer"
    }
  }
}
```
Since circuit breakers are used for outgoing connections in Lagom, we have a service call, named `hello-proxy`, that does an outgoing call to the first service call, named `hello`.
Now you need to wire everything together in a `LagomApplicationLoader` and `LagomApplication`.
Add to `hello-service-impl/src/main/scala/cinnamon/lagom/impl/HelloLoader.scala`:

```scala
package cinnamon.lagom.impl

import cinnamon.lagom.api.HelloService
import cinnamon.lagom.CircuitBreakerInstrumentation
import com.lightbend.lagom.internal.spi.CircuitBreakerMetricsProvider
import com.lightbend.lagom.scaladsl.api.ServiceLocator
import com.lightbend.lagom.scaladsl.client.StaticServiceLocatorComponents
import com.lightbend.lagom.scaladsl.devmode.LagomDevModeComponents
import com.lightbend.lagom.scaladsl.server._
import com.softwaremill.macwire._
import java.net.URI
import play.api.libs.ws.ahc.AhcWSComponents

class HelloLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    // Only needed to allow the sample to run from test and dist
    new HelloApplication(context) with StaticServiceLocatorComponents {
      override def staticServiceUri: URI = URI.create("http://localhost:9000")
    }

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new HelloApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[HelloService])
}

abstract class HelloApplication(context: LagomApplicationContext) extends LagomApplication(context)
  with AhcWSComponents {

  // Bind the service that this server provides
  override lazy val lagomServer = serverFor[HelloService](wire[HelloServiceImpl])

  // Wire up the Cinnamon circuit breaker instrumentation
  override lazy val circuitBreakerMetricsProvider: CircuitBreakerMetricsProvider =
    wire[CircuitBreakerInstrumentation]

  // Create a client so we can do outgoing service calls to the HelloService
  lazy val helloService = serviceClient.implement[HelloService]
}
```
There are two things to note in this file. First, `StaticServiceLocatorComponents` is used in the `load` method so that the application can find the `HelloService` for the outgoing service call in this sample; in a real service you would use a different service locator. Second, the Cinnamon `CircuitBreakerInstrumentation` is wired up in the `HelloApplication` to provide circuit breaker metrics. It is an implementation of the Lagom circuit breaker metrics SPI, and you can of course provide your own implementation here if you would like.
Finally, add to `hello-service-impl/src/main/resources/application.conf`:

```
play.application.loader = cinnamon.lagom.impl.HelloLoader

lagom.circuit-breaker.default.max-failures = 10

cinnamon {
  application = "hello-lagom"

  chmetrics.reporters += "console-reporter"

  akka.actors {
    default-by-class {
      includes = "/user/*"
      report-by = class
      excludes = ["akka.http.*", "akka.stream.*"]
    }
  }

  lagom.http {
    servers {
      "*:*" {
        paths {
          "*" {
            metrics = on
          }
        }
      }
    }
    clients {
      "*:*" {
        paths {
          "*" {
            metrics = on
          }
        }
      }
    }
  }
}
```
The `application.conf` file is where configuration of telemetry takes place, so let us dissect this further:

| Setting | Explanation |
|---|---|
| `cinnamon.chmetrics.reporters` | Specifies the Coda Hale reporter you wish to use. For more information see Coda Hale. Note that there are other ways to send data, e.g. Prometheus or StatsD. |
| `cinnamon.akka.actors` | Specifies which actors to collect metrics for. For more information see Actor Configuration. |
| `cinnamon.lagom.http` | Specifies which HTTP servers and clients to collect metrics for. For more information see Lagom Configuration. |
A Lagom application normally consists of multiple projects, one for each microservice, and you need to make sure that there is an `application.conf` file for each project that you would like to instrument.
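For instance, if this build also contained a second service implementation project, that project would carry its own configuration file with its own loader and application name. A minimal sketch (the project path, loader class, and application name here are illustrative, not part of this sample) might look like:

```
# hypothetical greet-service-impl/src/main/resources/application.conf
play.application.loader = cinnamon.lagom.greet.GreetLoader

cinnamon {
  # A distinct application name keeps each service's metrics separate
  application = "greet-lagom"
  chmetrics.reporters += "console-reporter"
}
```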
Running

When you have added the files above, you can start the application. Lagom has a special development mode for rapid development, and it does not fork the JVM when using the `runAll` or `run` commands in sbt. A forked JVM is necessary to get metrics for actors and HTTP calls, since those are provided by the Cinnamon Java Agent. To start your application forked, run the underlying Play server in the `test` scope:

```
> sbt "helloServiceImpl/test:runMain play.core.server.ProdServerStart"
```
The output should look something like this:
```
...
[info] Running (fork) play.core.server.ProdServerStart
...
[info] [INFO] [01/31/2019 10:54:11.118] [CoreAgent] Cinnamon Agent version 2.17.5
...
[info] 2019-01-31T09:54:15.001Z [info] play.api.Play [] - Application started (Prod)
[info] 2019-01-31T09:54:15.534Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[info] 1/31/18 10:54:18 AM ============================================================
[info] -- Gauges ----------------------------------------------------------------------
[info] metrics.akka.systems.application.dispatchers.akka_actor_default-dispatcher.active-threads
[info]              value = 0
[info] metrics.akka.systems.application.dispatchers.akka_actor_default-dispatcher.parallelism
[info]              value = 8
[info] metrics.akka.systems.application.dispatchers.akka_actor_default-dispatcher.pool-size
[info]              value = 3
[info] metrics.akka.systems.application.dispatchers.akka_actor_default-dispatcher.queued-tasks
[info]              value = 0
[info] metrics.akka.systems.application.dispatchers.akka_actor_default-dispatcher.running-threads
[info]              value = 0
[info] metrics.akka.systems.application.dispatchers.akka_io_pinned-dispatcher.active-threads
[info]              value = 1
[info] metrics.akka.systems.application.dispatchers.akka_io_pinned-dispatcher.pool-size
[info]              value = 1
[info] metrics.akka.systems.application.dispatchers.akka_io_pinned-dispatcher.running-threads
[info]              value = 0
...
```
To try out the `hello-proxy` service call and see metrics for the HTTP endpoints, as well as the `hello` circuit breaker, you can either point your browser to http://localhost:9000/api/hello-proxy/World or simply run `curl` from the command line:

```
curl http://localhost:9000/api/hello-proxy/World
```

The output from the server should now also contain metrics like this (the timing values are reported in nanoseconds):
```
[info] -- Gauges ----------------------------------------------------------------------
...
[info] metrics.lagom.circuit-breakers.hello.state
[info]              value = 3
...
[info] -- Histograms ------------------------------------------------------------------
[info] metrics.akka-http.systems.application.http-servers.0_0_0_0_0_0_0_1_9000.request-paths._api_hello-proxy__id.endpoint-response-time
[info]              count = 1
[info]                min = 472903565
[info]                max = 472903565
[info]               mean = 472903565.00
[info]             stddev = 0.00
[info]             median = 472903565.00
[info]               75% <= 472903565.00
[info]               95% <= 472903565.00
[info]               98% <= 472903565.00
[info]               99% <= 472903565.00
[info]             99.9% <= 472903565.00
...
[info] metrics.lagom.circuit-breakers.hello.latency
[info]              count = 1
[info]                min = 331715012
[info]                max = 331715012
[info]               mean = 331715012.00
[info]             stddev = 0.00
[info]             median = 331715012.00
[info]               75% <= 331715012.00
[info]               95% <= 331715012.00
[info]               98% <= 331715012.00
[info]               99% <= 331715012.00
[info]             99.9% <= 331715012.00
...
```