Configuration

You can start using Akka without defining any configuration, since sensible default values are provided. Later on you might need to amend the settings to change the default behavior or adapt for specific runtime environments. Typical examples of settings that you might amend:

  • log level and logger backend
  • enable remoting
  • message serializers
  • definition of routers
  • tuning of dispatchers

Akka uses the Typesafe Config Library, which might also be a good choice for the configuration of your own application or library built with or without Akka. This library is implemented in Java with no external dependencies; you should have a look at its documentation (in particular about ConfigFactory), which is only summarized in the following.
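
As a rough sketch of standalone use of the library (no Akka involved; the myapp.answer path is purely hypothetical):

  import com.typesafe.config.{ Config, ConfigFactory }

  // Parses application.conf/.json/.properties and merges in all reference.conf
  // resources found on the class path.
  val config: Config = ConfigFactory.load()
  val answer: Int =
    if (config.hasPath("myapp.answer")) config.getInt("myapp.answer") else 42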

Warning

If you use Akka from a Scala REPL of the 2.9.x series, and you do not provide your own ClassLoader to the ActorSystem, start the REPL with "-Yrepl-sync" to work around a deficiency in the REPL's provided context ClassLoader.

§Where configuration is read from

All configuration for Akka is held within instances of ActorSystem, or put differently, as viewed from the outside, ActorSystem is the only consumer of configuration information. While constructing an actor system, you can either pass in a Config object or not, where the second case is equivalent to passing ConfigFactory.load() (with the right class loader). This means roughly that the default is to parse all application.conf, application.json and application.properties found at the root of the class path—please refer to the aforementioned documentation for details. The actor system then merges in all reference.conf resources found at the root of the class path to form the fallback configuration, i.e. it internally uses

  appConfig.withFallback(ConfigFactory.defaultReference(classLoader))

The philosophy is that code never contains default values, but instead relies upon their presence in the reference.conf supplied with the library in question.
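
In practical terms this means that a Config you pass in only needs to cover the application layer; a minimal sketch:

  import akka.actor.ActorSystem
  import com.typesafe.config.{ Config, ConfigFactory }

  // The ActorSystem merges ConfigFactory.defaultReference() in as the fallback,
  // so the passed Config only has to provide application-level overrides.
  val appConfig: Config = ConfigFactory.parseString("akka.loglevel = DEBUG")
  val system = ActorSystem("example", appConfig)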

Highest precedence is given to overrides given as system properties; see the HOCON specification (near the bottom). Also noteworthy is that the application configuration (which defaults to application) may be overridden using the config.resource property (there are more options; please refer to the Config docs).

Note

If you are writing an Akka application, keep your configuration in application.conf at the root of the class path. If you are writing an Akka-based library, keep its configuration in reference.conf at the root of the JAR file.

§When using JarJar, OneJar, Assembly or any jar-bundler

Warning

Akka's configuration approach relies heavily on the notion of every module/jar having its own reference.conf file; all of these are discovered by the configuration library and loaded. Unfortunately this also means that if you put/merge multiple jars into the same jar, you need to merge all the reference.conf files as well. Otherwise all defaults will be lost and Akka will not function.

If you are using Maven to package your application, you can also make use of the Apache Maven Shade Plugin's support for Resource Transformers to merge all the reference.conf files on the build classpath into one.

The plugin configuration might look like this:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>1.5</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <shadedArtifactAttached>true</shadedArtifactAttached>
          <shadedClassifierName>allinone</shadedClassifierName>
          <artifactSet>
            <includes>
              <include>*:*</include>
            </includes>
          </artifactSet>
          <transformers>
            <transformer
              implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
              <resource>reference.conf</resource>
            </transformer>
            <transformer
              implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
              <manifestEntries>
                <Main-Class>akka.Main</Main-Class>
              </manifestEntries>
            </transformer>
          </transformers>
        </configuration>
      </execution>
    </executions>
  </plugin>
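
If you bundle with sbt-assembly instead (the section title also covers Assembly-style bundlers), the equivalent is a merge strategy that concatenates the reference.conf files; a hedged sketch for build.sbt, assuming a recent sbt-assembly:

  // build.sbt (sketch): concatenate all reference.conf files instead of picking one
  assemblyMergeStrategy in assembly := {
    case "reference.conf" => MergeStrategy.concat
    case other =>
      val oldStrategy = (assemblyMergeStrategy in assembly).value
      oldStrategy(other)
  }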

§Custom application.conf

A custom application.conf might look like this:

  # In this file you can override any option defined in the reference files.
  # Copy in parts of the reference files and modify as you please.

  akka {

    # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
    # to STDOUT)
    loggers = ["akka.event.slf4j.Slf4jLogger"]

    # Log level used by the configured loggers (see "loggers") as soon
    # as they have been started; before that, see "stdout-loglevel"
    # Options: OFF, ERROR, WARNING, INFO, DEBUG
    loglevel = "DEBUG"

    # Log level for the very basic logger activated during ActorSystem startup.
    # This logger prints the log messages to stdout (System.out).
    # Options: OFF, ERROR, WARNING, INFO, DEBUG
    stdout-loglevel = "DEBUG"

    # Filter of log events that is used by the LoggingAdapter before
    # publishing log events to the eventStream.
    logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

    actor {
      provider = "cluster"

      default-dispatcher {
        # Throughput for default Dispatcher, set to 1 for as fair as possible
        throughput = 10
      }
    }

    remote {
      # The port clients should connect to. Default is 2552.
      netty.tcp.port = 4711
    }
  }
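
Once an actor system has been started with such a configuration, the effective values can be read back from it; a small sketch using the paths from the example above:

  import akka.actor.ActorSystem

  val system = ActorSystem("MyApp") // picks up application.conf from the class path
  val remotePort = system.settings.config.getInt("akka.remote.netty.tcp.port") // 4711
  val loglevel   = system.settings.config.getString("akka.loglevel")           // "DEBUG"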

§Including files

Sometimes it can be useful to include another configuration file, for example if you have one application.conf with all environment-independent settings and then override some of them for specific environments.

Specifying the system property -Dconfig.resource=/dev.conf will load the dev.conf file, which in turn includes application.conf.

dev.conf:

  1. include "application"
  2.  
  3. akka {
  4. loglevel = "DEBUG"
  5. }

More advanced include and substitution mechanisms are explained in the HOCON specification.

§Logging of Configuration

If the system or config property akka.log-config-on-start is set to on, then the complete configuration is logged at INFO level when the actor system is started. This is useful when you are uncertain of what configuration is used.
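
This can be turned on with -Dakka.log-config-on-start=on, or programmatically for a single system; a sketch:

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  // Dump the complete merged configuration at INFO level when the system starts.
  val debugConf = ConfigFactory.parseString("akka.log-config-on-start = on")
  val system = ActorSystem("ConfigDump", ConfigFactory.load(debugConf))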

If in doubt, you can also easily and nicely inspect configuration objects before or after using them to construct an actor system:

  Welcome to Scala version 2.11.11 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0).
  Type in expressions to have them evaluated.
  Type :help for more information.

  scala> import com.typesafe.config._
  import com.typesafe.config._

  scala> ConfigFactory.parseString("a.b=12")
  res0: com.typesafe.config.Config = Config(SimpleConfigObject({"a" : {"b" : 12}}))

  scala> res0.root.render
  res1: java.lang.String =
  {
      # String: 1
      "a" : {
          # String: 1
          "b" : 12
      }
  }

The comments preceding every item give detailed information about the origin of the setting (file & line number) plus possible comments which were present, e.g. in the reference configuration. The settings as merged with the reference and parsed by the actor system can be displayed like this:

  final ActorSystem system = ActorSystem.create();
  System.out.println(system.settings());
  // this is a shortcut for system.settings().config().root().render()
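
The Scala equivalent is a short sketch:

  import akka.actor.ActorSystem

  val system = ActorSystem()
  // Settings.toString renders system.settings.config.root.render
  println(system.settings)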

§A Word About ClassLoaders

In several places of the configuration file it is possible to specify the fully-qualified class name of something to be instantiated by Akka. This is done using Java reflection, which in turn uses a ClassLoader. Getting the right one in challenging environments like application containers or OSGi bundles is not always trivial; the current approach of Akka is that each ActorSystem implementation stores the current thread's context class loader (if available, otherwise just its own loader as in this.getClass.getClassLoader) and uses that for all reflective accesses. This implies that putting Akka on the boot class path will yield NullPointerException from strange places: this is simply not supported.
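
In such environments it can help to pass the intended ClassLoader explicitly when creating the system; a sketch, assuming the ActorSystem factory overload that additionally takes the ClassLoader, and a hypothetical marker class MyBundleMarker from the desired bundle/jar:

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  // MyBundleMarker is a hypothetical class known to live in the desired bundle/jar;
  // its loader is used both for loading the configuration and for Akka's reflection.
  val classLoader = classOf[MyBundleMarker].getClassLoader
  val system = ActorSystem("example", ConfigFactory.load(classLoader), classLoader)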

§Application specific settings

The configuration can also be used for application specific settings. A good practice is to place those settings in an Extension, as described in the documentation for Akka Extensions; a minimal sketch follows below.
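
A minimal sketch of such a settings Extension (the myapp.db.uri path is hypothetical):

  import akka.actor.{ ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider }
  import com.typesafe.config.Config

  class SettingsImpl(config: Config) extends Extension {
    // "myapp.db.uri" is a hypothetical application-specific setting
    val DbUri: String = config.getString("myapp.db.uri")
  }

  object Settings extends ExtensionId[SettingsImpl] with ExtensionIdProvider {
    override def lookup = Settings
    override def createExtension(system: ExtendedActorSystem) =
      new SettingsImpl(system.settings.config)
  }

  // usage from anywhere an ActorSystem (or ActorContext) is in scope:
  //   val dbUri = Settings(system).DbUri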

§Configuring multiple ActorSystem

If you have more than one ActorSystem (or you're writing a library and have an ActorSystem that may be separate from the application's) you may want to separate the configuration for each system.

Given that ConfigFactory.load() merges all resources with matching name from the whole class path, it is easiest to utilize that functionality and differentiate actor systems within the hierarchy of the configuration:

  myapp1 {
    akka.loglevel = "WARNING"
    my.own.setting = 43
  }
  myapp2 {
    akka.loglevel = "ERROR"
    app2.setting = "appname"
  }
  my.own.setting = 42
  my.other.setting = "hello"

  val config = ConfigFactory.load()
  val app1 = ActorSystem("MyApp1", config.getConfig("myapp1").withFallback(config))
  val app2 = ActorSystem("MyApp2",
    config.getConfig("myapp2").withOnlyPath("akka").withFallback(config))

These two samples demonstrate different variations of the “lift-a-subtree” trick: in the first case, the configuration accessible from within the actor system is this

  1. akka.loglevel = "WARNING"
  2. my.own.setting = 43
  3. my.other.setting = "hello"
  4. // plus myapp1 and myapp2 subtrees

while in the second one, only the “akka” subtree is lifted, with the following result

  1. akka.loglevel = "ERROR"
  2. my.own.setting = 42
  3. my.other.setting = "hello"
  4. // plus myapp1 and myapp2 subtrees

Note

The configuration library is really powerful; explaining all of its features exceeds the scope of this section. In particular, not covered here are how to include other configuration files within other files (see a small example under Including files) and how to copy parts of the configuration tree by way of path substitutions.

You may also specify and parse the configuration programmatically in other ways when instantiating the ActorSystem.

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  val customConf = ConfigFactory.parseString("""
    akka.actor.deployment {
      /my-service {
        router = round-robin-pool
        nr-of-instances = 3
      }
    }
    """)
  // ConfigFactory.load sandwiches customConfig between default reference
  // config and default overrides, and then resolves it.
  val system = ActorSystem("MySystem", ConfigFactory.load(customConf))

§Reading configuration from a custom location

You can replace or supplement application.conf either in code or using system properties.

If you're using ConfigFactory.load() (which Akka does by default) you can replace application.conf by defining -Dconfig.resource=whatever, -Dconfig.file=whatever, or -Dconfig.url=whatever.

From inside your replacement file specified with -Dconfig.resource and friends, you can include "application" if you still want to use application.{conf,json,properties} as well. Settings specified before include "application" would be overridden by the included file, while those after would override the included file.

In code, there are many customization options.

There are several overloads of ConfigFactory.load(); these allow you to specify something to be sandwiched between system properties (which override) and the defaults (from reference.conf), replacing the usual application.{conf,json,properties} and replacing -Dconfig.file and friends.

The simplest variant of ConfigFactory.load() takes a resource basename (instead of application); myname.conf, myname.json, and myname.properties would then be used instead of application.{conf,json,properties}.
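
For example (a sketch; myname is a hypothetical basename):

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  // Loads myname.conf / myname.json / myname.properties instead of application.*,
  // still sandwiched between system-property overrides and reference.conf.
  val system = ActorSystem("MySystem", ConfigFactory.load("myname"))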

The most flexible variant takes a Config object, which you can load using any method in ConfigFactory. For example you could put a config string in code using ConfigFactory.parseString() or you could make a map and ConfigFactory.parseMap(), or you could load a file.

You can also combine your custom config with the usual config; that might look like this:

  // make a Config with just your special setting
  Config myConfig = ConfigFactory.parseString("something=somethingElse");
  // load the normal config stack (system props,
  // then application.conf, then reference.conf)
  Config regularConfig = ConfigFactory.load();
  // override regular stack with myConfig
  Config combined = myConfig.withFallback(regularConfig);
  // put the result in between the overrides
  // (system props) and defaults again
  Config complete = ConfigFactory.load(combined);
  // create ActorSystem
  ActorSystem system = ActorSystem.create("myname", complete);

When working with Config objects, keep in mind that there are three "layers" in the cake:

  • ConfigFactory.defaultOverrides() (system properties)
  • the app's settings
  • ConfigFactory.defaultReference() (reference.conf)

The normal goal is to customize the middle layer while leaving the other two alone.

  • ConfigFactory.load() loads the whole stack
  • the overloads of ConfigFactory.load() let you specify a different middle layer
  • the ConfigFactory.parse() variations load single files or resources

To stack two layers, use override.withFallback(fallback); try to keep system props (defaultOverrides()) on top and reference.conf (defaultReference()) on the bottom.

Do keep in mind, you can often just add another include statement in application.conf rather than writing code. Includes at the top of application.conf will be overridden by the rest of application.conf, while those at the bottom will override the earlier stuff.
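
A sketch of that ordering (the included resource names are hypothetical):

  # application.conf (sketch)
  include "environment-defaults"    # overridden by everything below it
  akka.loglevel = "INFO"
  include "environment-overrides"   # overrides everything above it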

§Actor Deployment Configuration

Deployment settings for specific actors can be defined in the akka.actor.deployment section of the configuration. In the deployment section it is possible to define things like dispatcher, mailbox, router settings, and remote deployment. Configuration of these features is described in the chapters detailing the corresponding topics. An example may look like this:

  akka.actor.deployment {

    # '/user/actorA/actorB' is a remote deployed actor
    /actorA/actorB {
      remote = "akka.tcp://sampleActorSystem@127.0.0.1:2553"
    }

    # all direct children of '/user/actorC' have a dedicated dispatcher
    "/actorC/*" {
      dispatcher = my-dispatcher
    }

    # all descendants of '/user/actorC' (direct children, and their children recursively)
    # have a dedicated dispatcher
    "/actorC/**" {
      dispatcher = my-dispatcher
    }

    # '/user/actorD/actorE' has a special priority mailbox
    /actorD/actorE {
      mailbox = prio-mailbox
    }

    # '/user/actorF/actorG/actorH' is a random pool
    /actorF/actorG/actorH {
      router = random-pool
      nr-of-instances = 5
    }
  }

  my-dispatcher {
    fork-join-executor.parallelism-min = 10
    fork-join-executor.parallelism-max = 10
  }

  prio-mailbox {
    mailbox-type = "a.b.MyPrioMailbox"
  }

Note

The deployment section for a specific actor is identified by the path of the actor relative to /user.

You can use asterisks as wildcard matches for the actor path sections, so you could specify /*/sampleActor and that would match all actors named sampleActor on that level in the hierarchy. In addition, please note:

  • you can also use wildcards in the last position to match all actors at a certain level: /someParent/*
  • you can use double-wildcards in the last position to match all child actors and their children recursively: /someParent/**
  • non-wildcard matches always have higher priority to match than wildcards, and single wildcard matches have higher priority than double-wildcards, so: /foo/bar is considered more specific than /foo/*, which is considered more specific than /foo/**. Only the highest priority match is used
  • wildcards cannot be used to partially match a section, like this: /foo*/bar, /f*o/bar etc.

Note

Double-wildcards can only be placed in the last position.

§Listing of the Reference Configuration

Each Akka module has a reference configuration file with the default values.

§akka-actor

  1. ####################################
  2. # Akka Actor Reference Config File #
  3. ####################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. # Akka version, checked against the runtime version of Akka. Loaded from generated conf file.
  9. include "version"
  10.  
  11. akka {
  12. # Home directory of Akka, modules in the deploy directory will be loaded
  13. home = ""
  14.  
  15. # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
  16. # to STDOUT)
  17. loggers = ["akka.event.Logging$DefaultLogger"]
  18. # Filter of log events that is used by the LoggingAdapter before
  19. # publishing log events to the eventStream. It can perform
  20. # fine grained filtering based on the log source. The default
  21. # implementation filters on the `loglevel`.
  22. # FQCN of the LoggingFilter. The Class of the FQCN must implement
  23. # akka.event.LoggingFilter and have a public constructor with
  24. # (akka.actor.ActorSystem.Settings, akka.event.EventStream) parameters.
  25. logging-filter = "akka.event.DefaultLoggingFilter"
  26.  
  27. # Specifies the default loggers dispatcher
  28. loggers-dispatcher = "akka.actor.default-dispatcher"
  29.  
  30. # Loggers are created and registered synchronously during ActorSystem
  31. # start-up, and since they are actors, this timeout is used to bound the
  32. # waiting time
  33. logger-startup-timeout = 5s
  34.  
  35. # Log level used by the configured loggers (see "loggers") as soon
  36. # as they have been started; before that, see "stdout-loglevel"
  37. # Options: OFF, ERROR, WARNING, INFO, DEBUG
  38. loglevel = "INFO"
  39.  
  40. # Log level for the very basic logger activated during ActorSystem startup.
  41. # This logger prints the log messages to stdout (System.out).
  42. # Options: OFF, ERROR, WARNING, INFO, DEBUG
  43. stdout-loglevel = "WARNING"
  44.  
  45. # Log the complete configuration at INFO level when the actor system is started.
  46. # This is useful when you are uncertain of what configuration is used.
  47. log-config-on-start = off
  48.  
  49. # Log at info level when messages are sent to dead letters.
  50. # Possible values:
  51. # on: all dead letters are logged
  52. # off: no logging of dead letters
  53. # n: positive integer, number of dead letters that will be logged
  54. log-dead-letters = 10
  55.  
  56. # Possibility to turn off logging of dead letters while the actor system
  57. # is shutting down. Logging is only done when enabled by 'log-dead-letters'
  58. # setting.
  59. log-dead-letters-during-shutdown = on
  60.  
  61. # List FQCN of extensions which shall be loaded at actor system startup.
  62. # Library extensions are regular extensions that are loaded at startup and are
  63. # available for third party library authors to enable auto-loading of extensions when
  64. # present on the classpath. This is done by appending entries:
  65. # 'library-extensions += "Extension"' in the library `reference.conf`.
  66. #
  67. # Should not be set by end user applications in 'application.conf', use the extensions property for that
  68. #
  69. library-extensions = ${?akka.library-extensions} []
  70.  
  71. # List FQCN of extensions which shall be loaded at actor system startup.
  72. # Should be on the format: 'extensions = ["foo", "bar"]' etc.
  73. # See the Akka Documentation for more info about Extensions
  74. extensions = []
  75.  
  76. # Toggles whether threads created by this ActorSystem should be daemons or not
  77. daemonic = off
  78.  
  79. # JVM shutdown, System.exit(-1), in case of a fatal error,
  80. # such as OutOfMemoryError
  81. jvm-exit-on-fatal-error = on
  82.  
  83. actor {
  84.  
  85. # Either one of "local", "remote" or "cluster" or the
  86. # FQCN of the ActorRefProvider to be used; the below is the built-in default,
  87. # note that "remote" and "cluster" requires the akka-remote and akka-cluster
  88. # artifacts to be on the classpath.
  89. provider = "local"
  90.  
  91. # The guardian "/user" will use this class to obtain its supervisorStrategy.
  92. # It needs to be a subclass of akka.actor.SupervisorStrategyConfigurator.
  93. # In addition to the default there is akka.actor.StoppingSupervisorStrategy.
  94. guardian-supervisor-strategy = "akka.actor.DefaultSupervisorStrategy"
  95.  
  96. # Timeout for ActorSystem.actorOf
  97. creation-timeout = 20s
  98.  
  99. # Serializes and deserializes (non-primitive) messages to ensure immutability,
  100. # this is only intended for testing.
  101. serialize-messages = off
  102.  
  103. # Serializes and deserializes creators (in Props) to ensure that they can be
  104. # sent over the network, this is only intended for testing. Purely local deployments
  105. # as marked with deploy.scope == LocalScope are exempt from verification.
  106. serialize-creators = off
  107.  
  108. # Timeout for send operations to top-level actors which are in the process
  109. # of being started. This is only relevant if using a bounded mailbox or the
  110. # CallingThreadDispatcher for a top-level actor.
  111. unstarted-push-timeout = 10s
  112.  
  113. typed {
  114. # Default timeout for typed actor methods with non-void return type
  115. timeout = 5s
  116. }
  117. # Mapping between ´deployment.router' short names to fully qualified class names
  118. router.type-mapping {
  119. from-code = "akka.routing.NoRouter"
  120. round-robin-pool = "akka.routing.RoundRobinPool"
  121. round-robin-group = "akka.routing.RoundRobinGroup"
  122. random-pool = "akka.routing.RandomPool"
  123. random-group = "akka.routing.RandomGroup"
  124. balancing-pool = "akka.routing.BalancingPool"
  125. smallest-mailbox-pool = "akka.routing.SmallestMailboxPool"
  126. broadcast-pool = "akka.routing.BroadcastPool"
  127. broadcast-group = "akka.routing.BroadcastGroup"
  128. scatter-gather-pool = "akka.routing.ScatterGatherFirstCompletedPool"
  129. scatter-gather-group = "akka.routing.ScatterGatherFirstCompletedGroup"
  130. tail-chopping-pool = "akka.routing.TailChoppingPool"
  131. tail-chopping-group = "akka.routing.TailChoppingGroup"
  132. consistent-hashing-pool = "akka.routing.ConsistentHashingPool"
  133. consistent-hashing-group = "akka.routing.ConsistentHashingGroup"
  134. }
  135.  
  136. deployment {
  137.  
  138. # deployment id pattern - on the format: /parent/child etc.
  139. default {
  140. # The id of the dispatcher to use for this actor.
  141. # If undefined or empty the dispatcher specified in code
  142. # (Props.withDispatcher) is used, or default-dispatcher if not
  143. # specified at all.
  144. dispatcher = ""
  145.  
  146. # The id of the mailbox to use for this actor.
  147. # If undefined or empty the default mailbox of the configured dispatcher
  148. # is used or if there is no mailbox configuration the mailbox specified
  149. # in code (Props.withMailbox) is used.
  150. # If there is a mailbox defined in the configured dispatcher then that
  151. # overrides this setting.
  152. mailbox = ""
  153.  
  154. # routing (load-balance) scheme to use
  155. # - available: "from-code", "round-robin", "random", "smallest-mailbox",
  156. # "scatter-gather", "broadcast"
  157. # - or: Fully qualified class name of the router class.
  158. # The class must extend akka.routing.CustomRouterConfig and
  159. # have a public constructor with com.typesafe.config.Config
  160. # and optional akka.actor.DynamicAccess parameter.
  161. # - default is "from-code";
  162. # Whether or not an actor is transformed to a Router is decided in code
  163. # only (Props.withRouter). The type of router can be overridden in the
  164. # configuration; specifying "from-code" means that the values specified
  165. # in the code shall be used.
  166. # In case of routing, the actors to be routed to can be specified
  167. # in several ways:
  168. # - nr-of-instances: will create that many children
  169. # - routees.paths: will route messages to these paths using ActorSelection,
  170. # i.e. will not create children
  171. # - resizer: dynamically resizable number of routees as specified in
  172. # resizer below
  173. router = "from-code"
  174.  
  175. # number of children to create in case of a router;
  176. # this setting is ignored if routees.paths is given
  177. nr-of-instances = 1
  178.  
  179. # within is the timeout used for routers containing future calls
  180. within = 5 seconds
  181.  
  182. # number of virtual nodes per node for consistent-hashing router
  183. virtual-nodes-factor = 10
  184.  
  185. tail-chopping-router {
  186. # interval is duration between sending message to next routee
  187. interval = 10 milliseconds
  188. }
  189.  
  190. routees {
  191. # Alternatively to giving nr-of-instances you can specify the full
  192. # paths of those actors which should be routed to. This setting takes
  193. # precedence over nr-of-instances
  194. paths = []
  195. }
  196. # To use a dedicated dispatcher for the routees of the pool you can
  197. # define the dispatcher configuration inline with the property name
  198. # 'pool-dispatcher' in the deployment section of the router.
  199. # For example:
  200. # pool-dispatcher {
  201. # fork-join-executor.parallelism-min = 5
  202. # fork-join-executor.parallelism-max = 5
  203. # }
  204.  
  205. # Routers with dynamically resizable number of routees; this feature is
  206. # enabled by including (parts of) this section in the deployment
  207. resizer {
  208. enabled = off
  209.  
  210. # The fewest number of routees the router should ever have.
  211. lower-bound = 1
  212.  
  213. # The most number of routees the router should ever have.
  214. # Must be greater than or equal to lower-bound.
  215. upper-bound = 10
  216.  
  217. # Threshold used to evaluate if a routee is considered to be busy
  218. # (under pressure). Implementation depends on this value (default is 1).
  219. # 0: number of routees currently processing a message.
  220. # 1: number of routees currently processing a message has
  221. # some messages in mailbox.
  222. # > 1: number of routees with at least the configured pressure-threshold
  223. # messages in their mailbox. Note that estimating mailbox size of
  224. # default UnboundedMailbox is O(N) operation.
  225. pressure-threshold = 1
  226.  
  227. # Percentage to increase capacity whenever all routees are busy.
  228. # For example, 0.2 would increase 20% (rounded up), i.e. if current
  229. # capacity is 6 it will request an increase of 2 more routees.
  230. rampup-rate = 0.2
  231.  
  232. # Minimum fraction of busy routees before backing off.
  233. # For example, if this is 0.3, then we'll remove some routees only when
  234. # less than 30% of routees are busy, i.e. if current capacity is 10 and
  235. # 3 are busy then the capacity is unchanged, but if 2 or less are busy
  236. # the capacity is decreased.
  237. # Use 0.0 or negative to avoid removal of routees.
  238. backoff-threshold = 0.3
  239.  
  240. # Fraction of routees to be removed when the resizer reaches the
  241. # backoffThreshold.
  242. # For example, 0.1 would decrease 10% (rounded up), i.e. if current
  243. # capacity is 9 it will request an decrease of 1 routee.
  244. backoff-rate = 0.1
  245.  
  246. # Number of messages between resize operation.
  247. # Use 1 to resize before each message.
  248. messages-per-resize = 10
  249. }
  250.  
  251. # Routers with dynamically resizable number of routees based on
  252. # performance metrics.
  253. # This feature is enabled by including (parts of) this section in
  254. # the deployment, cannot be enabled together with default resizer.
  255. optimal-size-exploring-resizer {
  256.  
  257. enabled = off
  258.  
  259. # The fewest number of routees the router should ever have.
  260. lower-bound = 1
  261.  
  262. # The most number of routees the router should ever have.
  263. # Must be greater than or equal to lower-bound.
  264. upper-bound = 10
  265.  
  266. # probability of doing a ramping down when all routees are busy
  267. # during exploration.
  268. chance-of-ramping-down-when-full = 0.2
  269.  
  270. # Interval between each resize attempt
  271. action-interval = 5s
  272.  
  273. # If the routees have not been fully utilized (i.e. all routees busy)
  274. # for such length, the resizer will downsize the pool.
  275. downsize-after-underutilized-for = 72h
  276.  
  277. # Duration exploration, the ratio between the largest step size and
  278. # current pool size. E.g. if the current pool size is 50, and the
  279. # explore-step-size is 0.1, the maximum pool size change during
  280. # exploration will be +- 5
  281. explore-step-size = 0.1
  282.  
  283. # Probabily of doing an exploration v.s. optmization.
  284. chance-of-exploration = 0.4
  285.  
  286. # When downsizing after a long streak of underutilization, the resizer
  287. # will downsize the pool to the highest utiliziation multiplied by a
  288. # a downsize rasio. This downsize ratio determines the new pools size
  289. # in comparison to the highest utilization.
  290. # E.g. if the highest utilization is 10, and the down size ratio
  291. # is 0.8, the pool will be downsized to 8
  292. downsize-ratio = 0.8
  293.  
  294. # When optimizing, the resizer only considers the sizes adjacent to the
  295. # current size. This number indicates how many adjacent sizes to consider.
  296. optimization-range = 16
  297.  
  298. # The weight of the latest metric over old metrics when collecting
  299. # performance metrics.
  300. # E.g. if the last processing speed is 10 millis per message at pool
  301. # size 5, and if the new processing speed collected is 6 millis per
  302. # message at pool size 5. Given a weight of 0.3, the metrics
  303. # representing pool size 5 will be 6 * 0.3 + 10 * 0.7, i.e. 8.8 millis
  304. # Obviously, this number should be between 0 and 1.
  305. weight-of-latest-metric = 0.5
  306. }
  307. }
  308.  
  309. /IO-DNS/inet-address {
  310. mailbox = "unbounded"
  311. router = "consistent-hashing-pool"
  312. nr-of-instances = 4
  313. }
  314. }
  315.  
  316. default-dispatcher {
  317. # Must be one of the following
  318. # Dispatcher, PinnedDispatcher, or a FQCN to a class inheriting
  319. # MessageDispatcherConfigurator with a public constructor with
  320. # both com.typesafe.config.Config parameter and
  321. # akka.dispatch.DispatcherPrerequisites parameters.
  322. # PinnedDispatcher must be used together with executor=thread-pool-executor.
  323. type = "Dispatcher"
  324.  
  325. # Which kind of ExecutorService to use for this dispatcher
  326. # Valid options:
  327. # - "default-executor" requires a "default-executor" section
  328. # - "fork-join-executor" requires a "fork-join-executor" section
  329. # - "thread-pool-executor" requires a "thread-pool-executor" section
  330. # - A FQCN of a class extending ExecutorServiceConfigurator
  331. executor = "default-executor"
  332.  
  333. # This will be used if you have set "executor = "default-executor"".
  334. # If an ActorSystem is created with a given ExecutionContext, this
  335. # ExecutionContext will be used as the default executor for all
  336. # dispatchers in the ActorSystem configured with
  337. # executor = "default-executor". Note that "default-executor"
  338. # is the default value for executor, and therefore used if not
  339. # specified otherwise. If no ExecutionContext is given,
  340. # the executor configured in "fallback" will be used.
  341. default-executor {
  342. fallback = "fork-join-executor"
  343. }
  344.  
  345. # This will be used if you have set "executor = "fork-join-executor""
  346. # Underlying thread pool implementation is scala.concurrent.forkjoin.ForkJoinPool
  347. fork-join-executor {
  348. # Min number of threads to cap factor-based parallelism number to
  349. parallelism-min = 8
  350.  
  351. # The parallelism factor is used to determine thread pool size using the
  352. # following formula: ceil(available processors * factor). Resulting size
  353. # is then bounded by the parallelism-min and parallelism-max values.
  354. parallelism-factor = 3.0
  355.  
  356. # Max number of threads to cap factor-based parallelism number to
  357. parallelism-max = 64
  358.  
  359. # Setting to "FIFO" to use queue like peeking mode which "poll" or "LIFO" to use stack
  360. # like peeking mode which "pop".
  361. task-peeking-mode = "FIFO"
  362. }
  363.  
  364. # This will be used if you have set "executor = "thread-pool-executor""
  365. # Underlying thread pool implementation is java.util.concurrent.ThreadPoolExecutor
  366. thread-pool-executor {
  367. # Keep alive time for threads
  368. keep-alive-time = 60s
  369. # Define a fixed thread pool size with this property. The corePoolSize
  370. # and the maximumPoolSize of the ThreadPoolExecutor will be set to this
  371. # value, if it is defined. Then the other pool-size properties will not
  372. # be used.
  373. #
  374. # Valid values are: `off` or a positive integer.
  375. fixed-pool-size = off
  376.  
  377. # Min number of threads to cap factor-based corePoolSize number to
  378. core-pool-size-min = 8
  379.  
  380. # The core-pool-size-factor is used to determine corePoolSize of the
  381. # ThreadPoolExecutor using the following formula:
  382. # ceil(available processors * factor).
  383. # Resulting size is then bounded by the core-pool-size-min and
  384. # core-pool-size-max values.
  385. core-pool-size-factor = 3.0
  386.  
  387. # Max number of threads to cap factor-based corePoolSize number to
  388. core-pool-size-max = 64
  389.  
  390. # Minimum number of threads to cap factor-based maximumPoolSize number to
  391. max-pool-size-min = 8
  392.  
  393. # The max-pool-size-factor is used to determine maximumPoolSize of the
  394. # ThreadPoolExecutor using the following formula:
  395. # ceil(available processors * factor)
  396. # The maximumPoolSize will not be less than corePoolSize.
  397. # It is only used if using a bounded task queue.
  398. max-pool-size-factor = 3.0
  399.  
  400. # Max number of threads to cap factor-based maximumPoolSize number to
  401. max-pool-size-max = 64
  402.  
  403. # Specifies the bounded capacity of the task queue (< 1 == unbounded)
  404. task-queue-size = -1
  405.  
  406. # Specifies which type of task queue will be used, can be "array" or
  407. # "linked" (default)
  408. task-queue-type = "linked"
  409.  
  410. # Allow core threads to time out
  411. allow-core-timeout = on
  412. }
  413.  
  414. # How long time the dispatcher will wait for new actors until it shuts down
  415. shutdown-timeout = 1s
  416.  
  417. # Throughput defines the number of messages that are processed in a batch
  418. # before the thread is returned to the pool. Set to 1 for as fair as possible.
  419. throughput = 5
  420.  
  421. # Throughput deadline for Dispatcher, set to 0 or negative for no deadline
  422. throughput-deadline-time = 0ms
  423.  
  424. # For BalancingDispatcher: If the balancing dispatcher should attempt to
  425. # schedule idle actors using the same dispatcher when a message comes in,
  426. # and the dispatchers ExecutorService is not fully busy already.
  427. attempt-teamwork = on
  428.  
  429. # If this dispatcher requires a specific type of mailbox, specify the
  430. # fully-qualified class name here; the actually created mailbox will
  431. # be a subtype of this type. The empty string signifies no requirement.
  432. mailbox-requirement = ""
  433. }
  434.  
  435. default-mailbox {
  436. # FQCN of the MailboxType. The Class of the FQCN must have a public
  437. # constructor with
  438. # (akka.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
  439. mailbox-type = "akka.dispatch.UnboundedMailbox"
  440.  
  441. # If the mailbox is bounded then it uses this setting to determine its
  442. # capacity. The provided value must be positive.
  443. # NOTICE:
  444. # Up to version 2.1 the mailbox type was determined based on this setting;
  445. # this is no longer the case, the type must explicitly be a bounded mailbox.
  446. mailbox-capacity = 1000
  447.  
  448. # If the mailbox is bounded then this is the timeout for enqueueing
  449. # in case the mailbox is full. Negative values signify infinite
  450. # timeout, which should be avoided as it bears the risk of dead-lock.
  451. mailbox-push-timeout-time = 10s
  452.  
  453. # For Actor with Stash: The default capacity of the stash.
  454. # If negative (or zero) then an unbounded stash is used (default)
  455. # If positive then a bounded stash is used and the capacity is set using
  456. # the property
  457. stash-capacity = -1
  458. }
  459.  
  460. mailbox {
  461. # Mapping between message queue semantics and mailbox configurations.
  462. # Used by akka.dispatch.RequiresMessageQueue[T] to enforce different
  463. # mailbox types on actors.
  464. # If your Actor implements RequiresMessageQueue[T], then when you create
  465. # an instance of that actor its mailbox type will be decided by looking
  466. # up a mailbox configuration via T in this mapping
  467. requirements {
  468. "akka.dispatch.UnboundedMessageQueueSemantics" =
  469. akka.actor.mailbox.unbounded-queue-based
  470. "akka.dispatch.BoundedMessageQueueSemantics" =
  471. akka.actor.mailbox.bounded-queue-based
  472. "akka.dispatch.DequeBasedMessageQueueSemantics" =
  473. akka.actor.mailbox.unbounded-deque-based
  474. "akka.dispatch.UnboundedDequeBasedMessageQueueSemantics" =
  475. akka.actor.mailbox.unbounded-deque-based
  476. "akka.dispatch.BoundedDequeBasedMessageQueueSemantics" =
  477. akka.actor.mailbox.bounded-deque-based
  478. "akka.dispatch.MultipleConsumerSemantics" =
  479. akka.actor.mailbox.unbounded-queue-based
  480. "akka.dispatch.ControlAwareMessageQueueSemantics" =
  481. akka.actor.mailbox.unbounded-control-aware-queue-based
  482. "akka.dispatch.UnboundedControlAwareMessageQueueSemantics" =
  483. akka.actor.mailbox.unbounded-control-aware-queue-based
  484. "akka.dispatch.BoundedControlAwareMessageQueueSemantics" =
  485. akka.actor.mailbox.bounded-control-aware-queue-based
  486. "akka.event.LoggerMessageQueueSemantics" =
  487. akka.actor.mailbox.logger-queue
  488. }
  489.  
  490. unbounded-queue-based {
  491. # FQCN of the MailboxType, The Class of the FQCN must have a public
  492. # constructor with (akka.actor.ActorSystem.Settings,
  493. # com.typesafe.config.Config) parameters.
  494. mailbox-type = "akka.dispatch.UnboundedMailbox"
  495. }
  496.  
  497. bounded-queue-based {
  498. # FQCN of the MailboxType, The Class of the FQCN must have a public
  499. # constructor with (akka.actor.ActorSystem.Settings,
  500. # com.typesafe.config.Config) parameters.
  501. mailbox-type = "akka.dispatch.BoundedMailbox"
  502. }
  503.  
  504. unbounded-deque-based {
  505. # FQCN of the MailboxType, The Class of the FQCN must have a public
  506. # constructor with (akka.actor.ActorSystem.Settings,
  507. # com.typesafe.config.Config) parameters.
  508. mailbox-type = "akka.dispatch.UnboundedDequeBasedMailbox"
  509. }
  510.  
  511. bounded-deque-based {
  512. # FQCN of the MailboxType, The Class of the FQCN must have a public
  513. # constructor with (akka.actor.ActorSystem.Settings,
  514. # com.typesafe.config.Config) parameters.
  515. mailbox-type = "akka.dispatch.BoundedDequeBasedMailbox"
  516. }
  517.  
  518. unbounded-control-aware-queue-based {
  519. # FQCN of the MailboxType, The Class of the FQCN must have a public
  520. # constructor with (akka.actor.ActorSystem.Settings,
  521. # com.typesafe.config.Config) parameters.
  522. mailbox-type = "akka.dispatch.UnboundedControlAwareMailbox"
  523. }
  524.  
  525. bounded-control-aware-queue-based {
  526. # FQCN of the MailboxType, The Class of the FQCN must have a public
  527. # constructor with (akka.actor.ActorSystem.Settings,
  528. # com.typesafe.config.Config) parameters.
  529. mailbox-type = "akka.dispatch.BoundedControlAwareMailbox"
  530. }
  531. # The LoggerMailbox will drain all messages in the mailbox
  532. # when the system is shutdown and deliver them to the StandardOutLogger.
  533. # Do not change this unless you know what you are doing.
  534. logger-queue {
  535. mailbox-type = "akka.event.LoggerMailboxType"
  536. }
  537. }
  538.  
  539. debug {
  540. # enable function of Actor.loggable(), which is to log any received message
  541. # at DEBUG level, see the “Testing Actor Systems” section of the Akka
  542. # Documentation at https://akka.io/docs
  543. receive = off
  544.  
  545. # enable DEBUG logging of all AutoReceiveMessages (Kill, PoisonPill et.c.)
  546. autoreceive = off
  547.  
  548. # enable DEBUG logging of actor lifecycle changes
  549. lifecycle = off
  550.  
  551. # enable DEBUG logging of all LoggingFSMs for events, transitions and timers
  552. fsm = off
  553.  
  554. # enable DEBUG logging of subscription changes on the eventStream
  555. event-stream = off
  556.  
  557. # enable DEBUG logging of unhandled messages
  558. unhandled = off
  559.  
  560. # enable WARN logging of misconfigured routers
  561. router-misconfiguration = off
  562. }
  563. # SECURITY BEST-PRACTICE is to disable java serialization for its multiple
  564. # known attack surfaces.
  565. #
  566. # This setting is a short-cut to
  567. # - using DisabledJavaSerializer instead of JavaSerializer
  568. # - enable-additional-serialization-bindings = on
  569. #
  570. # Completely disable the use of `akka.serialization.JavaSerialization` by the
  571. # Akka Serialization extension, instead DisabledJavaSerializer will
  572. # be inserted which will fail explicitly if attempts to use java serialization are made.
  573. #
  574. # The log messages emitted by such serializer SHOULD be be treated as potential
  575. # attacks which the serializer prevented, as they MAY indicate an external operator
  576. # attempting to send malicious messages intending to use java serialization as attack vector.
  577. # The attempts are logged with the SECURITY marker.
  578. #
  579. # Please note that this option does not stop you from manually invoking java serialization
  580. #
  581. # The default value for this might be changed to off in future versions of Akka.
  582. allow-java-serialization = on
  583. # Entries for pluggable serializers and their bindings.
  584. serializers {
  585. java = "akka.serialization.JavaSerializer"
  586. bytes = "akka.serialization.ByteArraySerializer"
  587. }
  588.  
  589. # Class to Serializer binding. You only need to specify the name of an
  590. # interface or abstract base class of the messages. In case of ambiguity it
  591. # is using the most specific configured class, or giving a warning and
  592. # choosing the “first” one.
  593. #
  594. # To disable one of the default serializers, assign its class to "none", like
  595. # "java.io.Serializable" = none
  596. serialization-bindings {
  597. "[B" = bytes
  598. "java.io.Serializable" = java
  599. }
  600. # Set this to on to enable serialization-bindings define in
  601. # additional-serialization-bindings. Those are by default not included
  602. # for backwards compatibility reasons. They are enabled by default if
  603. # akka.remote.artery.enabled=on or if akka.actor.allow-java-serialization=off.
  604. enable-additional-serialization-bindings = off
  605. # Additional serialization-bindings that are replacing Java serialization are
  606. # defined in this section and not included by default for backwards compatibility
  607. # reasons. They can be enabled with enable-additional-serialization-bindings=on.
  608. # They are enabled by default if akka.remote.artery.enabled=on or if
  609. # akka.actor.allow-java-serialization=off.
  610. additional-serialization-bindings {
  611. }
  612.  
  613. # Log warnings when the default Java serialization is used to serialize messages.
  614. # The default serializer uses Java serialization which is not very performant and should not
  615. # be used in production environments unless you don't care about performance. In that case
  616. # you can turn this off.
  617. warn-about-java-serializer-usage = on
  618.  
  619. # To be used with the above warn-about-java-serializer-usage
  620. # When warn-about-java-serializer-usage = on, and this warn-on-no-serialization-verification = off,
  621. # warnings are suppressed for classes extending NoSerializationVerificationNeeded
  622. # to reduce noize.
  623. warn-on-no-serialization-verification = on
  624.  
  625. # Configuration namespace of serialization identifiers.
  626. # Each serializer implementation must have an entry in the following format:
  627. # `akka.actor.serialization-identifiers."FQCN" = ID`
  628. # where `FQCN` is fully qualified class name of the serializer implementation
  629. # and `ID` is globally unique serializer identifier number.
  630. # Identifier values from 0 to 40 are reserved for Akka internal usage.
  631. serialization-identifiers {
  632. "akka.serialization.JavaSerializer" = 1
  633. "akka.serialization.ByteArraySerializer" = 4
  634. }
  635.  
  636. # Configuration items which are used by the akka.actor.ActorDSL._ methods
  637. dsl {
  638. # Maximum queue size of the actor created by newInbox(); this protects
  639. # against faulty programs which use select() and consistently miss messages
  640. inbox-size = 1000
  641.  
  642. # Default timeout to assume for operations like Inbox.receive et al
  643. default-timeout = 5s
  644. }
  645. }
  646.  
  647. # Used to set the behavior of the scheduler.
  648. # Changing the default values may change the system behavior drastically so make
  649. # sure you know what you're doing! See the Scheduler section of the Akka
  650. # Documentation for more details.
  651. scheduler {
  652. # The LightArrayRevolverScheduler is used as the default scheduler in the
  653. # system. It does not execute the scheduled tasks on exact time, but on every
  654. # tick, it will run everything that is (over)due. You can increase or decrease
  655. # the accuracy of the execution timing by specifying smaller or larger tick
  656. # duration. If you are scheduling a lot of tasks you should consider increasing
  657. # the ticks per wheel.
  658. # Note that it might take up to 1 tick to stop the Timer, so setting the
  659. # tick-duration to a high value will make shutting down the actor system
  660. # take longer.
  661. tick-duration = 10ms
  662.  
  663. # The timer uses a circular wheel of buckets to store the timer tasks.
  664. # This should be set such that the majority of scheduled timeouts (for high
  665. # scheduling frequency) will be shorter than one rotation of the wheel
  666. # (ticks-per-wheel * ticks-duration)
  667. # THIS MUST BE A POWER OF TWO!
  668. ticks-per-wheel = 512
  669.  
  670. # This setting selects the timer implementation which shall be loaded at
  671. # system start-up.
  672. # The class given here must implement the akka.actor.Scheduler interface
  673. # and offer a public constructor which takes three arguments:
  674. # 1) com.typesafe.config.Config
  675. # 2) akka.event.LoggingAdapter
  676. # 3) java.util.concurrent.ThreadFactory
  677. implementation = akka.actor.LightArrayRevolverScheduler
  678.  
  679. # When shutting down the scheduler, there will typically be a thread which
  680. # needs to be stopped, and this timeout determines how long to wait for
  681. # that to happen. In case of timeout the shutdown of the actor system will
  682. # proceed without running possibly still enqueued tasks.
  683. shutdown-timeout = 5s
  684. }
  685.  
  686. io {
  687.  
  688. # By default the select loops run on dedicated threads, hence using a
  689. # PinnedDispatcher
  690. pinned-dispatcher {
  691. type = "PinnedDispatcher"
  692. executor = "thread-pool-executor"
  693. thread-pool-executor.allow-core-timeout = off
  694. }
  695.  
  696. tcp {
  697.  
  698. # The number of selectors to stripe the served channels over; each of
  699. # these will use one select loop on the selector-dispatcher.
  700. nr-of-selectors = 1
  701.  
  702. # Maximum number of open channels supported by this TCP module; there is
  703. # no intrinsic general limit, this setting is meant to enable DoS
  704. # protection by limiting the number of concurrently connected clients.
  705. # Also note that this is a "soft" limit; in certain cases the implementation
  706. # will accept a few connections more or a few less than the number configured
  707. # here. Must be an integer > 0 or "unlimited".
  708. max-channels = 256000
  709.  
  710. # When trying to assign a new connection to a selector and the chosen
  711. # selector is at full capacity, retry selector choosing and assignment
  712. # this many times before giving up
  713. selector-association-retries = 10
  714.  
  715. # The maximum number of connection that are accepted in one go,
  716. # higher numbers decrease latency, lower numbers increase fairness on
  717. # the worker-dispatcher
  718. batch-accept-limit = 10
  719.  
  720. # The number of bytes per direct buffer in the pool used to read or write
  721. # network data from the kernel.
  722. direct-buffer-size = 128 KiB
  723.  
  724. # The maximal number of direct buffers kept in the direct buffer pool for
  725. # reuse.
  726. direct-buffer-pool-limit = 1000
  727.  
  728. # The duration a connection actor waits for a `Register` message from
  729. # its commander before aborting the connection.
  730. register-timeout = 5s
  731.  
  732. # The maximum number of bytes delivered by a `Received` message. Before
  733. # more data is read from the network the connection actor will try to
  734. # do other work.
  735. # The purpose of this setting is to impose a smaller limit than the
  736. # configured receive buffer size. When using value 'unlimited' it will
  737. # try to read all from the receive buffer.
  738. max-received-message-size = unlimited
  739.  
  740. # Enable fine grained logging of what goes on inside the implementation.
  741. # Be aware that this may log more than once per message sent to the actors
  742. # of the tcp implementation.
  743. trace-logging = off
  744.  
  745. # Fully qualified config path which holds the dispatcher configuration
  746. # to be used for running the select() calls in the selectors
  747. selector-dispatcher = "akka.io.pinned-dispatcher"
  748.  
  749. # Fully qualified config path which holds the dispatcher configuration
  750. # for the read/write worker actors
  751. worker-dispatcher = "akka.actor.default-dispatcher"
  752.  
  753. # Fully qualified config path which holds the dispatcher configuration
  754. # for the selector management actors
  755. management-dispatcher = "akka.actor.default-dispatcher"
  756.  
  757. # Fully qualified config path which holds the dispatcher configuration
  758. # on which file IO tasks are scheduled
  759. file-io-dispatcher = "akka.actor.default-dispatcher"
  760.  
  761. # The maximum number of bytes (or "unlimited") to transfer in one batch
  762. # when using `WriteFile` command which uses `FileChannel.transferTo` to
  763. # pipe files to a TCP socket. On some OS like Linux `FileChannel.transferTo`
  764. # may block for a long time when network IO is faster than file IO.
  765. # Decreasing the value may improve fairness while increasing may improve
  766. # throughput.
  767. file-io-transferTo-limit = 512 KiB
  768.  
  769. # The number of times to retry the `finishConnect` call after being notified about
  770. # OP_CONNECT. Retries are needed if the OP_CONNECT notification doesn't imply that
  771. # `finishConnect` will succeed, which is the case on Android.
  772. finish-connect-retries = 5
  773.  
  774. # On Windows connection aborts are not reliably detected unless an OP_READ is
  775. # registered on the selector _after_ the connection has been reset. This
  776. # workaround enables an OP_CONNECT which forces the abort to be visible on Windows.
  777. # Enabling this setting on other platforms than Windows will cause various failures
  778. # and undefined behavior.
  779. # Possible values of this key are on, off and auto where auto will enable the
  780. # workaround if Windows is detected automatically.
  781. windows-connection-abort-workaround-enabled = off
  782. }
  783.  
  784. udp {
  785.  
  786. # The number of selectors to stripe the served channels over; each of
  787. # these will use one select loop on the selector-dispatcher.
  788. nr-of-selectors = 1
  789.  
  790. # Maximum number of open channels supported by this UDP module Generally
  791. # UDP does not require a large number of channels, therefore it is
  792. # recommended to keep this setting low.
  793. max-channels = 4096
  794.  
  795. # The select loop can be used in two modes:
  796. # - setting "infinite" will select without a timeout, hogging a thread
  797. # - setting a positive timeout will do a bounded select call,
  798. # enabling sharing of a single thread between multiple selectors
  799. # (in this case you will have to use a different configuration for the
  800. # selector-dispatcher, e.g. using "type=Dispatcher" with size 1)
  801. # - setting it to zero means polling, i.e. calling selectNow()
  802. select-timeout = infinite
  803.  
  804. # When trying to assign a new connection to a selector and the chosen
  805. # selector is at full capacity, retry selector choosing and assignment
  806. # this many times before giving up
  807. selector-association-retries = 10
  808.  
  809. # The maximum number of datagrams that are read in one go,
  810. # higher numbers decrease latency, lower numbers increase fairness on
  811. # the worker-dispatcher
  812. receive-throughput = 3
  813.  
  814. # The number of bytes per direct buffer in the pool used to read or write
  815. # network data from the kernel.
  816. direct-buffer-size = 128 KiB
  817.  
  818. # The maximal number of direct buffers kept in the direct buffer pool for
  819. # reuse.
  820. direct-buffer-pool-limit = 1000
  821.  
  822. # Enable fine grained logging of what goes on inside the implementation.
  823. # Be aware that this may log more than once per message sent to the actors
  824. # of the tcp implementation.
  825. trace-logging = off
  826.  
  827. # Fully qualified config path which holds the dispatcher configuration
  828. # to be used for running the select() calls in the selectors
  829. selector-dispatcher = "akka.io.pinned-dispatcher"
  830.  
  831. # Fully qualified config path which holds the dispatcher configuration
  832. # for the read/write worker actors
  833. worker-dispatcher = "akka.actor.default-dispatcher"
  834.  
  835. # Fully qualified config path which holds the dispatcher configuration
  836. # for the selector management actors
  837. management-dispatcher = "akka.actor.default-dispatcher"
  838. }
  839.  
  840. udp-connected {
  841.  
  842. # The number of selectors to stripe the served channels over; each of
  843. # these will use one select loop on the selector-dispatcher.
  844. nr-of-selectors = 1
  845.  
  846. # Maximum number of open channels supported by this UDP module Generally
  847. # UDP does not require a large number of channels, therefore it is
  848. # recommended to keep this setting low.
  849. max-channels = 4096
  850.  
  851. # The select loop can be used in two modes:
  852. # - setting "infinite" will select without a timeout, hogging a thread
  853. # - setting a positive timeout will do a bounded select call,
  854. # enabling sharing of a single thread between multiple selectors
  855. # (in this case you will have to use a different configuration for the
  856. # selector-dispatcher, e.g. using "type=Dispatcher" with size 1)
  857. # - setting it to zero means polling, i.e. calling selectNow()
  858. select-timeout = infinite
  859.  
  860. # When trying to assign a new connection to a selector and the chosen
  861. # selector is at full capacity, retry selector choosing and assignment
  862. # this many times before giving up
  863. selector-association-retries = 10
  864.  
  865. # The maximum number of datagrams that are read in one go,
  866. # higher numbers decrease latency, lower numbers increase fairness on
  867. # the worker-dispatcher
  868. receive-throughput = 3
  869.  
  870. # The number of bytes per direct buffer in the pool used to read or write
  871. # network data from the kernel.
  872. direct-buffer-size = 128 KiB
  873.  
  874. # The maximal number of direct buffers kept in the direct buffer pool for
  875. # reuse.
  876. direct-buffer-pool-limit = 1000
  877. # Enable fine grained logging of what goes on inside the implementation.
  878. # Be aware that this may log more than once per message sent to the actors
  879. # of the UDP implementation.
  880. trace-logging = off
  881.  
  882. # Fully qualified config path which holds the dispatcher configuration
  883. # to be used for running the select() calls in the selectors
  884. selector-dispatcher = "akka.io.pinned-dispatcher"
  885.  
  886. # Fully qualified config path which holds the dispatcher configuration
  887. # for the read/write worker actors
  888. worker-dispatcher = "akka.actor.default-dispatcher"
  889.  
  890. # Fully qualified config path which holds the dispatcher configuration
  891. # for the selector management actors
  892. management-dispatcher = "akka.actor.default-dispatcher"
  893. }
  894.  
  895. dns {
  896. # Fully qualified config path which holds the dispatcher configuration
  897. # for the manager and resolver router actors.
  898. # For actual router configuration see akka.actor.deployment./IO-DNS/*
  899. dispatcher = "akka.actor.default-dispatcher"
  900.  
  901. # Name of the subconfig at path akka.io.dns, see inet-address below
  902. resolver = "inet-address"
  903.  
  904. inet-address {
  905. # Must implement akka.io.DnsProvider
  906. provider-object = "akka.io.InetAddressDnsProvider"
  907.  
  908. # These TTLs are set to default java 6 values
  909. positive-ttl = 30s
  910. negative-ttl = 10s
  911.  
  912. # How often to sweep out expired cache entries.
  913. # Note that this interval has nothing to do with TTLs
  914. cache-cleanup-interval = 120s
  915. }
  916. }
  917. }
  918.  
  919.  
  920. }
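
Since these are ordinary HOCON paths, any of the akka.io settings above can be overridden from application.conf. A minimal sketch tuning the UDP layer (the values are illustrative, not recommendations):

  1. akka.io.udp {
  2.   nr-of-selectors = 2
  3.   max-channels = 8192
  4.   trace-logging = on
  5. }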

§akka-agent

  1. ####################################
  2. # Akka Agent Reference Config File #
  3. ####################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. akka {
  9. agent {
  10.  
  11. # The dispatcher used for agent-send-off actor
  12. send-off-dispatcher {
  13. executor = thread-pool-executor
  14. type = PinnedDispatcher
  15. }
  16.  
  17. # The dispatcher used for agent-alter-off actor
  18. alter-off-dispatcher {
  19. executor = thread-pool-executor
  20. type = PinnedDispatcher
  21. }
  22. }
  23. }
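
Both agent dispatchers are regular dispatcher configuration sections, so they can be overridden in application.conf like any other dispatcher. A minimal sketch, assuming you prefer a small shared thread pool over the pinned default (values are illustrative):

  1. akka.agent.send-off-dispatcher {
  2.   type = Dispatcher
  3.   executor = "thread-pool-executor"
  4.   thread-pool-executor {
  5.     core-pool-size-min = 2
  6.     core-pool-size-max = 4
  7.   }
  8. }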

§akka-camel

  1. ####################################
  2. # Akka Camel Reference Config File #
  3. ####################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. akka {
  9. camel {
  10. # FQCN of the ContextProvider to be used to create or locate a CamelContext
  11. # it must implement akka.camel.ContextProvider and have a no-arg constructor
  12. # the built-in default creates a fresh DefaultCamelContext
  13. context-provider = akka.camel.DefaultContextProvider
  14.  
  15. # Whether JMX should be enabled or disabled for the Camel Context
  16. jmx = off
  17. # enable/disable streaming cache on the Camel Context
  18. streamingCache = on
  19. consumer {
  20. # Configured setting which determines whether one-way communications
  21. # between an endpoint and this consumer actor
  22. # should be auto-acknowledged or application-acknowledged.
  23. # This flag only has an effect when the exchange is in-only.
  24. auto-ack = on
  25.  
  26. # When endpoint is out-capable (can produce responses) reply-timeout is the
  27. # maximum time the endpoint can take to send the response before the message
  28. # exchange fails. This setting is used for out-capable, in-only,
  29. # manually acknowledged communication.
  30. reply-timeout = 1m
  31.  
  32. # The duration of time to await activation of an endpoint.
  33. activation-timeout = 10s
  34. }
  35. producer {
  36. # The id of the dispatcher to use for producer child actors, i.e. the actor that
  37. # interacts with the Camel endpoint. Some endpoints may be blocking and then it
  38. # can be good to define a dedicated dispatcher.
  39. # If not defined the producer child actor uses the same dispatcher as the
  40. # parent producer actor.
  41. use-dispatcher = ""
  42. }
  43.  
  44. #Scheme to FQCN mappings for CamelMessage body conversions
  45. conversions {
  46. "file" = "java.io.InputStream"
  47. }
  48. }
  49. }
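
These Camel settings are likewise overridden from application.conf. A minimal sketch that enables JMX and relaxes the consumer timeouts (illustrative values only):

  1. akka.camel {
  2.   jmx = on
  3.   consumer {
  4.     reply-timeout = 30s
  5.     activation-timeout = 20s
  6.   }
  7. }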

§akka-cluster

  1. ######################################
  2. # Akka Cluster Reference Config File #
  3. ######################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. akka {
  9.  
  10. cluster {
  11. # Initial contact points of the cluster.
  12. # The nodes to join automatically at startup.
  13. # Comma separated full URIs defined by a string of the form
  14. # "akka.tcp://system@hostname:port"
  15. # Leave as empty if the node is supposed to be joined manually.
  16. seed-nodes = []
  17.  
  18. # how long to wait for one of the seed nodes to reply to initial join request
  19. seed-node-timeout = 5s
  20.  
  21. # If a join request fails it will be retried after this period.
  22. # Disable join retry by specifying "off".
  23. retry-unsuccessful-join-after = 10s
  24.  
  25. # Should the 'leader' in the cluster be allowed to automatically mark
  26. # unreachable nodes as DOWN after a configured time of unreachability?
  27. # Using auto-down implies that two separate clusters will automatically be
  28. # formed in case of network partition.
  29. #
  30. # Don't enable this in production, see 'Auto-downing (DO NOT USE)' section
  31. # of Akka Cluster documentation.
  32. #
  33. # Disable with "off" or specify a duration to enable auto-down.
  34. # If a downing-provider-class is configured this setting is ignored.
  35. auto-down-unreachable-after = off
  36. # Time margin after which shards or singletons that belonged to a downed/removed
  37. # partition are created in the surviving partition. The purpose of this margin is that
  38. # in case of a network partition the persistent actors in the non-surviving partitions
  39. # must be stopped before corresponding persistent actors are started somewhere else.
  40. # This is useful if you implement downing strategies that handle network partitions,
  41. # e.g. by keeping the larger side of the partition and shutting down the smaller side.
  42. # It will not add any extra safety for auto-down-unreachable-after, since that is not
  43. # handling network partitions.
  44. # Disable with "off" or specify a duration to enable.
  45. down-removal-margin = off
  46.  
  47. # Pluggable support for downing of nodes in the cluster.
  48. # If this setting is left empty, behaviour will depend on 'auto-down-unreachable-after' in the following ways:
  49. # * if it is 'off' the `NoDowning` provider is used and no automatic downing will be performed
  50. # * if it is set to a duration the `AutoDowning` provider is used with the configured downing duration
  51. #
  52. # If specified the value must be the fully qualified class name of a subclass of
  53. # `akka.cluster.DowningProvider` having a public one argument constructor accepting an `ActorSystem`
  54. downing-provider-class = ""
  55.  
  56. # Artery only setting
  57. # When a node has been gracefully removed, let this time pass (to allow for example
  58. # cluster singleton handover to complete) and then quarantine the removed node.
  59. quarantine-removed-node-after=30s
  60.  
  61. # By default, the leader will not move 'Joining' members to 'Up' during a network
  62. # split. This feature allows the leader to accept 'Joining' members to be 'WeaklyUp'
  63. # so they become part of the cluster even during a network split. The leader will
  64. # move 'WeaklyUp' members to 'Up' status once convergence has been reached. This
  65. # feature must be off if some members are running Akka 2.3.X.
  66. # WeaklyUp is an EXPERIMENTAL feature.
  67. allow-weakly-up-members = off
  68.  
  69. # The roles of this member. List of strings, e.g. roles = ["A", "B"].
  70. # The roles are part of the membership information and can be used by
  71. # routers or other services to distribute work to certain member types,
  72. # e.g. front-end and back-end nodes.
  73. roles = []
  74.  
  75. role {
  76. # Minimum required number of members of a certain role before the leader
  77. # changes member status of 'Joining' members to 'Up'. Typically used together
  78. # with 'Cluster.registerOnMemberUp' to defer some action, such as starting
  79. # actors, until the cluster has reached a certain size.
  80. # E.g. to require 2 nodes with role 'frontend' and 3 nodes with role 'backend':
  81. # frontend.min-nr-of-members = 2
  82. # backend.min-nr-of-members = 3
  83. #<role-name>.min-nr-of-members = 1
  84. }
  85.  
  86. # Minimum required number of members before the leader changes member status
  87. # of 'Joining' members to 'Up'. Typically used together with
  88. # 'Cluster.registerOnMemberUp' to defer some action, such as starting actors,
  89. # until the cluster has reached a certain size.
  90. min-nr-of-members = 1
  91.  
  92. # Enable/disable info level logging of cluster events
  93. log-info = on
  94.  
  95. # Enable or disable JMX MBeans for management of the cluster
  96. jmx.enabled = on
  97.  
  98. # how long should the node wait before starting the periodic
  99. # maintenance tasks?
  100. periodic-tasks-initial-delay = 1s
  101.  
  102. # how often should the node send out gossip information?
  103. gossip-interval = 1s
  104. # discard incoming gossip messages if not handled within this duration
  105. gossip-time-to-live = 2s
  106.  
  107. # how often should the leader perform maintenance tasks?
  108. leader-actions-interval = 1s
  109.  
  110. # how often should the node move nodes, marked as unreachable by the failure
  111. # detector, out of the membership ring?
  112. unreachable-nodes-reaper-interval = 1s
  113.  
  114. # How often the current internal stats should be published.
  115. # A value of 0s can be used to always publish the stats, when it happens.
  116. # Disable with "off".
  117. publish-stats-interval = off
  118.  
  119. # The id of the dispatcher to use for cluster actors. If not specified
  120. # the default dispatcher is used.
  121. # If specified you need to define the settings of the actual dispatcher.
  122. use-dispatcher = ""
  123.  
  124. # Gossip to random node with newer or older state information, if any with
  125. # this probability. Otherwise Gossip to any random live node.
  126. # Probability value is between 0.0 and 1.0. 0.0 means never, 1.0 means always.
  127. gossip-different-view-probability = 0.8
  128. # Reduce the above probability when the number of nodes in the cluster is
  129. # greater than this value.
  130. reduce-gossip-different-view-probability = 400
  131.  
  132. # Settings for the Phi accrual failure detector (http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf
  133. # [Hayashibara et al]) used by the cluster subsystem to detect unreachable
  134. # members.
  135. # The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
  136. # the duration heartbeat-interval + acceptable-heartbeat-pause + threshold_adjustment,
  137. # i.e. around 5.5 seconds with default settings.
  138. failure-detector {
  139.  
  140. # FQCN of the failure detector implementation.
  141. # It must implement akka.remote.FailureDetector and have
  142. # a public constructor with a com.typesafe.config.Config and
  143. # akka.actor.EventStream parameter.
  144. implementation-class = "akka.remote.PhiAccrualFailureDetector"
  145.  
  146. # How often keep-alive heartbeat messages should be sent to each connection.
  147. heartbeat-interval = 1 s
  148.  
  149. # Defines the failure detector threshold.
  150. # A low threshold is prone to generate many wrong suspicions but ensures
  151. # a quick detection in the event of a real crash. Conversely, a high
  152. # threshold generates fewer mistakes but needs more time to detect
  153. # actual crashes.
  154. threshold = 8.0
  155.  
  156. # Number of the samples of inter-heartbeat arrival times to adaptively
  157. # calculate the failure timeout for connections.
  158. max-sample-size = 1000
  159.  
  160. # Minimum standard deviation to use for the normal distribution in
  161. # AccrualFailureDetector. Too low standard deviation might result in
  162. # too much sensitivity for sudden, but normal, deviations in heartbeat
  163. # inter arrival times.
  164. min-std-deviation = 100 ms
  165.  
  166. # Number of potentially lost/delayed heartbeats that will be
  167. # accepted before considering it to be an anomaly.
  168. # This margin is important to be able to survive sudden, occasional,
  169. # pauses in heartbeat arrivals, due to for example garbage collect or
  170. # network drop.
  171. acceptable-heartbeat-pause = 3 s
  172.  
  173. # Number of member nodes that each member will send heartbeat messages to,
  174. # i.e. each node will be monitored by this number of other nodes.
  175. monitored-by-nr-of-members = 5
  176. # After the heartbeat request has been sent the first failure detection
  177. # will start after this period, even though no heartbeat message has
  178. # been received.
  179. expected-response-after = 1 s
  180.  
  181. }
  182.  
  183. metrics {
  184. # Enable or disable metrics collector for load-balancing nodes.
  185. enabled = on
  186.  
  187. # FQCN of the metrics collector implementation.
  188. # It must implement akka.cluster.MetricsCollector and
  189. # have public constructor with akka.actor.ActorSystem parameter.
  190. # The default SigarMetricsCollector uses JMX and Hyperic SIGAR, if SIGAR
  191. # is on the classpath, otherwise only JMX.
  192. collector-class = "akka.cluster.SigarMetricsCollector"
  193.  
  194. # How often metrics are sampled on a node.
  195. # Shorter interval will collect the metrics more often.
  196. collect-interval = 3s
  197.  
  198. # How often a node publishes metrics information.
  199. gossip-interval = 3s
  200.  
  201. # How quickly the exponential weighting of past data is decayed compared to
  202. # new data. Set lower to increase the bias toward newer values.
  203. # The relevance of each data sample is halved for every passing half-life
  204. # duration, i.e. after 4 times the half-life, a data sample’s relevance is
  205. # reduced to 6% of its original relevance. The initial relevance of a data
  206. # sample is given by 1 – 0.5 ^ (collect-interval / half-life).
  207. # See http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average
  208. moving-average-half-life = 12s
  209. }
  210.  
  211. # If the tick-duration of the default scheduler is longer than the
  212. # tick-duration configured here a dedicated scheduler will be used for
  213. # periodic tasks of the cluster, otherwise the default scheduler is used.
  214. # See akka.scheduler settings for more details.
  215. scheduler {
  216. tick-duration = 33ms
  217. ticks-per-wheel = 512
  218. }
  219.  
  220. debug {
  221. # log heartbeat events (very verbose, useful mostly when debugging heartbeating issues)
  222. verbose-heartbeat-logging = off
  223. }
  224.  
  225. }
  226.  
  227. # Default configuration for routers
  228. actor.deployment.default {
  229. # MetricsSelector to use
  230. # - available: "mix", "heap", "cpu", "load"
  231. # - or: Fully qualified class name of the MetricsSelector class.
  232. # The class must extend akka.cluster.routing.MetricsSelector
  233. # and have a public constructor with com.typesafe.config.Config
  234. # parameter.
  235. # - default is "mix"
  236. metrics-selector = mix
  237. }
  238. actor.deployment.default.cluster {
  239. # enable cluster aware router that deploys to nodes in the cluster
  240. enabled = off
  241.  
  242. # Maximum number of routees that will be deployed on each cluster
  243. # member node.
  244. # Note that max-total-nr-of-instances defines total number of routees, but
  245. # number of routees per node will not be exceeded, i.e. if you
  246. # define max-total-nr-of-instances = 50 and max-nr-of-instances-per-node = 2
  247. # it will deploy 2 routees per new member in the cluster, up to
  248. # 25 members.
  249. max-nr-of-instances-per-node = 1
  250. # Maximum number of routees that will be deployed, in total
  251. # on all nodes. See also description of max-nr-of-instances-per-node.
  252. # For backwards compatibility reasons, nr-of-instances
  253. # has the same purpose as max-total-nr-of-instances for cluster
  254. # aware routers and nr-of-instances (if defined by user) takes
  255. # precedence over max-total-nr-of-instances.
  256. max-total-nr-of-instances = 10000
  257.  
  258. # Defines if routees are allowed to be located on the same node as
  259. # the head router actor, or only on remote nodes.
  260. # Useful for master-worker scenario where all routees are remote.
  261. allow-local-routees = on
  262.  
  263. # Use members with specified role, or all members if undefined or empty.
  264. use-role = ""
  265.  
  266. }
  267.  
  268. # Protobuf serializer for cluster messages
  269. actor {
  270. serializers {
  271. akka-cluster = "akka.cluster.protobuf.ClusterMessageSerializer"
  272. }
  273.  
  274. serialization-bindings {
  275. "akka.cluster.ClusterMessage" = akka-cluster
  276. }
  277. serialization-identifiers {
  278. "akka.cluster.protobuf.ClusterMessageSerializer" = 5
  279. }
  280. router.type-mapping {
  281. adaptive-pool = "akka.cluster.routing.AdaptiveLoadBalancingPool"
  282. adaptive-group = "akka.cluster.routing.AdaptiveLoadBalancingGroup"
  283. }
  284. }
  285.  
  286. }
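
A typical application.conf for a clustered node overrides only a handful of these settings. A minimal sketch, where the actor system name, hosts and numbers are placeholders rather than recommended values:

  1. akka.cluster {
  2.   seed-nodes = [
  3.     "akka.tcp://ClusterSystem@10.0.0.1:2552",
  4.     "akka.tcp://ClusterSystem@10.0.0.2:2552"]
  5.   roles = ["backend"]
  6.   role {
  7.     backend.min-nr-of-members = 2
  8.   }
  9.   failure-detector.acceptable-heartbeat-pause = 5 s
  10. }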

§akka-multi-node-testkit

  1. #############################################
  2. # Akka Remote Testing Reference Config File #
  3. #############################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. akka {
  9. testconductor {
  10.  
  11. # Timeout for joining a barrier: this is the maximum time any participant
  12. # waits for everybody else to join a named barrier.
  13. barrier-timeout = 30s
  14. # Timeout for interrogation of TestConductor’s Controller actor
  15. query-timeout = 10s
  16. # Threshold for packet size in time unit above which the failure injector will
  17. # split the packet and deliver it in smaller portions; do not give a value smaller
  18. # than HashedWheelTimer resolution (would not make sense)
  19. packet-split-threshold = 100ms
  20. # amount of time for the ClientFSM to wait for the connection to the conductor
  21. # to be successful
  22. connect-timeout = 20s
  23. # Number of connect attempts to be made to the conductor controller
  24. client-reconnects = 30
  25. # minimum time interval which is to be inserted between reconnect attempts
  26. reconnect-backoff = 1s
  27.  
  28. netty {
  29. # (I&O) Used to configure the number of I/O worker threads on server sockets
  30. server-socket-worker-pool {
  31. # Min number of threads to cap factor-based number to
  32. pool-size-min = 1
  33.  
  34. # The pool size factor is used to determine thread pool size
  35. # using the following formula: ceil(available processors * factor).
  36. # Resulting size is then bounded by the pool-size-min and
  37. # pool-size-max values.
  38. pool-size-factor = 1.0
  39.  
  40. # Max number of threads to cap factor-based number to
  41. pool-size-max = 2
  42. }
  43.  
  44. # (I&O) Used to configure the number of I/O worker threads on client sockets
  45. client-socket-worker-pool {
  46. # Min number of threads to cap factor-based number to
  47. pool-size-min = 1
  48.  
  49. # The pool size factor is used to determine thread pool size
  50. # using the following formula: ceil(available processors * factor).
  51. # Resulting size is then bounded by the pool-size-min and
  52. # pool-size-max values.
  53. pool-size-factor = 1.0
  54.  
  55. # Max number of threads to cap factor-based number to
  56. pool-size-max = 2
  57. }
  58. }
  59. }
  60. }
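
When multi-node tests run on slow or heavily loaded machines, the conductor timeouts above are the settings most commonly raised. A minimal sketch of such an override (illustrative values):

  1. akka.testconductor {
  2.   barrier-timeout = 60s
  3.   query-timeout = 20s
  4. }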

§akka-persistence

  1. ###########################################################
  2. # Akka Persistence Extension Reference Configuration File #
  3. ###########################################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits in your application.conf in order to override these settings.
  7.  
  8. # A directory of persistence journal and snapshot store plugins is available at the
  9. # Akka Community Projects page https://akka.io/community/
  10.  
  11. # Default persistence extension settings.
  12. akka.persistence {
  13.  
  14. # When starting many persistent actors at the same time the journal
  15. # and its data store is protected from being overloaded by limiting the number
  16. # of recoveries that can be in progress at the same time. When
  17. # exceeding the limit the actors will wait until other recoveries have
  18. # been completed.
  19. max-concurrent-recoveries = 50
  20.  
  21. # Fully qualified class name providing a default internal stash overflow strategy.
  22. # It needs to be a subclass of akka.persistence.StashOverflowStrategyConfigurator.
  23. # The default strategy throws StashOverflowException.
  24. internal-stash-overflow-strategy = "akka.persistence.ThrowExceptionConfigurator"
  25. journal {
  26. # Absolute path to the journal plugin configuration entry used by
  27. # persistent actor or view by default.
  28. # Persistent actor or view can override `journalPluginId` method
  29. # in order to rely on a different journal plugin.
  30. plugin = ""
  31. # List of journal plugins to start automatically. Use "" for the default journal plugin.
  32. auto-start-journals = []
  33. }
  34. snapshot-store {
  35. # Absolute path to the snapshot plugin configuration entry used by
  36. # persistent actor or view by default.
  37. # Persistent actor or view can override `snapshotPluginId` method
  38. # in order to rely on a different snapshot plugin.
  39. # It is not mandatory to specify a snapshot store plugin.
  40. # If you don't use snapshots you don't have to configure it.
  41. # Note that Cluster Sharding is using snapshots, so if you
  42. # use Cluster Sharding you need to define a snapshot store plugin.
  43. plugin = ""
  44. # List of snapshot stores to start automatically. Use "" for the default snapshot store.
  45. auto-start-snapshot-stores = []
  46. }
  47. # used as default-snapshot store if no plugin configured
  48. # (see `akka.persistence.snapshot-store`)
  49. no-snapshot-store {
  50. class = "akka.persistence.snapshot.NoSnapshotStore"
  51. }
  52. # Default persistent view settings.
  53. view {
  54. # Automated incremental view update.
  55. auto-update = on
  56. # Interval between incremental updates.
  57. auto-update-interval = 5s
  58. # Maximum number of messages to replay per incremental view update.
  59. # Set to -1 for no upper limit.
  60. auto-update-replay-max = -1
  61. }
  62. # Default reliable delivery settings.
  63. at-least-once-delivery {
  64. # Interval between re-delivery attempts.
  65. redeliver-interval = 5s
  66. # Maximum number of unconfirmed messages that will be sent in one
  67. # re-delivery burst.
  68. redelivery-burst-limit = 10000
  69. # After this number of delivery attempts a
  70. # `ReliableRedelivery.UnconfirmedWarning` message will be sent to the actor.
  71. warn-after-number-of-unconfirmed-attempts = 5
  72. # Maximum number of unconfirmed messages that an actor with
  73. # AtLeastOnceDelivery is allowed to hold in memory.
  74. max-unconfirmed-messages = 100000
  75. }
  76. # Default persistent extension thread pools.
  77. dispatchers {
  78. # Dispatcher used by every plugin which does not declare explicit
  79. # `plugin-dispatcher` field.
  80. default-plugin-dispatcher {
  81. type = PinnedDispatcher
  82. executor = "thread-pool-executor"
  83. }
  84. # Default dispatcher for message replay.
  85. default-replay-dispatcher {
  86. type = Dispatcher
  87. executor = "fork-join-executor"
  88. fork-join-executor {
  89. parallelism-min = 2
  90. parallelism-max = 8
  91. }
  92. }
  93. # Default dispatcher for streaming snapshot IO
  94. default-stream-dispatcher {
  95. type = Dispatcher
  96. executor = "fork-join-executor"
  97. fork-join-executor {
  98. parallelism-min = 2
  99. parallelism-max = 8
  100. }
  101. }
  102. }
  103.  
  104. # Fallback settings for journal plugin configurations.
  105. # These settings are used if they are not defined in plugin config section.
  106. journal-plugin-fallback {
  107.  
  108. # Fully qualified class name providing journal plugin api implementation.
  109. # It is mandatory to specify this property.
  110. # The class must have a constructor without parameters or constructor with
  111. # one `com.typesafe.config.Config` parameter.
  112. class = ""
  113.  
  114. # Dispatcher for the plugin actor.
  115. plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
  116.  
  117. # Dispatcher for message replay.
  118. replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
  119.  
  120. # Removed: used to be the Maximum size of a persistent message batch written to the journal.
  121. # Now this setting is without function, PersistentActor will write as many messages
  122. # as it has accumulated since the last write.
  123. max-message-batch-size = 200
  124.  
  125. # If there is more time between individual events retrieved from the journal during
  126. # recovery than this, the recovery will fail.
  127. # Note that it also affects reading the snapshot before replaying events on
  128. # top of it, even though it is configured for the journal.
  129. recovery-event-timeout = 30s
  130.  
  131. circuit-breaker {
  132. max-failures = 10
  133. call-timeout = 10s
  134. reset-timeout = 30s
  135. }
  136.  
  137. # The replay filter can detect a corrupt event stream by inspecting
  138. # sequence numbers and writerUuid when replaying events.
  139. replay-filter {
  140. # What the filter should do when detecting invalid events.
  141. # Supported values:
  142. # `repair-by-discard-old` : discard events from old writers,
  143. # warning is logged
  144. # `fail` : fail the replay, error is logged
  145. # `warn` : log warning but emit events untouched
  146. # `off` : disable this feature completely
  147. mode = repair-by-discard-old
  148.  
  149. # It uses a look ahead buffer for analyzing the events.
  150. # This defines the size (in number of events) of the buffer.
  151. window-size = 100
  152.  
  153. # How many old writerUuid to remember
  154. max-old-writers = 10
  155.  
  156. # Set this to `on` to enable detailed debug logging of each
  157. # replayed event.
  158. debug = off
  159. }
  160. }
  161.  
  162. # Fallback settings for snapshot store plugin configurations
  163. # These settings are used if they are not defined in plugin config section.
  164. snapshot-store-plugin-fallback {
  165.  
  166. # Fully qualified class name providing snapshot store plugin api
  167. # implementation. It is mandatory to specify this property if
  168. # snapshot store is enabled.
  169. # The class must have a constructor without parameters or constructor with
  170. # one `com.typesafe.config.Config` parameter.
  171. class = ""
  172.  
  173. # Dispatcher for the plugin actor.
  174. plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
  175.  
  176. circuit-breaker {
  177. max-failures = 5
  178. call-timeout = 20s
  179. reset-timeout = 60s
  180. }
  181. }
  182. }
  183.  
  184. # Protobuf serialization for the persistent extension messages.
  185. akka.actor {
  186. serializers {
  187. akka-persistence-message = "akka.persistence.serialization.MessageSerializer"
  188. akka-persistence-snapshot = "akka.persistence.serialization.SnapshotSerializer"
  189. }
  190. serialization-bindings {
  191. "akka.persistence.serialization.Message" = akka-persistence-message
  192. "akka.persistence.serialization.Snapshot" = akka-persistence-snapshot
  193. }
  194. serialization-identifiers {
  195. "akka.persistence.serialization.MessageSerializer" = 7
  196. "akka.persistence.serialization.SnapshotSerializer" = 8
  197. }
  198. }
  199.  
  200.  
  201. ###################################################
  202. # Persistence plugins included with the extension #
  203. ###################################################
  204.  
  205. # In-memory journal plugin.
  206. akka.persistence.journal.inmem {
  207. # Class name of the plugin.
  208. class = "akka.persistence.journal.inmem.InmemJournal"
  209. # Dispatcher for the plugin actor.
  210. plugin-dispatcher = "akka.actor.default-dispatcher"
  211. }
  212.  
  213. # Local file system snapshot store plugin.
  214. akka.persistence.snapshot-store.local {
  215. # Class name of the plugin.
  216. class = "akka.persistence.snapshot.local.LocalSnapshotStore"
  217. # Dispatcher for the plugin actor.
  218. plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
  219. # Dispatcher for streaming snapshot IO.
  220. stream-dispatcher = "akka.persistence.dispatchers.default-stream-dispatcher"
  221. # Storage location of snapshot files.
  222. dir = "snapshots"
  223. # Number of load attempts when recovery from the latest snapshot fails
  224. # yet older snapshot files are available. Each recovery attempt will try
  225. # to recover using an older than previously failed-on snapshot file
  226. # (if any are present). If all attempts fail the recovery will fail and
  227. # the persistent actor will be stopped.
  228. max-load-attempts = 3
  229. }
  230.  
  231. # LevelDB journal plugin.
  232. # Note: this plugin requires explicit LevelDB dependency, see below.
  233. akka.persistence.journal.leveldb {
  234. # Class name of the plugin.
  235. class = "akka.persistence.journal.leveldb.LeveldbJournal"
  236. # Dispatcher for the plugin actor.
  237. plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
  238. # Dispatcher for message replay.
  239. replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
  240. # Storage location of LevelDB files.
  241. dir = "journal"
  242. # Use fsync on write.
  243. fsync = on
  244. # Verify checksum on read.
  245. checksum = off
  246. # Native LevelDB (via JNI) or LevelDB Java port.
  247. native = on
  248. }
  249.  
  250. # Shared LevelDB journal plugin (for testing only).
  251. # Note: this plugin requires explicit LevelDB dependency, see below.
  252. akka.persistence.journal.leveldb-shared {
  253. # Class name of the plugin.
  254. class = "akka.persistence.journal.leveldb.SharedLeveldbJournal"
  255. # Dispatcher for the plugin actor.
  256. plugin-dispatcher = "akka.actor.default-dispatcher"
  257. # Timeout for async journal operations.
  258. timeout = 10s
  259. store {
  260. # Dispatcher for shared store actor.
  261. store-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
  262. # Dispatcher for message replay.
  263. replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
  264. # Storage location of LevelDB files.
  265. dir = "journal"
  266. # Use fsync on write.
  267. fsync = on
  268. # Verify checksum on read.
  269. checksum = off
  270. # Native LevelDB (via JNI) or LevelDB Java port.
  271. native = on
  272. }
  273. }
  274.  
  275. akka.persistence.journal.proxy {
  276. # Class name of the plugin.
  277. class = "akka.persistence.journal.PersistencePluginProxy"
  278. # Dispatcher for the plugin actor.
  279. plugin-dispatcher = "akka.actor.default-dispatcher"
  280. # Set this to on in the configuration of the ActorSystem
  281. # that will host the target journal
  282. start-target-journal = off
  283. # The journal plugin config path to use for the target journal
  284. target-journal-plugin = ""
  285. # The address of the proxy to connect to from other nodes. Optional setting.
  286. target-journal-address = ""
  287. # Initialization timeout of target lookup
  288. init-timeout = 10s
  289. }
  290.  
  291. akka.persistence.snapshot-store.proxy {
  292. # Class name of the plugin.
  293. class = "akka.persistence.journal.PersistencePluginProxy"
  294. # Dispatcher for the plugin actor.
  295. plugin-dispatcher = "akka.actor.default-dispatcher"
  296. # Set this to on in the configuration of the ActorSystem
  297. # that will host the target snapshot-store
  298. start-target-snapshot-store = off
  300. # The snapshot-store plugin config path to use for the target snapshot-store
  300. target-snapshot-store-plugin = ""
  301. # The address of the proxy to connect to from other nodes. Optional setting.
  302. target-snapshot-store-address = ""
  303. # Initialization timeout of target lookup
  304. init-timeout = 10s
  305. }
  306.  
  307. # LevelDB persistence requires the following dependency declarations:
  308. #
  309. # SBT:
  310. # "org.iq80.leveldb" % "leveldb" % "0.7"
  311. # "org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8"
  312. #
  313. # Maven:
  314. # <dependency>
  315. # <groupId>org.iq80.leveldb</groupId>
  316. # <artifactId>leveldb</artifactId>
  317. # <version>0.7</version>
  318. # </dependency>
  319. # <dependency>
  320. # <groupId>org.fusesource.leveldbjni</groupId>
  321. # <artifactId>leveldbjni-all</artifactId>
  322. # <version>1.8</version>
  323. # </dependency>
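
Because the journal and snapshot-store plugin paths default to "", an application using persistence has to point them at concrete plugin sections. A minimal sketch using the bundled LevelDB journal and local snapshot store (the directories are illustrative, and the LevelDB dependencies listed above must also be on the classpath):

  1. akka.persistence {
  2.   journal.plugin = "akka.persistence.journal.leveldb"
  3.   snapshot-store.plugin = "akka.persistence.snapshot-store.local"
  4. }
  5. akka.persistence.journal.leveldb.dir = "target/journal"
  6. akka.persistence.snapshot-store.local.dir = "target/snapshots"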

§akka-remote

  1. #####################################
  2. # Akka Remote Reference Config File #
  3. #####################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. # Comments about akka.actor settings are left out where they are already in akka-
  9. # actor.jar, because otherwise they would be repeated in config rendering.
  10. #
  11. # For the configuration of the new remoting implementation (Artery) please look
  12. # at the bottom section of this file as it is listed separately.
  13.  
  14. akka {
  15.  
  16. actor {
  17.  
  18. serializers {
  19. akka-containers = "akka.remote.serialization.MessageContainerSerializer"
  20. akka-misc = "akka.remote.serialization.MiscMessageSerializer"
  21. artery = "akka.remote.serialization.ArteryMessageSerializer"
  22. proto = "akka.remote.serialization.ProtobufSerializer"
  23. daemon-create = "akka.remote.serialization.DaemonMsgCreateSerializer"
  24. primitive-long = "akka.remote.serialization.LongSerializer"
  25. primitive-int = "akka.remote.serialization.IntSerializer"
  26. primitive-string = "akka.remote.serialization.StringSerializer"
  27. primitive-bytestring = "akka.remote.serialization.ByteStringSerializer"
  28. akka-system-msg = "akka.remote.serialization.SystemMessageSerializer"
  29. }
  30.  
  31. serialization-bindings {
  32. "akka.actor.ActorSelectionMessage" = akka-containers
  33.  
  34. "akka.remote.DaemonMsgCreate" = daemon-create
  35.  
  36. "akka.remote.artery.ArteryMessage" = artery
  37.  
  38. # Since akka.protobuf.Message does not extend Serializable but
  39. # GeneratedMessage does, we need to use the more specific one here in order
  40. # to avoid ambiguity.
  41. "akka.protobuf.GeneratedMessage" = proto
  42.  
  43. # Since com.google.protobuf.Message does not extend Serializable but
  44. # GeneratedMessage does, we need to use the more specific one here in order
  45. # to avoid ambiguity.
  46. # This com.google.protobuf serialization binding is only used if the class can be loaded,
  47. # i.e. com.google.protobuf dependency has been added in the application project.
  48. "com.google.protobuf.GeneratedMessage" = proto
  49. "java.util.Optional" = akka-misc
  50. }
  51.  
  52. # For the purpose of preserving protocol backward compatibility these bindings are not
  53. # included by default. They can be enabled with enable-additional-serialization-bindings=on.
  54. # They are enabled by default if akka.remote.artery.enabled=on or if
  55. # akka.actor.allow-java-serialization=off.
  56. additional-serialization-bindings {
  57. "akka.actor.Identify" = akka-misc
  58. "akka.actor.ActorIdentity" = akka-misc
  59. "scala.Some" = akka-misc
  60. "scala.None$" = akka-misc
  61. "akka.actor.Status$Success" = akka-misc
  62. "akka.actor.Status$Failure" = akka-misc
  63. "akka.actor.ActorRef" = akka-misc
  64. "akka.actor.PoisonPill$" = akka-misc
  65. "akka.actor.Kill$" = akka-misc
  66. "akka.remote.RemoteWatcher$Heartbeat$" = akka-misc
  67. "akka.remote.RemoteWatcher$HeartbeatRsp" = akka-misc
  68. "akka.actor.ActorInitializationException" = akka-misc
  69.  
  70. "akka.dispatch.sysmsg.SystemMessage" = akka-system-msg
  71.  
  72. "java.lang.String" = primitive-string
  73. "akka.util.ByteString$ByteString1C" = primitive-bytestring
  74. "akka.util.ByteString$ByteString1" = primitive-bytestring
  75. "akka.util.ByteString$ByteStrings" = primitive-bytestring
  76. "java.lang.Long" = primitive-long
  77. "scala.Long" = primitive-long
  78. "java.lang.Integer" = primitive-int
  79. "scala.Int" = primitive-int
  80.  
  81. # Java Serializer is by default used for exceptions.
  82. # It's recommended that you implement a custom serializer for exceptions that are
  83. # sent remotely, e.g. in akka.actor.Status.Failure for ask replies. You can add a
  84. # binding to akka-misc (MiscMessageSerializerSpec) for the exceptions that have
  85. # a constructor with a single message String or a constructor with the message String as
  86. # first parameter and cause Throwable as second parameter. Note that it's not
  87. # safe to add this binding for general exceptions such as IllegalArgumentException
  88. # because it may have a subclass without required constructor.
  89. "java.lang.Throwable" = java
  90. "akka.actor.IllegalActorStateException" = akka-misc
  91. "akka.actor.ActorKilledException" = akka-misc
  92. "akka.actor.InvalidActorNameException" = akka-misc
  93. "akka.actor.InvalidMessageException" = akka-misc
  94. }
  95.  
  96. serialization-identifiers {
  97. "akka.remote.serialization.ProtobufSerializer" = 2
  98. "akka.remote.serialization.DaemonMsgCreateSerializer" = 3
  99. "akka.remote.serialization.MessageContainerSerializer" = 6
  100. "akka.remote.serialization.MiscMessageSerializer" = 16
  101. "akka.remote.serialization.ArteryMessageSerializer" = 17
  102. "akka.remote.serialization.LongSerializer" = 18
  103. "akka.remote.serialization.IntSerializer" = 19
  104. "akka.remote.serialization.StringSerializer" = 20
  105. "akka.remote.serialization.ByteStringSerializer" = 21
  106. "akka.remote.serialization.SystemMessageSerializer" = 22
  107. }
  108.  
  109. deployment {
  110.  
  111. default {
  112.  
  113. # if this is set to a valid remote address, the named actor will be
  114. # deployed at that node e.g. "akka.tcp://sys@host:port"
  115. remote = ""
  116.  
  117. target {
  118.  
  119. # A list of hostnames and ports for instantiating the children of a
  120. # router
  121. # The format should be "akka.tcp://sys@host:port", where:
  122. # - sys is the remote actor system name
  123. # - hostname can be either hostname or IP address the remote actor
  124. # should connect to
  125. # - port should be the port for the remote server on the other node
  126. # The number of actor instances to be spawned is still taken from the
  127. # nr-of-instances setting as for local routers; the instances will be
  128. # distributed round-robin among the given nodes.
  129. nodes = []
  130.  
  131. }
  132. }
  133. }
  134. }
  135.  
  136. remote {
  137. ### Settings shared by classic remoting and Artery (the new implementation of remoting)
  138.  
  139. # If set to a nonempty string remoting will use the given dispatcher for
  140. # its internal actors otherwise the default dispatcher is used. Please note
  141. # that since remoting can load arbitrary 3rd party drivers (see
  142. # "enabled-transport" and "adapters" entries) it is not guaranteed that
  143. # every module will respect this setting.
  144. use-dispatcher = "akka.remote.default-remote-dispatcher"
  145.  
  146. # Settings for the failure detector to monitor connections.
  147. # For TCP it is not important to have fast failure detection, since
  148. # most connection failures are captured by TCP itself.
  149. # The default DeadlineFailureDetector will trigger if there are no heartbeats within
  150. # the duration heartbeat-interval + acceptable-heartbeat-pause, i.e. 124 seconds
  151. # with the default settings.
  152. transport-failure-detector {
  153.  
  154. # FQCN of the failure detector implementation.
  155. # It must implement akka.remote.FailureDetector and have
  156. # a public constructor with a com.typesafe.config.Config and
  157. # akka.actor.EventStream parameter.
  158. implementation-class = "akka.remote.DeadlineFailureDetector"
  159.  
  160. # How often keep-alive heartbeat messages should be sent to each connection.
  161. heartbeat-interval = 4 s
  162.  
  163. # Number of potentially lost/delayed heartbeats that will be
  164. # accepted before considering it to be an anomaly.
  165. # A margin to the `heartbeat-interval` is important to be able to survive sudden,
  166. # occasional, pauses in heartbeat arrivals, due to for example garbage collect or
  167. # network drop.
  168. acceptable-heartbeat-pause = 120 s
  169. }
  170.  
  171. # Settings for the Phi accrual failure detector (http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf
  172. # [Hayashibara et al]) used for remote death watch.
  173. # The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
  174. # the duration heartbeat-interval + acceptable-heartbeat-pause + threshold_adjustment,
  175. # i.e. around 12.5 seconds with default settings.
  176. watch-failure-detector {
  177.  
  178. # FQCN of the failure detector implementation.
  179. # It must implement akka.remote.FailureDetector and have
  180. # a public constructor with a com.typesafe.config.Config and
  181. # akka.actor.EventStream parameter.
  182. implementation-class = "akka.remote.PhiAccrualFailureDetector"
  183.  
  184. # How often keep-alive heartbeat messages should be sent to each connection.
  185. heartbeat-interval = 1 s
  186.  
  187. # Defines the failure detector threshold.
  188. # A low threshold is prone to generate many wrong suspicions but ensures
  189. # a quick detection in the event of a real crash. Conversely, a high
  190. # threshold generates fewer mistakes but needs more time to detect
  191. # actual crashes.
  192. threshold = 10.0
  193.  
  194. # Number of the samples of inter-heartbeat arrival times to adaptively
  195. # calculate the failure timeout for connections.
  196. max-sample-size = 200
  197.  
  198. # Minimum standard deviation to use for the normal distribution in
  199. # AccrualFailureDetector. Too low standard deviation might result in
  200. # too much sensitivity for sudden, but normal, deviations in heartbeat
  201. # inter arrival times.
  202. min-std-deviation = 100 ms
  203.  
  204. # Number of potentially lost/delayed heartbeats that will be
  205. # accepted before considering it to be an anomaly.
  206. # This margin is important to be able to survive sudden, occasional,
  207. # pauses in heartbeat arrivals, due to for example garbage collect or
  208. # network drop.
  209. acceptable-heartbeat-pause = 10 s
  210.  
  211.  
  212. # How often to check for nodes marked as unreachable by the failure
  213. # detector
  214. unreachable-nodes-reaper-interval = 1s
  215.  
  216. # After the heartbeat request has been sent the first failure detection
  217. # will start after this period, even though no heartbeat message has
  218. # been received.
  219. expected-response-after = 1 s
  220.  
  221. }
  222. # remote deployment configuration section
  223. deployment {
  224. # If true, will only allow specific classes to be instantiated on this system via remote deployment
  225. enable-whitelist = off
  226. whitelist = []
  227. }
  228.  
  229. ### Configuration for classic remoting
  230.  
  231. # Timeout after which the startup of the remoting subsystem is considered
  232. # to be failed. Increase this value if your transport drivers (see the
  233. # enabled-transports section) need longer time to be loaded.
  234. startup-timeout = 10 s
  235.  
  236. # Timeout after which the graceful shutdown of the remoting subsystem is
  237. # considered to be failed. After the timeout the remoting system is
  238. # forcefully shut down. Increase this value if your transport drivers
  239. # (see the enabled-transports section) need longer time to stop properly.
  240. shutdown-timeout = 10 s
  241.  
  242. # Before shutting down the drivers, the remoting subsystem attempts to flush
  243. # all pending writes. This setting controls the maximum time the remoting is
  244. # willing to wait before moving on to shut down the drivers.
  245. flush-wait-on-shutdown = 2 s
  246.  
  247. # Reuse inbound connections for outbound messages
  248. use-passive-connections = on
  249.  
  250. # Controls the backoff interval after which a refused write is reattempted.
  251. # (Transports may refuse writes if their internal buffer is full)
  252. backoff-interval = 5 ms
  253.  
  254. # Acknowledgment timeout of management commands sent to the transport stack.
  255. command-ack-timeout = 30 s
  256.  
  257. # The timeout for outbound associations to perform the handshake.
  258. # If the transport is akka.remote.netty.tcp or akka.remote.netty.ssl
  259. # the configured connection-timeout for the transport will be used instead.
  260. handshake-timeout = 15 s
  261. ### Security settings
  262.  
  263. # Enable untrusted mode for full security of server managed actors, prevents
  264. # system messages from being sent by clients, e.g. messages like 'Create',
  265. # 'Suspend', 'Resume', 'Terminate', 'Supervise', 'Link' etc.
  266. untrusted-mode = off
  267.  
  268. # When 'untrusted-mode=on' inbound actor selections are by default discarded.
  269. # Actors with paths defined in this white list are granted permission to receive actor
  270. # selection messages.
  271. # E.g. trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
  272. trusted-selection-paths = []
  273.  
  274. # Should the remote server require that its peers share the same
  275. # secure-cookie (defined in the 'remote' section)? Secure cookies are passed
  276. # between the systems during the initial handshake. Connections are refused if the initial
  277. # message contains a mismatching cookie or the cookie is missing.
  278. require-cookie = off
  279.  
  280. # Deprecated since 2.4-M1
  281. secure-cookie = ""
  282.  
  283. ### Logging
  284.  
  285. # If this is "on", Akka will log all inbound messages at DEBUG level,
  286. # if off then they are not logged
  287. log-received-messages = off
  288.  
  289. # If this is "on", Akka will log all outbound messages at DEBUG level,
  290. # if off then they are not logged
  291. log-sent-messages = off
  292.  
  293. # Sets the log granularity level at which Akka logs remoting events. This setting
  294. # can take the values OFF, ERROR, WARNING, INFO, DEBUG, or ON. For compatibility
  295. # reasons the setting "on" will default to "debug" level. Please note that the effective
  296. # logging level is still determined by the global logging level of the actor system:
  297. # for example debug level remoting events will be only logged if the system
  298. # is running with debug level logging.
  299. # Failures to deserialize received messages also fall under this flag.
  300. log-remote-lifecycle-events = on
  301.  
  302. # Logging of message types with payload size in bytes larger than
  303. # this value. Maximum detected size per message type is logged once,
  304. # with an increase threshold of 10%.
  305. # By default this feature is turned off. Activate it by setting the property to
  306. # a value in bytes, such as 1000b. Note that for all messages larger than this
  307. # limit there will be extra performance and scalability cost.
  308. log-frame-size-exceeding = off
  309.  
  310. # Log warning if the number of messages in the backoff buffer in the endpoint
  311. # writer exceeds this limit. It can be disabled by setting the value to off.
  312. log-buffer-size-exceeding = 50000
  313.  
  314.  
  315.  
  316. # After failing to establish an outbound connection, the remoting will mark the
  317. # address as failed. This configuration option controls how much time should
  318. # elapse before reattempting a new connection. While the address is
  319. # gated, all messages sent to the address are delivered to dead-letters.
  320. # Since this setting limits the rate of reconnects, setting it to a
  321. # very short interval (i.e. less than a second) may result in a storm of
  322. # reconnect attempts.
  323. retry-gate-closed-for = 5 s
  324.  
  325. # After catastrophic communication failures that result in the loss of system
  326. # messages or after the remote DeathWatch triggers the remote system gets
  327. # quarantined to prevent inconsistent behavior.
  328. # This setting controls how long the Quarantine marker will be kept around
  329. # before being removed to avoid long-term memory leaks.
  330. # WARNING: DO NOT change this to a small value to re-enable communication with
  331. # quarantined nodes. Such feature is not supported and any behavior between
  332. # the affected systems after lifting the quarantine is undefined.
  333. prune-quarantine-marker-after = 5 d
  334.  
  335. # If system messages have been exchanged between two systems (i.e. remote death
  336. # watch or remote deployment has been used) a remote system will be marked as
  337. # quarantined after the two systems have no active association, and no
  338. # communication happens during the time configured here.
  339. # The only purpose of this setting is to avoid storing system message redelivery
  340. # data (sequence number state, etc.) for an undefined amount of time leading to long
  341. # term memory leak. Instead, if a system has been gone for this period,
  342. # or more exactly
  343. # - there is no association between the two systems (TCP connection, if TCP transport is used)
  344. # - neither side has been attempting to communicate with the other
  345. # - there are no pending system messages to deliver
  346. # for the amount of time configured here, the remote system will be quarantined and all state
  347. # associated with it will be dropped.
  348. quarantine-after-silence = 2 d
  349.  
  350. # This setting defines the maximum number of unacknowledged system messages
  351. # allowed for a remote system. If this limit is reached the remote system is
  352. # declared to be dead and its UID marked as tainted.
  353. system-message-buffer-size = 20000
  354.  
  355. # This setting defines the maximum idle time after which an individual
  356. # acknowledgement for system messages is sent. System message delivery
  357. # is guaranteed by explicit acknowledgement messages. These acks are
  358. # piggybacked on ordinary traffic messages. If no traffic is detected
  359. # during the time period configured here, the remoting will send out
  360. # an individual ack.
  361. system-message-ack-piggyback-timeout = 0.3 s
  362.  
  363. # This setting defines the time after which internal management signals
  364. # between actors (used for DeathWatch and supervision) that have not been
  365. # explicitly acknowledged or negatively acknowledged are resent.
  366. # Messages that were negatively acknowledged are always immediately
  367. # resent.
  368. resend-interval = 2 s
  369.  
  370. # Maximum number of unacknowledged system messages that will be resent
  371. # each 'resend-interval'. If you watch many (> 1000) remote actors you can
  372. # increase this value to for example 600, but a too large limit (e.g. 10000)
  373. # may flood the connection and might cause false failure detection to trigger.
  374. # Test such a configuration by watching all actors at the same time and stop
  375. # all watched actors at the same time.
  376. resend-limit = 200
  377.  
  378. # WARNING: this setting should not be changed unless all of its consequences
  379. # are properly understood, which assumes experience with remoting internals
  380. # or expert advice.
  381. # This setting defines the time after which redelivery attempts of internal management
  382. # signals are stopped to a remote system that has not previously been confirmed to be alive by
  383. # this system.
  384. initial-system-message-delivery-timeout = 3 m
  385.  
  386. ### Transports and adapters
  387.  
  388. # List of the transport drivers that will be loaded by the remoting.
  389. # A list of fully qualified config paths must be provided where
  390. # the given configuration path contains a transport-class key
  391. # pointing to an implementation class of the Transport interface.
  392. # If multiple transports are provided, the address of the first
  393. # one will be used as a default address.
  394. enabled-transports = ["akka.remote.netty.tcp"]
  395.  
  396. # Transport drivers can be augmented with adapters by adding their
  397. # name to the applied-adapters setting in the configuration of a
  398. # transport. The available adapters should be configured in this
  399. # section by providing a name, and the fully qualified name of
  400. # their corresponding implementation. The class given here
  401. # must implement akka.remote.transport.TransportAdapterProvider
  402. # and have a public constructor without parameters.
  403. adapters {
  404. gremlin = "akka.remote.transport.FailureInjectorProvider"
  405. trttl = "akka.remote.transport.ThrottlerProvider"
  406. }
  407.  
  408. ### Default configuration for the Netty based transport drivers
  409.  
  410. netty.tcp {
  411. # The class given here must implement the akka.remote.transport.Transport
  412. # interface and offer a public constructor which takes two arguments:
  413. # 1) akka.actor.ExtendedActorSystem
  414. # 2) com.typesafe.config.Config
  415. transport-class = "akka.remote.transport.netty.NettyTransport"
  416.  
  417. # Transport drivers can be augmented with adapters by adding their
  418. # name to the applied-adapters list. The last adapter in the
  419. # list is the adapter immediately above the driver, while
  420. # the first one is the top of the stack below the standard
  421. # Akka protocol
  422. applied-adapters = []
  423.  
  424. transport-protocol = tcp
  425.  
  426. # The default remote server port clients should connect to.
  427. # Default is 2552 (AKKA), use 0 if you want a random available port
  428. # This port needs to be unique for each actor system on the same machine.
  429. port = 2552
  430.  
  431. # The hostname or ip clients should connect to.
  432. # InetAddress.getLocalHost.getHostAddress is used if empty
  433. hostname = ""
  434.  
  435. # Use this setting to bind a network interface to a different port
  436. # than remoting protocol expects messages at. This may be used
  437. # when running akka nodes in separated networks (under NATs or docker containers).
  438. # Use 0 if you want a random available port. Examples:
  439. #
  440. # akka.remote.netty.tcp.port = 2552
  441. # akka.remote.netty.tcp.bind-port = 2553
  442. # Network interface will be bound to the 2553 port, but remoting protocol will
  443. # expect messages sent to port 2552.
  444. #
  445. # akka.remote.netty.tcp.port = 0
  446. # akka.remote.netty.tcp.bind-port = 0
  447. # Network interface will be bound to a random port, and remoting protocol will
  448. # expect messages sent to the bound port.
  449. #
  450. # akka.remote.netty.tcp.port = 2552
  451. # akka.remote.netty.tcp.bind-port = 0
  452. # Network interface will be bound to a random port, but remoting protocol will
  453. # expect messages sent to port 2552.
  454. #
  455. # akka.remote.netty.tcp.port = 0
  456. # akka.remote.netty.tcp.bind-port = 2553
  457. # Network interface will be bound to the 2553 port, and remoting protocol will
  458. # expect messages sent to the bound port.
  459. #
  460. # akka.remote.netty.tcp.port = 2552
  461. # akka.remote.netty.tcp.bind-port = ""
  462. # Network interface will be bound to the 2552 port, and remoting protocol will
  463. # expect messages sent to the bound port.
  464. #
  465. # akka.remote.netty.tcp.port if empty
  466. bind-port = ""
  467.  
  468. # Use this setting to bind a network interface to a different hostname or ip
  469. # than remoting protocol expects messages at.
  470. # Use "0.0.0.0" to bind to all interfaces.
  471. # akka.remote.netty.tcp.hostname if empty
  472. bind-hostname = ""
  473.  
  474. # Enables SSL support on this transport
  475. enable-ssl = false
  476.  
  477. # Sets the connectTimeoutMillis of all outbound connections,
  478. # i.e. how long a connect may take until it is timed out
  479. connection-timeout = 15 s
  480.  
  481. # If set to "<id.of.dispatcher>" then the specified dispatcher
  482. # will be used to accept inbound connections, and perform IO. If "" then
  483. # dedicated threads will be used.
  484. # Please note that the Netty driver only uses this configuration and does
  485. # not read the "akka.remote.use-dispatcher" entry. Instead it has to be
  486. # configured manually to point to the same dispatcher if needed.
  487. use-dispatcher-for-io = ""
  488.  
  489. # Sets the high water mark for the in and outbound sockets,
  490. # set to 0b for platform default
  491. write-buffer-high-water-mark = 0b
  492.  
  493. # Sets the low water mark for the in and outbound sockets,
  494. # set to 0b for platform default
  495. write-buffer-low-water-mark = 0b
  496.  
  497. # Sets the send buffer size of the Sockets,
  498. # set to 0b for platform default
  499. send-buffer-size = 256000b
  500.  
  501. # Sets the receive buffer size of the Sockets,
  502. # set to 0b for platform default
  503. receive-buffer-size = 256000b
  504.  
  505. # Maximum message size the transport will accept, but at least
  506. # 32000 bytes.
  507. # Please note that UDP does not support arbitrary large datagrams,
  508. # so this setting has to be chosen carefully when using UDP.
  509. # Both send-buffer-size and receive-buffer-size settings have to
  510. # be adjusted to be able to buffer messages of maximum size.
  511. maximum-frame-size = 128000b
  512.  
  513. # Sets the size of the connection backlog
  514. backlog = 4096
  515.  
  516. # Enables the TCP_NODELAY flag, i.e. disables Nagle’s algorithm
  517. tcp-nodelay = on
  518.  
  519. # Enables TCP Keepalive, subject to the O/S kernel’s configuration
  520. tcp-keepalive = on
  521.  
  522. # Enables SO_REUSEADDR, which determines when an ActorSystem can open
  523. # the specified listen port (the meaning differs between *nix and Windows)
  524. # Valid values are "on", "off" and "off-for-windows"
  525. # due to the following Windows bug: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4476378
  526. # "off-for-windows" of course means that it's "on" for all other platforms
  527. tcp-reuse-addr = off-for-windows
  528.  
  529. # Used to configure the number of I/O worker threads on server sockets
  530. server-socket-worker-pool {
  531. # Min number of threads to cap factor-based number to
  532. pool-size-min = 2
  533.  
  534. # The pool size factor is used to determine thread pool size
  535. # using the following formula: ceil(available processors * factor).
  536. # Resulting size is then bounded by the pool-size-min and
  537. # pool-size-max values.
  538. pool-size-factor = 1.0
  539.  
  540. # Max number of threads to cap factor-based number to
  541. pool-size-max = 2
  542. }
  543.  
  544. # Used to configure the number of I/O worker threads on client sockets
  545. client-socket-worker-pool {
  546. # Min number of threads to cap factor-based number to
  547. pool-size-min = 2
  548.  
  549. # The pool size factor is used to determine thread pool size
  550. # using the following formula: ceil(available processors * factor).
  551. # Resulting size is then bounded by the pool-size-min and
  552. # pool-size-max values.
  553. pool-size-factor = 1.0
  554.  
  555. # Max number of threads to cap factor-based number to
  556. pool-size-max = 2
  557. }
  558.  
  559.  
  560. }
  561.  
  562. netty.udp = ${akka.remote.netty.tcp}
  563. netty.udp {
  564. transport-protocol = udp
  565. }
  566.  
  567. netty.ssl = ${akka.remote.netty.tcp}
  568. netty.ssl = {
  569. # Enable SSL/TLS encryption.
  570. # This must be enabled on both the client and server to work.
  571. enable-ssl = true
  572.  
  573. security {
  574. # This is the Java Key Store used by the server connection
  575. key-store = "keystore"
  576.  
  577. # This password is used for decrypting the key store
  578. key-store-password = "changeme"
  579.  
  580. # This password is used for decrypting the key
  581. key-password = "changeme"
  582.  
  583. # This is the Java Key Store used by the client connection
  584. trust-store = "truststore"
  585.  
  586. # This password is used for decrypting the trust store
  587. trust-store-password = "changeme"
  588.  
  589. # Protocol to use for SSL encryption, choose from:
  590. # TLS 1.2 is available since JDK7, and default since JDK8:
  591. # https://blogs.oracle.com/java-platform-group/entry/java_8_will_use_tls
  592. protocol = "TLSv1.2"
  593.  
  594. # Example: ["TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA"]
  595. # You need to install the JCE Unlimited Strength Jurisdiction Policy
  596. # Files to use AES 256.
  597. # More info here:
  598. # http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html#SunJCEProvider
  599. enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
  600.  
  601. # There are three options, in increasing order of security:
  602. # "" or SecureRandom => (default)
  603. # "SHA1PRNG" => Can be slow because of blocking issues on Linux
  604. # "AES128CounterSecureRNG" => fastest startup and based on AES encryption
  605. # algorithm
  606. # "AES256CounterSecureRNG"
  607. #
  608. # The following are deprecated in Akka 2.4. They use one of 3 possible
  609. # seed sources, depending on availability: /dev/random, random.org and
  610. # SecureRandom (provided by Java)
  611. # "AES128CounterInetRNG"
  612. # "AES256CounterInetRNG" (Install JCE Unlimited Strength Jurisdiction
  613. # Policy Files first)
  614. # Setting a value here may require you to supply the appropriate cipher
  615. # suite (see enabled-algorithms section above)
  616. random-number-generator = ""
  617.  
  618. # Require mutual authentication between TLS peers
  619. #
  620. # Without mutual authentication only the peer that actively establishes a connection (TLS client side)
  621. # checks if the passive side (TLS server side) sends over a trusted certificate. With the flag turned on,
  622. # the passive side will also request and verify a certificate from the connecting peer.
  623. #
  624. # To prevent man-in-the-middle attacks you should enable this setting. For compatibility reasons it is
  625. # still set to 'off' per default.
  626. #
  627. # Note: Nodes that are configured with this setting to 'on' might not be able to receive messages from nodes that
  628. # run on older versions of akka-remote. This is because in older versions of Akka the active side of the remoting
  629. # connection will not send over certificates.
  630. #
  631. # However, starting from the version this setting was added, even with this setting "off", the active side
  632. # (TLS client side) will use the given key-store to send over a certificate if asked. A rolling upgrade from
  633. # older versions of Akka can therefore work like this:
  634. # - upgrade all nodes to an Akka version supporting this flag, keeping it off
  635. # - then switch the flag on and do again a rolling upgrade of all nodes
  636. # The first step ensures that all nodes will send over a certificate when asked to. The second
  637. # step will ensure that all nodes finally enforce the secure checking of client certificates.
  638. require-mutual-authentication = off
  639. }
  640. }
  641.  
  642. ### Default configuration for the failure injector transport adapter
  643.  
  644. gremlin {
  645. # Enable debug logging of the failure injector transport adapter
  646. debug = off
  647. }
  648.  
  649. ### Default dispatcher for the remoting subsystem
  650.  
  651. default-remote-dispatcher {
  652. type = Dispatcher
  653. executor = "fork-join-executor"
  654. fork-join-executor {
  655. parallelism-min = 2
  656. parallelism-factor = 0.5
  657. parallelism-max = 16
  658. }
  659. throughput = 10
  660. }
  661.  
  662. backoff-remote-dispatcher {
  663. type = Dispatcher
  664. executor = "fork-join-executor"
  665. fork-join-executor {
  666. # Min number of threads to cap factor-based parallelism number to
  667. parallelism-min = 2
  668. parallelism-max = 2
  669. }
  670. }
  671. }
  672. }
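 
For reference, an application.conf that overrides a few of the classic remoting defaults above might look like the following sketch; the provider line enables remoting, and the hostname, ports and NAT/Docker bind address are placeholders to be adapted to your environment:

  1. akka {
  2.   actor.provider = "akka.remote.RemoteActorRefProvider"
  3.   remote.netty.tcp {
  4.     # Address other actor systems should connect to
  5.     hostname = "203.0.113.10"
  6.     port = 2552
  7.     # Interface actually bound inside the container / behind the NAT
  8.     bind-hostname = "0.0.0.0"
  9.     bind-port = 2552
  10.   }
  11. }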

§akka-remote (artery)

  1. #####################################
  2. # Akka Remote Reference Config File #
  3. #####################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. # comments about akka.actor settings left out where they are already in akka-
  9. # actor.jar, because otherwise they would be repeated in config rendering.
  10. #
  11. # For the configuration of the new remoting implementation (Artery) please look
  12. # at the bottom section of this file as it is listed separately.
  13.  
  14. akka {
  15.  
  16. actor {
  17.  
  18. serializers {
  19. akka-containers = "akka.remote.serialization.MessageContainerSerializer"
  20. akka-misc = "akka.remote.serialization.MiscMessageSerializer"
  21. artery = "akka.remote.serialization.ArteryMessageSerializer"
  22. proto = "akka.remote.serialization.ProtobufSerializer"
  23. daemon-create = "akka.remote.serialization.DaemonMsgCreateSerializer"
  24. primitive-long = "akka.remote.serialization.LongSerializer"
  25. primitive-int = "akka.remote.serialization.IntSerializer"
  26. primitive-string = "akka.remote.serialization.StringSerializer"
  27. primitive-bytestring = "akka.remote.serialization.ByteStringSerializer"
  28. akka-system-msg = "akka.remote.serialization.SystemMessageSerializer"
  29. }
  30.  
  31. serialization-bindings {
  32. "akka.actor.ActorSelectionMessage" = akka-containers
  33.  
  34. "akka.remote.DaemonMsgCreate" = daemon-create
  35.  
  36. "akka.remote.artery.ArteryMessage" = artery
  37.  
  38. # Since akka.protobuf.Message does not extend Serializable but
  39. # GeneratedMessage does, need to use the more specific one here in order
  40. # to avoid ambiguity.
  41. "akka.protobuf.GeneratedMessage" = proto
  42.  
  43. # Since com.google.protobuf.Message does not extend Serializable but
  44. # GeneratedMessage does, need to use the more specific one here in order
  45. # to avoid ambiguity.
  46. # This com.google.protobuf serialization binding is only used if the class can be loaded,
  47. # i.e. com.google.protobuf dependency has been added in the application project.
  48. "com.google.protobuf.GeneratedMessage" = proto
  49. "java.util.Optional" = akka-misc
  50. }
  51.  
  52. # For the purpose of preserving protocol backward compatibility these bindings are not
  53. # included by default. They can be enabled with enable-additional-serialization-bindings=on.
  54. # They are enabled by default if akka.remote.artery.enabled=on or if
  55. # akka.actor.allow-java-serialization=off.
  56. additional-serialization-bindings {
  57. "akka.actor.Identify" = akka-misc
  58. "akka.actor.ActorIdentity" = akka-misc
  59. "scala.Some" = akka-misc
  60. "scala.None$" = akka-misc
  61. "akka.actor.Status$Success" = akka-misc
  62. "akka.actor.Status$Failure" = akka-misc
  63. "akka.actor.ActorRef" = akka-misc
  64. "akka.actor.PoisonPill$" = akka-misc
  65. "akka.actor.Kill$" = akka-misc
  66. "akka.remote.RemoteWatcher$Heartbeat$" = akka-misc
  67. "akka.remote.RemoteWatcher$HeartbeatRsp" = akka-misc
  68. "akka.actor.ActorInitializationException" = akka-misc
  69.  
  70. "akka.dispatch.sysmsg.SystemMessage" = akka-system-msg
  71.  
  72. "java.lang.String" = primitive-string
  73. "akka.util.ByteString$ByteString1C" = primitive-bytestring
  74. "akka.util.ByteString$ByteString1" = primitive-bytestring
  75. "akka.util.ByteString$ByteStrings" = primitive-bytestring
  76. "java.lang.Long" = primitive-long
  77. "scala.Long" = primitive-long
  78. "java.lang.Integer" = primitive-int
  79. "scala.Int" = primitive-int
  80.  
  81. # Java Serializer is by default used for exceptions.
  82. # It's recommended that you implement custom serializer for exceptions that are
  83. # sent remotely, e.g. in akka.actor.Status.Failure for ask replies. You can add
  84. # binding to akka-misc (MiscMessageSerializerSpec) for the exceptions that have
  85. # a constructor with single message String or constructor with message String as
  86. # first parameter and cause Throwable as second parameter. Note that it's not
  87. # safe to add this binding for general exceptions such as IllegalArgumentException
  88. # because it may have a subclass without the required constructor.
  89. "java.lang.Throwable" = java
  90. "akka.actor.IllegalActorStateException" = akka-misc
  91. "akka.actor.ActorKilledException" = akka-misc
  92. "akka.actor.InvalidActorNameException" = akka-misc
  93. "akka.actor.InvalidMessageException" = akka-misc
  94. }
  95.  
  96. serialization-identifiers {
  97. "akka.remote.serialization.ProtobufSerializer" = 2
  98. "akka.remote.serialization.DaemonMsgCreateSerializer" = 3
  99. "akka.remote.serialization.MessageContainerSerializer" = 6
  100. "akka.remote.serialization.MiscMessageSerializer" = 16
  101. "akka.remote.serialization.ArteryMessageSerializer" = 17
  102. "akka.remote.serialization.LongSerializer" = 18
  103. "akka.remote.serialization.IntSerializer" = 19
  104. "akka.remote.serialization.StringSerializer" = 20
  105. "akka.remote.serialization.ByteStringSerializer" = 21
  106. "akka.remote.serialization.SystemMessageSerializer" = 22
  107. }
  108.  
  109. deployment {
  110.  
  111. default {
  112.  
  113. # if this is set to a valid remote address, the named actor will be
  114. # deployed at that node e.g. "akka.tcp://sys@host:port"
  115. remote = ""
  116.  
  117. target {
  118.  
  119. # A list of hostnames and ports for instantiating the children of a
  120. # router
  121. # The format should be "akka.tcp://sys@host:port", where:
  122. # - sys is the remote actor system name
  123. # - hostname can be either hostname or IP address the remote actor
  124. # should connect to
  125. # - port should be the port for the remote server on the other node
  126. # The number of actor instances to be spawned is still taken from the
  127. # nr-of-instances setting as for local routers; the instances will be
  128. # distributed round-robin among the given nodes.
  129. nodes = []
  130.  
  131. }
  132. }
  133. }
  134. }
  135.  
  136. remote {
  137. ### Settings shared by classic remoting and Artery (the new implementation of remoting)
  138.  
  139. # If set to a nonempty string remoting will use the given dispatcher for
  140. # its internal actors otherwise the default dispatcher is used. Please note
  141. # that since remoting can load arbitrary 3rd party drivers (see
  142. # "enabled-transport" and "adapters" entries) it is not guaranteed that
  143. # every module will respect this setting.
  144. use-dispatcher = "akka.remote.default-remote-dispatcher"
  145.  
  146. # Settings for the failure detector to monitor connections.
  147. # For TCP it is not important to have fast failure detection, since
  148. # most connection failures are captured by TCP itself.
  149. # The default DeadlineFailureDetector will trigger if there are no heartbeats within
  150. # the duration heartbeat-interval + acceptable-heartbeat-pause, i.e. 124 seconds
  151. # with the default settings.
  152. transport-failure-detector {
  153.  
  154. # FQCN of the failure detector implementation.
  155. # It must implement akka.remote.FailureDetector and have
  156. # a public constructor with a com.typesafe.config.Config and
  157. # akka.actor.EventStream parameter.
  158. implementation-class = "akka.remote.DeadlineFailureDetector"
  159.  
  160. # How often keep-alive heartbeat messages should be sent to each connection.
  161. heartbeat-interval = 4 s
  162.  
  163. # Number of potentially lost/delayed heartbeats that will be
  164. # accepted before considering it to be an anomaly.
  165. # A margin to the `heartbeat-interval` is important to be able to survive sudden,
  166. # occasional, pauses in heartbeat arrivals, due to for example garbage collect or
  167. # network drop.
  168. acceptable-heartbeat-pause = 120 s
  169. }
  170.  
  171. # Settings for the Phi accrual failure detector (http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf
  172. # [Hayashibara et al]) used for remote death watch.
  173. # The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
  174. # the duration heartbeat-interval + acceptable-heartbeat-pause + threshold_adjustment,
  175. # i.e. around 12.5 seconds with default settings.
  176. watch-failure-detector {
  177.  
  178. # FQCN of the failure detector implementation.
  179. # It must implement akka.remote.FailureDetector and have
  180. # a public constructor with a com.typesafe.config.Config and
  181. # akka.actor.EventStream parameter.
  182. implementation-class = "akka.remote.PhiAccrualFailureDetector"
  183.  
  184. # How often keep-alive heartbeat messages should be sent to each connection.
  185. heartbeat-interval = 1 s
  186.  
  187. # Defines the failure detector threshold.
  188. # A low threshold is prone to generate many wrong suspicions but ensures
  189. # a quick detection in the event of a real crash. Conversely, a high
  190. # threshold generates fewer mistakes but needs more time to detect
  191. # actual crashes.
  192. threshold = 10.0
  193.  
  194. # Number of the samples of inter-heartbeat arrival times to adaptively
  195. # calculate the failure timeout for connections.
  196. max-sample-size = 200
  197.  
  198. # Minimum standard deviation to use for the normal distribution in
  199. # AccrualFailureDetector. Too low standard deviation might result in
  200. # too much sensitivity for sudden, but normal, deviations in heartbeat
  201. # inter arrival times.
  202. min-std-deviation = 100 ms
  203.  
  204. # Number of potentially lost/delayed heartbeats that will be
  205. # accepted before considering it to be an anomaly.
  206. # This margin is important to be able to survive sudden, occasional,
  207. # pauses in heartbeat arrivals, due to for example garbage collect or
  208. # network drop.
  209. acceptable-heartbeat-pause = 10 s
  210.  
  211.  
  212. # How often to check for nodes marked as unreachable by the failure
  213. # detector
  214. unreachable-nodes-reaper-interval = 1s
  215.  
  216. # After the heartbeat request has been sent the first failure detection
  217. # will start after this period, even though no heartbeat message has
  218. # been received.
  219. expected-response-after = 1 s
  220.  
  221. }
  222. # remote deployment configuration section
  223. deployment {
  224. # If true, will only allow specific classes to be instantiated on this system via remote deployment
  225. enable-whitelist = off
  226. whitelist = []
  227. }
  228.  
  229. ### Configuration for Artery, the reimplementation of remoting
  230. artery {
  231.  
  232. # Enable the new remoting with this flag
  233. enabled = off
  234.  
  235. # Canonical address is the address other clients should connect to.
  236. # Artery transport will expect messages to this address.
  237. canonical {
  238.  
  239. # The default remote server port clients should connect to.
  240. # Default is 25520, use 0 if you want a random available port
  241. # This port needs to be unique for each actor system on the same machine.
  242. port = 25520
  243.  
  244. # Hostname clients should connect to. Can be set to an ip, hostname
  245. # or one of the following special values:
  246. # "<getHostAddress>" InetAddress.getLocalHost.getHostAddress
  247. # "<getHostName>" InetAddress.getLocalHost.getHostName
  248. #
  249. hostname = "<getHostAddress>"
  250. }
  251.  
  252. # Use these settings to bind a network interface to a different address
  253. # than artery expects messages at. This may be used when running Akka
  254. # nodes in separate networks (behind NATs or in containers). If canonical
  255. # and bind addresses are different, then network configuration that relays
  256. # communications from canonical to bind addresses is expected.
  257. bind {
  258.  
  259. # Port to bind a network interface to. Can be set to a port number
  260. # or one of the following special values:
  261. # 0 random available port
  262. # "" akka.remote.artery.canonical.port
  263. #
  264. port = ""
  265.  
  266. # Hostname to bind a network interface to. Can be set to an ip, hostname
  267. # or one of the following special values:
  268. # "0.0.0.0" all interfaces
  269. # "" akka.remote.artery.canonical.hostname
  270. # "<getHostAddress>" InetAddress.getLocalHost.getHostAddress
  271. # "<getHostName>" InetAddress.getLocalHost.getHostName
  272. #
  273. hostname = ""
  274. }
  275.  
  276. # Actor paths to use the large message stream for when a message
  277. # is sent to them over remoting. The large message stream is dedicated
  278. # and separate from "normal" and system messages so that sending a
  279. # large message does not interfere with them.
  280. # Entries should be the full path to the actor. Wildcards in the form of "*"
  281. # can be supplied at any place and matches any name at that segment -
  282. # "/user/supervisor/actor/*" will match any direct child to actor,
  283. # while "/supervisor/*/child" will match any grandchild to "supervisor" that
  284. # has the name "child"
  285. # Messages sent to ActorSelections will not be passed through the large message
  286. # stream; to pass such messages through the large message stream the selections
  287. # must be resolved to ActorRefs first.
  288. large-message-destinations = []
  289.  
  290. # Enable untrusted mode, which discards inbound system messages, PossiblyHarmful and
  291. # ActorSelection messages. E.g. remote watch and remote deployment will not work.
  292. # ActorSelection messages can be enabled for specific paths with the trusted-selection-paths
  293. untrusted-mode = off
  294.  
  295. # When 'untrusted-mode=on' inbound actor selections are by default discarded.
  296. # Actors with paths defined in this white list are granted permission to receive actor
  297. # selections messages.
  298. # E.g. trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
  299. trusted-selection-paths = []
  300.  
  301. # If this is "on", all inbound remote messages will be logged at DEBUG level,
  302. # if off then they are not logged
  303. log-received-messages = off
  304.  
  305. # If this is "on", all outbound remote messages will be logged at DEBUG level,
  306. # if off then they are not logged
  307. log-sent-messages = off
  308.  
  309. advanced {
  310.  
  311. # Maximum serialized message size, including header data.
  312. maximum-frame-size = 256 KiB
  313.  
  314. # Direct byte buffers are reused in a pool with this maximum size.
  315. # Each buffer has the size of 'maximum-frame-size'.
  316. # This is not a hard upper limit on number of created buffers. Additional
  317. # buffers will be created if needed, e.g. when using many outbound
  318. # associations at the same time. Such additional buffers will be garbage
  319. # collected, which is not as efficient as reusing buffers in the pool.
  320. buffer-pool-size = 128
  321.  
  322. # Maximum serialized message size for the large messages, including header data.
  323. # See 'large-message-destinations'.
  324. maximum-large-frame-size = 2 MiB
  325.  
  326. # Direct byte buffers for the large messages are reused in a pool with this maximum size.
  327. # Each buffer has the size of 'maximum-large-frame-size'.
  328. # See 'large-message-destinations'.
  329. # This is not a hard upper limit on number of created buffers. Additional
  330. # buffers will be created if needed, e.g. when using many outbound
  331. # associations at the same time. Such additional buffers will be garbage
  332. # collected, which is not as efficient as reusing buffers in the pool.
  333. large-buffer-pool-size = 32
  334.  
  335. # For enabling testing features, such as blackhole in akka-remote-testkit.
  336. test-mode = off
  337.  
  338. # Settings for the materializer that is used for the remote streams.
  339. materializer = ${akka.stream.materializer}
  340.  
  341. # If set to a nonempty string artery will use the given dispatcher for
  342. # the ordinary and large message streams, otherwise the default dispatcher is used.
  343. use-dispatcher = "akka.remote.default-remote-dispatcher"
  344.  
  345. # If set to a nonempty string remoting will use the given dispatcher for
  346. # the control stream, otherwise the default dispatcher is used.
  347. # It can be good to not use the same dispatcher for the control stream as
  348. # the dispatcher for the ordinary message stream so that heartbeat messages
  349. # are not disturbed.
  350. use-control-stream-dispatcher = ""
  351.  
  352. # Controls whether to start the Aeron media driver in the same JVM or use external
  353. # process. Set to 'off' when using external media driver, and then also set the
  354. # 'aeron-dir'.
  355. embedded-media-driver = on
  356.  
  357. # Directory used by the Aeron media driver. It's mandatory to define the 'aeron-dir'
  358. # if using external media driver, i.e. when 'embedded-media-driver = off'.
  359. # Embedded media driver will use this directory, or a temporary directory if this
  360. # property is not defined (empty).
  361. aeron-dir = ""
  362.  
  363. # Whether to delete the aeron embedded driver directory upon driver stop.
  364. delete-aeron-dir = yes
  365.  
  366. # Level of CPU time used, on a scale between 1 and 10, during backoff/idle.
  367. # The tradeoff is that to have low latency more CPU time must be used to be
  368. # able to react quickly on incoming messages or send as fast as possible after
  369. # backoff backpressure.
  370. # Level 1 strongly prefers low CPU consumption over low latency.
  371. # Level 10 strongly prefers low latency over low CPU consumption.
  372. idle-cpu-level = 5
  373.  
  374. # WARNING: This feature is not supported yet. Don't use a value other than 1.
  375. # It requires more hardening and performance optimizations.
  376. # Number of outbound lanes for each outbound association. A value greater than 1
  377. # means that serialization can be performed in parallel for different destination
  378. # actors. The selection of lane is based on consistent hashing of the recipient
  379. # ActorRef to preserve message ordering per receiver.
  380. outbound-lanes = 1
  381.  
  382. # WARNING: This feature is not supported yet. Don't use a value other than 1.
  383. # It requires more hardening and performance optimizations.
  384. # Total number of inbound lanes, shared among all inbound associations. A value
  385. # greater than 1 means that deserialization can be performed in parallel for
  386. # different destination actors. The selection of lane is based on consistent
  387. # hashing of the recipient ActorRef to preserve message ordering per receiver.
  388. inbound-lanes = 1
  389.  
  390. # Size of the send queue for outgoing messages. Messages will be dropped if
  391. # the queue becomes full. This may happen if you send a burst of many messages
  392. # without end-to-end flow control. Note that there is one such queue per
  393. # outbound association. The trade-off of using a larger queue size is that
  394. # it consumes more memory, since the queue is based on preallocated array with
  395. # fixed size.
  396. outbound-message-queue-size = 3072
  397.  
  398. # Size of the send queue for outgoing control messages, such as system messages.
  399. # If this limit is reached the remote system is declared to be dead and its UID
  400. # marked as quarantined.
  401. # The trade-off of using a larger queue size is that it consumes more memory,
  402. # since the queue is based on preallocated array with fixed size.
  403. outbound-control-queue-size = 3072
  404.  
  405. # Size of the send queue for outgoing large messages. Messages will be dropped if
  406. # the queue becomes full. This may happen if you send a burst of many messages
  407. # without end-to-end flow control. Note that there is one such queue per
  408. # outbound association. The trade-off of using a larger queue size is that
  409. # it consumes more memory, since the queue is based on preallocated array with
  410. # fixed size.
  411. outbound-large-message-queue-size = 256
  412.  
  413. # This setting defines the maximum number of unacknowledged system messages
  414. # allowed for a remote system. If this limit is reached the remote system is
  415. # declared to be dead and its UID marked as quarantined.
  416. system-message-buffer-size = 20000
  417.  
  418. # unacknowledged system messages are re-delivered with this interval
  419. system-message-resend-interval = 1 second
  420.  
  421. # The timeout for outbound associations to perform the handshake.
  422. # This timeout must be greater than the 'image-liveness-timeout'.
  423. handshake-timeout = 20 s
  424.  
  425. # incomplete handshake attempt is retried with this interval
  426. handshake-retry-interval = 1 second
  427.  
  428. # handshake requests are performed periodically with this interval,
  429. # also after the handshake has been completed to be able to establish
  430. # a new session with a restarted destination system
  431. inject-handshake-interval = 1 second
  432.  
  433. # messages that are not accepted by Aeron are dropped after retrying for this period
  434. give-up-message-after = 60 seconds
  435.  
  436. # System messages that are not acknowledged after re-sending for this period are
  437. # dropped and will trigger quarantine. The value should be longer than the length
  438. # of a network partition that you need to survive.
  439. give-up-system-message-after = 6 hours
  440.  
  441. # during ActorSystem termination the remoting will wait this long for
  442. # an acknowledgment by the destination system that flushing of outstanding
  443. # remote messages has been completed
  444. shutdown-flush-timeout = 1 second
  445.  
  446. # See 'inbound-max-restarts'
  447. inbound-restart-timeout = 5 seconds
  448.  
  449. # Max number of restarts within 'inbound-restart-timeout' for the inbound streams.
  450. # If more restarts occur the ActorSystem will be terminated.
  451. inbound-max-restarts = 5
  452.  
  453. # See 'outbound-max-restarts'
  454. outbound-restart-timeout = 5 seconds
  455.  
  456. # Max number of restarts within 'outbound-restart-timeout' for the outbound streams.
  457. # If more restarts occur the ActorSystem will be terminated.
  458. outbound-max-restarts = 5
  459.  
  460. # Stop outbound stream of a quarantined association after this idle timeout, i.e.
  461. # when not used any more.
  462. stop-quarantined-after-idle = 3 seconds
  463.  
  464. # Timeout after which the aeron driver considers a client dead if it has not
  465. # received keepalive messages from that client.
  466. client-liveness-timeout = 20 seconds
  467.  
  468. # Timeout for each of the INACTIVE and LINGER stages an aeron image
  469. # will be retained for when it is no longer referenced.
  470. # This timeout must be less than the 'handshake-timeout'.
  471. image-liveness-timeout = 10 seconds
  472.  
  473. # Timeout after which the aeron driver is considered dead
  474. # if it does not update its C'n'C timestamp.
  475. driver-timeout = 20 seconds
  476.  
  477. flight-recorder {
  478. // FIXME it should be enabled by default when we have a good solution for naming the files
  479. enabled = off
  480. # Controls where the flight recorder file will be written. There are three options:
  481. # 1. Empty: a file will be generated in the temporary directory of the OS
  482. # 2. A relative or absolute path ending with ".afr": this file will be used
  483. # 3. A relative or absolute path: this directory will be used, the file will get a random file name
  484. destination = ""
  485. }
  486.  
  487. # compression of common strings in remoting messages, like actor destinations, serializers etc
  488. compression {
  489.  
  490. actor-refs {
  491. # Max number of compressed actor-refs
  492. # Note that compression tables are "rolling" (i.e. a new table replaces the old
  493. # compression table once in a while), and this setting is only about the total number
  494. # of compressions within a single such table.
  495. # Must be a positive natural number.
  496. max = 256
  497.  
  498. # interval between new table compression advertisements.
  499. # this means the time during which we collect heavy-hitter data and then turn it into a compression table.
  500. advertisement-interval = 1 minute
  501. }
  502. manifests {
  503. # Max number of compressed manifests
  504. # Note that compression tables are "rolling" (i.e. a new table replaces the old
  505. # compression table once in a while), and this setting is only about the total number
  506. # of compressions within a single such table.
  507. # Must be a positive natural number.
  508. max = 256
  509.  
  510. # interval between new table compression advertisements.
  511. # this means the time during which we collect heavy-hitter data and then turn it into a compression table.
  512. advertisement-interval = 1 minute
  513. }
  514. }
  515.  
  516. # List of fully qualified class names of remote instruments which should
  517. # be initialized and used for monitoring of remote messages.
  518. # The class must extend akka.remote.artery.RemoteInstrument and
  519. # have a public constructor with empty parameters or one ExtendedActorSystem
  520. # parameter.
  521. # A new instance of RemoteInstrument will be created for each encoder and decoder.
  522. # It's only called from the stage, so if it doesn't delegate to any shared instance
  523. # it doesn't have to be thread-safe.
  524. # Refer to `akka.remote.artery.RemoteInstrument` for more information.
  525. instruments = ${?akka.remote.artery.advanced.instruments} []
  526. }
  527. }
  528. }
  529.  
  530. }
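 
Correspondingly, enabling the new Artery transport and giving it separate canonical and bind addresses (for example behind a NAT or inside a container) could be sketched in application.conf as follows; the host and port values are placeholders:

  1. akka.remote.artery {
  2.   # Switch on the new remoting implementation
  3.   enabled = on
  4.   # Address other actor systems use to reach this node
  5.   canonical.hostname = "203.0.113.10"
  6.   canonical.port = 25520
  7.   # Interface and port actually bound on this node
  8.   bind.hostname = "0.0.0.0"
  9.   bind.port = 25520
  10. }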

§akka-testkit

  1. ######################################
  2. # Akka Testkit Reference Config File #
  3. ######################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. akka {
  9. test {
  10. # factor by which to scale timeouts during tests, e.g. to account for shared
  11. # build system load
  12. timefactor = 1.0
  13.  
  14. # duration of EventFilter.intercept waits after the block is finished until
  15. # all required messages are received
  16. filter-leeway = 3s
  17.  
  18. # duration to wait in expectMsg and friends outside of within() block
  19. # by default
  20. single-expect-default = 3s
  21.  
  22. # The timeout that is added as an implicit by DefaultTimeout trait
  23. default-timeout = 5s
  24.  
  25. calling-thread-dispatcher {
  26. type = akka.testkit.CallingThreadDispatcherConfigurator
  27. }
  28. }
  29. actor.serialization-bindings {
  30. "akka.testkit.JavaSerializable" = java
  31. }
  32. }
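 
When the build machine is heavily loaded it is common to scale the testkit timeouts above rather than touching individual tests; a minimal sketch of such an override (the factor and durations here are arbitrary example values) is:

  1. akka.test {
  2.   # Scale every testkit timeout by this factor
  3.   timefactor = 3.0
  4.   # Give expectMsg and EventFilter.intercept a little more slack
  5.   single-expect-default = 5s
  6.   filter-leeway = 5s
  7. }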

§akka-cluster-metrics

  1. ##############################################
  2. # Akka Cluster Metrics Reference Config File #
  3. ##############################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits in your application.conf in order to override these settings.
  7.  
  8. # Sigar provisioning:
  9. #
  10. # User can provision sigar classes and native library in one of the following ways:
  11. #
  12. # 1) Use https://github.com/kamon-io/sigar-loader Kamon sigar-loader as a project dependency for the user project.
  13. # Metrics extension will extract and load sigar library on demand with help of Kamon sigar provisioner.
  14. #
  15. # 2) Use https://github.com/kamon-io/sigar-loader Kamon sigar-loader as java agent: `java -javaagent:/path/to/sigar-loader.jar`
  16. # Kamon sigar loader agent will extract and load sigar library during JVM start.
  17. #
  18. # 3) Place `sigar.jar` on the `classpath` and sigar native library for the o/s on the `java.library.path`
  19. # User is required to manage both project dependency and library deployment manually.
  20.  
  21. # Cluster metrics extension.
  22. # Provides periodic statistics collection and publication throughout the cluster.
  23. akka.cluster.metrics {
  24. # Full path of dispatcher configuration key.
  25. # Use "" for default key `akka.actor.default-dispatcher`.
  26. dispatcher = ""
  27. # How long should any actor wait before starting the periodic tasks.
  28. periodic-tasks-initial-delay = 1s
  29. # Sigar native library extract location.
  30. # Use per-application-instance scoped location, such as program working directory.
  31. native-library-extract-folder = ${user.dir}"/native"
  32. # Metrics supervisor actor.
  33. supervisor {
  34. # Actor name. Example name space: /system/cluster-metrics
  35. name = "cluster-metrics"
  36. # Supervision strategy.
  37. strategy {
  38. #
  39. # FQCN of class providing `akka.actor.SupervisorStrategy`.
  40. # Must have a constructor with signature `<init>(com.typesafe.config.Config)`.
  41. # Default metrics strategy provider is a configurable extension of `OneForOneStrategy`.
  42. provider = "akka.cluster.metrics.ClusterMetricsStrategy"
  43. #
  44. # Configuration of the default strategy provider.
  45. # Replace with custom settings when overriding the provider.
  46. configuration = {
  47. # Log restart attempts.
  48. loggingEnabled = true
  49. # Child actor restart-on-failure window.
  50. withinTimeRange = 3s
  51. # Maximum number of restart attempts before child actor is stopped.
  52. maxNrOfRetries = 3
  53. }
  54. }
  55. }
  56. # Metrics collector actor.
  57. collector {
  58. # Enable or disable metrics collector for load-balancing nodes.
  59. # Metrics collection can also be controlled at runtime by sending control messages
  60. # to /system/cluster-metrics actor: `akka.cluster.metrics.{CollectionStartMessage,CollectionStopMessage}`
  61. enabled = on
  62. # FQCN of the metrics collector implementation.
  63. # It must implement `akka.cluster.metrics.MetricsCollector` and
  64. # have public constructor with akka.actor.ActorSystem parameter.
  65. # Will try to load in the following order of priority:
  66. # 1) configured custom collector 2) internal `SigarMetricsCollector` 3) internal `JmxMetricsCollector`
  67. provider = ""
  68. # Try all 3 available collector providers, or else fail on the configured custom collector provider.
  69. fallback = true
  70. # How often metrics are sampled on a node.
  71. # Shorter interval will collect the metrics more often.
  72. # Also controls frequency of the metrics publication to the node system event bus.
  73. sample-interval = 3s
  74. # How often a node publishes metrics information to the other nodes in the cluster.
  75. # Shorter interval will publish the metrics gossip more often.
  76. gossip-interval = 3s
  77. # How quickly the exponential weighting of past data is decayed compared to
  78. # new data. Set lower to increase the bias toward newer values.
  79. # The relevance of each data sample is halved for every passing half-life
  80. # duration, i.e. after 4 times the half-life, a data sample’s relevance is
  81. # reduced to 6% of its original relevance. The initial relevance of a data
  82. # sample is given by 1 – 0.5 ^ (collect-interval / half-life).
  83. # See http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average
  84. moving-average-half-life = 12s
  85. }
  86. }
  87.  
  88. # Cluster metrics extension serializers and routers.
  89. akka.actor {
  90. # Protobuf serializer for remote cluster metrics messages.
  91. serializers {
  92. akka-cluster-metrics = "akka.cluster.metrics.protobuf.MessageSerializer"
  93. }
  94. # Interface binding for remote cluster metrics messages.
  95. serialization-bindings {
  96. "akka.cluster.metrics.ClusterMetricsMessage" = akka-cluster-metrics
  97. }
  98. # Globally unique metrics extension serializer identifier.
  99. serialization-identifiers {
  100. "akka.cluster.metrics.protobuf.MessageSerializer" = 10
  101. }
  102. # Provide routing of messages based on cluster metrics.
  103. router.type-mapping {
  104. cluster-metrics-adaptive-pool = "akka.cluster.metrics.AdaptiveLoadBalancingPool"
  105. cluster-metrics-adaptive-group = "akka.cluster.metrics.AdaptiveLoadBalancingGroup"
  106. }
  107. }
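 
If the defaults above do not fit, the collector can be tuned from application.conf; the intervals and extract folder below are only illustrative values:

  1. akka.cluster.metrics {
  2.   # Per-application-instance location for the extracted Sigar library
  3.   native-library-extract-folder = ${user.dir}"/target/native"
  4.   collector {
  5.     # Sample and gossip metrics more often than the 3s defaults
  6.     sample-interval = 1s
  7.     gossip-interval = 1s
  8.   }
  9. }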

§akka-cluster-tools

  1. ############################################
  2. # Akka Cluster Tools Reference Config File #
  3. ############################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8. # //#pub-sub-ext-config
  9. # Settings for the DistributedPubSub extension
  10. akka.cluster.pub-sub {
  11. # Actor name of the mediator actor, /system/distributedPubSubMediator
  12. name = distributedPubSubMediator
  13.  
  14. # Start the mediator on members tagged with this role.
  15. # All members are used if undefined or empty.
  16. role = ""
  17.  
  18. # The routing logic to use for 'Send'
  19. # Possible values: random, round-robin, broadcast
  20. routing-logic = random
  21.  
  22. # How often the DistributedPubSubMediator should send out gossip information
  23. gossip-interval = 1s
  24.  
  25. # Removed entries are pruned after this duration
  26. removed-time-to-live = 120s
  27.  
  28. # Maximum number of elements to transfer in one message when synchronizing the registries.
  29. # Next chunk will be transferred in next round of gossip.
  30. max-delta-elements = 3000
  31. # The id of the dispatcher to use for DistributedPubSubMediator actors.
  32. # If not specified default dispatcher is used.
  33. # If specified you need to define the settings of the actual dispatcher.
  34. use-dispatcher = ""
  35. }
  36. # //#pub-sub-ext-config
  37.  
  38. # Protobuf serializer for cluster DistributedPubSubMediator messages
  39. akka.actor {
  40. serializers {
  41. akka-pubsub = "akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer"
  42. }
  43. serialization-bindings {
  44. "akka.cluster.pubsub.DistributedPubSubMessage" = akka-pubsub
  45. }
  46. serialization-identifiers {
  47. "akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer" = 9
  48. }
  49. # adds the protobuf serialization of pub sub messages to groups
  50. additional-serialization-bindings {
  51. "akka.cluster.pubsub.DistributedPubSubMediator$Internal$SendToOneSubscriber" = akka-pubsub
  52. }
  53. }
  54.  
  55.  
  56. # //#receptionist-ext-config
  57. # Settings for the ClusterClientReceptionist extension
  58. akka.cluster.client.receptionist {
  59. # Actor name of the ClusterReceptionist actor, /system/receptionist
  60. name = receptionist
  61.  
  62. # Start the receptionist on members tagged with this role.
  63. # All members are used if undefined or empty.
  64. role = ""
  65.  
  66. # The receptionist will send this number of contact points to the client
  67. number-of-contacts = 3
  68.  
  69. # The actor that tunnels response messages to the client will be stopped
  70. # after this time of inactivity.
  71. response-tunnel-receive-timeout = 30s
  72. # The id of the dispatcher to use for ClusterReceptionist actors.
  73. # If not specified default dispatcher is used.
  74. # If specified you need to define the settings of the actual dispatcher.
  75. use-dispatcher = ""
  76.  
  77. # How often failure detection heartbeat messages should be received for
  78. # each ClusterClient
  79. heartbeat-interval = 2s
  80.  
  81. # Number of potentially lost/delayed heartbeats that will be
  82. # accepted before considering it to be an anomaly.
  83. # The ClusterReceptionist is using the akka.remote.DeadlineFailureDetector, which
  84. # will trigger if there are no heartbeats within the duration
  85. # heartbeat-interval + acceptable-heartbeat-pause, i.e. 15 seconds with
  86. # the default settings.
  87. acceptable-heartbeat-pause = 13s
  88.  
  89. # Failure detection checking interval for checking all ClusterClients
  90. failure-detection-interval = 2s
  91. }
  92. # //#receptionist-ext-config
  93.  
  94. # //#cluster-client-config
  95. # Settings for the ClusterClient
  96. akka.cluster.client {
  97. # Actor paths of the ClusterReceptionist actors on the servers (cluster nodes)
  98. # that the client will try to contact initially. It is mandatory to specify
  99. # at least one initial contact.
  100. # Comma separated full actor paths defined by a string of the form
  101. # "akka.tcp://system@hostname:port/system/receptionist"
  102. initial-contacts = []
  103. # Interval at which the client retries to establish contact with one of
  104. # ClusterReceptionist on the servers (cluster nodes)
  105. establishing-get-contacts-interval = 3s
  106. # Interval at which the client will ask the ClusterReceptionist for
  107. # new contact points to be used for next reconnect.
  108. refresh-contacts-interval = 60s
  109. # How often failure detection heartbeat messages should be sent
  110. heartbeat-interval = 2s
  111. # Number of potentially lost/delayed heartbeats that will be
  112. # accepted before considering it to be an anomaly.
  113. # The ClusterClient is using the akka.remote.DeadlineFailureDetector, which
  114. # will trigger if there are no heartbeats within the duration
  115. # heartbeat-interval + acceptable-heartbeat-pause, i.e. 15 seconds with
  116. # the default settings.
  117. acceptable-heartbeat-pause = 13s
  118. # If connection to the receptionist is not established the client will buffer
  119. # this number of messages and deliver them when the connection is established.
  120. # When the buffer is full old messages will be dropped when new messages are sent
  121. # via the client. Use 0 to disable buffering, i.e. messages will be dropped
  122. # immediately if the location of the singleton is unknown.
  123. # Maximum allowed buffer size is 10000.
  124. buffer-size = 1000
  125.  
  126. # If connection to the receptionist is lost and the client has not been
  127. # able to acquire a new connection for this long the client will stop itself.
  128. # This duration makes it possible to watch the cluster client and react on a more permanent
  129. # loss of connection with the cluster, for example by accessing some kind of
  130. # service registry for an updated set of initial contacts to start a new cluster client with.
  131. # If this is not wanted it can be set to "off" to disable the timeout and retry
  132. # forever.
  133. reconnect-timeout = off
  134. }
  135. # //#cluster-client-config
  136.  
  137. # Protobuf serializer for ClusterClient messages
  138. akka.actor {
  139. serializers {
  140. akka-cluster-client = "akka.cluster.client.protobuf.ClusterClientMessageSerializer"
  141. }
  142. serialization-bindings {
  143. "akka.cluster.client.ClusterClientMessage" = akka-cluster-client
  144. }
  145. serialization-identifiers {
  146. "akka.cluster.client.protobuf.ClusterClientMessageSerializer" = 15
  147. }
  148. }
  149.  
  150. # //#singleton-config
  151. akka.cluster.singleton {
  152. # The actor name of the child singleton actor.
  153. singleton-name = "singleton"
  154. # Singleton among the nodes tagged with specified role.
  155. # If the role is not specified it's a singleton among all nodes in the cluster.
  156. role = ""
  157. # When a node is becoming oldest it sends hand-over request to previous oldest,
  158. # that might be leaving the cluster. This is retried with this interval until
  159. # the previous oldest confirms that the hand over has started or the previous
  160. # oldest member is removed from the cluster (+ akka.cluster.down-removal-margin).
  161. hand-over-retry-interval = 1s
  162. # The number of retries are derived from hand-over-retry-interval and
  163. # akka.cluster.down-removal-margin (or ClusterSingletonManagerSettings.removalMargin),
  164. # but it will never be less than this property.
  165. min-number-of-hand-over-retries = 10
  166. }
  167. # //#singleton-config
  168.  
  169. # //#singleton-proxy-config
  170. akka.cluster.singleton-proxy {
  171. # The actor name of the singleton actor that is started by the ClusterSingletonManager
  172. singleton-name = ${akka.cluster.singleton.singleton-name}
  173. # The role of the cluster nodes where the singleton can be deployed.
  174. # If the role is not specified then any node will do.
  175. role = ""
  176. # Interval at which the proxy will try to resolve the singleton instance.
  177. singleton-identification-interval = 1s
  178. # If the location of the singleton is unknown the proxy will buffer this
  179. # number of messages and deliver them when the singleton is identified.
  180. # When the buffer is full old messages will be dropped when new messages are
  181. # sent via the proxy.
  182. # Use 0 to disable buffering, i.e. messages will be dropped immediately if
  183. # the location of the singleton is unknown.
  184. # Maximum allowed buffer size is 10000.
  185. buffer-size = 1000
  186. }
  187. # //#singleton-proxy-config
  188.  
  189. # Serializer for cluster ClusterSingleton messages
  190. akka.actor {
  191. serializers {
  192. akka-singleton = "akka.cluster.singleton.protobuf.ClusterSingletonMessageSerializer"
  193. }
  194. serialization-bindings {
  195. "akka.cluster.singleton.ClusterSingletonMessage" = akka-singleton
  196. }
  197. serialization-identifiers {
  198. "akka.cluster.singleton.protobuf.ClusterSingletonMessageSerializer" = 14
  199. }
  200. }
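 
To put the settings above together: a ClusterClient only needs the initial contact points of the receptionists, while the server side may restrict the receptionist and the singleton to a role. A sketch with made-up system name, hosts and role:

  1. # Client side: where to find the ClusterReceptionist actors
  2. akka.cluster.client.initial-contacts = [
  3.   "akka.tcp://ClusterSystem@host1:2552/system/receptionist",
  4.   "akka.tcp://ClusterSystem@host2:2552/system/receptionist"
  5. ]
  6.  
  7. # Server side: run the receptionist and the singleton only on this role
  8. akka.cluster.client.receptionist.role = "backend"
  9. akka.cluster.singleton.role = "backend"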

§akka-cluster-sharding

  1. ###############################################
  2. # Akka Cluster Sharding Reference Config File #
  3. ###############################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8.  
  9. # //#sharding-ext-config
  10. # Settings for the ClusterShardingExtension
  11. akka.cluster.sharding {
  12.  
  13. # The extension creates a top level actor with this name in top level system scope,
  14. # e.g. '/system/sharding'
  15. guardian-name = sharding
  16.  
  17. # Specifies that entities run on cluster nodes with a specific role.
  18. # If the role is not specified (or empty) all nodes in the cluster are used.
  19. role = ""
  20.  
  21. # When this is set to 'on' the active entity actors will automatically be restarted
  22. # upon Shard restart, i.e. if the Shard is started on a different ShardRegion
  23. # due to rebalance or crash.
  24. remember-entities = off
  25.  
  26. # If the coordinator can't store state changes it will be stopped
  27. # and started again after this duration, with an exponential back-off
  28. # of up to 5 times this duration.
  29. coordinator-failure-backoff = 5 s
  30.  
  31. # The ShardRegion retries registration and shard location requests to the
  32. # ShardCoordinator with this interval if it does not reply.
  33. retry-interval = 2 s
  34.  
  35. # Maximum number of messages that are buffered by a ShardRegion actor.
  36. buffer-size = 100000
  37.  
  38. # Timeout of the shard rebalancing process.
  39. handoff-timeout = 60 s
  40.  
  41. # Time given to a region to acknowledge it's hosting a shard.
  42. shard-start-timeout = 10 s
  43.  
  44. # If the shard is remembering entities and can't store state changes, it
  45. # will be stopped and then started again after this duration. Any messages
  46. # sent to an affected entity may be lost in this process.
  47. shard-failure-backoff = 10 s
  48.  
  49. # If the shard is remembering entities and an entity stops itself without
  50. # using passivate, the entity will be restarted after this duration or when
  51. # the next message for it is received, whichever occurs first.
  52. entity-restart-backoff = 10 s
  53.  
  54. # Rebalance check is performed periodically with this interval.
  55. rebalance-interval = 10 s
  56.  
  57. # Absolute path to the journal plugin configuration entity that is to be
  58. # used for the internal persistence of ClusterSharding. If not defined
  59. # the default journal plugin is used. Note that this is not related to
  60. # persistence used by the entity actors.
  61. journal-plugin-id = ""
  62.  
  63. # Absolute path to the snapshot plugin configuration entity that is to be
  64. # used for the internal persistence of ClusterSharding. If not defined
  65. # the default snapshot plugin is used. Note that this is not related to
  66. # persistence used by the entity actors.
  67. snapshot-plugin-id = ""
  68.  
  69. # Parameter which determines how the coordinator will store its state.
  70. # Valid values are either "persistence" or "ddata".
  71. # The "ddata" mode is experimental, since it depends on the experimental
  72. # module akka-distributed-data-experimental.
  73. state-store-mode = "persistence"
  74.  
  75. # The shard saves persistent snapshots after this number of persistent
  76. # events. Snapshots are used to reduce recovery times.
  77. snapshot-after = 1000
  78.  
  79. # Setting for the default shard allocation strategy
  80. least-shard-allocation-strategy {
  81. # Threshold of how large the difference between most and least number of
  82. # allocated shards must be to begin the rebalancing.
  83. rebalance-threshold = 10
  84.  
  85. # The number of ongoing rebalancing processes is limited to this number.
  86. max-simultaneous-rebalance = 3
  87. }
  88.  
  89. # Timeout of waiting for the initial distributed state (an initial state will be queried again if the timeout happened)
  90. # works only for state-store-mode = "ddata"
  91. waiting-for-state-timeout = 5 s
  92.  
  93. # Timeout of waiting for updating the distributed state (the update will be retried if the timeout happened)
  94. # works only for state-store-mode = "ddata"
  95. updating-state-timeout = 5 s
  96.  
  97. # The shard uses this strategy to determine how to recover the underlying entity actors. The strategy is only used
  98. # by the persistent shard when rebalancing or restarting. The value can either be "all" or "constant". The "all"
  99. # strategy starts all the underlying entity actors at the same time. The constant strategy will start the underlying
  100. # entity actors at a fixed rate. The default strategy is "all".
  101. entity-recovery-strategy = "all"
  102.  
  103. # Default settings for the constant rate entity recovery strategy
  104. entity-recovery-constant-rate-strategy {
  105. # Sets the frequency at which a batch of entity actors is started.
  106. frequency = 100 ms
  107. # Sets the number of entity actors to be restarted at a particular interval
  108. number-of-entities = 5
  109. }
  110.  
  111. # Settings for the coordinator singleton. Same layout as akka.cluster.singleton.
  112. # The "role" of the singleton configuration is not used. The singleton role will
  113. # be the same as "akka.cluster.sharding.role".
  114. coordinator-singleton = ${akka.cluster.singleton}
  115.  
  116. # The id of the dispatcher to use for ClusterSharding actors.
  117. # If not specified, the default dispatcher is used.
  118. # If specified, you need to define the settings of the actual dispatcher.
  119. # The dispatcher for the entity actors is defined by the user-provided
  120. # Props, i.e. this dispatcher is not used for the entity actors.
  121. use-dispatcher = ""
  122. }
  123. # //#sharding-ext-config
  124.  
  125.  
  126. # Protobuf serializer for Cluster Sharding messages
  127. akka.actor {
  128. serializers {
  129. akka-sharding = "akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer"
  130. }
  131. serialization-bindings {
  132. "akka.cluster.sharding.ClusterShardingSerializable" = akka-sharding
  133. }
  134. serialization-identifiers {
  135. "akka.cluster.sharding.protobuf.ClusterShardingMessageSerializer" = 13
  136. }
  137. }
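
These defaults are meant to be overridden in your application.conf, but they can also be layered programmatically when constructing the ActorSystem. The following Scala sketch is illustrative only: the system name, the chosen values and the object name are assumptions for the example, and a real cluster application would additionally configure akka.actor.provider and remoting, which are omitted here.

  1. import akka.actor.ActorSystem
  2. import com.typesafe.config.ConfigFactory
  3.  
  4. object ShardingConfigExample extends App {
  5.   // Illustrative overrides of a few akka.cluster.sharding defaults listed above.
  6.   val overrides = ConfigFactory.parseString("""
  7.     akka.cluster.sharding {
  8.       snapshot-after = 500
  9.       least-shard-allocation-strategy {
  10.         rebalance-threshold = 5
  11.         max-simultaneous-rebalance = 2
  12.       }
  13.       entity-recovery-strategy = "constant"
  14.       entity-recovery-constant-rate-strategy {
  15.         frequency = 500 ms
  16.         number-of-entities = 10
  17.       }
  18.     }
  19.   """)
  20.  
  21.   // Everything not overridden falls back to application.conf and the
  22.   // reference.conf defaults above.
  23.   val config = overrides.withFallback(ConfigFactory.load())
  24.   val system = ActorSystem("ClusterSystem", config)
  25. }

The same overrides can of course simply be written in application.conf, which is the usual approach for the settings documented on this page.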

§akka-distributed-data

  1. ##############################################
  2. # Akka Distributed Data Reference Config File #
  3. ##############################################
  4.  
  5. # This is the reference config file that contains all the default settings.
  6. # Make your edits/overrides in your application.conf.
  7.  
  8.  
  9. #//#distributed-data
  10. # Settings for the DistributedData extension
  11. akka.cluster.distributed-data {
  12. # Actor name of the Replicator actor, /system/ddataReplicator
  13. name = ddataReplicator
  14.  
  15. # Replicas are running on members tagged with this role.
  16. # All members are used if undefined or empty.
  17. role = ""
  18.  
  19. # How often the Replicator should send out gossip information
  20. gossip-interval = 2 s
  21.  
  22. # How often the subscribers will be notified of changes, if any
  23. notify-subscribers-interval = 500 ms
  24.  
  25. # Maximum number of entries to transfer in one gossip message when synchronizing
  26. # the replicas. Next chunk will be transferred in next round of gossip.
  27. max-delta-elements = 1000
  28. # The id of the dispatcher to use for Replicator actors. If not specified
  29. # default dispatcher is used.
  30. # If specified you need to define the settings of the actual dispatcher.
  31. use-dispatcher = ""
  32.  
  33. # How often the Replicator checks for pruning of data associated with
  34. # removed cluster nodes.
  35. pruning-interval = 30 s
  36. # How long it takes (worst case) to spread the data to all other replica nodes.
  37. # This is used when initiating and completing the pruning process of data associated
  38. # with removed cluster nodes. The time measurement is stopped when any replica is
  39. # unreachable, so it should be configured to worst case in a healthy cluster.
  40. max-pruning-dissemination = 60 s
  41. # Serialized Write and Read messages are cached when they are sent to
  42. # several nodes. If no further activity they are removed from the cache
  43. # after this duration.
  44. serializer-cache-time-to-live = 10s
  45. durable {
  46. # List of keys that are durable. Prefix matching is supported by using * at the
  47. # end of a key.
  48. keys = []
  49. # Fully qualified class name of the durable store actor. It must be a subclass
  50. # of akka.actor.Actor and handle the protocol defined in
  51. # akka.cluster.ddata.DurableStore. The class must have a constructor with
  52. # a com.typesafe.config.Config parameter.
  53. store-actor-class = akka.cluster.ddata.LmdbDurableStore
  54. use-dispatcher = akka.cluster.distributed-data.durable.pinned-store
  55. pinned-store {
  56. executor = thread-pool-executor
  57. type = PinnedDispatcher
  58. }
  59. # Config for the LmdbDurableStore
  60. lmdb {
  61. # Directory of LMDB file. There are two options:
  62. # 1. A relative or absolute path to a directory that ends with 'ddata';
  63. # the full name of the directory will contain the name of the ActorSystem
  64. # and its remote port.
  65. # 2. Otherwise the path is used as is, as a relative or absolute path to
  66. # a directory.
  67. dir = "ddata"
  68. # Size in bytes of the memory mapped file.
  69. map-size = 100 MiB
  70. # Accumulating changes before storing them improves performance, with the
  71. # risk of losing the last writes if the JVM crashes.
  72. # The interval is by default set to 'off' to write each update immediately.
  73. # Enabling write behind by specifying a duration, e.g. 200ms, is especially
  74. # efficient when performing many writes to the same key, because it is only
  75. # the last value for each key that will be serialized and stored.
  76. # write-behind-interval = 200 ms
  77. write-behind-interval = off
  78. }
  79. }
  80. }
  81. #//#distributed-data
  82.  
  83. # Protobuf serializer for cluster DistributedData messages
  84. akka.actor {
  85. serializers {
  86. akka-data-replication = "akka.cluster.ddata.protobuf.ReplicatorMessageSerializer"
  87. akka-replicated-data = "akka.cluster.ddata.protobuf.ReplicatedDataSerializer"
  88. }
  89. serialization-bindings {
  90. "akka.cluster.ddata.Replicator$ReplicatorMessage" = akka-data-replication
  91. "akka.cluster.ddata.ReplicatedDataSerialization" = akka-replicated-data
  92. }
  93. serialization-identifiers {
  94. "akka.cluster.ddata.protobuf.ReplicatedDataSerializer" = 11
  95. "akka.cluster.ddata.protobuf.ReplicatorMessageSerializer" = 12
  96. }
  97. }
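
The Replicator reads these settings from the ActorSystem configuration, so overrides follow the same pattern as for the other modules. A minimal sketch, in which the key prefix "cart-*", the LMDB directory and the system name are chosen purely for illustration, that shortens the gossip interval and marks a set of keys as durable:

  1. import akka.actor.ActorSystem
  2. import com.typesafe.config.ConfigFactory
  3.  
  4. object DistributedDataConfigExample extends App {
  5.   // Illustrative overrides of the akka.cluster.distributed-data defaults above.
  6.   val overrides = ConfigFactory.parseString("""
  7.     akka.cluster.distributed-data {
  8.       gossip-interval = 1 s
  9.       durable {
  10.         # entries whose key starts with "cart-" are kept by the durable store
  11.         keys = ["cart-*"]
  12.         lmdb.dir = "target/ddata"
  13.       }
  14.     }
  15.   """)
  16.  
  17.   val system = ActorSystem("ClusterSystem", overrides.withFallback(ConfigFactory.load()))
  18. }

Durable keys are only needed for data that must survive a full cluster restart; leaving keys empty (the default) keeps all entries purely in memory.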