AI model provider configuration
Akka provides integration with several backend AI model providers. You are responsible for configuring the model provider for every agent you build, whether you do so through configuration settings or in code.
As discussed in the Configuring the model section of the Agent documentation, a model provider supplied through code overrides the model provider configured through application.conf settings. You can also configure multiple model providers and then load a specific one with the fromConfig method of the ModelProvider class, as sketched below.
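For example, here is a minimal sketch of an agent that selects its model in code. It assumes the Agent effect API described in the Agent documentation; the component id is illustrative, and the config path matches the reference configuration blocks shown later on this page.

import akka.javasdk.agent.Agent;
import akka.javasdk.agent.ModelProvider;
import akka.javasdk.annotations.ComponentId;

@ComponentId("assistant-agent") // illustrative component id
public class AssistantAgent extends Agent {

  public Effect<String> query(String question) {
    return effects()
        // Supplying a provider here overrides any provider configured
        // in application.conf for this agent
        .model(ModelProvider.fromConfig("akka.javasdk.agent.openai"))
        .systemMessage("You are a helpful assistant.")
        .userMessage(question)
        .thenReply();
  }
}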
This page provides a detailed list of all the configuration values available for each provider. As with all Akka configuration, the model configuration is declared in the HOCON format.
Definitions
The following are a few definitions that might not be familiar to you. Not all models support these properties, but when they do, their meaning stays the same (a sketch of setting them programmatically follows these definitions).
Temperature
A value from 0.0 to 1.0 that controls the amount of randomness in the model output, often described as how "creative" the model can get. Lower values make the model behave more precisely and deterministically; higher values let it improvise more and make it less deterministic.
top-p
The "nucleus sampling" parameter. It controls text generation by considering only the most likely tokens whose cumulative probability exceeds the threshold value, balancing diversity against quality of output: lower values (like 0.3) produce more focused, predictable text, while higher values (like 0.9) allow more creativity and variation.
top-k
Top-k sampling limits text generation to only the k most probable tokens at each step, discarding all other possibilities regardless of their probability. It provides a simpler way to control randomness: smaller k values (like 10) produce more focused outputs, while larger values (like 50) allow for more diversity.
max-tokens
If this value is supplied and the model supports the property, the model stops generating mid-flight once the token quota runs out. It's important to check how the model counts tokens, as some may count differently. Also be aware that this parameter name varies from one provider to the next (for example max-tokens, max-output-tokens, or max-new-tokens in the reference configurations below), so make sure you use the right property name.
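To make these properties concrete, the sketch below sets all four on an Anthropic provider. It assumes builder-style with* setters that mirror the configuration keys; check ModelProvider.Anthropic in the API docs for the exact method names.

import akka.javasdk.agent.ModelProvider;

public class TunedAnthropicProvider {
  // A sketch only: the setter names are assumed to mirror the config keys
  static ModelProvider tuned() {
    return ModelProvider.anthropic()
        .withApiKey(System.getenv("ANTHROPIC_API_KEY"))
        .withModelName("claude-2")  // see the vendor docs for current models
        .withTemperature(0.4)       // lower = more precise, less improvisation
        .withTopP(0.9)              // keep only tokens covering the top 90% of probability mass
        .withTopK(40)               // consider at most the 40 most probable tokens per step
        .withMaxTokens(1024);       // stop generating once 1024 tokens have been produced
  }
}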
Model configuration
The following is a list of all natively supported model configurations. If you don't see your model or model format here, you can always create your own custom configuration and still use all of the Agent-related components.
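For instance, a custom provider block can live at any configuration path, as long as it follows the same format as the reference configurations below. The path and block in this sketch are hypothetical:

import akka.javasdk.agent.ModelProvider;

public class CustomModelConfig {
  // In application.conf (hypothetical custom block, same format as the
  // reference configurations below):
  //
  //   my-app.summarizer-model {
  //     provider = "openai"
  //     api-key = ${?OPENAI_API_KEY}
  //     model-name = "gpt-4"
  //     temperature = 0.2
  //   }

  // Load the custom block in code by its configuration path
  static ModelProvider summarizerModel() {
    return ModelProvider.fromConfig("my-app.summarizer-model");
  }
}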
Anthropic
Property | Type | Description |
---|---|---|
provider | "anthropic" | Name of the provider. Must always be "anthropic" |
api-key | String | The API key. Defaults to the value of the ANTHROPIC_API_KEY environment variable |
model-name | String | The name of the model to use. See vendor documentation for a list of available models |
base-url | Url | Optional override to the base URL of the API |
temperature | Float | Model randomness. The default is not supplied, so check the model documentation for default behavior |
top-p | Float | Nucleus sampling parameter |
top-k | Integer | Top-k sampling parameter |
max-tokens | Integer | Max token quota. Leave as -1 for the model default |

See ModelProvider.Anthropic for programmatic settings.
Gemini
Property | Type | Description |
---|---|---|
provider | "googleai-gemini" | Name of the provider. Must always be "googleai-gemini" |
api-key | String | The API key. Defaults to the value of the GOOGLE_AI_GEMINI_API_KEY environment variable |
model-name | String | The name of the model to use. See vendor documentation for a list of available models |
base-url | Url | Optional override to the base URL of the API |
temperature | Float | Model randomness. The default is not supplied, so check the model documentation for default behavior |
top-p | Float | Nucleus sampling parameter |
max-output-tokens | Integer | Max token output quota. Leave as -1 for the model default |

See ModelProvider.GoogleAIGemini for programmatic settings.
Hugging Face
Property | Type | Description |
---|---|---|
provider | "hugging-face" | Name of the provider. Must always be "hugging-face" |
access-token | String | The access token for authentication with the Hugging Face API |
model-id | String | The ID of the model to use. See vendor documentation for a list of available models |
base-url | Url | Optional override to the base URL of the API |
temperature | Float | Model randomness. The default is not supplied, so check the model documentation for default behavior |
top-p | Float | Nucleus sampling parameter |
max-new-tokens | Integer | Max number of tokens to generate (-1 for the model default) |

See ModelProvider.HuggingFace for programmatic settings.
Local AI
Property | Type | Description |
---|---|---|
provider | "local-ai" | Name of the provider. Must always be "local-ai" |
model-name | String | The name of the model to use. See vendor documentation for a list of available models |
base-url | Url | Optional override to the base URL of the API (default http://localhost:8080/v1) |
temperature | Float | Model randomness. The default is not supplied, so check the model documentation for default behavior |
top-p | Float | Nucleus sampling parameter |
max-tokens | Integer | Max number of tokens to generate (-1 for the model default) |

See ModelProvider.LocalAI for programmatic settings.
Ollama
Property | Type | Description |
---|---|---|
provider | "ollama" | Name of the provider. Must always be "ollama" |
model-name | String | The name of the model to use. See vendor documentation for a list of available models |
base-url | Url | Optional override to the base URL of the API (default http://localhost:11434) |
temperature | Float | Model randomness. The default is not supplied, so check the model documentation for default behavior |
top-p | Float | Nucleus sampling parameter |

See ModelProvider.Ollama for programmatic settings, for example:
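Because Ollama runs locally and needs no API key, a provider can be assembled from just a base URL and a model name. This is a sketch, again assuming with* setters that mirror the configuration keys; the model name is a placeholder for whatever is installed in your local server.

import akka.javasdk.agent.ModelProvider;

public class LocalOllamaProvider {
  // Assumed builder API mirroring the config keys; no API key is needed
  // because the server runs on localhost
  static ModelProvider local() {
    return ModelProvider.ollama()
        .withBaseUrl("http://localhost:11434") // the default from the reference configuration
        .withModelName("llama3");              // placeholder: any model installed in your Ollama server
  }
}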
OpenAI
Property | Type | Description |
---|---|---|
provider | "openai" | Name of the provider. Must always be "openai" |
api-key | String | The API key. Defaults to the value of the OPENAI_API_KEY environment variable |
model-name | String | The name of the model to use (e.g. "gpt-4" or "gpt-3.5-turbo"). See vendor documentation for a list of available models |
base-url | Url | Optional override to the base URL of the API |
temperature | Float | Model randomness. The default is not supplied, so check the model documentation for default behavior |
top-p | Float | Nucleus sampling parameter |
max-tokens | Integer | Max token quota. Leave as -1 for the model default |

See ModelProvider.OpenAi for programmatic settings.
Reference configurations
The following are the reference configurations for each of the supported model providers.
Anthropic
# Configuration for Anthropic's large language models
akka.javasdk.agent.anthropic {
  # The provider name, must be "anthropic"
  provider = "anthropic"
  # The API key for authentication with Anthropic's API
  api-key = ""
  # Environment variable override for the API key
  api-key = ${?ANTHROPIC_API_KEY}
  # The name of the model to use, e.g. "claude-2" or "claude-instant-1"
  model-name = ""
  # Optional base URL override for the Anthropic API
  base-url = ""
  # Controls randomness in the model's output (0.0 to 1.0)
  temperature = NaN
  # Nucleus sampling parameter (0.0 to 1.0). Controls text generation by
  # only considering the most likely tokens whose cumulative probability
  # exceeds the threshold value. It helps balance between diversity and
  # quality of outputs: lower values (like 0.3) produce more focused,
  # predictable text while higher values (like 0.9) allow more creativity
  # and variation.
  top-p = NaN
  # Top-k sampling parameter (-1 to disable).
  # Top-k sampling limits text generation to only the k most probable
  # tokens at each step, discarding all other possibilities regardless
  # of their probability. It provides a simpler way to control randomness:
  # smaller k values (like 10) produce more focused outputs while larger
  # values (like 50) allow for more diversity.
  top-k = -1
  # Maximum number of tokens to generate (-1 for model default)
  max-tokens = -1
}
Gemini
# Configuration for Google's Gemini AI large language models
akka.javasdk.agent.googleai-gemini {
  # The provider name, must be "googleai-gemini"
  provider = "googleai-gemini"
  # The API key for authentication with Google AI Gemini's API
  api-key = ""
  # Environment variable override for the API key
  api-key = ${?GOOGLE_AI_GEMINI_API_KEY}
  # The name of the model to use, e.g. "gemini-2.0-flash", "gemini-1.5-flash", "gemini-1.5-pro" or "gemini-1.0-pro"
  model-name = ""
  # Controls randomness in the model's output (0.0 to 1.0)
  temperature = NaN
  # Nucleus sampling parameter (0.0 to 1.0). Controls text generation by
  # only considering the most likely tokens whose cumulative probability
  # exceeds the threshold value. It helps balance between diversity and
  # quality of outputs: lower values (like 0.3) produce more focused,
  # predictable text while higher values (like 0.9) allow more creativity
  # and variation.
  top-p = NaN
  # Maximum number of tokens to generate (-1 for model default)
  max-output-tokens = -1
}
Hugging Face
# Configuration for large language models from HuggingFace https://huggingface.co
akka.javasdk.agent.hugging-face {
  # The provider name, must be "hugging-face"
  provider = "hugging-face"
  # The access token for authentication with the Hugging Face API
  access-token = ""
  # The Hugging Face model id, e.g. "microsoft/Phi-3.5-mini-instruct"
  model-id = ""
  # Optional base URL override for the Hugging Face API
  base-url = ""
  # Controls randomness in the model's output (0.0 to 1.0)
  temperature = NaN
  # Nucleus sampling parameter (0.0 to 1.0). Controls text generation by
  # only considering the most likely tokens whose cumulative probability
  # exceeds the threshold value. It helps balance between diversity and
  # quality of outputs: lower values (like 0.3) produce more focused,
  # predictable text while higher values (like 0.9) allow more creativity
  # and variation.
  top-p = NaN
  # Maximum number of tokens to generate (-1 for model default)
  max-new-tokens = -1
}
Local AI
# Configuration for Local AI large language models
akka.javasdk.agent.local-ai {
  # The provider name, must be "local-ai"
  provider = "local-ai"
  # Local AI server base url
  base-url = "http://localhost:8080/v1"
  # One of the models installed in the Local AI server
  model-name = ""
  # Controls randomness in the model's output (0.0 to 1.0)
  temperature = NaN
  # Nucleus sampling parameter (0.0 to 1.0). Controls text generation by
  # only considering the most likely tokens whose cumulative probability
  # exceeds the threshold value. It helps balance between diversity and
  # quality of outputs: lower values (like 0.3) produce more focused,
  # predictable text while higher values (like 0.9) allow more creativity
  # and variation.
  top-p = NaN
  # Maximum number of tokens to generate (-1 for model default)
  max-tokens = -1
}
Ollama
# Configuration for Ollama large language models
akka.javasdk.agent.ollama {
  # The provider name, must be "ollama"
  provider = "ollama"
  # Ollama server base url
  base-url = "http://localhost:11434"
  # One of the models installed in the Ollama server
  model-name = ""
  # Controls randomness in the model's output (0.0 to 1.0)
  temperature = NaN
  # Nucleus sampling parameter (0.0 to 1.0). Controls text generation by
  # only considering the most likely tokens whose cumulative probability
  # exceeds the threshold value. It helps balance between diversity and
  # quality of outputs: lower values (like 0.3) produce more focused,
  # predictable text while higher values (like 0.9) allow more creativity
  # and variation.
  top-p = NaN
}
OpenAI
# Configuration for OpenAI's large language models
akka.javasdk.agent.openai {
  # The provider name, must be "openai"
  provider = "openai"
  # The API key for authentication with OpenAI's API
  api-key = ""
  # Environment variable override for the API key
  api-key = ${?OPENAI_API_KEY}
  # The name of the model to use, e.g. "gpt-4" or "gpt-3.5-turbo"
  model-name = ""
  # Optional base URL override for the OpenAI API
  base-url = ""
  # Controls randomness in the model's output (0.0 to 1.0)
  temperature = NaN
  # Nucleus sampling parameter (0.0 to 1.0). Controls text generation by
  # only considering the most likely tokens whose cumulative probability
  # exceeds the threshold value. It helps balance between diversity and
  # quality of outputs: lower values (like 0.3) produce more focused,
  # predictable text while higher values (like 0.9) allow more creativity
  # and variation.
  top-p = NaN
  # Maximum number of tokens to generate (-1 for model default)
  max-tokens = -1
}