Kotlin Support
Kotlin is a statically typed language targeting the JVM (and other platforms) that enables concise, elegant code with seamless interoperability with Java libraries. LangChain4j uses Kotlin extension functions and type-safe builders to augment its Java APIs with Kotlin-specific conveniences, so existing Java classes gain functionality tailored for Kotlin callers.
LangChain4j does not require any Kotlin libraries as runtime dependencies, but you can leverage Kotlin's coroutines for non-blocking execution, keeping calling threads free while requests are in flight.
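The extensions ship separately from the core library. A minimal Gradle dependency sketch, assuming the artifact coordinates dev.langchain4j:langchain4j-kotlin (verify the exact artifact name and current version in the LangChain4j documentation):

dependencies {
    // Assumed coordinates; replace <version> with the current release.
    implementation("dev.langchain4j:langchain4j-kotlin:<version>")
}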
ChatLanguageModel Extensions
The following Kotlin code demonstrates how to use coroutines, suspend functions, and type-safe builders to interact with a ChatLanguageModel in LangChain4j.
val model = OpenAiChatModel.builder()
    .apiKey("YOUR_API_KEY")
    // more configuration parameters here ...
    .build()

CoroutineScope(Dispatchers.IO).launch {
    val response = model.chat {
        messages += systemMessage("You are a helpful assistant")
        messages += userMessage("Hello!")
        parameters {
            temperature = 0.7
        }
    }
    println(response.aiMessage().text())
}
The interaction happens asynchronously using Kotlin's coroutines:
- CoroutineScope(Dispatchers.IO).launch executes the request on the IO dispatcher, which is optimized for blocking tasks such as network or file I/O. This keeps the calling thread responsive rather than blocked.
- model.chat is a suspend function that uses a builder block to structure the chat request. This approach reduces boilerplate and makes the code more readable and maintainable.
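Because chat suspends instead of blocking, it can also be called directly from any other coroutine builder, without launching a scope yourself. A minimal sketch, reusing the model built above:

runBlocking {
    // chat suspends until the model responds; the thread is not blocked
    val response = model.chat {
        messages += userMessage("Hello!")
    }
    println(response.aiMessage().text())
}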
For advanced scenarios that need custom ChatRequestParameters, the type-safe builder function also accepts a custom builder:
fun <B : DefaultChatRequestParameters.Builder<*>> parameters(
    builder: B = DefaultChatRequestParameters.builder() as B,
    configurer: ChatRequestParametersBuilder<B>.() -> Unit
)
Example usage:
model.chat {
    messages += systemMessage("You are a helpful assistant")
    messages += userMessage("Hello!")
    parameters(OpenAiChatRequestParameters.builder()) {
        temperature = 0.7 // DefaultChatRequestParameters.Builder property
        builder.seed(42)  // OpenAiChatRequestParameters.Builder property
    }
}
Streaming Use Case
The StreamingChatLanguageModel extensions provide functionality for use cases where responses need to be processed incrementally, as they are generated by the AI model. This is particularly useful in applications requiring real-time feedback, such as chat interfaces, live editors, or systems with token-by-token streaming interaction.
Using Kotlin coroutines, the chatFlow extension function converts the model's streaming response into a structured, cancellable Flow, enabling a coroutine-friendly, non-blocking implementation.
Here’s how you can implement a complete interaction with chatFlow:
val flow = model.chatFlow { // similar to the non-streaming scenario
    messages += userMessage("Can you explain how streaming works?")
    parameters { // ChatRequestParameters
        temperature = 0.7
        maxOutputTokens = 42
    }
}
runBlocking { // must run in a coroutine context
    flow.collect { reply ->
        when (reply) {
            is StreamingChatLanguageModelReply.PartialResponse -> {
                print(reply.partialResponse) // stream output as it arrives
            }
            is StreamingChatLanguageModelReply.CompleteResponse -> {
                println("\nComplete: ${reply.response.aiMessage().text()}")
            }
            is StreamingChatLanguageModelReply.Error -> {
                println("Error occurred: ${reply.cause.message}")
            }
        }
    }
}
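Because chatFlow returns an ordinary kotlinx.coroutines Flow, the standard flow operators apply. As a sketch, here is how the streamed tokens alone could be collected into a single string, using the same reply types as above (assuming chatFlow produces a cold Flow, so collecting it starts a new model call):

val streamedText = runBlocking {
    model.chatFlow { messages += userMessage("Can you explain how streaming works?") }
        .filterIsInstance<StreamingChatLanguageModelReply.PartialResponse>() // keep only token chunks
        .map { it.partialResponse } // extract the text of each chunk
        .toList()
        .joinToString("")           // reassemble the full response
}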
Check out this test as an example.
Compiler Compatibility
When defining tools in Kotlin, configure the Kotlin compiler to preserve metadata for Java reflection on method parameters by setting javaParameters to true. This setting is required to keep the correct argument names in the tool specification.
When using Gradle, this can be achieved with the following configuration:
kotlin {
    compilerOptions {
        javaParameters = true
    }
}
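With javaParameters enabled, the parameter names you declare in Kotlin survive compilation and appear in the generated tool specification. A minimal sketch of a hypothetical tool class (WeatherTools and currentWeather are illustrative names; @Tool is LangChain4j's annotation):

import dev.langchain4j.agent.tool.Tool

class WeatherTools {

    @Tool("Returns the current weather for the given city")
    fun currentWeather(city: String): String {
        // Without javaParameters = true, "city" would compile to arg0 and the
        // tool specification would expose the wrong parameter name to the model.
        return "Sunny in $city"
    }
}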