🗃️ Chat and Language Models
This page describes a low-level LLM API.
🗃️ Chat Memory
Maintaining and managing ChatMessages manually is cumbersome.
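Why manual management is cumbersome becomes clear from a minimal sketch: a chat memory must at least evict old messages so the conversation fits the model's context window. The class below is a plain-Java illustration of that sliding-window idea only, not LangChain4j's actual API (LangChain4j provides ready-made implementations for this).

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Conceptual sketch of a sliding-window chat memory: it keeps only the
// most recent N messages so the prompt stays within the context window.
// Messages are plain strings here purely for illustration.
class WindowMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    WindowMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message first
        }
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}
```

A real chat memory also has to deal with token counting, persistence, and per-user isolation, which is exactly the bookkeeping the Chat Memory page covers.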
🗃️ Model Parameters
Depending on the model and provider you choose, you can adjust numerous parameters that will define:
🗃️ Response Streaming
This page describes response streaming with a low-level LLM API.
🗃️ AI Services
So far, we have been covering low-level components like ChatModel, ChatMessage, ChatMemory, etc.
🗃️ Agents
Please note that "Agent" is a very broad term with multiple definitions.
🗃️ Tools (Function Calling)
Some LLMs, in addition to generating text, can also trigger actions.
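Triggering an action works roughly like this: the model replies not with text but with a request to execute a named tool with some arguments, and the application runs the matching handler and returns the result to the model. The dispatcher below is a plain-Java sketch of that loop's application side; the tool names and handlers are made up for illustration and are not LangChain4j's API.

```java
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch of tool dispatch: given a tool-execution request
// (tool name + argument) from the model, look up the registered handler,
// run it, and produce the result to send back to the model.
class ToolDispatcher {
    private final Map<String, Function<String, String>> tools;

    ToolDispatcher(Map<String, Function<String, String>> tools) {
        this.tools = tools;
    }

    String execute(String toolName, String argument) {
        Function<String, String> tool = tools.get(toolName);
        if (tool == null) {
            throw new IllegalArgumentException("Unknown tool: " + toolName);
        }
        return tool.apply(argument);
    }
}
```

In LangChain4j the registration and argument parsing are handled for you (e.g. via annotated methods), so application code rarely writes this dispatch loop by hand.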
🗃️ RAG (Retrieval-Augmented Generation)
An LLM's knowledge is limited to the data it has been trained on.
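RAG works around that limit by retrieving relevant document chunks at query time and injecting them into the prompt. The core retrieval step can be sketched in a few lines: embed the query, score stored chunks by cosine similarity, and pick the best match. The vectors below are toy values; a real setup would use an embedding model and an embedding store.

```java
// Conceptual sketch of the retrieval step in RAG. Each stored chunk and
// the query are represented as embedding vectors; cosine similarity
// measures how semantically close they are, and the closest chunk is the
// one injected into the prompt.
class Retriever {
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Returns the index of the stored chunk most similar to the query.
    static int bestMatch(double[] query, double[][] chunkEmbeddings) {
        int best = 0;
        double bestScore = -1;
        for (int i = 0; i < chunkEmbeddings.length; i++) {
            double score = cosine(query, chunkEmbeddings[i]);
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }
}
```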
🗃️ Structured Outputs
The term "Structured Outputs" is overloaded and can refer to two things:
🗃️ Classification
Overview
🗃️ Embedding (Vector) Stores
Documentation on embedding stores can be found here.
🗃️ Image Models
More info coming soon
🗃️ Quarkus Integration
Quarkus provides a superb extension for LangChain4j.
🗃️ Spring Boot Integration
LangChain4j provides Spring Boot starters for:
🗃️ Helidon Integration
Helidon provides a LangChain4j integration module that simplifies building AI-driven applications while leveraging Helidon's programming model and style.
🗃️ Kotlin Support
Kotlin is a statically-typed language targeting the JVM (and other platforms), enabling concise and elegant code with seamless interoperability with Java libraries.
🗃️ Logging
LangChain4j uses SLF4J for logging,
🗃️ Observability
Chat Model Observability
🗃️ Customizable HTTP Client
Some LangChain4j modules (currently OpenAI and Ollama) support customizing the HTTP clients used
🗃️ Testing and Evaluation
Examples
🗃️ Model Context Protocol (MCP)
LangChain4j supports the Model Context Protocol (MCP) to communicate with