📄️ Overview
Here you will find tutorials covering all of LangChain4j's functionality, to guide you through the framework in steps of increasing complexity.
📄️ Chat and Language Models
LLMs are currently available in two API types:
📄️ Chat Memory
Maintaining and managing ChatMessages manually is cumbersome.
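The core idea behind a chat memory can be sketched in a few lines of plain Java (this is an illustrative sketch, not the LangChain4j API): keep only the most recent N messages and evict the oldest when the window overflows.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative message-window memory: holds at most maxMessages entries,
// evicting the oldest first. Class and method names are hypothetical.
class MessageWindowMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    MessageWindowMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        if (messages.size() == maxMessages) {
            messages.removeFirst(); // drop the oldest message
        }
        messages.addLast(message);
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}

public class Demo {
    public static void main(String[] args) {
        MessageWindowMemory memory = new MessageWindowMemory(2);
        memory.add("Hello");
        memory.add("How are you?");
        memory.add("Tell me a joke"); // evicts "Hello"
        System.out.println(memory.messages());
    }
}
```

A real chat memory also has to account for token limits and message roles, but the eviction principle is the same.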
📄️ Model Parameters
Depending on the model and provider you choose, you can adjust numerous parameters that will define:
📄️ Response Streaming
LLMs generate text one token at a time, so many LLM providers offer a way to stream the response.
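The shape of streaming can be illustrated with a plain-Java callback sketch (hypothetical names, not the LangChain4j streaming API): the "model" pushes each token to a handler as soon as it is produced, then signals completion.

```java
import java.util.function.Consumer;

// Illustrative token streaming: tokens are delivered to a callback one at
// a time instead of returning the complete response in a single call.
class StreamingDemo {
    static void streamResponse(String[] tokens, Consumer<String> onNext, Runnable onComplete) {
        for (String token : tokens) {
            onNext.accept(token); // deliver each token as it is "generated"
        }
        onComplete.run(); // signal that the full response has been produced
    }

    public static void main(String[] args) {
        StringBuilder answer = new StringBuilder();
        streamResponse(new String[] {"Hello", ", ", "world", "!"},
                answer::append,
                () -> System.out.println("Done: " + answer));
    }
}
```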
📄️ AI Services
So far, we have covered low-level components such as ChatLanguageModel, ChatMessage, and ChatMemory.
📄️ Tools (Function Calling)
Some LLMs, in addition to generating text, can also trigger actions.
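At its core, function calling means the model names a tool and an argument, and the application looks the tool up and executes it. A minimal plain-Java dispatch sketch (tool names and structure are hypothetical, not the LangChain4j @Tool mechanism):

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative tool dispatch: maps a tool name chosen by the model to a
// function the application executes on the model's behalf.
class ToolDispatch {
    static final Map<String, Function<String, String>> TOOLS = Map.of(
            "reverse", s -> new StringBuilder(s).reverse().toString(),
            "upper", String::toUpperCase
    );

    static String execute(String toolName, String argument) {
        Function<String, String> tool = TOOLS.get(toolName);
        if (tool == null) {
            throw new IllegalArgumentException("Unknown tool: " + toolName);
        }
        return tool.apply(argument);
    }

    public static void main(String[] args) {
        // Pretend the model requested: {"tool": "upper", "argument": "hello"}
        System.out.println(execute("upper", "hello"));
    }
}
```

The tool's result is normally sent back to the model so it can incorporate it into its final answer.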
📄️ RAG (Retrieval-Augmented Generation)
An LLM's knowledge is limited to the data it was trained on.
📄️ Structured Data Extraction
More info coming soon
📄️ Classification
More info coming soon
📄️ Embedding (Vector) Stores
- Example of using in-memory embedding store
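The principle of an in-memory embedding store can be sketched in plain Java (an illustration only, not LangChain4j's InMemoryEmbeddingStore API): vectors are kept in a list and queried by cosine similarity.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative in-memory embedding store: stores (text, vector) pairs and
// returns the text whose vector is most similar to the query vector.
class InMemoryStore {
    record Entry(String text, double[] vector) {}

    private final List<Entry> entries = new ArrayList<>();

    void add(String text, double[] vector) {
        entries.add(new Entry(text, vector));
    }

    String mostSimilar(double[] query) {
        return entries.stream()
                .max(Comparator.comparingDouble(e -> cosine(e.vector(), query)))
                .map(Entry::text)
                .orElseThrow();
    }

    // Cosine similarity: dot product normalized by the vector magnitudes.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        InMemoryStore store = new InMemoryStore();
        store.add("cat", new double[] {1.0, 0.0});
        store.add("dog", new double[] {0.0, 1.0});
        System.out.println(store.mostSimilar(new double[] {0.9, 0.1}));
    }
}
```

Production stores add persistence, metadata filtering, and approximate-nearest-neighbor indexing, but the similarity search itself works like this.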
📄️ Chains
Legacy Chains and the chaining of AI Services are discussed here.
📄️ Image Models
More info coming soon
📄️ Quarkus Integration
- Quarkus-LangChain4j extension repo
📄️ Spring Boot Integration
Spring Boot Starters for Popular Integrations
📄️ Logging
LangChain4j uses SLF4J for logging.