Overview

This page gives a quick overview of the ready-to-use integration capabilities and shows which of them are supported by each LLM provider integrated with LangChain4j so far.

We strive to enable as many of these capabilities as possible, keeping pace with updates from the LLM providers and with new Java features.

Capabilities

  1. Native Image: The integration can be compiled ahead-of-time (AOT) with GraalVM CE or GraalVM Oracle to produce a native image.
  2. Sync Completion: Supports text-completion and chat-completion models invoked synchronously. This is the most common usage.
  3. Streaming Completion: Supports streaming the model response back for text-completion or chat-completion models, handling each event in a StreamingResponseHandler<AiMessage>. View examples here
  4. Embeddings: Supports text-embedding models. Embeddings make it easy to use custom data without fine-tuning; they are typically used with RAG (Retrieval-Augmented Generation) and embedding stores.
  5. Image Generation: Supports text-to-image models that create realistic and coherent images from scratch. View examples here
  6. Scoring: Supports scoring (re-ranking) models, which re-order retrieved results by their relevance to a given query.
  7. Function Calling: Supports function-calling models, which can invoke a function as a Tool. View examples here
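As a sketch of the Sync and Streaming Completion capabilities, here is what both calls look like with the OpenAI integration (the langchain4j-open-ai module). This assumes the pre-1.0 (0.x) API, an OPENAI_API_KEY environment variable, and the gpt-4o-mini model name; adjust to your version and provider.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.model.output.Response;

public class CompletionSketch {
    public static void main(String[] args) {
        // Sync completion: one blocking call that returns the full answer.
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY")) // assumed env variable
                .modelName("gpt-4o-mini")                // assumed model name
                .build();
        String answer = model.generate("Say hello in one word.");
        System.out.println(answer);

        // Streaming completion: partial responses arrive in a
        // StreamingResponseHandler<AiMessage> as they are produced.
        StreamingChatLanguageModel streamingModel = OpenAiStreamingChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();
        streamingModel.generate("Say hello in one word.", new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token); // each partial token
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```

Running this requires the langchain4j-open-ai dependency, network access, and a valid API key.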
note

Of course, some LLM providers offer large multimodal models (accepting text and image inputs), which cover more than one capability.
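To illustrate the Embeddings capability, a minimal sketch using the in-process all-MiniLM-L6-v2 model from the langchain4j-embeddings-all-minilm-l6-v2 module (class and package names follow the 0.x API and may differ in your version):

```java
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;

public class EmbeddingSketch {
    public static void main(String[] args) {
        // Runs locally (ONNX runtime), so no API key or network call is needed.
        EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
        Embedding embedding = embeddingModel.embed("Hello, LangChain4j!").content();
        // all-MiniLM-L6-v2 produces 384-dimensional vectors.
        System.out.println(embedding.dimension());
    }
}
```

The resulting vectors can be stored in any of the supported embedding stores and queried for similarity, which is the basis of a RAG pipeline.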

Supported LLM Integrations

Provider | Native Image | Sync Completion | Streaming Completion | Embedding | Image Generation | Scoring | Function Calling
OpenAI
Azure OpenAI
Hugging Face
Amazon Bedrock
Google Vertex AI Gemini
Google Vertex AI
Mistral AI
DashScope
LocalAI
Ollama
Cohere
Qianfan
ChatGLM
Nomic
Anthropic
Zhipu AI