Package dev.langchain4j.model.chat
Interface ChatLanguageModel
- All Known Implementing Classes:
AbstractBedrockChatModel,
AnthropicChatModel,
AzureOpenAiChatModel,
BedrockAI21LabsChatModel,
BedrockAnthropicCompletionChatModel,
BedrockAnthropicMessageChatModel,
BedrockCohereChatModel,
BedrockLlamaChatModel,
BedrockMistralAiChatModel,
BedrockStabilityAIChatModel,
BedrockTitanChatModel,
DisabledChatLanguageModel,
GitHubModelsChatModel,
GoogleAiGeminiChatModel,
HuggingFaceChatModel,
JlamaChatModel,
LocalAiChatModel,
MistralAiChatModel,
OllamaChatModel,
OpenAiChatModel,
VertexAiChatModel,
VertexAiGeminiChatModel,
WorkersAiChatModel
public interface ChatLanguageModel
Represents a language model that has a chat API.
Method Summary
- default ChatResponse chat(ChatRequest chatRequest)
    This is the main API to interact with the chat model.
- default String chat(String userMessage)
- default ChatRequestParameters defaultRequestParameters()
- default Response<AiMessage> generate(ChatMessage... messages)
    Generates a response from the model based on a sequence of messages.
- default String generate(String userMessage)
    Generates a response from the model based on a message from a user.
- default Response<AiMessage> generate(List<ChatMessage> messages)
    Generates a response from the model based on a sequence of messages.
- default Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
    Generates a response from the model based on a list of messages and a single tool specification.
- default Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
    Generates a response from the model based on a list of messages and a list of tool specifications.
- default Set<Capability> supportedCapabilities()
- static void validate(ChatRequestParameters parameters)
- static void validate(ResponseFormat responseFormat)
- static void validate(ToolChoice toolChoice)
-
Method Details
-
chat
default ChatResponse chat(ChatRequest chatRequest)
This is the main API to interact with the chat model. All the existing generate(...) methods (see below) will be deprecated and removed before the 1.0.0 release. A temporary default implementation of this method is necessary until all ChatLanguageModel implementations adopt it; it should be removed once that occurs.
- Parameters:
chatRequest - a ChatRequest containing all the inputs to the LLM
- Returns:
a ChatResponse containing all the outputs from the LLM
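A call through this API can be sketched as follows. This is a minimal illustration, not the definitive usage: OpenAiChatModel is just one of the implementations listed above, and the model name and environment variable are assumptions.

```java
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ChatApiExample {

    public static void main(String[] args) {
        // Any implementation from the list above can stand in here;
        // OpenAiChatModel and the model name are illustrative assumptions.
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        // ChatRequest bundles all the inputs to the LLM.
        ChatRequest request = ChatRequest.builder()
                .messages(
                        SystemMessage.from("You are a terse assistant."),
                        UserMessage.from("Name the capital of France."))
                .build();

        // ChatResponse bundles all the outputs from the LLM.
        ChatResponse response = model.chat(request);
        System.out.println(response.aiMessage().text());
    }
}
```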
-
validate
static void validate(ChatRequestParameters parameters)
-
validate
static void validate(ResponseFormat responseFormat)
-
validate
static void validate(ToolChoice toolChoice)
-
chat
default String chat(String userMessage)
-
defaultRequestParameters
default ChatRequestParameters defaultRequestParameters()
-
supportedCapabilities
default Set<Capability> supportedCapabilities()
-
generate
default String generate(String userMessage)
Generates a response from the model based on a message from a user. This is a convenience method that accepts the user's message as a String and returns only the generated response.
- Parameters:
userMessage - The message from the user.
- Returns:
The response generated by the model.
-
generate
default Response<AiMessage> generate(ChatMessage... messages)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - An array of messages.
- Returns:
The response generated by the model.
-
generate
default Response<AiMessage> generate(List<ChatMessage> messages)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
- Returns:
The response generated by the model.
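A multi-turn conversation through this overload might look like the sketch below; the concrete model builder, API key variable, and conversation content are all illustrative assumptions.

```java
import java.util.List;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;

public class GenerateExample {

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder() // illustrative implementation
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // Messages follow the documented order: System (optional) - User - AI - User ...
        List<ChatMessage> messages = List.of(
                SystemMessage.from("You translate English to French."),
                UserMessage.from("Hello"),
                AiMessage.from("Bonjour"),
                UserMessage.from("Good night"));

        Response<AiMessage> response = model.generate(messages);
        System.out.println(response.content().text()); // the generated AiMessage
        System.out.println(response.tokenUsage());     // token accounting, if the provider reports it
    }
}
```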
-
generate
default Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
Generates a response from the model based on a list of messages and a list of tool specifications. The response may be either a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
- Returns:
The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
- Throws:
UnsupportedFeatureException - if tools are not supported by the underlying LLM API
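Tool calling through this overload can be sketched as follows. The getWeather tool, its schema, and the model builder are hypothetical; note that the model only requests a tool execution, it never runs the tool itself.

```java
import java.util.List;

import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;

public class ToolSpecExample {

    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder() // illustrative implementation
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // Hypothetical tool the model is allowed (but not forced) to use.
        ToolSpecification weatherTool = ToolSpecification.builder()
                .name("getWeather")
                .description("Returns the current weather for a city")
                .parameters(JsonObjectSchema.builder()
                        .addStringProperty("city")
                        .required("city")
                        .build())
                .build();

        List<ChatMessage> messages = List.of(
                UserMessage.from("What is the weather in Paris?"));

        Response<AiMessage> response = model.generate(messages, List.of(weatherTool));
        AiMessage aiMessage = response.content();

        if (aiMessage.hasToolExecutionRequests()) {
            // The model asked us to execute a tool instead of answering in text.
            for (ToolExecutionRequest request : aiMessage.toolExecutionRequests()) {
                System.out.println(request.name() + " <- " + request.arguments());
            }
        } else {
            System.out.println(aiMessage.text());
        }
    }
}
```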
-
generate
default Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool; this is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
- Returns:
The response generated by the model. AiMessage contains a request to execute the specified tool.
- Throws:
UnsupportedFeatureException - if tools are not supported by the underlying LLM API
-