Package dev.langchain4j.model.chat
Interface StreamingChatLanguageModel
- All Known Implementing Classes:
AbstractBedrockStreamingChatModel, AnthropicStreamingChatModel, AzureOpenAiStreamingChatModel, BedrockAnthropicStreamingChatModel, DisabledStreamingChatLanguageModel, GitHubModelsStreamingChatModel, GoogleAiGeminiStreamingChatModel, JlamaStreamingChatModel, LocalAiStreamingChatModel, MistralAiStreamingChatModel, OllamaStreamingChatModel, OpenAiStreamingChatModel, VertexAiGeminiStreamingChatModel
public interface StreamingChatLanguageModel
Represents a language model that has a chat API and can stream a response one token at a time.
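For illustration, a minimal sketch of obtaining a StreamingChatLanguageModel through one of the implementations listed above. OpenAiStreamingChatModel is used here; the builder options shown (apiKey, modelName) follow common LangChain4j builder conventions and should be treated as an assumption rather than a definitive recipe, since each provider's builder differs.

    import dev.langchain4j.model.chat.StreamingChatLanguageModel;
    import dev.langchain4j.model.openai.OpenAiStreamingChatModel;

    public class StreamingModelSetup {
        public static void main(String[] args) {
            // Any implementation listed above can stand in for the interface.
            // apiKey/modelName are assumed from typical LangChain4j usage.
            StreamingChatLanguageModel model = OpenAiStreamingChatModel.builder()
                    .apiKey(System.getenv("OPENAI_API_KEY"))
                    .modelName("gpt-4o-mini")
                    .build();
        }
    }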
Method Summary
- default void chat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
  This is the main API to interact with the chat model.
- default ChatRequestParameters defaultRequestParameters()
- default void generate(UserMessage userMessage, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a message from a user.
- default void generate(String userMessage, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a message from a user.
- default void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a list of messages and a single tool specification.
- void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a sequence of messages.
- default void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a list of messages and a list of tool specifications.
- default Set<Capability> supportedCapabilities()
-
Method Details
-
chat
default void chat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
This is the main API to interact with the chat model. All the existing generate(...) methods (see below) will be deprecated and removed before the 1.0.0 release. A temporary default implementation of this method is necessary until all StreamingChatLanguageModel implementations adopt it; it should be removed once that occurs.
- Parameters:
chatRequest - a ChatRequest containing all the inputs to the LLM
handler - a StreamingChatResponseHandler that will handle the streaming response from the LLM
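A hedged sketch of calling chat(...). The ChatRequest builder and the handler callback names (onPartialResponse, onCompleteResponse, onError) are assumed from the LangChain4j streaming API at the time of writing; `model` is any StreamingChatLanguageModel instance, such as the one built in the earlier sketch.

    import dev.langchain4j.data.message.UserMessage;
    import dev.langchain4j.model.chat.request.ChatRequest;
    import dev.langchain4j.model.chat.response.ChatResponse;
    import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;

    ChatRequest request = ChatRequest.builder()
            .messages(UserMessage.from("Tell me a joke"))
            .build();

    model.chat(request, new StreamingChatResponseHandler() {
        @Override
        public void onPartialResponse(String partialResponse) {
            System.out.print(partialResponse); // one streamed token/chunk at a time
        }

        @Override
        public void onCompleteResponse(ChatResponse completeResponse) {
            System.out.println("\nFinished: " + completeResponse.aiMessage().text());
        }

        @Override
        public void onError(Throwable error) {
            error.printStackTrace();
        }
    });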
-
defaultRequestParameters
default ChatRequestParameters defaultRequestParameters()
-
supportedCapabilities
default Set<Capability> supportedCapabilities()
generate
default void generate(UserMessage userMessage, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a message from a user.
- Parameters:
userMessage - The message from the user.
handler - The handler for streaming the response.
-
generate
default void generate(String userMessage, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a message from a user.
- Parameters:
userMessage - The message from the user.
handler - The handler for streaming the response.
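A minimal sketch of the single-message overload, shown with the String variant (the UserMessage variant is analogous). The onNext/onComplete/onError callbacks come from the pre-1.0 StreamingResponseHandler API; `model` is assumed to be a StreamingChatLanguageModel instance.

    import dev.langchain4j.data.message.AiMessage;
    import dev.langchain4j.model.StreamingResponseHandler;
    import dev.langchain4j.model.output.Response;

    model.generate("Tell me a joke", new StreamingResponseHandler<AiMessage>() {
        @Override
        public void onNext(String token) {
            System.out.print(token); // called for each new token as it streams in
        }

        @Override
        public void onComplete(Response<AiMessage> response) {
            System.out.println("\nComplete: " + response.content().text());
        }

        @Override
        public void onError(Throwable error) {
            error.printStackTrace();
        }
    });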
-
generate
void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
handler - The handler for streaming the response.
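A sketch of assembling the typical System - User sequence described above; `handler` is a StreamingResponseHandler<AiMessage> like the one in the previous example.

    import java.util.List;
    import dev.langchain4j.data.message.ChatMessage;
    import dev.langchain4j.data.message.SystemMessage;
    import dev.langchain4j.data.message.UserMessage;

    List<ChatMessage> messages = List.of(
            SystemMessage.from("You are a concise assistant."),
            UserMessage.from("Explain token streaming in one sentence."));

    model.generate(messages, handler); // streams the AI reply through the handler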
-
generate
default void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a list of messages and a list of tool specifications. The response may be either a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
handler - The handler for streaming the response. The AiMessage can contain either a textual response or a request to execute one of the tools.
- Throws:
UnsupportedFeatureException - if tools are not supported by the underlying LLM API
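A hedged sketch of offering a tool to the model. The ToolSpecification builder calls (name, description) reflect the LangChain4j tool API; the get_weather tool itself is hypothetical, and `messages`/`handler` are as in the previous sketches.

    import java.util.List;
    import dev.langchain4j.agent.tool.ToolSpecification;

    // Hypothetical tool; parameter schema omitted for brevity.
    ToolSpecification weatherTool = ToolSpecification.builder()
            .name("get_weather")
            .description("Returns the current weather for a given city")
            .build();

    // The streamed AiMessage may contain text or a request to execute get_weather.
    model.generate(messages, List.of(weatherTool), handler);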
-
generate
default void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
- Parameters:
messages - A list of messages.
toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
handler - The handler for streaming the response.
- Throws:
UnsupportedFeatureException - if tools are not supported by the underlying LLM API
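Continuing with the hypothetical weatherTool from the previous sketch, the single-tool overload forces its execution:

    // With the single-ToolSpecification overload, the model must call the tool
    // (typically mapped to tool_choice=ANY by the provider integration).
    model.generate(messages, weatherTool, handler);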
-