Package dev.langchain4j.model.chat
Interface StreamingChatLanguageModel
- All Known Implementing Classes:
AbstractBedrockStreamingChatModel
AnthropicStreamingChatModel
AzureOpenAiStreamingChatModel
BedrockAnthropicStreamingChatModel
DisabledStreamingChatLanguageModel
GitHubModelsStreamingChatModel
GoogleAiGeminiStreamingChatModel
JlamaStreamingChatModel
LocalAiStreamingChatModel
MistralAiStreamingChatModel
OllamaStreamingChatModel
OpenAiStreamingChatModel
VertexAiGeminiStreamingChatModel
public interface StreamingChatLanguageModel
Represents a language model that has a chat interface and can stream a response one token at a time.
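To illustrate the streaming contract, here is a minimal self-contained sketch. The types (`ResponseHandler`, `FakeStreamingModel`, `CollectingHandler`) are stand-ins invented for this example, not the real LangChain4j interfaces: the model pushes tokens to a handler one at a time, then signals completion with the full response.

```java
import java.util.List;

// Stand-in for the handler callback: one call per token, then a final
// completion call carrying the assembled response.
interface ResponseHandler {
    void onNext(String token);
    void onComplete(String fullResponse);
}

// Stand-in model that streams a canned response token by token, to show
// the order in which the callbacks fire.
class FakeStreamingModel {
    void generate(String userMessage, ResponseHandler handler) {
        List<String> tokens = List.of("Hello", ", ", "world", "!");
        StringBuilder full = new StringBuilder();
        for (String token : tokens) {
            handler.onNext(token);   // deliver each token as it is "generated"
            full.append(token);
        }
        handler.onComplete(full.toString()); // deliver the complete response once
    }
}

// A simple handler implementation that records everything it receives.
class CollectingHandler implements ResponseHandler {
    final StringBuilder tokens = new StringBuilder();
    String completed;
    public void onNext(String token) { tokens.append(token); }
    public void onComplete(String fullResponse) { completed = fullResponse; }
}
```

A caller typically renders tokens incrementally in `onNext` and uses `onComplete` for any work that needs the whole message.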
-
Method Summary
default void generate(UserMessage userMessage, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a message from a user.
default void generate(String userMessage, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a message from a user.
default void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a list of messages and a single tool specification.
void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a sequence of messages.
default void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
  Generates a response from the model based on a list of messages and a list of tool specifications.
-
Method Details
-
generate
default void generate(UserMessage userMessage, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a message from a user.
- Parameters:
userMessage - The message from the user.
handler - The handler for streaming the response.
-
generate
default void generate(String userMessage, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a message from a user.
- Parameters:
userMessage - The message from the user.
handler - The handler for streaming the response.
-
generate
void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
handler - The handler for streaming the response.
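Because this API is callback-based, a common pattern is to bridge it to a blocking call when the caller needs the full response before proceeding. The sketch below uses stand-in types (`TokenHandler`, `BlockingBridge`, a canned token list) rather than the real LangChain4j interfaces, and simulates the model's streaming loop inline.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Stand-in mirroring the shape of a streaming handler:
// one call per token, one completion call with the full response.
interface TokenHandler {
    void onNext(String token);
    void onComplete(String fullResponse);
}

class BlockingBridge {
    // Wraps a token-streaming interaction so the caller can block until
    // the complete response is available.
    static String generateAndWait(List<String> cannedTokens) {
        CompletableFuture<String> future = new CompletableFuture<>();
        TokenHandler handler = new TokenHandler() {
            public void onNext(String token) { /* could render incrementally */ }
            public void onComplete(String full) { future.complete(full); }
        };
        // Simulate the model streaming tokens to the handler.
        StringBuilder full = new StringBuilder();
        for (String token : cannedTokens) {
            handler.onNext(token);
            full.append(token);
        }
        handler.onComplete(full.toString());
        return future.join(); // blocks until onComplete fires
    }
}
```

In real use, the streaming loop runs on the model's own thread, so `join()` genuinely waits; here it completes immediately because the tokens are canned.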
-
generate
default void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a list of messages and a list of tool specifications. The response may be either a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
handler - The handler for streaming the response. The AiMessage can contain either a textual response or a request to execute one of the tools.
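Since the response can carry either text or a tool-execution request, the caller has to check which kind arrived before acting on it. The following is a hypothetical stand-in sketch (the record types and `dispatch` helper are invented for illustration, not part of the library):

```java
class ToolDispatchSketch {
    // Stand-in for a tool-execution request: which tool, with what arguments.
    record ToolRequest(String toolName, String arguments) {}

    // Stand-in for an AI reply that holds either text or a tool request.
    record AiReply(String text, ToolRequest toolRequest) {
        boolean hasToolRequest() { return toolRequest != null; }
    }

    // A handler must branch on the reply kind: execute the requested tool,
    // or treat the reply as plain text.
    static String dispatch(AiReply reply) {
        if (reply.hasToolRequest()) {
            return "execute:" + reply.toolRequest().toolName();
        }
        return "text:" + reply.text();
    }
}
```

In a real application the tool branch would run the named tool and typically feed its result back to the model as another message.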
-
generate
default void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool, usually achieved by setting `tool_choice=ANY` in the LLM provider API.
- Parameters:
messages - A list of messages.
toolSpecification - The specification of the tool that the model must execute. The model is forced to execute this tool.
handler - The handler for streaming the response.
-