Package dev.langchain4j.model.anthropic
Class AnthropicStreamingChatModel
java.lang.Object
dev.langchain4j.model.anthropic.AnthropicStreamingChatModel
- All Implemented Interfaces:
StreamingChatLanguageModel
Represents an Anthropic language model with a Messages (chat) API.
The model's response is streamed token by token and should be handled with StreamingResponseHandler.
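For example, a minimal usage sketch (the model name, prompt, and ANTHROPIC_API_KEY environment variable are placeholders, and it is assumed that the builder exposes apiKey and modelName setters, as implied by the deprecation note on withApiKey):

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.anthropic.AnthropicStreamingChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;

public class AnthropicStreamingExample {

    public static void main(String[] args) {

        // Placeholder key lookup and model name; pick the Anthropic model you actually use.
        AnthropicStreamingChatModel model = AnthropicStreamingChatModel.builder()
                .apiKey(System.getenv("ANTHROPIC_API_KEY"))
                .modelName("claude-3-haiku-20240307")
                .build();

        model.generate(List.of(UserMessage.from("Tell me a joke about streams")),
                new StreamingResponseHandler<AiMessage>() {

                    @Override
                    public void onNext(String token) {
                        System.out.print(token); // called for each streamed token
                    }

                    @Override
                    public void onComplete(Response<AiMessage> response) {
                        System.out.println(); // the complete AiMessage is in response.content()
                    }

                    @Override
                    public void onError(Throwable error) {
                        error.printStackTrace();
                    }
                });
    }
}

Note that generate returns immediately; the sketch under the generate(List, StreamingResponseHandler) method details below shows one way to wait for the complete response.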
More details are available here and here.
It supports Images as inputs. UserMessages can contain one or multiple ImageContents.
Images must not be represented as URLs; they should be Base64-encoded strings and include a mimeType.
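A sketch of building such a message (the cat.png path, the prompt text, and the image/png mime type are placeholders):

import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// The image is sent as a Base64-encoded string with an explicit mime type, not as a URL.
byte[] imageBytes = Files.readAllBytes(Path.of("cat.png")); // placeholder path
String base64Data = Base64.getEncoder().encodeToString(imageBytes);

UserMessage message = UserMessage.from(
        TextContent.from("What is in this picture?"),
        ImageContent.from(base64Data, "image/png")); // mimeType is required

The resulting UserMessage can then be passed to any of the generate methods below together with a StreamingResponseHandler.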
The content of SystemMessages is sent using the "system" parameter.
Sanitization is performed on the ChatMessages provided to ensure conformity with Anthropic API requirements. This includes ensuring the first message is a UserMessage and that there are no consecutive UserMessages. Any messages removed during sanitization are logged as warnings and not submitted to the API.
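For illustration, a message list that already satisfies these rules (system message first, alternating user/assistant turns, no consecutive UserMessages); the message texts are placeholders:

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;

import java.util.List;

// System message first (sent via the "system" parameter), then alternating
// user/assistant turns, so no two UserMessages are consecutive.
List<ChatMessage> messages = List.of(
        SystemMessage.from("You are a concise assistant."),
        UserMessage.from("What is the capital of Germany?"),
        AiMessage.from("Berlin."),
        UserMessage.from("And of France?"));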
Supports caching SystemMessages and ToolSpecifications.
-
Nested Class Summary
Modifier and Type    Class    Description

static class
-
Method Summary
Modifier and Type    Method    Description

void
    generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
    Generates a response from the model based on a list of messages and a single tool specification.

void
    generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
    Generates a response from the model based on a sequence of messages.

void
    generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
    Generates a response from the model based on a list of messages and a list of tool specifications.

static AnthropicStreamingChatModel
    withApiKey(String apiKey)
    Deprecated, for removal: This API element is subject to removal in a future version.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface dev.langchain4j.model.chat.StreamingChatLanguageModel
generate, generate
-
Method Details
-
withApiKey
public static AnthropicStreamingChatModel withApiKey(String apiKey)
Deprecated, for removal: This API element is subject to removal in a future version. Please use builder() instead, and explicitly set the model name and, if necessary, other parameters. The default value for the model name will be removed in future releases!
-
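A migration sketch (the model name is a placeholder, and it is assumed that the builder exposes apiKey and modelName setters):

// Before (deprecated, subject to removal):
// AnthropicStreamingChatModel model = AnthropicStreamingChatModel.withApiKey(apiKey);

// After: build explicitly and set the model name yourself.
AnthropicStreamingChatModel model = AnthropicStreamingChatModel.builder()
        .apiKey(apiKey) // e.g. System.getenv("ANTHROPIC_API_KEY")
        .modelName("claude-3-haiku-20240307") // placeholder model name
        .build();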
generate
public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
Description copied from interface: StreamingChatLanguageModel
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Specified by:
generate in interface StreamingChatLanguageModel
- Parameters:
messages - A list of messages.
handler - The handler for streaming the response.
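Since this method returns void, callers that need the fully assembled response can collect it from onComplete. One common sketch (not an API of this class, just one way to do it) using a CompletableFuture and a model instance built as in the class description:

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.anthropic.AnthropicStreamingChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;
import java.util.concurrent.CompletableFuture;

// Streams tokens to stdout and returns the fully assembled AiMessage once streaming ends.
static AiMessage chatAndWait(AnthropicStreamingChatModel model, List<ChatMessage> messages) {

    CompletableFuture<Response<AiMessage>> futureResponse = new CompletableFuture<>();

    model.generate(messages, new StreamingResponseHandler<AiMessage>() {

        @Override
        public void onNext(String token) {
            System.out.print(token); // each streamed token
        }

        @Override
        public void onComplete(Response<AiMessage> response) {
            futureResponse.complete(response); // final response carrying the complete AiMessage
        }

        @Override
        public void onError(Throwable error) {
            futureResponse.completeExceptionally(error);
        }
    });

    return futureResponse.join().content();
}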
-
generate
public void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
Description copied from interface: StreamingChatLanguageModel
Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
- Specified by:
generate in interface StreamingChatLanguageModel
- Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
handler - The handler for streaming the response. AiMessage can contain either a textual response or a request to execute one of the tools.
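A sketch of this overload. The get_weather tool is hypothetical, its parameter schema is omitted for brevity, and model is assumed to be built as shown in the class description:

import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.anthropic.AnthropicStreamingChatModel;
import dev.langchain4j.model.output.Response;

import java.util.List;

static void askWithTools(AnthropicStreamingChatModel model) {

    // Hypothetical tool; a real specification would also declare its parameter schema.
    ToolSpecification weatherTool = ToolSpecification.builder()
            .name("get_weather")
            .description("Returns the current weather for a given city")
            .build();

    model.generate(
            List.of(UserMessage.from("What is the weather in Berlin?")),
            List.of(weatherTool),
            new StreamingResponseHandler<AiMessage>() {

                @Override
                public void onNext(String token) {
                    System.out.print(token); // streamed text, if the model answers in text
                }

                @Override
                public void onComplete(Response<AiMessage> response) {
                    AiMessage aiMessage = response.content();
                    if (aiMessage.hasToolExecutionRequests()) {
                        // The model decided to call a tool instead of answering in text.
                        aiMessage.toolExecutionRequests().forEach(request ->
                                System.out.println("Tool requested: " + request.name()
                                        + " with arguments " + request.arguments()));
                    }
                }

                @Override
                public void onError(Throwable error) {
                    error.printStackTrace();
                }
            });
}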
-
generate
public void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
Description copied from interface: StreamingChatLanguageModel
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
- Specified by:
generate in interface StreamingChatLanguageModel
- Parameters:
messages - A list of messages.
toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
handler - The handler for streaming the response.
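A short sketch of the forced-tool variant, reusing the hypothetical weatherTool, the model, and the imports from the previous sketch; because the model must call the tool, the interesting part is the AiMessage delivered to onComplete:

// Reuses 'model' and 'weatherTool' as created in the previous sketch.
static void askWithForcedTool(AnthropicStreamingChatModel model, ToolSpecification weatherTool) {

    model.generate(
            List.of(UserMessage.from("What is the weather in Berlin?")),
            weatherTool, // single ToolSpecification: the model is forced to execute this tool
            new StreamingResponseHandler<AiMessage>() {

                @Override
                public void onNext(String token) {
                    // Usually little or no free text is streamed when a tool call is forced.
                }

                @Override
                public void onComplete(Response<AiMessage> response) {
                    System.out.println("Tool call(s): " + response.content().toolExecutionRequests());
                }

                @Override
                public void onError(Throwable error) {
                    error.printStackTrace();
                }
            });
}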