Class AnthropicStreamingChatModel

java.lang.Object
dev.langchain4j.model.anthropic.AnthropicStreamingChatModel
All Implemented Interfaces:
StreamingChatLanguageModel

public class AnthropicStreamingChatModel extends Object implements StreamingChatLanguageModel
Represents an Anthropic language model with a Messages (chat) API. The model's response is streamed token by token and should be handled with StreamingResponseHandler.
More details are available in Anthropic's Messages API and streaming documentation.

It supports images as inputs. A UserMessage can contain one or multiple ImageContents. Images must be passed as Base64-encoded strings with a mimeType; image URLs are not supported.

The content of SystemMessages is sent using the "system" parameter.

Sanitization is performed on the ChatMessages provided to ensure conformity with Anthropic API requirements. This includes ensuring the first message is a UserMessage and that there are no consecutive UserMessages. Any messages removed during sanitization are logged as warnings and not submitted to the API.

Supports caching SystemMessages and ToolSpecifications.
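A construction-and-image sketch. The model name, the API-key environment variable, and `imageBytes` (assumed to hold raw PNG data) are illustrative placeholders, not values documented here:

```java
import java.util.Base64;
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;

AnthropicStreamingChatModel model = AnthropicStreamingChatModel.builder()
        .apiKey(System.getenv("ANTHROPIC_API_KEY")) // placeholder env variable
        .modelName("claude-3-haiku-20240307")       // placeholder model name
        .build();

// Anthropic requires Base64-encoded image data with a mimeType; URLs are rejected.
String base64Image = Base64.getEncoder().encodeToString(imageBytes);

UserMessage message = UserMessage.from(
        TextContent.from("What does this image show?"),
        ImageContent.from(base64Image, "image/png"));
```

The message can then be submitted through one of the generate overloads described below.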
  • Method Details

    • withApiKey

      @Deprecated(forRemoval=true) public static AnthropicStreamingChatModel withApiKey(String apiKey)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Please use builder() instead, and explicitly set the model name and, if necessary, other parameters. The default value for the model name will be removed in future releases!
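A migration sketch from the deprecated factory to the builder; the model name shown is a placeholder, chosen only to illustrate setting it explicitly:

```java
// Before (deprecated, relies on a default model name slated for removal):
// AnthropicStreamingChatModel model = AnthropicStreamingChatModel.withApiKey(apiKey);

// After:
AnthropicStreamingChatModel model = AnthropicStreamingChatModel.builder()
        .apiKey(apiKey)
        .modelName("claude-3-haiku-20240307") // placeholder; always set explicitly
        .build();
```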
    • generate

      public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      handler - The handler for streaming the response.
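A streaming sketch, assuming a configured AnthropicStreamingChatModel instance named `model`; the message contents are illustrative:

```java
import java.util.List;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

List<ChatMessage> messages = List.of(
        SystemMessage.from("You are a terse assistant."), // sent via the "system" parameter
        UserMessage.from("Name three JVM languages."));

model.generate(messages, new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onNext(String token) {
        System.out.print(token); // called for each streamed token
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        System.out.println();    // complete message: response.content()
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```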
    • generate

      public void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
      handler - The handler for streaming the response. AiMessage can contain either a textual response or a request to execute one of the tools.
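A tool-calling sketch, assuming a configured `model` and a `messages` list; the tool name, description, and parameter are illustrative:

```java
import java.util.List;
import dev.langchain4j.agent.tool.JsonSchemaProperty;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

ToolSpecification weatherTool = ToolSpecification.builder()
        .name("get_weather")                                   // illustrative tool
        .description("Returns the current weather for a city")
        .addParameter("city", JsonSchemaProperty.STRING)
        .build();

model.generate(messages, List.of(weatherTool), new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onNext(String token) {
        System.out.print(token); // streamed only if the model answers in text
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        AiMessage ai = response.content();
        if (ai.hasToolExecutionRequests()) {
            // the model decided to call a tool instead of answering directly
            ai.toolExecutionRequests().forEach(r ->
                    System.out.println(r.name() + " " + r.arguments()));
        }
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```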
    • generate

      public void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
      handler - The handler for streaming the response.
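A forced-tool sketch, assuming a configured `model` and a `messages` list; the ToolSpecification is illustrative. Because a single specification is passed, the model must call it:

```java
import dev.langchain4j.agent.tool.JsonSchemaProperty;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

ToolSpecification extractor = ToolSpecification.builder()
        .name("extract_person")                     // illustrative tool
        .description("Extracts a person's name and age from text")
        .addParameter("name", JsonSchemaProperty.STRING)
        .addParameter("age", JsonSchemaProperty.INTEGER)
        .build();

// The model is forced to call the tool (tool_choice=ANY on the Anthropic side).
model.generate(messages, extractor, new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onComplete(Response<AiMessage> response) {
        response.content().toolExecutionRequests().forEach(r ->
                System.out.println(r.name() + ": " + r.arguments()));
    }

    @Override
    public void onNext(String token) { /* no text tokens expected */ }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```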