Interface ChatLanguageModel

All Known Implementing Classes:
AbstractBedrockChatModel, AnthropicChatModel, AzureOpenAiChatModel, BedrockAI21LabsChatModel, BedrockAnthropicCompletionChatModel, BedrockAnthropicMessageChatModel, BedrockCohereChatModel, BedrockLlamaChatModel, BedrockMistralAiChatModel, BedrockStabilityAIChatModel, BedrockTitanChatModel, DisabledChatLanguageModel, GitHubModelsChatModel, GoogleAiGeminiChatModel, HuggingFaceChatModel, JlamaChatModel, LocalAiChatModel, MistralAiChatModel, OllamaChatModel, OpenAiChatModel, VertexAiChatModel, VertexAiGeminiChatModel, WorkersAiChatModel

public interface ChatLanguageModel
Represents a language model that has a chat API.
Method Details

    • chat

      default ChatResponse chat(ChatRequest chatRequest)
      This is the main API for interacting with the chat model. All existing generate(...) methods (see below) will be deprecated and removed before the 1.0.0 release.

      A temporary default implementation of this method is necessary until all ChatLanguageModel implementations adopt it. It should be removed once that occurs.

      Parameters:
      chatRequest - a ChatRequest, containing all the inputs to the LLM
      Returns:
      a ChatResponse, containing all the outputs from the LLM
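      A minimal usage sketch, assuming any concrete implementation from the list above is on the classpath (OpenAiChatModel and the model name shown are illustrative assumptions, not requirements):

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.openai.OpenAiChatModel;

// Any implementing class can back the interface; OpenAiChatModel is one example.
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName("gpt-4o-mini") // illustrative model name
        .build();

// Bundle all inputs into a ChatRequest, call the model, read the ChatResponse.
ChatRequest request = ChatRequest.builder()
        .messages(UserMessage.from("What is the capital of Germany?"))
        .build();

ChatResponse response = model.chat(request);
System.out.println(response.aiMessage().text());
```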
    • validate

      static void validate(ChatRequestParameters parameters)
    • validate

      static void validate(ToolChoice toolChoice)
    • validate

      static void validate(ResponseFormat responseFormat)
    • chat

      default String chat(String userMessage)
      A convenience method that accepts the user's message as a String and returns only the model's reply as a String.
    • defaultRequestParameters

      default ChatRequestParameters defaultRequestParameters()
    • supportedCapabilities

      default Set<Capability> supportedCapabilities()
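      For example, a caller can probe the returned set before relying on an optional feature (Capability.RESPONSE_FORMAT_JSON_SCHEMA is one such constant; `model` is assumed to be an already-constructed implementation):

```java
import dev.langchain4j.model.chat.Capability;

// Check for structured-output support before requesting a JSON schema.
if (model.supportedCapabilities().contains(Capability.RESPONSE_FORMAT_JSON_SCHEMA)) {
    // safe to request JSON-schema-constrained output from this model
}
```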
    • generate

      default String generate(String userMessage)
      Generates a response from the model based on a message from a user. This is a convenience method that receives the message from a user as a String and returns only the generated response.
      Parameters:
      userMessage - The message from the user.
      Returns:
      The response generated by the model.
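      A short sketch of this convenience method (`model` is assumed to be an already-constructed implementation):

```java
// Plain String in, plain String out — no message or response wrappers.
String answer = model.generate("Why is the sky blue?");
System.out.println(answer);
```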
    • generate

      default Response<AiMessage> generate(ChatMessage... messages)
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Parameters:
      messages - An array of messages.
      Returns:
      The response generated by the model.
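      A sketch of the System - User - AI - User ordering described above (`model` is assumed to be an already-constructed implementation; the message contents are illustrative):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.output.Response;

// Pass the conversation so far as varargs, newest message last.
Response<AiMessage> response = model.generate(
        SystemMessage.from("You are a terse assistant."),
        UserMessage.from("Hello"),
        AiMessage.from("Hi! How can I help?"),
        UserMessage.from("Name one planet."));

AiMessage reply = response.content();
System.out.println(reply.text());
```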
    • generate

      Response<AiMessage> generate(List<ChatMessage> messages)
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Parameters:
      messages - A list of messages.
      Returns:
      The response generated by the model.
    • generate

      default Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
      Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Parameters:
      messages - A list of messages.
      toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
      Returns:
      The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
      Throws:
      UnsupportedFeatureException - if tools are not supported by the underlying LLM API
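      A sketch of passing tool specifications and handling either outcome (`model` and `messages` are assumed to exist already; the tool name and description are hypothetical):

```java
import java.util.List;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.output.Response;

// Describe a tool the model MAY choose to call (hypothetical example tool).
ToolSpecification weatherTool = ToolSpecification.builder()
        .name("get_weather")
        .description("Returns the current weather for a city")
        .build();

Response<AiMessage> response = model.generate(messages, List.of(weatherTool));

// The AiMessage is either plain text or one or more tool-execution requests.
AiMessage ai = response.content();
if (ai.hasToolExecutionRequests()) {
    ai.toolExecutionRequests().forEach(req ->
            System.out.println(req.name() + " " + req.arguments()));
} else {
    System.out.println(ai.text());
}
```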
    • generate

      default Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
      Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
      Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Parameters:
      messages - A list of messages.
      toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
      Returns:
      The response generated by the model. AiMessage contains a request to execute the specified tool.
      Throws:
      UnsupportedFeatureException - if tools are not supported by the underlying LLM API
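      In contrast to the list variant above, the single-specification overload forces the call, so the response always carries a tool-execution request (a sketch; `model`, `messages`, and `weatherTool` are assumed to exist already):

```java
import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.output.Response;

// The model must call weatherTool (typically tool_choice=ANY under the hood),
// so no plain-text branch is needed here.
Response<AiMessage> response = model.generate(messages, weatherTool);
ToolExecutionRequest call = response.content().toolExecutionRequests().get(0);
System.out.println(call.name() + " " + call.arguments());
```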