Class OpenAiChatModel

java.lang.Object
dev.langchain4j.model.openai.OpenAiChatModel
All Implemented Interfaces:
ChatLanguageModel, TokenCountEstimator

public class OpenAiChatModel extends Object implements ChatLanguageModel, TokenCountEstimator
Represents an OpenAI language model with a chat completion interface, such as gpt-3.5-turbo and gpt-4. You can find a description of the parameters here.
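
For example, a minimal construction sketch, assuming the standard builder properties apiKey, modelName and temperature (the values shown are placeholders, not defaults):

  import dev.langchain4j.model.openai.OpenAiChatModel;

  OpenAiChatModel model = OpenAiChatModel.builder()
          .apiKey(System.getenv("OPENAI_API_KEY")) // supply your own key
          .modelName("gpt-4")                      // set the model name explicitly
          .temperature(0.3)
          .build();
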
  • Method Details

    • modelName

      public String modelName()
    • defaultRequestParameters

      public OpenAiChatRequestParameters defaultRequestParameters()
      Specified by:
      defaultRequestParameters in interface ChatLanguageModel
    • chat

      public ChatResponse chat(ChatRequest chatRequest)
      Description copied from interface: ChatLanguageModel
This is the main API to interact with the chat model. All the existing generate(...) methods (see below) will be deprecated and removed before the 1.0.0 release.

      A temporary default implementation of this method is necessary until all ChatLanguageModel implementations adopt it. It should be removed once that occurs.

      Specified by:
      chat in interface ChatLanguageModel
      Parameters:
      chatRequest - a ChatRequest, containing all the inputs to the LLM
      Returns:
      a ChatResponse, containing all the outputs from the LLM
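
      For illustration, a sketch of a chat(ChatRequest) call, assuming the ChatRequest builder and the ChatResponse.aiMessage() accessor from dev.langchain4j.model.chat.request and dev.langchain4j.model.chat.response, and reusing the model instance built above:

        ChatRequest request = ChatRequest.builder()
                .messages(List.of(
                        SystemMessage.from("You are a concise assistant."),
                        UserMessage.from("What is the capital of France?")))
                .build();

        ChatResponse response = model.chat(request);
        AiMessage aiMessage = response.aiMessage(); // the model's reply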
    • supportedCapabilities

      public Set<Capability> supportedCapabilities()
      Specified by:
      supportedCapabilities in interface ChatLanguageModel
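
      For example, the returned set can be checked before requesting provider-specific features; Capability.RESPONSE_FORMAT_JSON_SCHEMA below is an assumed constant from dev.langchain4j.model.chat.Capability, used only for illustration:

        if (model.supportedCapabilities().contains(Capability.RESPONSE_FORMAT_JSON_SCHEMA)) {
            // safe to ask for a JSON-schema constrained response format
        }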
    • generate

      public Response<AiMessage> generate(List<ChatMessage> messages)
      Description copied from interface: ChatLanguageModel
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface ChatLanguageModel
      Parameters:
      messages - A list of messages.
      Returns:
      The response generated by the model.
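
      A short usage sketch (SystemMessage, UserMessage and AiMessage are from dev.langchain4j.data.message; Response and TokenUsage from dev.langchain4j.model.output):

        List<ChatMessage> messages = List.of(
                SystemMessage.from("You are a helpful assistant."),
                UserMessage.from("Summarize the plot of Hamlet in one sentence."));

        Response<AiMessage> response = model.generate(messages);
        String answer = response.content().text(); // the generated text
        TokenUsage usage = response.tokenUsage();  // token accounting, if reported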
    • generate

      public Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
      Description copied from interface: ChatLanguageModel
      Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface ChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
      Returns:
      The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
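
      An illustrative sketch; the getCurrentTime tool is hypothetical, and ToolSpecification and ToolExecutionRequest come from dev.langchain4j.agent.tool:

        // A hypothetical parameterless tool, declared so the model may request it.
        ToolSpecification getCurrentTime = ToolSpecification.builder()
                .name("getCurrentTime")
                .description("Returns the current time in ISO-8601 format")
                .build();

        Response<AiMessage> response = model.generate(messages, List.of(getCurrentTime));

        if (response.content().hasToolExecutionRequests()) {
            // The model asked to execute a tool instead of answering directly.
            for (ToolExecutionRequest toolRequest : response.content().toolExecutionRequests()) {
                System.out.println(toolRequest.name() + " " + toolRequest.arguments());
            }
        } else {
            System.out.println(response.content().text());
        }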
    • generate

      public Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
      Description copied from interface: ChatLanguageModel
      Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
      Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface ChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
      Returns:
      The response generated by the model. AiMessage contains a request to execute the specified tool.
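
      A short sketch of the forced-tool variant, reusing the hypothetical getCurrentTime specification from above:

        // The model must request execution of getCurrentTime; the AiMessage carries
        // a ToolExecutionRequest rather than a plain text answer.
        Response<AiMessage> response = model.generate(messages, getCurrentTime);
        ToolExecutionRequest toolRequest = response.content().toolExecutionRequests().get(0);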
    • estimateTokenCount

      public int estimateTokenCount(List<ChatMessage> messages)
      Description copied from interface: TokenCountEstimator
      Estimates the count of tokens in the specified list of messages.
      Specified by:
      estimateTokenCount in interface TokenCountEstimator
      Parameters:
      messages - the list of messages
      Returns:
      the estimated count of tokens
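
      For example, the estimate can be used to keep a conversation within a context window before sending it (the 8,000-token threshold is an arbitrary illustrative limit):

        int estimatedTokens = model.estimateTokenCount(messages);
        if (estimatedTokens > 8_000) {
            // trim or summarize the history before calling the model
        }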
    • withApiKey

      @Deprecated(forRemoval=true) public static OpenAiChatModel withApiKey(String apiKey)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Please use builder() instead, and explicitly set the model name and, if necessary, other parameters. The default values for the model name and temperature will be removed in future releases!
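
      An illustrative before/after of the suggested migration (the model name is a placeholder):

        // Deprecated: relies on default values for the model name and temperature.
        OpenAiChatModel legacy = OpenAiChatModel.withApiKey(System.getenv("OPENAI_API_KEY"));

        // Preferred: configure the model name (and any other parameters) explicitly.
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4")
                .build();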
    • builder

      public static OpenAiChatModel.OpenAiChatModelBuilder builder()