Class GoogleAiGeminiChatModel

java.lang.Object
dev.langchain4j.model.googleai.GoogleAiGeminiChatModel
All Implemented Interfaces:
ChatLanguageModel, TokenCountEstimator

public class GoogleAiGeminiChatModel extends Object implements ChatLanguageModel, TokenCountEstimator
  • Field Details

    • geminiService

      protected final dev.langchain4j.model.googleai.GeminiService geminiService
    • apiKey

      protected final String apiKey
    • modelName

      protected final String modelName
    • temperature

      protected final Double temperature
    • topK

      protected final Integer topK
    • topP

      protected final Double topP
    • maxOutputTokens

      protected final Integer maxOutputTokens
    • stopSequences

      protected final List<String> stopSequences
    • responseFormat

      protected final ResponseFormat responseFormat
    • toolConfig

      protected final GeminiFunctionCallingConfig toolConfig
    • allowCodeExecution

      protected final boolean allowCodeExecution
    • includeCodeExecutionOutput

      protected final boolean includeCodeExecutionOutput
    • safetySettings

      protected final List<GeminiSafetySetting> safetySettings
    • listeners

      protected final List<ChatModelListener> listeners
    • maxRetries

      protected final Integer maxRetries
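The fields above are normally supplied through the class's builder rather than set directly. A minimal construction sketch, assuming the builder option names mirror the field names (they may differ between versions) and using a hypothetical environment variable for the key:

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class GeminiModelSetup {
    public static void main(String[] args) {
        // Option names assumed to mirror the fields listed above;
        // GEMINI_API_KEY is a hypothetical environment variable name.
        GoogleAiGeminiChatModel model = GoogleAiGeminiChatModel.builder()
                .apiKey(System.getenv("GEMINI_API_KEY"))
                .modelName("gemini-1.5-flash")
                .temperature(0.7)
                .topK(40)
                .topP(0.95)
                .maxOutputTokens(1024)
                .build();
        System.out.println("model configured: " + model);
    }
}
```

Unset options fall back to the provider's defaults; only `apiKey` and `modelName` are typically required.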
  • Constructor Details

  • Method Details

    • generate

      public Response<AiMessage> generate(List<ChatMessage> messages)
      Description copied from interface: ChatLanguageModel
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface ChatLanguageModel
      Parameters:
      messages - A list of messages.
      Returns:
      The response generated by the model.
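A sketch of a plain text exchange with this method, assuming a configured `model` instance; the message content is illustrative:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.output.Response;
import java.util.List;

// Messages follow the order described above: System (optional), then User
List<ChatMessage> messages = List.of(
        SystemMessage.from("You are a concise assistant."),
        UserMessage.from("Name the largest planet in the solar system."));

Response<AiMessage> response = model.generate(messages);
System.out.println(response.content().text()); // the model's textual reply
System.out.println(response.tokenUsage());     // token accounting, if reported
```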
    • generate

      public Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification)
      Description copied from interface: ChatLanguageModel
      Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
      Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface ChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
      Returns:
      The response generated by the model. AiMessage contains a request to execute the specified tool.
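A sketch of forcing a single tool call, assuming a configured `model` and prepared `messages`; the tool name and description are hypothetical:

```java
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.output.Response;

// Hypothetical tool specification for illustration
ToolSpecification weatherTool = ToolSpecification.builder()
        .name("getWeather")
        .description("Returns the current weather for a given city")
        .build();

Response<AiMessage> response = model.generate(messages, weatherTool);
// Because the model is forced to use the tool, the AiMessage carries
// an execution request rather than a plain text answer:
response.content().toolExecutionRequests().forEach(request ->
        System.out.println(request.name() + " -> " + request.arguments()));
```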
    • generate

      public Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
      Description copied from interface: ChatLanguageModel
      Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface ChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
      Returns:
      The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
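With a list of tools the model decides for itself, so callers should branch on whether the response is text or a tool request. A sketch, assuming configured `model`, `messages`, and hypothetical tool specifications:

```java
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.output.Response;
import java.util.List;

List<ToolSpecification> tools = List.of(weatherTool, clockTool); // hypothetical specs

Response<AiMessage> response = model.generate(messages, tools);
AiMessage aiMessage = response.content();
if (aiMessage.hasToolExecutionRequests()) {
    // The model chose to call one (or more) of the offered tools
    aiMessage.toolExecutionRequests().forEach(request ->
            System.out.println("execute: " + request.name() + " " + request.arguments()));
} else {
    System.out.println(aiMessage.text()); // plain textual answer
}
```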
    • chat

      public ChatResponse chat(ChatRequest chatRequest)
      Description copied from interface: ChatLanguageModel
      This is the main API to interact with the chat model. All the existing generate(...) methods (listed above) will be deprecated and removed before the 1.0.0 release.

      A temporary default implementation of this method is necessary until all ChatLanguageModel implementations adopt it. It should be removed once that occurs.

      Specified by:
      chat in interface ChatLanguageModel
      Parameters:
      chatRequest - a ChatRequest, containing all the inputs to the LLM
      Returns:
      a ChatResponse, containing all the outputs from the LLM
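A sketch of the request/response style this method introduces, assuming a configured `model`; accessor names such as `aiMessage()` reflect the ChatResponse API and may vary between versions:

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

// All inputs to the LLM travel in one ChatRequest object
ChatRequest request = ChatRequest.builder()
        .messages(UserMessage.from("Summarize the plot of Hamlet in one sentence."))
        .build();

ChatResponse chatResponse = model.chat(request);
System.out.println(chatResponse.aiMessage().text()); // the model's reply
```

Bundling inputs into a single request object is what lets this API replace the several `generate(...)` overloads.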
    • estimateTokenCount

      public int estimateTokenCount(List<ChatMessage> messages)
      Description copied from interface: TokenCountEstimator
      Estimates the count of tokens in the specified list of messages.
      Specified by:
      estimateTokenCount in interface TokenCountEstimator
      Parameters:
      messages - the list of messages
      Returns:
      the estimated count of tokens
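A sketch of estimating the token footprint of messages before sending them, assuming a configured `model`; useful for staying under `maxOutputTokens` budgets:

```java
import dev.langchain4j.data.message.UserMessage;
import java.util.List;

int tokenCount = model.estimateTokenCount(List.of(
        UserMessage.from("How many tokens is this message?")));
System.out.println("estimated tokens: " + tokenCount);
```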
    • supportedCapabilities

      public Set<Capability> supportedCapabilities()
      Specified by:
      supportedCapabilities in interface ChatLanguageModel
    • createGenerateContentRequest

      protected dev.langchain4j.model.googleai.GeminiGenerateContentRequest createGenerateContentRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, ResponseFormat responseFormat, ChatRequestParameters requestParameters)
    • createChatModelRequest

      protected ChatModelRequest createChatModelRequest(String modelName, List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, ChatRequestParameters requestParameters)
    • computeMimeType

      protected static String computeMimeType(ResponseFormat responseFormat)
    • notifyListenersOnRequest

      protected void notifyListenersOnRequest(ChatModelRequestContext context)
    • notifyListenersOnResponse

      protected void notifyListenersOnResponse(Response<AiMessage> response, ChatModelRequest request, ConcurrentHashMap<Object,Object> attributes)
    • notifyListenersOnError

      protected void notifyListenersOnError(Exception exception, ChatModelRequest request, ConcurrentHashMap<Object,Object> attributes)