Package dev.langchain4j.model.googleai
Class GoogleAiGeminiChatModel
java.lang.Object
dev.langchain4j.model.googleai.GoogleAiGeminiChatModel
All Implemented Interfaces:
ChatLanguageModel, TokenCountEstimator
public class GoogleAiGeminiChatModel
extends Object
implements ChatLanguageModel, TokenCountEstimator
-
Nested Class Summary
Modifier and Type: static class
-
Field Summary
Modifier and Type	Field
protected final boolean	allowCodeExecution
protected final String	apiKey
protected final dev.langchain4j.model.googleai.GeminiService	geminiService
protected final boolean	includeCodeExecutionOutput
protected final List<ChatModelListener>	listeners
protected final Integer	maxOutputTokens
protected final Integer	maxRetries
protected final String	modelName
protected final ResponseFormat	responseFormat
protected final List<GeminiSafetySetting>	safetySettings
protected final List<String>	stopSequences
protected final Double	temperature
protected final GeminiFunctionCallingConfig	toolConfig
protected final Integer	topK
protected final Double	topP
-
Constructor Summary
Constructor
GoogleAiGeminiChatModel(String apiKey, String modelName, Integer maxRetries, Double temperature, Integer topK, Double topP, Integer maxOutputTokens, Duration timeout, ResponseFormat responseFormat, List<String> stopSequences, GeminiFunctionCallingConfig toolConfig, Boolean allowCodeExecution, Boolean includeCodeExecutionOutput, Boolean logRequestsAndResponses, List<GeminiSafetySetting> safetySettings, List<ChatModelListener> listeners)
Method Summary
Modifier and Type	Method	Description
ChatResponse	chat(ChatRequest chatRequest)
protected static String	computeMimeType(ResponseFormat responseFormat)
protected ChatModelRequest	createChatModelRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
protected dev.langchain4j.model.googleai.GeminiGenerateContentRequest	createGenerateContentRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, ResponseFormat responseFormat)
int	estimateTokenCount(List<ChatMessage> messages)	Estimates the count of tokens in the specified list of messages.
Response<AiMessage>	generate(List<ChatMessage> messages)	Generates a response from the model based on a sequence of messages.
Response<AiMessage>	generate(List<ChatMessage> messages, ToolSpecification toolSpecification)	Generates a response from the model based on a list of messages and a single tool specification.
Response<AiMessage>	generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)	Generates a response from the model based on a list of messages and a list of tool specifications.
protected void	notifyListenersOnError(Exception exception, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)
protected void	notifyListenersOnRequest
protected void	notifyListenersOnResponse(Response<AiMessage> response, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface dev.langchain4j.model.chat.ChatLanguageModel
generate, generate
Methods inherited from interface dev.langchain4j.model.chat.TokenCountEstimator
estimateTokenCount, estimateTokenCount, estimateTokenCount, estimateTokenCount
-
Field Details
-
geminiService
protected final dev.langchain4j.model.googleai.GeminiService geminiService
-
apiKey
protected final String apiKey
-
modelName
protected final String modelName
-
temperature
protected final Double temperature
-
topK
protected final Integer topK
-
topP
protected final Double topP
-
maxOutputTokens
protected final Integer maxOutputTokens
-
stopSequences
protected final List<String> stopSequences
-
responseFormat
protected final ResponseFormat responseFormat
-
toolConfig
protected final GeminiFunctionCallingConfig toolConfig
-
allowCodeExecution
protected final boolean allowCodeExecution
-
includeCodeExecutionOutput
protected final boolean includeCodeExecutionOutput
-
safetySettings
protected final List<GeminiSafetySetting> safetySettings
-
listeners
protected final List<ChatModelListener> listeners
-
maxRetries
protected final Integer maxRetries
-
Constructor Details
-
GoogleAiGeminiChatModel
public GoogleAiGeminiChatModel(String apiKey, String modelName, Integer maxRetries, Double temperature, Integer topK, Double topP, Integer maxOutputTokens, Duration timeout, ResponseFormat responseFormat, List<String> stopSequences, GeminiFunctionCallingConfig toolConfig, Boolean allowCodeExecution, Boolean includeCodeExecutionOutput, Boolean logRequestsAndResponses, List<GeminiSafetySetting> safetySettings, List<ChatModelListener> listeners)
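Since every setting maps one-to-one onto a constructor argument, a direct call is verbose but explicit. A minimal construction sketch (the model name, environment-variable name, and parameter values below are illustrative, not prescribed by this class; in langchain4j, `null` arguments typically fall back to provider defaults):

```java
import java.time.Duration;
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class Example {
    public static void main(String[] args) {
        GoogleAiGeminiChatModel model = new GoogleAiGeminiChatModel(
                System.getenv("GOOGLE_AI_API_KEY"), // apiKey
                "gemini-1.5-flash",                 // modelName
                3,                                  // maxRetries
                0.7,                                // temperature
                null,                               // topK
                null,                               // topP
                1024,                               // maxOutputTokens
                Duration.ofSeconds(60),             // timeout
                null,                               // responseFormat
                null,                               // stopSequences
                null,                               // toolConfig
                false,                              // allowCodeExecution
                false,                              // includeCodeExecutionOutput
                true,                               // logRequestsAndResponses
                null,                               // safetySettings
                null                                // listeners
        );
    }
}
```

The static nested class listed in the Nested Class Summary provides a builder, which is usually more readable than this positional call.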
-
-
Method Details
-
generate
public Response<AiMessage> generate(List<ChatMessage> messages)
Description copied from interface: ChatLanguageModel
Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Specified by:
generate in interface ChatLanguageModel
Parameters:
messages - A list of messages.
Returns:
- The response generated by the model.
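A short usage sketch for this overload (assumes a `GoogleAiGeminiChatModel` instance named `model` has already been constructed; the message contents are illustrative):

```java
import java.util.List;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.output.TokenUsage;

// System message first (optional), then the user turn.
Response<AiMessage> response = model.generate(List.of(
        SystemMessage.from("You are a concise assistant."),
        UserMessage.from("What is the capital of France?")));

String answer = response.content().text(); // the model's textual reply
TokenUsage usage = response.tokenUsage();  // token accounting, when the provider reports it
```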
-
generate
public Response<AiMessage> generate(List<ChatMessage> messages, ToolSpecification toolSpecification) Description copied from interface:ChatLanguageModel
Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...- Specified by:
generate
in interfaceChatLanguageModel
- Parameters:
messages
- A list of messages.toolSpecification
- The specification of a tool that must be executed. The model is forced to execute this tool.- Returns:
- The response generated by the model.
AiMessage
contains a request to execute the specified tool.
-
generate
public Response<AiMessage> generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
Description copied from interface: ChatLanguageModel
Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
Specified by:
generate in interface ChatLanguageModel
Parameters:
messages - A list of messages.
toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
Returns:
- The response generated by the model. AiMessage can contain either a textual response or a request to execute one of the tools.
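A sketch of tool use with this overload (`model` is an existing instance; the tool name and description are hypothetical):

```java
import java.util.List;
import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.output.Response;

// A hypothetical tool the model is allowed, but not forced, to call.
ToolSpecification weatherTool = ToolSpecification.builder()
        .name("getWeather")
        .description("Returns the current weather for a given city")
        .build();

Response<AiMessage> response = model.generate(
        List.of(UserMessage.from("What is the weather in Paris?")),
        List.of(weatherTool));

if (response.content().hasToolExecutionRequests()) {
    // The model asked to execute a tool rather than answering in text.
    response.content().toolExecutionRequests().forEach(request ->
            System.out.println(request.name() + "(" + request.arguments() + ")"));
} else {
    System.out.println(response.content().text());
}
```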
-
chat
Specified by:
chat in interface ChatLanguageModel
-
estimateTokenCount
Description copied from interface: TokenCountEstimator
Estimates the count of tokens in the specified list of messages.
Specified by:
estimateTokenCount in interface TokenCountEstimator
Parameters:
messages - the list of messages
Returns:
- the estimated count of tokens
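A usage sketch (`model` is an existing instance; since the estimate is computed by the Gemini service rather than locally, this call presumably requires a valid API key and network access):

```java
import java.util.List;
import dev.langchain4j.data.message.UserMessage;

int tokens = model.estimateTokenCount(List.of(UserMessage.from("Hello, Gemini!")));
System.out.println("Estimated tokens: " + tokens);
```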
-
supportedCapabilities
Specified by:
supportedCapabilities in interface ChatLanguageModel
-
createGenerateContentRequest
protected dev.langchain4j.model.googleai.GeminiGenerateContentRequest createGenerateContentRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, ResponseFormat responseFormat) -
createChatModelRequest
protected ChatModelRequest createChatModelRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications) -
computeMimeType
-
notifyListenersOnRequest
-
notifyListenersOnResponse
protected void notifyListenersOnResponse(Response<AiMessage> response, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes) -
notifyListenersOnError
protected void notifyListenersOnError(Exception exception, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)
-