Package dev.langchain4j.model.googleai

Class GoogleAiGeminiStreamingChatModel

java.lang.Object
    dev.langchain4j.model.googleai.GoogleAiGeminiStreamingChatModel

All Implemented Interfaces:
    StreamingChatLanguageModel
Nested Class Summary

Modifier and Type
    static class

Field Summary

Modifier and Type                                               Field
protected final boolean                                         allowCodeExecution
protected final String                                          apiKey
protected final dev.langchain4j.model.googleai.GeminiService    geminiService
protected final boolean                                         includeCodeExecutionOutput
protected final List<ChatModelListener>                         listeners
protected final Integer                                         maxOutputTokens
protected final Integer                                         maxRetries
protected final String                                          modelName
protected final ResponseFormat                                  responseFormat
protected final List<GeminiSafetySetting>                       safetySettings
protected final List<String>                                    stopSequences
protected final Double                                          temperature
protected final GeminiFunctionCallingConfig                     toolConfig
protected final Integer                                         topK
protected final Double                                          topP
Constructor Summary

Constructor
    GoogleAiGeminiStreamingChatModel(String apiKey, String modelName, Double temperature, Integer topK, Double topP, Integer maxOutputTokens, Duration timeout, ResponseFormat responseFormat, List<String> stopSequences, GeminiFunctionCallingConfig toolConfig, Boolean allowCodeExecution, Boolean includeCodeExecutionOutput, Boolean logRequestsAndResponses, List<GeminiSafetySetting> safetySettings, List<ChatModelListener> listeners, Integer maxRetries)
Method Summary

Modifier and Type        Method and Description
protected static String
    computeMimeType(ResponseFormat responseFormat)
protected ChatModelRequest
    createChatModelRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)
protected dev.langchain4j.model.googleai.GeminiGenerateContentRequest
    createGenerateContentRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, ResponseFormat responseFormat)
void
    generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
    Generates a response from the model based on a sequence of messages.
void
    generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
    Generates a response from the model based on a list of messages and a single tool specification.
void
    generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
    Generates a response from the model based on a list of messages and a list of tool specifications.
protected void
    notifyListenersOnError(Exception exception, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)
protected void
    notifyListenersOnRequest
protected void
    notifyListenersOnResponse(Response<AiMessage> response, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)

Methods inherited from class java.lang.Object
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface dev.langchain4j.model.chat.StreamingChatLanguageModel
    generate, generate
Field Details

geminiService
    protected final dev.langchain4j.model.googleai.GeminiService geminiService

apiKey
    protected final String apiKey

modelName
    protected final String modelName

temperature
    protected final Double temperature

topK
    protected final Integer topK

topP
    protected final Double topP

maxOutputTokens
    protected final Integer maxOutputTokens

stopSequences
    protected final List<String> stopSequences

responseFormat
    protected final ResponseFormat responseFormat

toolConfig
    protected final GeminiFunctionCallingConfig toolConfig

allowCodeExecution
    protected final boolean allowCodeExecution

includeCodeExecutionOutput
    protected final boolean includeCodeExecutionOutput

safetySettings
    protected final List<GeminiSafetySetting> safetySettings

listeners
    protected final List<ChatModelListener> listeners

maxRetries
    protected final Integer maxRetries
Constructor Details

GoogleAiGeminiStreamingChatModel
    public GoogleAiGeminiStreamingChatModel(String apiKey, String modelName, Double temperature, Integer topK, Double topP, Integer maxOutputTokens, Duration timeout, ResponseFormat responseFormat, List<String> stopSequences, GeminiFunctionCallingConfig toolConfig, Boolean allowCodeExecution, Boolean includeCodeExecutionOutput, Boolean logRequestsAndResponses, List<GeminiSafetySetting> safetySettings, List<ChatModelListener> listeners, Integer maxRetries)
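Every configuration knob is passed positionally to this constructor. The wiring fragment below is a minimal sketch, assuming the langchain4j Google AI artifacts are on the classpath; the model name, temperature, and other values are illustrative, and the nullable parameters are left as null so the implementation falls back to its defaults (whatever those defaults are is not specified here):

```java
import java.time.Duration;

import dev.langchain4j.model.googleai.GoogleAiGeminiStreamingChatModel;

// Illustrative wiring only: argument values are assumptions, not
// recommendations; null means "use the model's default for this setting".
GoogleAiGeminiStreamingChatModel model = new GoogleAiGeminiStreamingChatModel(
        System.getenv("GOOGLE_AI_API_KEY"), // apiKey
        "gemini-1.5-flash",                 // modelName (illustrative)
        0.7,                                // temperature
        null,                               // topK
        null,                               // topP
        1024,                               // maxOutputTokens
        Duration.ofSeconds(60),             // timeout
        null,                               // responseFormat
        null,                               // stopSequences
        null,                               // toolConfig
        false,                              // allowCodeExecution
        false,                              // includeCodeExecutionOutput
        true,                               // logRequestsAndResponses
        null,                               // safetySettings
        null,                               // listeners
        3);                                 // maxRetries
```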
Method Details

generate
    public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)

    Description copied from interface: StreamingChatLanguageModel
    Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...

    Specified by:
        generate in interface StreamingChatLanguageModel
    Parameters:
        messages - A list of messages.
        handler - The handler for streaming the response.
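The handler receives partial tokens as they stream in and the assembled response on completion. The sketch below mirrors that callback shape with a local stand-in interface (Handler here is not the real langchain4j StreamingResponseHandler, and fakeGenerate stands in for the model call) to show how a caller typically accumulates streamed text:

```java
import java.util.List;

public class StreamingSketch {

    // Local stand-in mirroring the shape of a streaming handler:
    // partial tokens arrive via onNext, the full text via onComplete,
    // failures via onError.
    interface Handler {
        void onNext(String token);
        void onComplete(String fullText);
        void onError(Throwable error);
    }

    // Simulates a streaming generate(...) call: emits tokens one by
    // one, then reports the assembled response.
    static void fakeGenerate(List<String> tokens, Handler handler) {
        StringBuilder full = new StringBuilder();
        try {
            for (String token : tokens) {
                handler.onNext(token);
                full.append(token);
            }
            handler.onComplete(full.toString());
        } catch (RuntimeException e) {
            handler.onError(e);
        }
    }

    public static void main(String[] args) {
        StringBuilder streamed = new StringBuilder();
        String[] finalText = new String[1];
        fakeGenerate(List.of("Hello", ", ", "world"), new Handler() {
            @Override public void onNext(String token) { streamed.append(token); }
            @Override public void onComplete(String fullText) { finalText[0] = fullText; }
            @Override public void onError(Throwable error) { error.printStackTrace(); }
        });
        System.out.println(streamed);      // Hello, world
        System.out.println(finalText[0]);  // Hello, world
    }
}
```

With the real model, the anonymous handler would be passed directly to model.generate(messages, handler) instead of fakeGenerate.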
generate
    public void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)

    Description copied from interface: StreamingChatLanguageModel
    Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.

    Specified by:
        generate in interface StreamingChatLanguageModel
    Parameters:
        messages - A list of messages.
        toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
        handler - The handler for streaming the response.
generate
    public void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)

    Description copied from interface: StreamingChatLanguageModel
    Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...

    Specified by:
        generate in interface StreamingChatLanguageModel
    Parameters:
        messages - A list of messages.
        toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
        handler - The handler for streaming the response. The AiMessage can contain either a textual response or a request to execute one of the tools.
createGenerateContentRequest
    protected dev.langchain4j.model.googleai.GeminiGenerateContentRequest createGenerateContentRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, ResponseFormat responseFormat)

createChatModelRequest
    protected ChatModelRequest createChatModelRequest(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications)

computeMimeType
    protected static String computeMimeType(ResponseFormat responseFormat)

notifyListenersOnRequest

notifyListenersOnResponse
    protected void notifyListenersOnResponse(Response<AiMessage> response, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)

notifyListenersOnError
    protected void notifyListenersOnError(Exception exception, ChatModelRequest request, ConcurrentHashMap<Object, Object> attributes)