Package dev.langchain4j.model.vertexai
Class VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder
java.lang.Object
dev.langchain4j.model.vertexai.VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder
- Enclosing class:
VertexAiGeminiStreamingChatModel
public static class VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder
extends Object
Constructor Summary
Constructors
VertexAiGeminiStreamingChatModelBuilder()
Method Summary
All methods return VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder unless noted otherwise.

allowedFunctionNames(List<String> allowedFunctionNames)
build()
customHeaders(Map<String, String> customHeaders) - Sets custom headers to be included in the LLM requests.
listeners(List<ChatModelListener> listeners)
location(String location)
logRequests(Boolean logRequests)
logResponses(Boolean logResponses)
maxOutputTokens(Integer maxOutputTokens)
modelName(String modelName)
project(String project)
responseMimeType(String responseMimeType)
responseSchema(com.google.cloud.vertexai.api.Schema responseSchema)
safetySettings(Map<HarmCategory, SafetyThreshold> safetySettings)
temperature(Float temperature)
toolCallingMode(ToolCallingMode toolCallingMode)
topK(Integer topK)
topP(Float topP)
toString()
useGoogleSearch(Boolean useGoogleSearch)
vertexSearchDatastore(String vertexSearchDatastore)
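As a sketch of how these builder methods compose, the following builds a streaming chat model. The project ID, location, and model name are placeholder values, and running it requires the langchain4j-vertex-ai-gemini dependency plus valid Google Cloud credentials:

```java
import dev.langchain4j.model.vertexai.VertexAiGeminiStreamingChatModel;

public class BuilderExample {
    public static void main(String[] args) {
        // Minimal sketch: all values below are placeholders, not recommendations.
        VertexAiGeminiStreamingChatModel model = VertexAiGeminiStreamingChatModel.builder()
                .project("my-gcp-project")       // placeholder GCP project ID
                .location("us-central1")         // placeholder Vertex AI region
                .modelName("gemini-1.5-flash")   // placeholder model name
                .temperature(0.7f)
                .maxOutputTokens(1024)
                .logRequests(true)
                .build();
    }
}
```

Each setter returns the builder, so calls chain until build() produces the immutable model instance.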
Constructor Details

Method Details
project
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder project(String project)

location
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder location(String location)

modelName
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder modelName(String modelName)

temperature
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder temperature(Float temperature)

maxOutputTokens
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder maxOutputTokens(Integer maxOutputTokens)

topK
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder topK(Integer topK)

topP
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder topP(Float topP)
responseMimeType
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder responseMimeType(String responseMimeType)

responseSchema
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder responseSchema(com.google.cloud.vertexai.api.Schema responseSchema)

safetySettings
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder safetySettings(Map<HarmCategory, SafetyThreshold> safetySettings)

useGoogleSearch
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder useGoogleSearch(Boolean useGoogleSearch)

vertexSearchDatastore
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder vertexSearchDatastore(String vertexSearchDatastore)

toolCallingMode
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder toolCallingMode(ToolCallingMode toolCallingMode)

allowedFunctionNames
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder allowedFunctionNames(List<String> allowedFunctionNames)

logRequests
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder logRequests(Boolean logRequests)

logResponses
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder logResponses(Boolean logResponses)

listeners
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder listeners(List<ChatModelListener> listeners)
customHeaders
public VertexAiGeminiStreamingChatModel.VertexAiGeminiStreamingChatModelBuilder customHeaders(Map<String, String> customHeaders)
Sets custom headers to be included in the LLM requests. The main use case is to support provisioned throughput quota. For example, "X-Vertex-AI-LLM-Request-Type: dedicated" consumes the provisioned throughput quota first and returns HTTP 429 once that quota is exhausted, while "X-Vertex-AI-LLM-Request-Type: shared" bypasses the provisioned throughput quota completely. For more information, refer to the official documentation.
Parameters:
customHeaders - a map of custom header keys and their corresponding values
Returns:
the updated instance of VertexAiGeminiStreamingChatModelBuilder
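A sketch of routing requests through provisioned throughput via customHeaders. The project, location, and model name are placeholders, and the example requires the langchain4j-vertex-ai-gemini dependency and GCP credentials to actually run:

```java
import java.util.Map;
import dev.langchain4j.model.vertexai.VertexAiGeminiStreamingChatModel;

public class CustomHeadersExample {
    public static void main(String[] args) {
        // "dedicated" consumes provisioned throughput quota first;
        // the request fails with HTTP 429 once that quota is exhausted.
        // All identifiers below are placeholder values.
        VertexAiGeminiStreamingChatModel model = VertexAiGeminiStreamingChatModel.builder()
                .project("my-gcp-project")      // placeholder GCP project ID
                .location("us-central1")        // placeholder region
                .modelName("gemini-1.5-pro")    // placeholder model name
                .customHeaders(Map.of("X-Vertex-AI-LLM-Request-Type", "dedicated"))
                .build();
    }
}
```

Using "shared" instead of "dedicated" would bypass the provisioned throughput quota entirely, as described above.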
build
public VertexAiGeminiStreamingChatModel build()

toString
public String toString()