Class MistralAiChatModel.MistralAiChatModelBuilder

java.lang.Object
    dev.langchain4j.model.mistralai.MistralAiChatModel.MistralAiChatModelBuilder

Enclosing class:
MistralAiChatModel

Constructor Summary

Method Summary
build()
defaultRequestParameters(ChatRequestParameters parameters)
frequencyPenalty(Double frequencyPenalty)
httpClientBuilder(HttpClientBuilder httpClientBuilder)
listeners(List<ChatModelListener> listeners)
logger(org.slf4j.Logger logger)
logRequests(Boolean logRequests)
logResponses(Boolean logResponses)
maxRetries(Integer maxRetries)
modelName(MistralAiChatModelName modelName)
presencePenalty(Double presencePenalty)
randomSeed(Integer randomSeed)
responseFormat(ResponseFormat responseFormat)
returnThinking(Boolean returnThinking)
    Controls whether to return thinking/reasoning text (if available) inside AiMessage.thinking().
safePrompt(Boolean safePrompt)
sendThinking(Boolean sendThinking)
    Controls whether to send thinking/reasoning text to the LLM in follow-up requests.
stopSequences(List<String> stopSequences)
supportedCapabilities(Capability... supportedCapabilities)
supportedCapabilities(Set<Capability> supportedCapabilities)
temperature(Double temperature)
Constructor Details

MistralAiChatModelBuilder

public MistralAiChatModelBuilder()

Method Details
httpClientBuilder

public MistralAiChatModel.MistralAiChatModelBuilder httpClientBuilder(HttpClientBuilder httpClientBuilder)

modelName

modelName

responseFormat
baseUrl

Parameters:
baseUrl - the base URL of the Mistral AI API. It uses the default value if not specified.
Returns:
this

apiKey

Parameters:
apiKey - the API key for authentication.
Returns:
this

temperature

Parameters:
temperature - the temperature parameter for generating chat responses.
Returns:
this

topP

Parameters:
topP - the top-p parameter for generating chat responses.
Returns:
this

maxTokens

Parameters:
maxTokens - the maximum number of new tokens to generate in a chat response.
Returns:
this

safePrompt

Parameters:
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses.
Returns:
this

randomSeed

Parameters:
randomSeed - the random seed for generating chat responses.
Returns:
this
returnThinking

Controls whether to return thinking/reasoning text (if available) inside AiMessage.thinking(). Please note that this does not enable thinking/reasoning for the LLM; it only controls whether to parse the thinking content from the API response and return it inside the AiMessage. Disabled by default. If enabled, the thinking text will be stored within the AiMessage and may be persisted.
sendThinking

Controls whether to send thinking/reasoning text to the LLM in follow-up requests. Disabled by default. If enabled, the contents of AiMessage.thinking() will be sent in the API request.
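The two flags above work together: returnThinking surfaces the reasoning text on the AiMessage, and sendThinking feeds it back on subsequent turns. A minimal sketch, assuming a reasoning-capable model id and an environment variable name for the API key (both assumptions, not stated on this page):

```java
import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class ThinkingExample {
    public static void main(String[] args) {
        MistralAiChatModel model = MistralAiChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // assumed env var name
                .modelName("magistral-medium-latest")        // assumed reasoning-capable model id
                .returnThinking(true) // parse thinking text into AiMessage.thinking()
                .sendThinking(true)   // include AiMessage.thinking() in follow-up requests
                .build();
        // model.chat(...) would now expose thinking text on the returned AiMessage
    }
}
```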
timeout

Parameters:
timeout - the timeout duration for API requests. The default value is 60 seconds.
Returns:
this
logRequests

Parameters:
logRequests - a flag indicating whether to log API requests.
Returns:
this

logResponses

Parameters:
logResponses - a flag indicating whether to log API responses.
Returns:
this
logger

Parameters:
logger - an alternate Logger to be used instead of the default one provided by LangChain4j for logging requests and responses.
Returns:
this
maxRetries

Parameters:
maxRetries - the maximum number of retries for failed API requests.
Returns:
this
supportedCapabilities

public MistralAiChatModel.MistralAiChatModelBuilder supportedCapabilities(Capability... supportedCapabilities)

supportedCapabilities

public MistralAiChatModel.MistralAiChatModelBuilder supportedCapabilities(Set<Capability> supportedCapabilities)
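The two overloads accept the same capabilities either as varargs or as a Set. A sketch of both forms, assuming the Capability enum lives at dev.langchain4j.model.chat.Capability and includes a RESPONSE_FORMAT_JSON_SCHEMA constant (assumptions not confirmed by this page):

```java
import java.util.Set;
import dev.langchain4j.model.chat.Capability;
import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class CapabilitiesExample {
    public static void main(String[] args) {
        // Varargs overload
        MistralAiChatModel a = MistralAiChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // assumed env var name
                .supportedCapabilities(Capability.RESPONSE_FORMAT_JSON_SCHEMA)
                .build();

        // Set overload, equivalent to the call above
        MistralAiChatModel b = MistralAiChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY"))
                .supportedCapabilities(Set.of(Capability.RESPONSE_FORMAT_JSON_SCHEMA))
                .build();
    }
}
```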
stopSequences

presencePenalty

frequencyPenalty

listeners

defaultRequestParameters

public MistralAiChatModel.MistralAiChatModelBuilder defaultRequestParameters(ChatRequestParameters parameters)

build
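Putting the builder together: a minimal end-to-end sketch using only setters documented on this page. The model id and the environment variable holding the API key are assumptions; timeout takes a java.time.Duration here, matching the documented 60-second default:

```java
import java.time.Duration;
import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class BuilderExample {
    public static void main(String[] args) {
        MistralAiChatModel model = MistralAiChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // assumed env var name
                .modelName("mistral-small-latest")           // assumed model id
                .temperature(0.3)                            // lower values give more deterministic output
                .maxTokens(512)                              // cap on newly generated tokens
                .randomSeed(42)                              // reproducible sampling
                .timeout(Duration.ofSeconds(60))             // matches the documented default
                .logRequests(true)
                .logResponses(true)
                .maxRetries(2)
                .build();                                    // returns the configured MistralAiChatModel
    }
}
```

Unset properties fall back to their defaults, so a builder chain only needs the parameters you want to override.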