Package dev.langchain4j.model.mistralai
Class MistralAiChatModel
java.lang.Object
dev.langchain4j.model.mistralai.MistralAiChatModel
- All Implemented Interfaces:
ChatModel
Represents a Mistral AI Chat Model with a chat completion interface, such as open-mistral-7b and open-mixtral-8x7b.
This model allows generating chat completions in a synchronous way based on a list of chat messages.
You can find a description of the parameters here.
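As a sketch of typical usage (the API key lookup and model name below are placeholders, and this assumes a recent langchain4j version where ChatModel exposes a chat(String) convenience method):

```java
import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class MistralQuickstart {
    public static void main(String[] args) {
        // Build the model via its builder; unset parameters fall back to defaults.
        MistralAiChatModel model = MistralAiChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // placeholder: your Mistral AI key
                .modelName("open-mistral-7b")
                .build();

        // Synchronous chat completion based on the given prompt.
        String answer = model.chat("What is the capital of France?");
        System.out.println(answer);
    }
}
```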
Nested Class Summary

Constructor Summary

Constructors
MistralAiChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, ResponseFormat responseFormat, Duration timeout, Boolean logRequests, Boolean logResponses, Integer maxRetries, Set<Capability> supportedCapabilities)
Constructs a MistralAiChatModel with the specified parameters.

Method Summary
builder()
chat(ChatRequest chatRequest) - This is the main API to interact with the chat model.
provider()
supportedCapabilities()
Constructor Details

MistralAiChatModel

public MistralAiChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, ResponseFormat responseFormat, Duration timeout, Boolean logRequests, Boolean logResponses, Integer maxRetries, Set<Capability> supportedCapabilities)

Constructs a MistralAiChatModel with the specified parameters.

Parameters:
baseUrl - the base URL of the Mistral AI API; uses the default value if not specified
apiKey - the API key for authentication
modelName - the name of the Mistral AI model to use
temperature - the temperature parameter for generating chat responses
topP - the top-p parameter for generating chat responses
maxTokens - the maximum number of new tokens to generate in a chat response
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
randomSeed - the random seed for generating chat responses
responseFormat - the response format for generating chat responses; currently supported values are "text" and "json_object"
timeout - the timeout duration for API requests; the default value is 60 seconds
logRequests - a flag indicating whether to log API requests
logResponses - a flag indicating whether to log API responses
maxRetries - the maximum number of retries for API requests; uses the default value of 3 if not specified
supportedCapabilities - the set of capabilities supported by the model
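The constructor parameters above map onto builder methods. Below is a hedged sketch of configuring the optional parameters; the builder method names are assumed to match the parameter names (the usual langchain4j convention), and the base URL and key are placeholders:

```java
import java.time.Duration;
import dev.langchain4j.model.chat.request.ResponseFormat;
import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class MistralConfigExample {
    public static void main(String[] args) {
        MistralAiChatModel model = MistralAiChatModel.builder()
                .baseUrl("https://api.mistral.ai/v1")        // placeholder; omit to use the default
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // placeholder: your Mistral AI key
                .modelName("open-mixtral-8x7b")
                .temperature(0.3)
                .topP(0.95)
                .maxTokens(512)                    // cap on newly generated tokens
                .safePrompt(false)
                .randomSeed(42)                    // for reproducible sampling
                .responseFormat(ResponseFormat.JSON) // "json_object"; text is the default
                .timeout(Duration.ofSeconds(60))   // matches the documented default
                .logRequests(true)
                .logResponses(true)
                .maxRetries(3)                     // matches the documented default
                .build();
    }
}
```

Any parameter left unset falls back to its documented default, so most applications only need apiKey and modelName.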
Method Details

supportedCapabilities

Specified by:
supportedCapabilities in interface ChatModel

chat

Description copied from interface: ChatModel
This is the main API to interact with the chat model.

Specified by:
chat in interface ChatModel

Parameters:
chatRequest - a ChatRequest, containing all the inputs to the LLM

Returns:
a ChatResponse, containing all the outputs from the LLM
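A sketch of calling chat(ChatRequest) directly; the class locations are assumed from recent langchain4j releases, and the model configuration is a placeholder:

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class MistralChatRequestExample {
    public static void main(String[] args) {
        MistralAiChatModel model = MistralAiChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // placeholder: your Mistral AI key
                .modelName("open-mistral-7b")
                .build();

        // ChatRequest bundles the input messages (all inputs to the LLM).
        ChatRequest request = ChatRequest.builder()
                .messages(UserMessage.from("Summarize the plot of Hamlet in one sentence."))
                .build();

        // ChatResponse carries the AI message plus metadata such as token usage.
        ChatResponse response = model.chat(request);
        System.out.println(response.aiMessage().text());
    }
}
```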
provider

builder