Package dev.langchain4j.model.mistralai
Class MistralAiChatModel
java.lang.Object
dev.langchain4j.model.mistralai.MistralAiChatModel
- All Implemented Interfaces:
ChatLanguageModel
Represents a Mistral AI Chat Model with a chat completion interface, such as open-mistral-7b and open-mixtral-8x7b.
This model allows generating chat completions in a synchronous way based on a list of chat messages.
You can find a description of the parameters here.
Nested Class Summary
Constructor Summary

Constructors:
MistralAiChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Duration timeout, Boolean logRequests, Boolean logResponses, Integer maxRetries)
Constructs a MistralAiChatModel with the specified parameters.
Method Summary

builder()
chat(ChatRequest chatRequest)
This is the main API to interact with the chat model.
provider()

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface dev.langchain4j.model.chat.ChatLanguageModel:
chat, chat, chat, defaultRequestParameters, doChat, listeners, supportedCapabilities
Constructor Details

MistralAiChatModel

public MistralAiChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Duration timeout, Boolean logRequests, Boolean logResponses, Integer maxRetries)

Constructs a MistralAiChatModel with the specified parameters.

Parameters:
baseUrl - the base URL of the Mistral AI API. It uses the default value if not specified
apiKey - the API key for authentication
modelName - the name of the Mistral AI model to use
temperature - the temperature parameter for generating chat responses
topP - the top-p parameter for generating chat responses
maxTokens - the maximum number of new tokens to generate in a chat response
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
randomSeed - the random seed for generating chat responses
responseFormat - the response format for generating chat responses. Currently supported values are "text" and "json_object"
timeout - the timeout duration for API requests. The default value is 60 seconds
logRequests - a flag indicating whether to log API requests
logResponses - a flag indicating whether to log API responses
maxRetries - the maximum number of retries for API requests. It uses the default value of 3 if not specified
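As an illustration, the constructor above can be invoked directly, passing null for any parameter that should fall back to its documented default. This is a minimal sketch; the argument values and the MISTRAL_AI_API_KEY environment variable name are assumptions, not part of this Javadoc:

```java
import java.time.Duration;

import dev.langchain4j.model.mistralai.MistralAiChatModel;

public class MistralAiChatModelExample {

    public static void main(String[] args) {
        // Illustrative values only; null arguments fall back to the
        // defaults described in the parameter list above.
        MistralAiChatModel model = new MistralAiChatModel(
                null,                                 // baseUrl: default Mistral AI endpoint
                System.getenv("MISTRAL_AI_API_KEY"),  // apiKey (assumed env var name)
                "open-mistral-7b",                    // modelName
                0.7,                                  // temperature
                1.0,                                  // topP
                1024,                                 // maxTokens
                false,                                // safePrompt
                null,                                 // randomSeed
                "text",                               // responseFormat: "text" or "json_object"
                Duration.ofSeconds(60),               // timeout (documented default: 60 seconds)
                false,                                // logRequests
                false,                                // logResponses
                3                                     // maxRetries (documented default: 3)
        );
    }
}
```

In practice the nested builder (see builder() below) is usually more convenient than the positional constructor, since it lets you set only the parameters you care about.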
Method Details

chat

Description copied from interface: ChatLanguageModel
This is the main API to interact with the chat model. A temporary default implementation of this method is necessary until all ChatLanguageModel implementations adopt it. It should be removed once that occurs.

Specified by:
chat in interface ChatLanguageModel

Parameters:
chatRequest - a ChatRequest, containing all the inputs to the LLM

Returns:
a ChatResponse, containing all the outputs from the LLM
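A minimal sketch of a synchronous chat call, assuming langchain4j's ChatRequest builder, UserMessage factory, and ChatResponse.aiMessage() accessor (these types are from the library's chat API and are not spelled out in this page):

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

public class ChatExample {

    public static void main(String[] args) {
        // Assume a configured MistralAiChatModel (see the constructor above).
        ChatLanguageModel model = null; // placeholder for a real instance

        // A ChatRequest carries all the inputs to the LLM.
        ChatRequest request = ChatRequest.builder()
                .messages(UserMessage.from("What is the capital of France?"))
                .build();

        // Synchronous call; the ChatResponse carries all the outputs from the LLM.
        ChatResponse response = model.chat(request);
        System.out.println(response.aiMessage().text());
    }
}
```

The call blocks until the Mistral AI API responds or the configured timeout elapses; failed requests are retried up to maxRetries times.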
provider

Specified by:
provider in interface ChatLanguageModel

builder