Package dev.langchain4j.model.mistralai
Class MistralAiStreamingChatModel
java.lang.Object
dev.langchain4j.model.mistralai.MistralAiStreamingChatModel
- All Implemented Interfaces:
StreamingChatLanguageModel
Represents a Mistral AI Chat Model with a chat completion interface, such as mistral-tiny and mistral-small.
The model's response is streamed token by token and should be handled with StreamingResponseHandler.
You can find a description of the parameters here.
Nested Class Summary
Nested Classes
Modifier and Type    Class
static class         MistralAiStreamingChatModel.MistralAiStreamingChatModelBuilder
-
Constructor Summary
Constructors
Constructor    Description
MistralAiStreamingChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Boolean logRequests, Boolean logResponses, Duration timeout)    Constructs a MistralAiStreamingChatModel with the specified parameters.
Method Summary
Modifier and Type    Method    Description
static MistralAiStreamingChatModel.MistralAiStreamingChatModelBuilder    builder()
void    chat(ChatRequest chatRequest, StreamingChatResponseHandler handler)    This is the main API to interact with the chat model.
ModelProvider    provider()
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface dev.langchain4j.model.chat.StreamingChatLanguageModel
chat, chat, defaultRequestParameters, doChat, listeners, supportedCapabilities
-
Constructor Details
-
MistralAiStreamingChatModel
public MistralAiStreamingChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Boolean logRequests, Boolean logResponses, Duration timeout)
Constructs a MistralAiStreamingChatModel with the specified parameters.
- Parameters:
baseUrl - the base URL of the Mistral AI API; the default value is used if not specified
apiKey - the API key for authentication
modelName - the name of the Mistral AI model to use
temperature - the temperature parameter for generating chat responses
topP - the top-p parameter for generating chat responses
maxTokens - the maximum number of new tokens to generate in a chat response
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
randomSeed - the random seed for generating chat responses (if not specified, a random number is used)
responseFormat - the response format for generating chat responses; currently supported values are "text" and "json_object"
logRequests - a flag indicating whether to log raw HTTP requests
logResponses - a flag indicating whether to log raw HTTP responses
timeout - the timeout duration for API requests
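A minimal construction sketch (not part of the Javadoc itself): it assumes that null arguments fall back to the documented defaults, and the environment variable name and model name are illustrative.

import java.time.Duration;
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;

MistralAiStreamingChatModel model = new MistralAiStreamingChatModel(
        null,                                // baseUrl: null -> default Mistral AI endpoint
        System.getenv("MISTRAL_AI_API_KEY"), // apiKey (env variable name is illustrative)
        "mistral-small",                     // modelName
        0.7,                                 // temperature
        1.0,                                 // topP
        null,                                // maxTokens: no explicit limit
        false,                               // safePrompt
        null,                                // randomSeed: a random seed is used
        "text",                              // responseFormat: "text" or "json_object"
        false,                               // logRequests
        false,                               // logResponses
        Duration.ofSeconds(60));             // timeout

In practice the static builder() method (see below) offers a more readable, fluent alternative to this 12-argument constructor.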
-
-
Method Details
-
chat
public void chat(ChatRequest chatRequest, StreamingChatResponseHandler handler)
Description copied from interface: StreamingChatLanguageModel
This is the main API to interact with the chat model. A temporary default implementation of this method is necessary until all StreamingChatLanguageModel implementations adopt it; it should be removed once that occurs.
- Specified by:
chat in interface StreamingChatLanguageModel
- Parameters:
chatRequest - a ChatRequest containing all the inputs to the LLM
handler - a StreamingChatResponseHandler that will handle the streaming response from the LLM
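A usage sketch for streaming, assuming the standard LangChain4j request/response types; the model name, prompt, and environment variable are illustrative. Since chat() returns immediately, a CountDownLatch keeps the program alive until the stream completes.

import java.util.concurrent.CountDownLatch;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;

public class MistralAiStreamingExample {
    public static void main(String[] args) throws InterruptedException {
        MistralAiStreamingChatModel model = MistralAiStreamingChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // illustrative env variable
                .modelName("mistral-small")
                .build();

        ChatRequest request = ChatRequest.builder()
                .messages(UserMessage.from("Why is the sky blue?"))
                .build();

        // chat() returns immediately; wait for the stream to finish before exiting
        CountDownLatch done = new CountDownLatch(1);

        model.chat(request, new StreamingChatResponseHandler() {
            @Override
            public void onPartialResponse(String partialResponse) {
                System.out.print(partialResponse); // one token/chunk at a time
            }

            @Override
            public void onCompleteResponse(ChatResponse completeResponse) {
                System.out.println("\n--- complete ---");
                done.countDown();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
                done.countDown();
            }
        });

        done.await();
    }
}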
-
provider
- Specified by:
provider in interface StreamingChatLanguageModel
-
builder
-
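For reference, a fluent construction sketch via builder(), assuming the nested builder class exposes one setter per constructor parameter (as Lombok-style builders do); the values shown are illustrative.

import java.time.Duration;
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;

// Builder setter names are assumed to mirror the constructor parameters
MistralAiStreamingChatModel model = MistralAiStreamingChatModel.builder()
        .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // illustrative env variable
        .modelName("mistral-small")
        .temperature(0.7)
        .maxTokens(512)
        .timeout(Duration.ofSeconds(60))
        .logRequests(true)
        .logResponses(true)
        .build();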