Package dev.langchain4j.model.mistralai
Class MistralAiStreamingChatModel
java.lang.Object
dev.langchain4j.model.mistralai.MistralAiStreamingChatModel
- All Implemented Interfaces:
StreamingChatLanguageModel
Represents a Mistral AI Chat Model with a chat completion interface, such as mistral-tiny and mistral-small.
The model's response is streamed token by token and should be handled with a StreamingResponseHandler.
You can find a description of the parameters here.
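A sketch of typical streaming usage, assuming the legacy `StreamingChatLanguageModel` API shown on this page; the model name, temperature, and environment variable are illustrative choices, not defaults:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;
import dev.langchain4j.model.output.Response;

public class StreamingExample {
    public static void main(String[] args) {
        // Build the streaming model; modelName and temperature are example values
        MistralAiStreamingChatModel model = MistralAiStreamingChatModel.builder()
                .apiKey(System.getenv("MISTRAL_AI_API_KEY")) // assumed env variable
                .modelName("mistral-small")
                .temperature(0.7)
                .build();

        // Tokens arrive incrementally through the handler callbacks
        model.generate("Tell me a joke about Java", new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token); // called once per streamed token
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println("\nFinish reason: " + response.finishReason());
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```

Running this requires the langchain4j-mistral-ai dependency and a valid API key.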
Nested Class Summary
Modifier and Type: static class
Constructor Summary
MistralAiStreamingChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Boolean logRequests, Boolean logResponses, Duration timeout)
Constructs a MistralAiStreamingChatModel with the specified parameters.
Method Summary
builder()
void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
Generates a streamed token response based on the given list of messages and a tool specification.
void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
Generates a streamed token response based on the given list of messages.
void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
Generates a streamed token response based on the given list of messages and tool specifications.
static MistralAiStreamingChatModel withApiKey(String apiKey)
Deprecated, for removal: This API element is subject to removal in a future version.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface dev.langchain4j.model.chat.StreamingChatLanguageModel
generate, generate
-
Constructor Details
-
MistralAiStreamingChatModel
public MistralAiStreamingChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Boolean logRequests, Boolean logResponses, Duration timeout)
Constructs a MistralAiStreamingChatModel with the specified parameters.
Parameters:
baseUrl - the base URL of the Mistral AI API; the default value is used if not specified
apiKey - the API key for authentication
modelName - the name of the Mistral AI model to use
temperature - the temperature parameter for generating chat responses
topP - the top-p parameter for generating chat responses
maxTokens - the maximum number of new tokens to generate in a chat response
safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
randomSeed - the random seed for generating chat responses (if not specified, a random number is used)
responseFormat - the response format for generating chat responses; currently supported values are "text" and "json_object"
logRequests - a flag indicating whether to log raw HTTP requests
logResponses - a flag indicating whether to log raw HTTP responses
timeout - the timeout duration for API requests
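A sketch of direct construction under the parameter notes above; every argument value here is illustrative, and passing null is assumed to let a parameter fall back to its default (as stated for baseUrl and randomSeed):

```java
import java.time.Duration;

import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;

public class ConstructorExample {
    public static void main(String[] args) {
        // All values are illustrative; null lets a parameter use its default
        MistralAiStreamingChatModel model = new MistralAiStreamingChatModel(
                null,                                 // baseUrl: default Mistral AI endpoint
                System.getenv("MISTRAL_AI_API_KEY"),  // apiKey (assumed env variable)
                "mistral-small",                      // modelName
                0.7,                                  // temperature
                1.0,                                  // topP
                512,                                  // maxTokens
                false,                                // safePrompt
                null,                                 // randomSeed: random if not specified
                "text",                               // responseFormat ("text" or "json_object")
                false,                                // logRequests
                false,                                // logResponses
                Duration.ofSeconds(60));              // timeout
    }
}
```

In practice the builder() is usually preferable, since it avoids a twelve-argument positional call.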
-
-
Method Details
-
withApiKey
public static MistralAiStreamingChatModel withApiKey(String apiKey)
Deprecated, for removal: This API element is subject to removal in a future version.
Please use builder() instead, and explicitly set the model name and, if necessary, other parameters. The default value for the model name will be removed in future releases!
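A migration sketch from the deprecated factory to the builder; the model name below is an illustrative choice:

```java
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;

public class MigrationExample {
    public static void main(String[] args) {
        String apiKey = System.getenv("MISTRAL_AI_API_KEY"); // assumed env variable

        // Deprecated: relies on a default model name that will be removed
        MistralAiStreamingChatModel legacy = MistralAiStreamingChatModel.withApiKey(apiKey);

        // Preferred: builder with an explicit model name
        MistralAiStreamingChatModel model = MistralAiStreamingChatModel.builder()
                .apiKey(apiKey)
                .modelName("mistral-small") // set explicitly; no default in future releases
                .build();
    }
}
```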
generate
public void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
Generates a streamed token response based on the given list of messages and tool specifications.
Specified by:
generate
in interface StreamingChatLanguageModel
- Parameters:
messages - the list of chat messages
toolSpecifications - the list of tool specifications; tool_choice is set to AUTO
handler - the response handler for processing the generated chat chunk responses
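A sketch of tool-assisted streaming with tool_choice AUTO, assuming `model` is an already-configured MistralAiStreamingChatModel; the tool name, description, and prompt are illustrative assumptions:

```java
import java.util.List;

import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

// Hypothetical tool the model may choose to call (tool_choice is AUTO here)
ToolSpecification weatherTool = ToolSpecification.builder()
        .name("getWeather")
        .description("Returns the current weather for a given city")
        .build();

List<ChatMessage> messages = List.of(UserMessage.from("What is the weather in Paris?"));

model.generate(messages, List.of(weatherTool), new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onNext(String token) {
        System.out.print(token); // streamed text tokens, if the model answers directly
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        // The final AiMessage may carry tool execution requests instead of text
        if (response.content().hasToolExecutionRequests()) {
            response.content().toolExecutionRequests().forEach(System.out::println);
        }
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```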
-
generate
public void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
Generates a streamed token response based on the given list of messages and a tool specification.
Specified by:
generate
in interface StreamingChatLanguageModel
- Parameters:
messages - the list of chat messages
toolSpecification - the tool specification; tool_choice is set to ANY
handler - the response handler for processing the generated chat chunk responses
-
generate
public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
Generates a streamed token response based on the given list of messages.
Specified by:
generate
in interface StreamingChatLanguageModel
- Parameters:
messages - the list of chat messages
handler - the response handler for processing the generated chat chunk responses
-
builder
builder()