Class MistralAiStreamingChatModel

java.lang.Object
dev.langchain4j.model.mistralai.MistralAiStreamingChatModel
All Implemented Interfaces:
StreamingChatLanguageModel

public class MistralAiStreamingChatModel extends Object implements StreamingChatLanguageModel
Represents a Mistral AI Chat Model with a chat completion interface, such as mistral-tiny and mistral-small. The model's response is streamed token by token and should be handled with a StreamingResponseHandler. A description of the parameters is available in the Mistral AI API documentation.
  • Constructor Details

    • MistralAiStreamingChatModel

      public MistralAiStreamingChatModel(String baseUrl, String apiKey, String modelName, Double temperature, Double topP, Integer maxTokens, Boolean safePrompt, Integer randomSeed, String responseFormat, Boolean logRequests, Boolean logResponses, Duration timeout)
      Constructs a MistralAiStreamingChatModel with the specified parameters.
      Parameters:
baseUrl - the base URL of the Mistral AI API; the default value is used if not specified
      apiKey - the API key for authentication
      modelName - the name of the Mistral AI model to use
      temperature - the temperature parameter for generating chat responses
      topP - the top-p parameter for generating chat responses
      maxTokens - the maximum number of new tokens to generate in a chat response
      safePrompt - a flag indicating whether to use a safe prompt for generating chat responses
      randomSeed - the random seed for generating chat responses (if not specified, a random number is used)
responseFormat - the response format for generating chat responses. Currently supported values are "text" and "json_object".
      logRequests - a flag indicating whether to log raw HTTP requests
      logResponses - a flag indicating whether to log raw HTTP responses
      timeout - the timeout duration for API requests
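As a minimal sketch of how the constructor above might be used, the example below passes illustrative values for each parameter and streams the response through a StreamingResponseHandler. The base URL, model name, and tuning values are assumptions for demonstration; only the parameter order and types come from the signature documented above, and the MISTRAL_AI_API_KEY environment variable is a hypothetical way to supply the key.

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.mistralai.MistralAiStreamingChatModel;
import dev.langchain4j.model.output.Response;

import java.time.Duration;

public class MistralStreamingExample {

    public static void main(String[] args) {
        // Argument values are illustrative; see the parameter descriptions above.
        MistralAiStreamingChatModel model = new MistralAiStreamingChatModel(
                "https://api.mistral.ai/v1",          // baseUrl (assumed endpoint)
                System.getenv("MISTRAL_AI_API_KEY"),  // apiKey
                "mistral-tiny",                       // modelName
                0.7,                                  // temperature
                1.0,                                  // topP
                512,                                  // maxTokens
                false,                                // safePrompt
                null,                                 // randomSeed (null: a random seed is used)
                "text",                               // responseFormat
                false,                                // logRequests
                false,                                // logResponses
                Duration.ofSeconds(60));              // timeout

        // Tokens arrive incrementally through the handler callbacks.
        model.generate("Tell me a joke about Java.", new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token);
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```

Running this requires a valid Mistral AI API key and network access; the call returns immediately and the handler is invoked as tokens stream in.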
  • Method Details