Class AzureOpenAiStreamingChatModel

java.lang.Object
dev.langchain4j.model.azure.AzureOpenAiStreamingChatModel
All Implemented Interfaces:
StreamingChatLanguageModel, TokenCountEstimator

public class AzureOpenAiStreamingChatModel extends Object implements StreamingChatLanguageModel, TokenCountEstimator
Represents an OpenAI language model, hosted on Azure, that has a chat completion interface, such as gpt-3.5-turbo. The model's response is streamed token by token and should be handled with StreamingResponseHandler.

Mandatory parameters for initialization are: endpoint and apiKey (or an alternate authentication method; see below for more information). Optionally, you can set serviceVersion (if not set, the latest version is used) and deploymentName (if not set, a default name is used). You can also provide your own OpenAIClient instance if you need more flexibility.
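
For example, a minimal configuration could look like the following sketch (the endpoint, key, and deployment name are placeholder values to replace with your own):

```java
import dev.langchain4j.model.azure.AzureOpenAiStreamingChatModel;

// A minimal sketch, assuming a placeholder endpoint and an API key read
// from a (hypothetical) AZURE_OPENAI_KEY environment variable.
AzureOpenAiStreamingChatModel model = AzureOpenAiStreamingChatModel.builder()
        .endpoint("https://{resource}.openai.azure.com/")
        .apiKey(System.getenv("AZURE_OPENAI_KEY"))
        .deploymentName("{deployment}") // optional: a default name is used if omitted
        .build();
```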

There are three authentication methods, illustrated in the sketch after this list:

1. Azure OpenAI API Key Authentication: this is the most common method, using an Azure OpenAI API key. You need to provide the API key as a parameter, using the apiKey() method in the Builder, or the apiKey parameter in the constructor. For example: `builder.apiKey("{key}")`.

2. Non-Azure OpenAI API Key Authentication: this method lets you use the OpenAI service directly, instead of Azure OpenAI. You can use the nonAzureApiKey() method in the Builder, which will also automatically set the endpoint to "https://api.openai.com/v1". For example: `builder.nonAzureApiKey("{key}")`. The constructor instead requires a KeyCredential instance, which can be created using `new AzureKeyCredential("{key}")`, and does not set the endpoint.

3. Azure OpenAI client with Microsoft Entra ID (formerly Azure Active Directory) credentials: this requires adding the `com.azure:azure-identity` dependency to your project, which is an optional dependency of this library. You need to provide a TokenCredential instance, using the tokenCredential() method in the Builder, or the tokenCredential parameter in the constructor. For example, DefaultAzureCredential can be used to authenticate the client: set the client ID, tenant ID, and client secret of the Microsoft Entra ID application as the environment variables AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET, then provide the DefaultAzureCredential instance to the builder: `builder.tokenCredential(new DefaultAzureCredentialBuilder().build())`.
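
The following sketch illustrates all three methods; endpoint and key values are placeholders, and the Entra ID variant assumes the optional azure-identity dependency is on the classpath:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import dev.langchain4j.model.azure.AzureOpenAiStreamingChatModel;

// 1. Azure OpenAI API key:
AzureOpenAiStreamingChatModel withApiKey = AzureOpenAiStreamingChatModel.builder()
        .endpoint("https://{resource}.openai.azure.com/")
        .apiKey("{key}")
        .build();

// 2. Non-Azure OpenAI API key; the builder also sets the endpoint
//    to https://api.openai.com/v1 automatically:
AzureOpenAiStreamingChatModel withOpenAiKey = AzureOpenAiStreamingChatModel.builder()
        .nonAzureApiKey("{key}")
        .build();

// 3. Microsoft Entra ID; DefaultAzureCredential reads AZURE_CLIENT_ID,
//    AZURE_TENANT_ID and AZURE_CLIENT_SECRET from the environment:
AzureOpenAiStreamingChatModel withEntraId = AzureOpenAiStreamingChatModel.builder()
        .endpoint("https://{resource}.openai.azure.com/")
        .tokenCredential(new DefaultAzureCredentialBuilder().build())
        .build();
```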

  • Constructor Details

    • AzureOpenAiStreamingChatModel

      public AzureOpenAiStreamingChatModel(com.azure.ai.openai.OpenAIClient client, com.azure.ai.openai.OpenAIAsyncClient asyncClient, String deploymentName, Tokenizer tokenizer, Integer maxTokens, Double temperature, Double topP, Map<String,Integer> logitBias, String user, Integer n, List<String> stop, Double presencePenalty, Double frequencyPenalty, List<com.azure.ai.openai.models.AzureChatExtensionConfiguration> dataSources, com.azure.ai.openai.models.AzureChatEnhancementConfiguration enhancements, Long seed, com.azure.ai.openai.models.ChatCompletionsResponseFormat responseFormat, List<ChatModelListener> listeners)
    • AzureOpenAiStreamingChatModel

      public AzureOpenAiStreamingChatModel(String endpoint, String serviceVersion, String apiKey, String deploymentName, Tokenizer tokenizer, Integer maxTokens, Double temperature, Double topP, Map<String,Integer> logitBias, String user, Integer n, List<String> stop, Double presencePenalty, Double frequencyPenalty, List<com.azure.ai.openai.models.AzureChatExtensionConfiguration> dataSources, com.azure.ai.openai.models.AzureChatEnhancementConfiguration enhancements, Long seed, com.azure.ai.openai.models.ChatCompletionsResponseFormat responseFormat, Duration timeout, Integer maxRetries, com.azure.core.http.ProxyOptions proxyOptions, boolean logRequestsAndResponses, boolean useAsyncClient, List<ChatModelListener> listeners, String userAgentSuffix, Map<String,String> customHeaders)
    • AzureOpenAiStreamingChatModel

      public AzureOpenAiStreamingChatModel(String endpoint, String serviceVersion, com.azure.core.credential.KeyCredential keyCredential, String deploymentName, Tokenizer tokenizer, Integer maxTokens, Double temperature, Double topP, Map<String,Integer> logitBias, String user, Integer n, List<String> stop, Double presencePenalty, Double frequencyPenalty, List<com.azure.ai.openai.models.AzureChatExtensionConfiguration> dataSources, com.azure.ai.openai.models.AzureChatEnhancementConfiguration enhancements, Long seed, com.azure.ai.openai.models.ChatCompletionsResponseFormat responseFormat, Duration timeout, Integer maxRetries, com.azure.core.http.ProxyOptions proxyOptions, boolean logRequestsAndResponses, boolean useAsyncClient, List<ChatModelListener> listeners, String userAgentSuffix, Map<String,String> customHeaders)
    • AzureOpenAiStreamingChatModel

      public AzureOpenAiStreamingChatModel(String endpoint, String serviceVersion, com.azure.core.credential.TokenCredential tokenCredential, String deploymentName, Tokenizer tokenizer, Integer maxTokens, Double temperature, Double topP, Map<String,Integer> logitBias, String user, Integer n, List<String> stop, Double presencePenalty, Double frequencyPenalty, List<com.azure.ai.openai.models.AzureChatExtensionConfiguration> dataSources, com.azure.ai.openai.models.AzureChatEnhancementConfiguration enhancements, Long seed, com.azure.ai.openai.models.ChatCompletionsResponseFormat responseFormat, Duration timeout, Integer maxRetries, com.azure.core.http.ProxyOptions proxyOptions, boolean logRequestsAndResponses, boolean useAsyncClient, List<ChatModelListener> listeners, String userAgentSuffix, Map<String,String> customHeaders)
  • Method Details

    • generate

      public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      handler - The handler for streaming the response.
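
      For instance, streaming a single user message could look like this sketch (it reuses the model built in the class description above; the callbacks are those declared on StreamingResponseHandler):

      ```java
      import dev.langchain4j.data.message.AiMessage;
      import dev.langchain4j.data.message.UserMessage;
      import dev.langchain4j.model.StreamingResponseHandler;
      import dev.langchain4j.model.output.Response;
      import java.util.List;

      model.generate(List.of(UserMessage.from("Tell me a joke")), new StreamingResponseHandler<AiMessage>() {

          @Override
          public void onNext(String token) {
              System.out.print(token); // called for each streamed token
          }

          @Override
          public void onComplete(Response<AiMessage> response) {
              System.out.println("\nFinished: " + response.content().text());
          }

          @Override
          public void onError(Throwable error) {
              error.printStackTrace();
          }
      });
      ```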
    • generate

      public void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
      handler - The handler for streaming the response. AiMessage can contain either a textual response or a request to execute one of the tools.
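
      A sketch of a call with a tool specification, assuming a hypothetical getCurrentWeather tool (its parameter schema is omitted for brevity):

      ```java
      import dev.langchain4j.agent.tool.ToolSpecification;
      import dev.langchain4j.data.message.ChatMessage;
      import dev.langchain4j.data.message.UserMessage;
      import java.util.List;

      ToolSpecification weatherTool = ToolSpecification.builder()
              .name("getCurrentWeather") // hypothetical tool
              .description("Returns the current weather for a city")
              .build();

      List<ChatMessage> messages = List.of(UserMessage.from("What is the weather in Berlin?"));

      model.generate(messages, List.of(weatherTool), new StreamingResponseHandler<AiMessage>() {

          @Override
          public void onNext(String token) {
              System.out.print(token);
          }

          @Override
          public void onComplete(Response<AiMessage> response) {
              AiMessage aiMessage = response.content();
              if (aiMessage.hasToolExecutionRequests()) {
                  // The model chose to call the tool: execute it yourself and
                  // send the result back to the model in a follow-up request.
                  aiMessage.toolExecutionRequests().forEach(request ->
                          System.out.println(request.name() + " " + request.arguments()));
              }
          }

          @Override
          public void onError(Throwable error) {
              error.printStackTrace();
          }
      });
      ```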
    • generate

      public void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
      handler - The handler for streaming the response.
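
      The forced-tool variant differs only in passing a single specification instead of a list; reusing the hypothetical weatherTool and a handler like the one sketched above:

      ```java
      // The model must call weatherTool, so onComplete will receive
      // a tool execution request rather than free-form text.
      model.generate(messages, weatherTool, handler);
      ```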
    • estimateTokenCount

      public int estimateTokenCount(List<ChatMessage> messages)
      Description copied from interface: TokenCountEstimator
      Estimates the count of tokens in the specified list of messages.
      Specified by:
      estimateTokenCount in interface TokenCountEstimator
      Parameters:
      messages - the list of messages
      Returns:
      the estimated count of tokens
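
      For example (a sketch, reusing the model from the class description; the count is an estimate produced by the configured Tokenizer):

      ```java
      import dev.langchain4j.data.message.ChatMessage;
      import dev.langchain4j.data.message.UserMessage;
      import java.util.List;

      List<ChatMessage> messages = List.of(UserMessage.from("Tell me a joke"));
      int tokens = model.estimateTokenCount(messages); // estimate before sending
      System.out.println("Estimated tokens: " + tokens);
      ```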
    • builder

      public static AzureOpenAiStreamingChatModel.Builder builder()