Package dev.langchain4j.model.openai
Class OpenAiStreamingLanguageModel

java.lang.Object
    dev.langchain4j.model.openai.OpenAiStreamingLanguageModel

All Implemented Interfaces:
    StreamingLanguageModel, TokenCountEstimator

public class OpenAiStreamingLanguageModel
extends Object
implements StreamingLanguageModel, TokenCountEstimator
Represents an OpenAI language model with a completion interface, such as gpt-3.5-turbo-instruct. The model's response is streamed token by token and should be handled with a StreamingResponseHandler.

However, it is recommended to use OpenAiStreamingChatModel instead, as it offers more advanced features such as function calling and multi-turn conversations.
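A minimal usage sketch. The builder property names (apiKey, modelName) are assumptions based on the builder pattern used across langchain4j and are not spelled out on this page; check your version's builder for the exact names.

```java
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.openai.OpenAiStreamingLanguageModel;
import dev.langchain4j.model.output.Response;

public class StreamingExample {

    public static void main(String[] args) {

        // Build the model explicitly, as recommended instead of the
        // deprecated withApiKey(String) factory. Property names
        // (apiKey, modelName) are assumptions; see your builder.
        OpenAiStreamingLanguageModel model = OpenAiStreamingLanguageModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-3.5-turbo-instruct")
                .build();

        // The response is streamed token by token through the handler.
        model.generate("Tell me a joke", new StreamingResponseHandler<String>() {

            @Override
            public void onNext(String token) {
                System.out.print(token); // called once per streamed token
            }

            @Override
            public void onComplete(Response<String> response) {
                System.out.println("\nDone.");
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```

Because the response arrives incrementally, the handler is the only place the generated text is observable; there is no blocking return value.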
Nested Class Summary

Modifier and Type: static class

Constructor Summary
Method Summary
Modifier and TypeMethodDescriptionbuilder()
int
estimateTokenCount
(String prompt) Estimates the count of tokens in the given text.void
generate
(String prompt, StreamingResponseHandler<String> handler) Generates a response from the model based on a prompt.static OpenAiStreamingLanguageModel
withApiKey
(String apiKey) Deprecated, for removal: This API element is subject to removal in a future version.Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface dev.langchain4j.model.language.StreamingLanguageModel
generate
Methods inherited from interface dev.langchain4j.model.language.TokenCountEstimator
estimateTokenCount, estimateTokenCount
Constructor Details

OpenAiStreamingLanguageModel

Method Details

modelName
generate

Description copied from interface: StreamingLanguageModel
Generates a response from the model based on a prompt.

Specified by:
    generate in interface StreamingLanguageModel
Parameters:
    prompt - The prompt.
    handler - The handler for streaming the response.
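The handler receives each streamed token as it arrives. As a sketch, the class below implements StreamingResponseHandler by simply accumulating tokens into a buffer; the method names (onNext, onComplete, onError) follow the interface as used elsewhere in langchain4j.

```java
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

// Accumulates streamed tokens into a single string.
class AccumulatingHandler implements StreamingResponseHandler<String> {

    private final StringBuilder buffer = new StringBuilder();

    @Override
    public void onNext(String token) {
        buffer.append(token); // one call per streamed token
    }

    @Override
    public void onComplete(Response<String> response) {
        System.out.println("Complete: " + buffer);
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }

    String text() {
        return buffer.toString();
    }
}
```

The same instance can then be passed to generate(String, StreamingResponseHandler), which drives the callbacks as the model streams its output.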
estimateTokenCount

Description copied from interface: TokenCountEstimator
Estimates the count of tokens in the given text.

Specified by:
    estimateTokenCount in interface TokenCountEstimator
Parameters:
    prompt - the text.
Returns:
    the estimated count of tokens.
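A short usage sketch. The builder property names (apiKey, modelName) are assumptions not confirmed by this page; the estimate itself is an approximation of how the model tokenizes the text.

```java
import dev.langchain4j.model.openai.OpenAiStreamingLanguageModel;

public class TokenCountExample {

    public static void main(String[] args) {

        // Property names (apiKey, modelName) are assumptions based on
        // the builder pattern used across langchain4j.
        OpenAiStreamingLanguageModel model = OpenAiStreamingLanguageModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-3.5-turbo-instruct")
                .build();

        int tokens = model.estimateTokenCount("Hello, world!");
        System.out.println("Estimated tokens: " + tokens);
    }
}
```

This is useful for checking a prompt against the model's context-window limit before calling generate.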
withApiKey

Deprecated, for removal: This API element is subject to removal in a future version.
Please use builder() instead, and explicitly set the model name and, if necessary, other parameters. The default values for the model name and temperature will be removed in future releases!
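A migration sketch from the deprecated factory to builder(). The property names (apiKey, modelName, temperature) are assumptions matching the parameters the deprecation note says must be set explicitly.

```java
import dev.langchain4j.model.openai.OpenAiStreamingLanguageModel;

public class MigrationExample {

    public static void main(String[] args) {

        String apiKey = System.getenv("OPENAI_API_KEY");

        // Before (deprecated, for removal): relies on default model
        // name and temperature, which will be removed in future releases.
        // OpenAiStreamingLanguageModel model = OpenAiStreamingLanguageModel.withApiKey(apiKey);

        // After: use builder() and set the model name explicitly.
        // Property names are assumptions; see your builder.
        OpenAiStreamingLanguageModel model = OpenAiStreamingLanguageModel.builder()
                .apiKey(apiKey)
                .modelName("gpt-3.5-turbo-instruct")
                .temperature(0.7)
                .build();
    }
}
```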
builder