Class OpenAiStreamingLanguageModel
java.lang.Object
    dev.langchain4j.model.openai.OpenAiStreamingLanguageModel

All Implemented Interfaces:
    StreamingLanguageModel
Represents an OpenAI language model with a completion interface, such as gpt-3.5-turbo-instruct.
The model's response is streamed token by token and should be handled with
StreamingResponseHandler.
However, it's recommended to use OpenAiStreamingChatModel instead,
as it offers more advanced features like function calling, multi-turn conversations, etc.
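For illustration, here is a minimal usage sketch. It assumes common LangChain4j conventions that are not spelled out on this page: the builder accessors apiKey and modelName, and the StreamingResponseHandler callbacks onNext, onComplete, and onError.

import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.openai.OpenAiStreamingLanguageModel;
import dev.langchain4j.model.output.Response;

public class OpenAiStreamingLanguageModelExample {

    public static void main(String[] args) {

        // Builder accessors (apiKey, modelName) are assumed; see the builder class for all options.
        OpenAiStreamingLanguageModel model = OpenAiStreamingLanguageModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-3.5-turbo-instruct")
                .build();

        model.generate("Write a haiku about streaming.", new StreamingResponseHandler<String>() {

            @Override
            public void onNext(String token) {
                System.out.print(token); // called once per streamed token
            }

            @Override
            public void onComplete(Response<String> response) {
                System.out.println(); // the complete Response is available here
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}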
Nested Class Summary
Nested Classes

Modifier and Type    Class                                                                Description
static class         OpenAiStreamingLanguageModel.OpenAiStreamingLanguageModelBuilder
Constructor Summary
Constructors

Constructor                                                                                                Description
OpenAiStreamingLanguageModel(OpenAiStreamingLanguageModel.OpenAiStreamingLanguageModelBuilder builder)
Method Summary
Modifier and Type                                                           Method                                                                Description
static OpenAiStreamingLanguageModel.OpenAiStreamingLanguageModelBuilder    builder()
void                                                                        generate(String prompt, StreamingResponseHandler<String> handler)    Generates a response from the model based on a prompt.

Methods inherited from class Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface StreamingLanguageModel
generate
Constructor Details

OpenAiStreamingLanguageModel

public OpenAiStreamingLanguageModel(OpenAiStreamingLanguageModel.OpenAiStreamingLanguageModelBuilder builder)
Method Details
modelName

public String modelName()
generate

public void generate(String prompt, StreamingResponseHandler<String> handler)

Description copied from interface: StreamingLanguageModel
Generates a response from the model based on a prompt.

Specified by:
    generate in interface StreamingLanguageModel
Parameters:
    prompt - The prompt.
    handler - The handler for streaming the response.
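generate returns immediately and delivers the response asynchronously through the handler, so callers that need the complete text can accumulate tokens and block on a future. A sketch under the same assumptions about the handler callbacks (onNext, onComplete, onError):

import java.util.concurrent.CompletableFuture;

import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.openai.OpenAiStreamingLanguageModel;
import dev.langchain4j.model.output.Response;

public class BlockingGenerateExample {

    // Collects streamed tokens and blocks until the model signals completion or failure.
    static String generateBlocking(OpenAiStreamingLanguageModel model, String prompt) {
        CompletableFuture<String> future = new CompletableFuture<>();
        StringBuilder text = new StringBuilder();

        model.generate(prompt, new StreamingResponseHandler<String>() {

            @Override
            public void onNext(String token) {
                text.append(token); // accumulate each streamed token
            }

            @Override
            public void onComplete(Response<String> response) {
                future.complete(text.toString());
            }

            @Override
            public void onError(Throwable error) {
                future.completeExceptionally(error);
            }
        });

        return future.join(); // waits for the stream to finish
    }
}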
builder

public static OpenAiStreamingLanguageModel.OpenAiStreamingLanguageModelBuilder builder()