Package dev.langchain4j.model
Interface StreamingResponseHandler<T>
- Type Parameters:
T
- The type of the response.
public interface StreamingResponseHandler<T>
Represents a handler for streaming responses from a language model.
The handler is invoked each time the model generates a new token in a textual response.
If the model executes a tool instead, onComplete(dev.langchain4j.model.output.Response<T>) will be invoked instead.
Method Summary
- default void onComplete(Response<T> response)
  Invoked when the language model has finished streaming a response.
- void onError(Throwable error)
  This method is invoked when an error occurs during streaming.
- void onNext(String token)
  Invoked each time the language model generates a new token in a textual response.
Method Details
onNext
void onNext(String token)
Invoked each time the language model generates a new token in a textual response. If the model executes a tool instead, this method will not be invoked; onComplete(dev.langchain4j.model.output.Response<T>) will be invoked instead.
Parameters:
token - The newly generated token, which is a part of the complete response.
onComplete
default void onComplete(Response<T> response)
Invoked when the language model has finished streaming a response. If the model executes one or multiple tools, they are accessible via AiMessage.toolExecutionRequests().
Parameters:
response - The complete response generated by the language model. For textual responses, it contains all tokens from onNext(java.lang.String) concatenated.
onError
void onError(Throwable error)
This method is invoked when an error occurs during streaming.
Parameters:
error - The error that occurred.
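A minimal sketch of how a caller might implement this interface: tokens arriving via onNext are accumulated, and onComplete receives the full response. Note that Response and the interface itself are redeclared here as simplified stand-ins so the example compiles on its own; in a real project they come from the dev.langchain4j.model and dev.langchain4j.model.output packages, and the handler would be passed to a streaming model rather than driven by hand.

```java
import java.util.List;

// Simplified stand-in for dev.langchain4j.model.output.Response<T>.
class Response<T> {
    private final T content;
    Response(T content) { this.content = content; }
    T content() { return content; }
}

// Simplified stand-in mirroring the interface documented above.
interface StreamingResponseHandler<T> {
    // Invoked for each new token of a textual response.
    void onNext(String token);
    // Invoked once streaming has finished; default no-op.
    default void onComplete(Response<T> response) {}
    // Invoked if an error occurs during streaming.
    void onError(Throwable error);
}

public class StreamingDemo {

    // Accumulates streamed tokens and returns the concatenated text,
    // mirroring how onComplete's Response contains all onNext tokens.
    static String runDemo() {
        StringBuilder buffer = new StringBuilder();

        StreamingResponseHandler<String> handler = new StreamingResponseHandler<>() {
            @Override
            public void onNext(String token) {
                buffer.append(token); // each token is a part of the complete response
            }

            @Override
            public void onComplete(Response<String> response) {
                // For textual responses, response.content() is all tokens concatenated.
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        };

        // Simulate a model streaming three tokens, then completing.
        for (String token : List.of("Hello", ", ", "world")) {
            handler.onNext(token);
        }
        handler.onComplete(new Response<>(buffer.toString()));

        return buffer.toString();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints "Hello, world"
    }
}
```

In practice the handler is typically supplied as the last argument to a streaming model's generate call, which invokes these callbacks from its own thread as tokens arrive.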