Interface StreamingResponseHandler<T>

Type Parameters:
T - The type of the response.

public interface StreamingResponseHandler<T>
Represents a handler for streaming responses from a language model. onNext(java.lang.String) is invoked each time the model generates a new token of a textual response. If the model executes a tool instead, onNext(java.lang.String) is not invoked; only onComplete(dev.langchain4j.model.output.Response&lt;T&gt;) is invoked.
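As an illustration, a handler is typically supplied as an anonymous class that accumulates tokens as they arrive. The sketch below is self-contained and minimal: the Response record, the simplified interface, and the fake token loop are stand-ins for the real langchain4j types and for the model call (e.g. model.generate(prompt, handler)), which are not reproduced here.

```java
import java.util.List;

public class StreamingSketch {

    // Simplified stand-in for dev.langchain4j.model.output.Response<T>
    record Response<T>(T content) {}

    // Simplified stand-in for the StreamingResponseHandler<T> interface
    interface StreamingResponseHandler<T> {
        void onNext(String token);
        default void onComplete(Response<T> response) {}
        void onError(Throwable error);
    }

    // Feeds a fake token stream to a handler; in the real library the
    // model client would invoke the handler's callbacks instead.
    static String runStream(List<String> tokens) {
        StringBuilder answer = new StringBuilder();

        StreamingResponseHandler<String> handler = new StreamingResponseHandler<>() {
            @Override
            public void onNext(String token) {
                answer.append(token);        // one call per generated token
            }

            @Override
            public void onComplete(Response<String> response) {
                // response.content() holds all onNext() tokens concatenated
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();     // surface streaming failures
            }
        };

        tokens.forEach(handler::onNext);
        handler.onComplete(new Response<>(answer.toString()));
        return answer.toString();
    }

    public static void main(String[] args) {
        System.out.println(runStream(List.of("Hello", ", ", "world")));
    }
}
```

The same three-callback shape (onNext / onComplete / onError) mirrors common streaming APIs such as java.util.concurrent.Flow.Subscriber.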
  • Method Summary

    Modifier and Type
    Method
    Description
    default void
    onComplete(Response<T> response)
    Invoked when the language model has finished streaming a response.
    void
    onError(Throwable error)
    Invoked when an error occurs during streaming.
    void
    onNext(String token)
    Invoked each time the language model generates a new token in a textual response.
  • Method Details

    • onNext

      void onNext(String token)
      Invoked each time the language model generates a new token in a textual response. If the model executes a tool instead, this method will not be invoked; onComplete(dev.langchain4j.model.output.Response<T>) will be invoked instead.
      Parameters:
      token - The newly generated token, which is a part of the complete response.
    • onComplete

      default void onComplete(Response<T> response)
      Invoked when the language model has finished streaming a response. If the model executed one or more tools, their requests are accessible via AiMessage.toolExecutionRequests().
      Parameters:
      response - The complete response generated by the language model. For textual responses, it contains all tokens from onNext(java.lang.String) concatenated.
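Inside onComplete, a caller can distinguish a textual answer from a tool-execution request by inspecting the message. The sketch below uses hypothetical stand-in records for AiMessage and ToolExecutionRequest (the real types live elsewhere in langchain4j and carry more fields); only the branching logic is the point.

```java
import java.util.List;

public class ToolAwareSketch {

    // Hypothetical stand-ins for langchain4j's ToolExecutionRequest and AiMessage
    record ToolExecutionRequest(String name) {}
    record AiMessage(String text, List<ToolExecutionRequest> toolExecutionRequests) {}

    // Decides, as one might inside onComplete, whether the model answered
    // with text or asked to execute tools.
    static String describe(AiMessage message) {
        if (message.toolExecutionRequests().isEmpty()) {
            return "text: " + message.text();
        }
        return "tools: " + message.toolExecutionRequests().size();
    }

    public static void main(String[] args) {
        AiMessage textual = new AiMessage("42", List.of());
        AiMessage toolCall = new AiMessage(null, List.of(new ToolExecutionRequest("search")));
        System.out.println(describe(textual));
        System.out.println(describe(toolCall));
    }
}
```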
    • onError

      void onError(Throwable error)
      Invoked when an error occurs during streaming.
      Parameters:
      error - The error that occurred.