Interface TokenStream

All Known Implementing Classes:
AiServiceTokenStream

public interface TokenStream
Represents a stream of tokens from a language model, to which you can subscribe to be notified when a new token is available, when the language model finishes streaming, or when an error occurs during streaming. It is intended to be used as the return type of an AI Service method.
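The subscription contract can be sketched with a minimal self-contained stand-in (hypothetical; the real AiServiceTokenStream sends a request to the language model when start() is invoked, and its completion handler receives a Response<AiMessage> rather than the plain String used here for simplicity). Each on* method registers a handler and returns the stream itself, so calls chain fluently and nothing happens until start():

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical stand-in for TokenStream: each on* method stores a handler
// and returns this, so calls chain fluently; start() triggers processing.
class FakeTokenStream {
    private Consumer<String> tokenHandler = t -> {};
    private Consumer<String> completionHandler = r -> {};
    private Consumer<Throwable> errorHandler = e -> {};
    private final List<String> tokens;

    FakeTokenStream(List<String> tokens) { this.tokens = tokens; }

    FakeTokenStream onNext(Consumer<String> h) { this.tokenHandler = h; return this; }
    FakeTokenStream onComplete(Consumer<String> h) { this.completionHandler = h; return this; }
    FakeTokenStream onError(Consumer<Throwable> h) { this.errorHandler = h; return this; }

    void start() {
        try {
            StringBuilder full = new StringBuilder();
            for (String t : tokens) {       // emit tokens one by one
                tokenHandler.accept(t);
                full.append(t);
            }
            completionHandler.accept(full.toString()); // then signal completion
        } catch (Throwable t) {
            errorHandler.accept(t);
        }
    }
}

public class TokenStreamDemo {
    public static void main(String[] args) {
        StringBuilder seen = new StringBuilder();
        new FakeTokenStream(List.of("Hello", ", ", "world"))
            .onNext(seen::append)                                   // called once per token
            .onComplete(full -> seen.append(" | done: ").append(full))
            .onError(Throwable::printStackTrace)
            .start();                                               // nothing runs until start()
        System.out.println(seen);
        // prints: Hello, world | done: Hello, world
    }
}
```

The fluent return value is what makes each handler registration below return a "token stream instance used to configure or start stream processing".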
  • Method Details

    • onNext

      TokenStream onNext(Consumer<String> tokenHandler)
The provided consumer will be invoked every time a new token from the language model is available.
      Parameters:
      tokenHandler - lambda that consumes tokens of the response
      Returns:
      token stream instance used to configure or start stream processing
    • onRetrieved

      TokenStream onRetrieved(Consumer<List<Content>> contentHandler)
The provided consumer will be invoked if any Contents are retrieved by the RetrievalAugmentor.

      The invocation happens before any call is made to the language model.

      Parameters:
      contentHandler - lambda that consumes all retrieved contents
      Returns:
      token stream instance used to configure or start stream processing
    • onToolExecuted

      TokenStream onToolExecuted(Consumer<ToolExecution> toolExecuteHandler)
      The provided consumer will be invoked if any tool is executed.

      The invocation happens after the tool method has finished and before any other tool is executed.

      Parameters:
      toolExecuteHandler - lambda that consumes ToolExecution
      Returns:
      token stream instance used to configure or start stream processing
    • onComplete

      TokenStream onComplete(Consumer<Response<AiMessage>> completionHandler)
The provided consumer will be invoked when the language model finishes streaming the response.
      Parameters:
      completionHandler - lambda that will be invoked when the language model finishes streaming
      Returns:
      token stream instance used to configure or start stream processing
    • onError

      TokenStream onError(Consumer<Throwable> errorHandler)
      The provided consumer will be invoked when an error occurs during streaming.
      Parameters:
      errorHandler - lambda that will be invoked when an error occurs
      Returns:
      token stream instance used to configure or start stream processing
    • ignoreErrors

      TokenStream ignoreErrors()
      All errors during streaming will be ignored (but will be logged with a WARN log level).
      Returns:
      token stream instance used to configure or start stream processing
    • start

      void start()
Completes building the token stream and starts processing.

      Sends a request to the LLM and begins streaming the response.
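Because start() is what triggers processing, any error handler must be registered before it is called. This can be illustrated with a small self-contained stand-in whose start() fails (hypothetical; the real implementation would surface a streaming or network error the same way, routing the Throwable to the handler registered via onError):

```java
import java.util.function.Consumer;

// Hypothetical stand-in stream whose start() always fails, showing that the
// error handler registered before start() receives the Throwable.
class FailingTokenStream {
    private Consumer<Throwable> errorHandler = e -> {};

    FailingTokenStream onError(Consumer<Throwable> h) { this.errorHandler = h; return this; }

    void start() {
        try {
            throw new RuntimeException("model unavailable"); // simulated streaming failure
        } catch (Throwable t) {
            errorHandler.accept(t); // routed to the registered handler, not thrown to the caller
        }
    }
}

public class ErrorDemo {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        new FailingTokenStream()
            .onError(t -> log.append("error: ").append(t.getMessage()))
            .start();
        System.out.println(log);
        // prints: error: model unavailable
    }
}
```

With ignoreErrors() instead of onError(...), the same failure would be swallowed and only logged at WARN level.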