Class AiServiceTokenStream

java.lang.Object
dev.langchain4j.service.AiServiceTokenStream
All Implemented Interfaces:
TokenStream

public class AiServiceTokenStream extends Object implements TokenStream
  • Method Details

    • onNext

      public TokenStream onNext(Consumer<String> tokenHandler)
      Description copied from interface: TokenStream
      The provided consumer will be invoked every time a new token from a language model is available.
      Specified by:
      onNext in interface TokenStream
      Parameters:
      tokenHandler - lambda that consumes tokens of the response
      Returns:
      token stream instance used to configure or start stream processing
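
      The fluent return value means handlers can be chained before starting the stream. A minimal sketch (the Assistant interface and its chat method are illustrative assumptions, not part of this class; any AI Service method declared to return TokenStream would work the same way):

      ```java
      // Hypothetical AI Service whose method is declared to return a TokenStream.
      interface Assistant {
          TokenStream chat(String userMessage);
      }

      // Minimal streaming setup: print each token as it arrives,
      // surface any failure, then kick off the request.
      TokenStream stream = assistant.chat("Tell me a joke");
      stream.onNext(token -> System.out.print(token))
            .onError(Throwable::printStackTrace)
            .start();
      ```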
    • onRetrieved

      public TokenStream onRetrieved(Consumer<List<Content>> contentsHandler)
      Description copied from interface: TokenStream
      The provided consumer will be invoked if any Contents are retrieved using RetrievalAugmentor.

      The invocation happens before any call is made to the language model.

      Specified by:
      onRetrieved in interface TokenStream
      Parameters:
      contentsHandler - lambda that consumes all retrieved contents
      Returns:
      token stream instance used to configure or start stream processing
    • onToolExecuted

      public TokenStream onToolExecuted(Consumer<ToolExecution> toolExecutionHandler)
      Description copied from interface: TokenStream
      The provided consumer will be invoked if any tool is executed.

      The invocation happens after the tool method has finished and before any other tool is executed.

      Specified by:
      onToolExecuted in interface TokenStream
      Parameters:
      toolExecutionHandler - lambda that consumes ToolExecution
      Returns:
      token stream instance used to configure or start stream processing
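
      When the underlying service is configured with a RetrievalAugmentor and tools, the two callbacks above can be observed alongside token streaming. A sketch under the same assumption of a hypothetical assistant service returning a TokenStream:

      ```java
      assistant.chat("Summarize the latest report")
              // Invoked before the call to the language model is made.
              .onRetrieved(contents ->
                      System.out.println("Retrieved " + contents.size() + " content(s)"))
              // Invoked after each tool method finishes, before the next tool runs.
              .onToolExecuted(toolExecution ->
                      System.out.println("Tool executed: " + toolExecution))
              .onNext(System.out::print)
              .onError(Throwable::printStackTrace)
              .start();
      ```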
    • onComplete

      public TokenStream onComplete(Consumer<Response<AiMessage>> completionHandler)
      Description copied from interface: TokenStream
      The provided consumer will be invoked when a language model finishes streaming a response.
      Specified by:
      onComplete in interface TokenStream
      Parameters:
      completionHandler - lambda that will be invoked when the language model finishes streaming the response
      Returns:
      token stream instance used to configure or start stream processing
    • onError

      public TokenStream onError(Consumer<Throwable> errorHandler)
      Description copied from interface: TokenStream
      The provided consumer will be invoked when an error occurs during streaming.
      Specified by:
      onError in interface TokenStream
      Parameters:
      errorHandler - lambda that will be invoked when an error occurs
      Returns:
      token stream instance used to configure or start stream processing
    • ignoreErrors

      public TokenStream ignoreErrors()
      Description copied from interface: TokenStream
      All errors during streaming will be ignored (but will be logged with a WARN log level).
      Specified by:
      ignoreErrors in interface TokenStream
      Returns:
      token stream instance used to configure or start stream processing
    • start

      public void start()
      Description copied from interface: TokenStream
      Completes the current token stream building and starts processing.

      Sends a request to the LLM and starts streaming the response.

      Specified by:
      start in interface TokenStream
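
      Putting the pieces together, onComplete is a natural place to hand the finished Response<AiMessage> back to the calling thread. A hedged end-to-end sketch (again assuming a hypothetical assistant service whose chat method returns a TokenStream):

      ```java
      // Bridge the asynchronous stream back to a synchronous caller.
      CompletableFuture<Response<AiMessage>> future = new CompletableFuture<>();

      assistant.chat("What is the capital of France?")
              .onNext(System.out::print)                  // each new token
              .onComplete(future::complete)               // full response at the end
              .onError(future::completeExceptionally)     // propagate failures
              .start();                                   // sends the request to the LLM

      // Block until streaming finishes (or rethrow the streaming error).
      Response<AiMessage> response = future.join();
      ```

      Using a CompletableFuture here is a design choice, not something the API mandates; any callback-based consumption of the handlers works equally well.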