Package dev.langchain4j.service
Interface TokenStream
- All Known Implementing Classes:
AiServiceTokenStream
public interface TokenStream
Represents a token stream from the language model, to which you can subscribe and receive updates
when a new token is available, when the language model finishes streaming, or when an error occurs during streaming.
It is intended to be used as a return type in an AI Service.
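For example (a minimal sketch; the Assistant interface, the assistant instance, and the prompt are illustrative, and the AiServices wiring is assumed rather than shown):

    interface Assistant {
        TokenStream chat(String userMessage);
    }

    // "assistant" is assumed to have been built with AiServices,
    // e.g. AiServices.create(Assistant.class, streamingModel)
    TokenStream tokenStream = assistant.chat("Tell me a story");

    tokenStream
            .onNext(token -> System.out.print(token))              // called for each new token
            .onComplete(response -> System.out.println(response))  // called with the full Response<AiMessage>
            .onError(error -> error.printStackTrace())             // called if streaming fails
            .start();                                              // nothing is sent until start()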
Method Summary

TokenStream  ignoreErrors()
             All errors during streaming will be ignored (but will be logged with a WARN log level).
TokenStream  onComplete(Consumer<Response<AiMessage>> completionHandler)
             The provided consumer will be invoked when the language model finishes streaming a response.
TokenStream  onError(Consumer<Throwable> errorHandler)
             The provided consumer will be invoked when an error occurs during streaming.
TokenStream  onNext(Consumer<String> tokenHandler)
             The provided consumer will be invoked every time a new token from the language model is available.
TokenStream  onRetrieved(Consumer<List<Content>> contentHandler)
             The provided consumer will be invoked if any Contents are retrieved using RetrievalAugmentor.
TokenStream  onToolExecuted(Consumer<ToolExecution> toolExecuteHandler)
             The provided consumer will be invoked if any tool is executed.
void         start()
             Completes the current token stream building and starts processing.
Method Details

onNext
TokenStream onNext(Consumer<String> tokenHandler)
The provided consumer will be invoked every time a new token from the language model is available.
Parameters:
    tokenHandler - lambda that consumes tokens of the response
Returns:
    token stream instance used to configure or start stream processing
onRetrieved
TokenStream onRetrieved(Consumer<List<Content>> contentHandler)
The provided consumer will be invoked if any Contents are retrieved using RetrievalAugmentor.
The invocation happens before any call is made to the language model.
Parameters:
    contentHandler - lambda that consumes all retrieved contents
Returns:
    token stream instance used to configure or start stream processing
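A minimal sketch of a content handler, assuming Content exposes its text via textSegment().text():

    tokenStream
            .onRetrieved(contents -> contents.forEach(content ->
                    System.out.println("Retrieved: " + content.textSegment().text()))) // textSegment() accessor assumed
            .onNext(System.out::print)
            .onError(Throwable::printStackTrace)
            .start();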
onToolExecuted
TokenStream onToolExecuted(Consumer<ToolExecution> toolExecuteHandler)
The provided consumer will be invoked if any tool is executed.
The invocation happens after the tool method has finished and before any other tool is executed.
Parameters:
    toolExecuteHandler - lambda that consumes a ToolExecution
Returns:
    token stream instance used to configure or start stream processing
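A minimal sketch of a tool-execution handler; the request().name() and result() accessors on ToolExecution are assumptions:

    tokenStream
            .onToolExecuted(toolExecution -> System.out.println(
                    "Tool: " + toolExecution.request().name()      // assumed accessor
                    + ", result: " + toolExecution.result()))      // assumed accessor
            .onNext(System.out::print)
            .onError(Throwable::printStackTrace)
            .start();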
onComplete
TokenStream onComplete(Consumer<Response<AiMessage>> completionHandler)
The provided consumer will be invoked when the language model finishes streaming a response.
Parameters:
    completionHandler - lambda that will be invoked when the language model finishes streaming
Returns:
    token stream instance used to configure or start stream processing
onError
TokenStream onError(Consumer<Throwable> errorHandler)
The provided consumer will be invoked when an error occurs during streaming.
Parameters:
    errorHandler - lambda that will be invoked when an error occurs
Returns:
    token stream instance used to configure or start stream processing
ignoreErrors
TokenStream ignoreErrors()
All errors during streaming will be ignored (but will be logged with a WARN log level).
Returns:
    token stream instance used to configure or start stream processing
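For example, when dropped errors are acceptable, ignoreErrors() can stand in for an onError handler (sketch):

    tokenStream
            .onNext(System.out::print)
            .ignoreErrors() // streaming errors are only logged at WARN level
            .start();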
start
void start()
Completes the current token stream building and starts processing.
Will send a request to the LLM and start streaming the response.