Package dev.langchain4j.service
Class AiServiceTokenStream
java.lang.Object
dev.langchain4j.service.AiServiceTokenStream
- All Implemented Interfaces:
TokenStream
-
Constructor Summary
Constructors
AiServiceTokenStream(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, Map<String, ToolExecutor> toolExecutors, List<Content> retrievedContents, AiServiceContext context, Object memoryId)
Method Summary
TokenStream ignoreErrors()
    All errors during streaming will be ignored (but will be logged with a WARN log level).
TokenStream onComplete(Consumer<Response<AiMessage>> completionHandler)
    The provided consumer will be invoked when a language model finishes streaming a response.
TokenStream onError(Consumer<Throwable> errorHandler)
    The provided consumer will be invoked when an error occurs during streaming.
TokenStream onNext(Consumer<String> tokenHandler)
    The provided consumer will be invoked every time a new token from a language model is available.
TokenStream onRetrieved(Consumer<List<Content>> contentsHandler)
    The provided consumer will be invoked if any Contents are retrieved using RetrievalAugmentor.
TokenStream onToolExecuted(Consumer<ToolExecution> toolExecutionHandler)
    The provided consumer will be invoked if any tool is executed.
void start()
    Completes the current token stream building and starts processing.
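Taken together, these methods form a configure-then-start builder: handlers are registered fluently (each on* method returns the stream itself), and nothing is sent to the model until start() is called. The pattern can be sketched with a self-contained stand-in class; this is an illustration of the contract, not the real AiServiceTokenStream, which is obtained from an AI Service and streams from an actual language model:

```java
import java.util.List;
import java.util.function.Consumer;

// Stand-in illustrating the TokenStream contract: handlers are registered
// fluently, and nothing happens until start() is called.
class SketchTokenStream {
    private Consumer<String> tokenHandler;
    private Consumer<String> completionHandler;
    private Consumer<Throwable> errorHandler;
    private final List<String> tokens;

    SketchTokenStream(List<String> tokens) {
        this.tokens = tokens;
    }

    SketchTokenStream onNext(Consumer<String> handler) {
        this.tokenHandler = handler;
        return this; // fluent: each on* method returns the stream itself
    }

    SketchTokenStream onComplete(Consumer<String> handler) {
        this.completionHandler = handler;
        return this;
    }

    SketchTokenStream onError(Consumer<Throwable> handler) {
        this.errorHandler = handler;
        return this;
    }

    void start() { // terminal operation: begins emitting tokens
        try {
            StringBuilder full = new StringBuilder();
            for (String token : tokens) {
                if (tokenHandler != null) tokenHandler.accept(token); // each partial token
                full.append(token);
            }
            if (completionHandler != null) completionHandler.accept(full.toString());
        } catch (RuntimeException e) {
            if (errorHandler != null) errorHandler.accept(e);
        }
    }
}

public class TokenStreamDemo {
    public static void main(String[] args) {
        new SketchTokenStream(List.of("Hello", ", ", "world"))
                .onNext(System.out::print)
                .onComplete(full -> System.out.println("\n[done] " + full))
                .onError(Throwable::printStackTrace)
                .start();
    }
}
```

With the real class, the same chain is written against a TokenStream returned by an AI Service method, with onComplete receiving a Response<AiMessage> rather than a plain String.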
-
Constructor Details
-
AiServiceTokenStream
public AiServiceTokenStream(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, Map<String, ToolExecutor> toolExecutors, List<Content> retrievedContents, AiServiceContext context, Object memoryId)
-
-
Method Details
-
onNext
public TokenStream onNext(Consumer<String> tokenHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked every time a new token from a language model is available.
- Specified by:
onNext in interface TokenStream
- Parameters:
tokenHandler - lambda that consumes tokens of the response
- Returns:
token stream instance used to configure or start stream processing
-
onRetrieved
public TokenStream onRetrieved(Consumer<List<Content>> contentsHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked if any Contents are retrieved using RetrievalAugmentor. The invocation happens before any call is made to the language model.
- Specified by:
onRetrieved in interface TokenStream
- Parameters:
contentsHandler - lambda that consumes all retrieved contents
- Returns:
token stream instance used to configure or start stream processing
-
onToolExecuted
public TokenStream onToolExecuted(Consumer<ToolExecution> toolExecutionHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked if any tool is executed. The invocation happens after the tool method has finished and before any other tool is executed.
- Specified by:
onToolExecuted in interface TokenStream
- Parameters:
toolExecutionHandler - lambda that consumes ToolExecution
- Returns:
token stream instance used to configure or start stream processing
-
onComplete
public TokenStream onComplete(Consumer<Response<AiMessage>> completionHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked when a language model finishes streaming a response.
- Specified by:
onComplete in interface TokenStream
- Parameters:
completionHandler - lambda that will be invoked when the language model finishes streaming
- Returns:
token stream instance used to configure or start stream processing
-
onError
public TokenStream onError(Consumer<Throwable> errorHandler)
Description copied from interface: TokenStream
The provided consumer will be invoked when an error occurs during streaming.
- Specified by:
onError in interface TokenStream
- Parameters:
errorHandler - lambda that will be invoked when an error occurs
- Returns:
token stream instance used to configure or start stream processing
-
ignoreErrors
public TokenStream ignoreErrors()
Description copied from interface: TokenStream
All errors during streaming will be ignored (but will be logged with a WARN log level).
- Specified by:
ignoreErrors in interface TokenStream
- Returns:
token stream instance used to configure or start stream processing
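The practical difference between onError(...) and ignoreErrors() can be shown with a small self-contained stand-in whose stream fails mid-way. This is a sketch of the contract described above, not the real implementation; it assumes, as the description states, that an ignored error is only logged (at WARN level in the real class) and never reaches user code:

```java
import java.util.function.Consumer;

// Stand-in contrasting onError(...) with ignoreErrors(): when errors are
// ignored, a failure is only logged and never reaches user code.
class ErrorModeStream {
    private Consumer<Throwable> errorHandler; // null means "ignore (log only)"

    ErrorModeStream onError(Consumer<Throwable> handler) {
        this.errorHandler = handler;
        return this;
    }

    ErrorModeStream ignoreErrors() {
        this.errorHandler = null;
        return this;
    }

    void start() {
        try {
            throw new RuntimeException("stream failed"); // simulated mid-stream failure
        } catch (RuntimeException e) {
            if (errorHandler != null) {
                errorHandler.accept(e); // delivered to the registered handler
            } else {
                System.out.println("WARN: ignored error: " + e.getMessage()); // logged and swallowed
            }
        }
    }
}

public class ErrorModeDemo {
    public static void main(String[] args) {
        new ErrorModeStream()
                .onError(e -> System.out.println("handled: " + e.getMessage()))
                .start();

        new ErrorModeStream()
                .ignoreErrors()
                .start(); // does not throw; the failure is only logged
    }
}
```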
-
start
public void start()
Description copied from interface: TokenStream
Completes the current token stream building and starts processing. Will send a request to the LLM and start response streaming.
- Specified by:
start in interface TokenStream
-