Class AiServices<T>

java.lang.Object
dev.langchain4j.service.AiServices<T>
Type Parameters:
T - The interface for which AiServices will provide an implementation.

public abstract class AiServices<T> extends Object
AI Services is a high-level LangChain4j API for interacting with ChatLanguageModel and StreamingChatLanguageModel.

You can define your own API (a Java interface with one or more methods), and AiServices will provide an implementation for it, hiding all the complexity from you.

You can find more details here.

Please note that an AI Service should not be called concurrently for the same @MemoryId, as this can corrupt the ChatMemory. Currently, AI Services do not implement any mechanism to prevent concurrent calls for the same @MemoryId.

Currently, AI Services support:

 - Static system message templates, configured via @SystemMessage annotation on top of the method
 - Dynamic system message templates, configured via systemMessageProvider(Function)
 - Static user message templates, configured via @UserMessage annotation on top of the method
 - Dynamic user message templates, configured via method parameter annotated with @UserMessage
 - Single (shared) ChatMemory, configured via chatMemory(ChatMemory)
 - Separate (per-user) ChatMemory, configured via chatMemoryProvider(ChatMemoryProvider) and a method parameter annotated with @MemoryId
 - RAG, configured via contentRetriever(ContentRetriever) or retrievalAugmentor(RetrievalAugmentor)
 - Tools, configured via tools(List), tools(Object...), tools(Map) or toolProvider(ToolProvider) and methods annotated with @Tool
 - Various method return types (output parsers), see more details below
 - Streaming (use TokenStream as a return type)
 - Structured prompts as method arguments (see @StructuredPrompt)
 - Auto-moderation, configured via @Moderate annotation
 

Here is the simplest example of an AI Service:

 interface Assistant {

     String chat(String userMessage);
 }

 Assistant assistant = AiServices.create(Assistant.class, model);

 String answer = assistant.chat("hello");
 System.out.println(answer); // Hello, how can I help you today?
 
 The return type of methods in your AI Service can be any of the following:
 - a String or an AiMessage, if you want to get the answer from the LLM as-is
 - a List<String> or Set<String>, if you want to receive the answer as a collection of items or bullet points
 - any Enum or a boolean, if you want to use the LLM for classification
 - a primitive or boxed Java type: int, Double, etc., if you want to use the LLM for data extraction
 - many default Java types: Date, LocalDateTime, BigDecimal, etc., if you want to use the LLM for data extraction
 - any custom POJO, if you want to use the LLM for data extraction.
 - Result<T> if you want to access TokenUsage or sources (Contents retrieved during RAG), aside from T, which can be of any type listed above. For example: Result<String>, Result<MyCustomPojo>
 For POJOs, it is advisable to use the "json mode" feature if the LLM provider supports it. For OpenAI, this can be enabled by calling responseFormat("json_object") during model construction.
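 For instance, here is a sketch of a data extraction service built around a hypothetical Person POJO (both the POJO and the prompt below are illustrative):

 class Person {

     String firstName;
     String lastName;
     LocalDate birthDate;
 }

 interface PersonExtractor {

     @UserMessage("Extract information about a person from {{it}}")
     Person extractPersonFrom(String text);
 }

 PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);

 Person person = extractor.extractPersonFrom("In 1968, John Doe was born in...");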

 

Let's see how we can classify the sentiment of a text:

 enum Sentiment {
     POSITIVE, NEUTRAL, NEGATIVE
 }

 interface SentimentAnalyzer {

     @UserMessage("Analyze sentiment of {{it}}")
     Sentiment analyzeSentimentOf(String text);
 }

 SentimentAnalyzer sentimentAnalyzer = AiServices.create(SentimentAnalyzer.class, model);

 Sentiment sentiment = sentimentAnalyzer.analyzeSentimentOf("I love you");
 System.out.println(sentiment); // POSITIVE
 

As demonstrated, you can put @UserMessage and @SystemMessage annotations above a method to define templates for user and system messages, respectively. In this example, the special {{it}} prompt template variable is used because there's only one method parameter. However, you can use more parameters as demonstrated in the following example:

 interface Translator {

     @SystemMessage("You are a professional translator into {{language}}")
     @UserMessage("Translate the following text: {{text}}")
     String translate(@V("text") String text, @V("language") String language);
 }
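 
 Creating and using this service then follows the same pattern (a minimal usage sketch):

 Translator translator = AiServices.create(Translator.class, model);

 String translation = translator.translate("Hello, how are you?", "Italian");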
 

See more examples here.

  • Method Details

    • create

      public static <T> T create(Class<T> aiService, ChatLanguageModel chatLanguageModel)
      Creates an AI Service (an implementation of the provided interface) that is backed by the provided chat model. This convenience method can be used to create simple AI Services. For more complex cases, please use builder(java.lang.Class<T>).
      Parameters:
      aiService - The class of the interface to be implemented.
      chatLanguageModel - The chat model to be used under the hood.
      Returns:
      An instance of the provided interface, implementing all its defined methods.
    • create

      public static <T> T create(Class<T> aiService, StreamingChatLanguageModel streamingChatLanguageModel)
      Creates an AI Service (an implementation of the provided interface) that is backed by the provided streaming chat model. This convenience method can be used to create simple AI Services. For more complex cases, please use builder(java.lang.Class<T>).
      Parameters:
      aiService - The class of the interface to be implemented.
      streamingChatLanguageModel - The streaming chat model to be used under the hood. The return type of all methods should be TokenStream.
      Returns:
      An instance of the provided interface, implementing all its defined methods.
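      For illustration, a minimal streaming sketch (assuming the fluent TokenStream API with onNext, onComplete, onError and start):

       interface Assistant {

           TokenStream chat(String userMessage);
       }

       Assistant assistant = AiServices.create(Assistant.class, streamingChatLanguageModel);

       assistant.chat("Tell me a story")
               .onNext(System.out::print)                      // invoked for each new token
               .onComplete(response -> System.out.println())   // invoked once the response is complete
               .onError(Throwable::printStackTrace)
               .start();                                       // nothing happens until start() is called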
    • builder

      public static <T> AiServices<T> builder(Class<T> aiService)
      Begins the construction of an AI Service.
      Parameters:
      aiService - The class of the interface to be implemented.
      Returns:
      builder
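      A typical builder chain looks like this (a sketch; MessageWindowChatMemory is one of the provided ChatMemory implementations, and Calculator is a hypothetical class with Tool-annotated methods):

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
               .tools(new Calculator()) // hypothetical @Tool-annotated class, see tools(Object...)
               .build();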
    • chatLanguageModel

      public AiServices<T> chatLanguageModel(ChatLanguageModel chatLanguageModel)
      Configures the chat model that will be used under the hood of the AI Service.

      Either ChatLanguageModel or StreamingChatLanguageModel should be configured, but not both at the same time.

      Parameters:
      chatLanguageModel - Chat model that will be used under the hood of the AI Service.
      Returns:
      builder
    • streamingChatLanguageModel

      public AiServices<T> streamingChatLanguageModel(StreamingChatLanguageModel streamingChatLanguageModel)
      Configures the streaming chat model that will be used under the hood of the AI Service. The methods of the AI Service must return a TokenStream type.

      Either ChatLanguageModel or StreamingChatLanguageModel should be configured, but not both at the same time.

      Parameters:
      streamingChatLanguageModel - Streaming chat model that will be used under the hood of the AI Service.
      Returns:
      builder
    • systemMessageProvider

      public AiServices<T> systemMessageProvider(Function<Object,String> systemMessageProvider)
      Configures the system message provider, which provides a system message to be used each time an AI service is invoked.
      When both @SystemMessage and the system message provider are configured, @SystemMessage takes precedence.
      Parameters:
      systemMessageProvider - A Function that accepts a chat memory ID (a value of a method parameter annotated with @MemoryId) and returns a system message to be used. If there is no parameter annotated with @MemoryId, the value of memory ID is "default". The returned String can be either a complete system message or a system message template containing unresolved template variables (e.g. "{{name}}"), which will be resolved using the values of method parameters annotated with @V.
      Returns:
      builder
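      For illustration, a provider that varies the system message per conversation might look like this (sketch):

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .systemMessageProvider(memoryId ->
                       "You are a helpful assistant. The current conversation ID is " + memoryId)
               .build();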
    • chatMemory

      public AiServices<T> chatMemory(ChatMemory chatMemory)
      Configures the chat memory that will be used to preserve conversation history between method calls.

      Unless a ChatMemory or ChatMemoryProvider is configured, all method calls will be independent of each other. In other words, the LLM will not remember the conversation from the previous method calls.

      The same ChatMemory instance will be used for every method call.

      If you want to have a separate ChatMemory for each user/conversation, configure chatMemoryProvider(dev.langchain4j.memory.chat.ChatMemoryProvider) instead.

      Either a ChatMemory or a ChatMemoryProvider can be configured, but not both simultaneously.

      Parameters:
      chatMemory - An instance of chat memory to be used by the AI Service.
      Returns:
      builder
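      For example (a sketch using the provided MessageWindowChatMemory implementation):

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
               .build();

       assistant.chat("My name is Klaus");
       String answer = assistant.chat("What is my name?"); // the LLM can now recall "Klaus"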
    • chatMemoryProvider

      public AiServices<T> chatMemoryProvider(ChatMemoryProvider chatMemoryProvider)
      Configures the chat memory provider, which provides a dedicated instance of ChatMemory for each user/conversation. To distinguish between users/conversations, one of the method's arguments should be a memory ID (of any data type) annotated with MemoryId. For each new (previously unseen) memoryId, an instance of ChatMemory will be automatically obtained by invoking ChatMemoryProvider.get(Object id). Example:
       interface Assistant {
      
           String chat(@MemoryId int memoryId, @UserMessage String message);
       }
       
      If you prefer to use the same (shared) ChatMemory for all users/conversations, configure a chatMemory(dev.langchain4j.memory.ChatMemory) instead.

      Either a ChatMemory or a ChatMemoryProvider can be configured, but not both simultaneously.

      Parameters:
      chatMemoryProvider - The provider of a ChatMemory for each new user/conversation.
      Returns:
      builder
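      The provider is typically a lambda that creates a fresh ChatMemory per memory ID (a sketch, reusing the Assistant interface above):

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .chatMemoryProvider(memoryId -> MessageWindowChatMemory.withMaxMessages(10))
               .build();

       assistant.chat(1, "Hello, my name is Klaus");
       assistant.chat(2, "Hello, my name is Francine"); // a separate memory; knows nothing about Klaus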
    • moderationModel

      public AiServices<T> moderationModel(ModerationModel moderationModel)
      Configures a moderation model to be used for automatic content moderation. If a method in the AI Service is annotated with Moderate, the moderation model will be invoked to check the user content for any inappropriate or harmful material.
      Parameters:
      moderationModel - The moderation model to be used for content moderation.
      Returns:
      builder
      See Also:
      Moderate
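      For example (a sketch; if the moderation model flags the user's content, the call is expected to fail with a ModerationException):

       interface Assistant {

           @Moderate
           String chat(String userMessage);
       }

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .moderationModel(moderationModel)
               .build();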
    • tools

      public AiServices<T> tools(Object... objectsWithTools)
      Configures the tools that the LLM can use.
      Parameters:
      objectsWithTools - One or more objects whose methods are annotated with Tool. All these tools (methods annotated with Tool) will be accessible to the LLM. Note that inherited methods are ignored.
      Returns:
      builder
      See Also:
      Tool
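      For illustration, a hypothetical Calculator tool class (sketch):

       class Calculator {

           @Tool("Calculates the square root of a number")
           double squareRoot(double number) {
               return Math.sqrt(number);
           }
       }

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .tools(new Calculator())
               .build();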
    • tools

      public AiServices<T> tools(List<Object> objectsWithTools)
      Configures the tools that the LLM can use.
      Parameters:
      objectsWithTools - A list of objects whose methods are annotated with Tool. All these tools (methods annotated with Tool) are accessible to the LLM. Note that inherited methods are ignored.
      Returns:
      builder
      See Also:
      Tool
    • toolProvider

      public AiServices<T> toolProvider(ToolProvider toolProvider)
      Configures a tool provider that supplies the tools the LLM can use.
      Parameters:
      toolProvider - Decides which tools the LLM can use to handle the current request.
      Returns:
      builder
    • tools

      public AiServices<T> tools(Map<ToolSpecification,ToolExecutor> tools)
      Configures the tools that the LLM can use.
      Parameters:
      tools - A map of ToolSpecification to ToolExecutor entries. This method of configuring tools is useful when tools must be configured programmatically. Otherwise, it is recommended to use Java methods annotated with Tool and configure tools with the tools(Object...) and tools(List) methods.
      Returns:
      builder
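      A programmatic configuration might look like this (a sketch; the exact ToolSpecification builder options vary by version, and the lambda implements ToolExecutor):

       ToolSpecification toolSpecification = ToolSpecification.builder()
               .name("currentTime")
               .description("Returns the current time")
               .build();

       ToolExecutor toolExecutor = (toolExecutionRequest, memoryId) -> java.time.LocalTime.now().toString();

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .tools(Map.of(toolSpecification, toolExecutor))
               .build();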
    • retriever

      @Deprecated(forRemoval=true) public AiServices<T> retriever(Retriever<TextSegment> retriever)
      Deprecated, for removal: This API element is subject to removal in a future version.
      Use contentRetriever(ContentRetriever) (e.g. EmbeddingStoreContentRetriever) instead.
      Configures a retriever that will be invoked on every method call to fetch relevant information related to the current user message from an underlying source (e.g., embedding store). This relevant information is automatically injected into the message sent to the LLM.
      Parameters:
      retriever - The retriever to be used by the AI Service.
      Returns:
      builder
    • contentRetriever

      public AiServices<T> contentRetriever(ContentRetriever contentRetriever)
      Configures a content retriever to be invoked on every method call for retrieving relevant content related to the user's message from an underlying data source (e.g., an embedding store in the case of an EmbeddingStoreContentRetriever). The retrieved relevant content is then automatically incorporated into the message sent to the LLM.
      This method provides a straightforward approach for those who do not require a customized RetrievalAugmentor. It configures a DefaultRetrievalAugmentor with the provided ContentRetriever.
      Parameters:
      contentRetriever - The content retriever to be used by the AI Service.
      Returns:
      builder
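      For example (a sketch, assuming an already-populated embedding store and a matching embedding model):

       ContentRetriever contentRetriever = EmbeddingStoreContentRetriever.builder()
               .embeddingStore(embeddingStore)
               .embeddingModel(embeddingModel)
               .maxResults(3)
               .build();

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .contentRetriever(contentRetriever)
               .build();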
    • retrievalAugmentor

      public AiServices<T> retrievalAugmentor(RetrievalAugmentor retrievalAugmentor)
      Configures a retrieval augmentor to be invoked on every method call.
      Parameters:
      retrievalAugmentor - The retrieval augmentor to be used by the AI Service.
      Returns:
      builder
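      For example (a sketch; DefaultRetrievalAugmentor can be customized further, e.g. with query transformers and content injectors):

       RetrievalAugmentor retrievalAugmentor = DefaultRetrievalAugmentor.builder()
               .contentRetriever(contentRetriever)
               .build();

       Assistant assistant = AiServices.builder(Assistant.class)
               .chatLanguageModel(model)
               .retrievalAugmentor(retrievalAugmentor)
               .build();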
    • build

      public abstract T build()
      Constructs and returns the AI Service.
      Returns:
      An instance of the AI Service implementing the specified interface.
    • performBasicValidation

      protected void performBasicValidation()
    • removeToolMessages

      public static List<ChatMessage> removeToolMessages(List<ChatMessage> messages)
    • verifyModerationIfNeeded

      public static void verifyModerationIfNeeded(Future<Moderation> moderationFuture)