Interface TokenCountEstimator

All Known Implementing Classes:
AzureOpenAiChatModel, AzureOpenAiStreamingChatModel, GoogleAiGeminiChatModel, OpenAiChatModel, OpenAiStreamingChatModel

public interface TokenCountEstimator
An interface for estimating the number of tokens in various kinds of text, such as a plain string, a message, a prompt, or a text segment. This is useful when the cost of processing a given text with an LLM needs to be known in advance.
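As a motivating sketch of the "know the cost in advance" use case: the snippet below pairs a token estimate with a price-per-token calculation. The four-characters-per-token heuristic and the price figure are illustrative assumptions, not part of this interface; a real implementation would use the model's actual tokenizer and pricing.

```java
public class CostCheck {

    // Naive stand-in estimator: roughly one token per four characters.
    // This is a rule of thumb for English text, not a real tokenizer.
    static int estimateTokenCount(String text) {
        return Math.max(1, text.length() / 4);
    }

    // Hypothetical price of $0.15 per 1M input tokens (illustrative only).
    static double estimatedCostUsd(String text) {
        return estimateTokenCount(text) * 0.15 / 1_000_000;
    }

    public static void main(String[] args) {
        String prompt = "Summarize the following report in three bullet points.";
        int tokens = estimateTokenCount(prompt);
        System.out.println(tokens + " tokens, ~$" + estimatedCostUsd(prompt));
    }
}
```

Calling an estimator like this before sending a request lets an application reject or truncate oversized inputs up front instead of paying for them.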
  • Method Details

    • estimateTokenCount

      default int estimateTokenCount(String text)
      Estimates the count of tokens in the specified text.
      Parameters:
      text - the text
      Returns:
      the estimated count of tokens
    • estimateTokenCount

      default int estimateTokenCount(UserMessage userMessage)
      Estimates the count of tokens in the specified message.
      Parameters:
      userMessage - the message
      Returns:
      the estimated count of tokens
    • estimateTokenCount

      default int estimateTokenCount(Prompt prompt)
      Estimates the count of tokens in the specified prompt.
      Parameters:
      prompt - the prompt
      Returns:
      the estimated count of tokens
    • estimateTokenCount

      default int estimateTokenCount(TextSegment textSegment)
      Estimates the count of tokens in the specified text segment.
      Parameters:
      textSegment - the text segment
      Returns:
      the estimated count of tokens
    • estimateTokenCount

      int estimateTokenCount(List<ChatMessage> messages)
      Estimates the count of tokens in the specified list of messages. This is the only abstract method of the interface; the other overloads are default methods.
      Parameters:
      messages - the list of messages
      Returns:
      the estimated count of tokens
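Since only the `List<ChatMessage>` overload is abstract, implementing the interface requires a single method. The sketch below is self-contained: `ChatMessage` and `UserMessage` are simplified stand-ins for the real types, the four-characters-per-token heuristic is a placeholder for a real tokenizer, and the way the default overloads delegate to the `List` overload is an assumption about plausible behavior, not something stated on this page.

```java
import java.util.List;

// Simplified stand-ins for the real message types, for illustration only.
interface ChatMessage { String text(); }

record UserMessage(String text) implements ChatMessage { }

interface TokenCountEstimator {
    // The only abstract method; per the signatures above, the rest are defaults.
    int estimateTokenCount(List<ChatMessage> messages);

    // Assumed delegation: wrap the argument and reuse the List overload.
    default int estimateTokenCount(String text) {
        return estimateTokenCount(new UserMessage(text));
    }

    default int estimateTokenCount(UserMessage userMessage) {
        return estimateTokenCount(List.<ChatMessage>of(userMessage));
    }
}

// Naive implementation: roughly one token per four characters of text.
class NaiveEstimator implements TokenCountEstimator {
    @Override
    public int estimateTokenCount(List<ChatMessage> messages) {
        int chars = messages.stream().mapToInt(m -> m.text().length()).sum();
        return Math.max(1, chars / 4);
    }
}

public class Main {
    public static void main(String[] args) {
        TokenCountEstimator estimator = new NaiveEstimator();
        // "Hello, how are you today?" is 25 characters -> 25 / 4 = 6.
        int tokens = estimator.estimateTokenCount("Hello, how are you today?");
        if (tokens != 6) throw new AssertionError("unexpected estimate: " + tokens);
        System.out.println("Estimated tokens: " + tokens);
    }
}
```

Concentrating the logic in one abstract method keeps custom implementations small: a caller can pass a string, a single message, or a whole conversation, and all paths funnel into the same counting routine.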