Class GitHubModelsStreamingChatModel

java.lang.Object
dev.langchain4j.model.github.GitHubModelsStreamingChatModel
All Implemented Interfaces:
StreamingChatLanguageModel

public class GitHubModelsStreamingChatModel extends Object implements StreamingChatLanguageModel
Represents a language model, hosted on GitHub Models, that has a chat completion interface, such as gpt-4o.

Mandatory parameters for initialization are gitHubToken (the GitHub token used for authentication) and modelName (the name of the model to use). You can also provide your own ChatCompletionsClient and ChatCompletionsAsyncClient instances if you need more flexibility.

The list of models, as well as the documentation and a playground to test them, can be found at https://github.com/marketplace/models
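For example, a model can be constructed with just the two mandatory parameters (a minimal sketch: the gitHubToken and modelName builder setters follow the parameter names above, and reading the token from the GITHUB_TOKEN environment variable is an illustrative assumption, not a requirement of this class):

    import dev.langchain4j.model.chat.StreamingChatLanguageModel;
    import dev.langchain4j.model.github.GitHubModelsStreamingChatModel;

    StreamingChatLanguageModel model = GitHubModelsStreamingChatModel.builder()
            .gitHubToken(System.getenv("GITHUB_TOKEN")) // mandatory: GitHub token used for authentication
            .modelName("gpt-4o")                        // mandatory: name of the model to use
            .build();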

  • Method Details

    • generate

      public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a sequence of messages. Typically, the sequence contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      handler - The handler for streaming the response.
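      Example (a non-authoritative sketch reusing the model instance built in the class description; imports of java.util.List and the dev.langchain4j message, Response, and StreamingResponseHandler types are assumed):

          List<ChatMessage> messages = List.of(
                  SystemMessage.from("You are a concise assistant."),
                  UserMessage.from("Explain what GitHub Models is in one sentence."));

          model.generate(messages, new StreamingResponseHandler<AiMessage>() {
              @Override
              public void onNext(String token) {
                  System.out.print(token); // called for each partial token as it streams in
              }

              @Override
              public void onComplete(Response<AiMessage> response) {
                  System.out.println();
                  System.out.println("Finished: " + response.finishReason());
              }

              @Override
              public void onError(Throwable error) {
                  error.printStackTrace();
              }
          });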
    • generate

      public void generate(List<ChatMessage> messages, List<ToolSpecification> toolSpecifications, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a list of messages and a list of tool specifications. The response may either be a text message or a request to execute one of the specified tools. Typically, the list contains messages in the following order: System (optional) - User - AI - User - AI - User ...
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecifications - A list of tools that the model is allowed to execute. The model autonomously decides whether to use any of these tools.
      handler - The handler for streaming the response. AiMessage can contain either a textual response or a request to execute one of the tools.
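      Example (a sketch under the same assumptions as the previous example; the getCurrentWeather tool name is an illustrative placeholder and its parameter schema is omitted for brevity):

          ToolSpecification weatherTool = ToolSpecification.builder()
                  .name("getCurrentWeather") // hypothetical tool name
                  .description("Returns the current weather for a city")
                  .build();

          model.generate(
                  List.of(UserMessage.from("What is the weather in Paris?")),
                  List.of(weatherTool),
                  new StreamingResponseHandler<AiMessage>() {
                      @Override
                      public void onNext(String token) {
                          System.out.print(token); // streamed only if the model answers with text
                      }

                      @Override
                      public void onComplete(Response<AiMessage> response) {
                          AiMessage aiMessage = response.content();
                          if (aiMessage.hasToolExecutionRequests()) {
                              // the model decided to call a tool instead of answering directly
                              aiMessage.toolExecutionRequests().forEach(request ->
                                      System.out.println(request.name() + " " + request.arguments()));
                          }
                      }

                      @Override
                      public void onError(Throwable error) {
                          error.printStackTrace();
                      }
                  });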
    • generate

      public void generate(List<ChatMessage> messages, ToolSpecification toolSpecification, StreamingResponseHandler<AiMessage> handler)
      Description copied from interface: StreamingChatLanguageModel
      Generates a response from the model based on a list of messages and a single tool specification. The model is forced to execute the specified tool. This is usually achieved by setting `tool_choice=ANY` in the LLM provider API.
      Specified by:
      generate in interface StreamingChatLanguageModel
      Parameters:
      messages - A list of messages.
      toolSpecification - The specification of a tool that must be executed. The model is forced to execute this tool.
      handler - The handler for streaming the response.
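      Example (a sketch under the same assumptions as above; the extractPerson tool is an illustrative placeholder, and because the model is forced to call it, the completed AiMessage is expected to carry a tool execution request rather than text):

          ToolSpecification extractPerson = ToolSpecification.builder()
                  .name("extractPerson") // hypothetical tool the model must call
                  .description("Extracts a person's name from the text")
                  .build();

          model.generate(
                  List.of(UserMessage.from("My name is Klaus and I live in Berlin.")),
                  extractPerson,
                  new StreamingResponseHandler<AiMessage>() {
                      @Override
                      public void onNext(String token) {
                          // typically no text tokens arrive here when a tool call is forced
                      }

                      @Override
                      public void onComplete(Response<AiMessage> response) {
                          response.content().toolExecutionRequests().forEach(request ->
                                  System.out.println(request.arguments()));
                      }

                      @Override
                      public void onError(Throwable error) {
                          error.printStackTrace();
                      }
                  });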
    • builder

      public static GitHubModelsStreamingChatModel.Builder builder()