DashScope (Qwen)

DashScope is a platform developed by Alibaba Cloud. It provides an interface for model visualization, monitoring, and debugging, particularly when working with AI/ML models in production environments. The platform allows users to visualize performance metrics, track model behavior, and identify potential issues early on in the deployment cycle.

Qwen models are a series of generative AI models developed by Alibaba Cloud. The Qwen family of models is designed for tasks such as text generation, summarization, question answering, and other NLP tasks.

You can refer to the DashScope documentation for more details. LangChain4j integrates with DashScope via the DashScope Java SDK.

Maven Dependency

You can use DashScope with LangChain4j in plain Java or Spring Boot applications.

Plain Java

note

Since 1.0.0-alpha1, langchain4j-dashscope has migrated to langchain4j-community and has been renamed to langchain4j-community-dashscope.

Before 1.0.0-alpha1:


<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-dashscope</artifactId>
<version>${previous version here}</version>
</dependency>

1.0.0-alpha1 and later:


<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-community-dashscope</artifactId>
<version>${latest version here}</version>
</dependency>

Spring Boot

note

Since 1.0.0-alpha1, langchain4j-dashscope-spring-boot-starter has migrated to langchain4j-community and has been renamed to langchain4j-community-dashscope-spring-boot-starter.

Before 1.0.0-alpha1:


<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-dashscope-spring-boot-starter</artifactId>
<version>${previous version here}</version>
</dependency>

1.0.0-alpha1 and later:


<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-community-dashscope-spring-boot-starter</artifactId>
<version>${latest version here}</version>
</dependency>

Or, you can use the BOM to manage dependency versions consistently:


<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-community-bom</artifactId>
            <version>${latest version here}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
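
With the BOM imported, the individual module can then be declared without an explicit version, since the version is inherited from the BOM:

```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-community-dashscope</artifactId>
    <!-- no <version>: managed by langchain4j-community-bom -->
</dependency>
```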

Configurable Parameters

langchain4j-community-dashscope provides four model classes:

  • QwenChatModel
  • QwenStreamingChatModel
  • QwenLanguageModel
  • QwenStreamingLanguageModel

QwenChatModel

QwenChatModel has the following parameters to configure when you initialize it:

  • baseUrl: The URL to connect to. You can use HTTP or WebSocket to connect to DashScope. Default: the Text Inference and Multi-Modal endpoints.
  • apiKey: The API key.
  • modelName: The model to use. Default: qwen-plus.
  • topP: The probability threshold for nucleus sampling, which controls the diversity of the texts generated by the model. The higher the top_p, the more diverse the generated texts, and vice versa. Value range: (0, 1.0]. We generally recommend altering this or temperature, but not both.
  • topK: The size of the sampled candidate set during the generation process.
  • enableSearch: Whether the model uses Internet search results for reference when generating text.
  • seed: Setting the seed parameter makes the text generation process more deterministic; it is typically used to make results reproducible across runs.
  • repetitionPenalty: The penalty for repetition in a continuous sequence during model generation. Increasing repetition_penalty reduces repetition; 1.0 means no penalty. Value range: (0, +inf).
  • temperature: The sampling temperature, which controls the diversity of the text generated by the model. The higher the temperature, the more diverse the generated text, and vice versa. Value range: [0, 2).
  • stops: With the stop parameter, the model automatically stops generating text when the output is about to contain one of the specified strings or token_ids.
  • maxTokens: The maximum number of tokens returned by this request.
  • listeners: Listeners that observe requests, responses, and errors.
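
As a sketch of how the listeners parameter can be used, the following registers a simple logging listener. It assumes the ChatModelListener interface from langchain4j-core; the log messages themselves are illustrative:

```java
import dev.langchain4j.community.model.dashscope.QwenChatModel;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.listener.ChatModelErrorContext;
import dev.langchain4j.model.chat.listener.ChatModelListener;
import dev.langchain4j.model.chat.listener.ChatModelRequestContext;
import dev.langchain4j.model.chat.listener.ChatModelResponseContext;

import java.util.List;

// A listener that logs each request, response, and error
ChatModelListener loggingListener = new ChatModelListener() {
    @Override
    public void onRequest(ChatModelRequestContext context) {
        System.out.println("Request sent to DashScope");
    }

    @Override
    public void onResponse(ChatModelResponseContext context) {
        System.out.println("Response received from DashScope");
    }

    @Override
    public void onError(ChatModelErrorContext context) {
        System.err.println("Request failed: " + context.error().getMessage());
    }
};

ChatLanguageModel model = QwenChatModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .listeners(List.of(loggingListener))
        .build();
```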

QwenStreamingChatModel

Same as QwenChatModel.
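
A minimal streaming sketch, assuming the StreamingChatLanguageModel and StreamingResponseHandler APIs from the same LangChain4j version; the prompt is illustrative:

```java
import dev.langchain4j.community.model.dashscope.QwenStreamingChatModel;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.output.Response;

StreamingChatLanguageModel model = QwenStreamingChatModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .build();

// Tokens are delivered incrementally as they are generated
model.generate("Tell me a joke", new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onNext(String token) {
        System.out.print(token);
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        System.out.println();
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```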

QwenLanguageModel

Same as QwenChatModel, except listeners.

QwenStreamingLanguageModel

Same as QwenChatModel, except listeners.
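
The language-model variants target plain text completion rather than chat. A minimal sketch, assuming the LanguageModel interface and its generate method from the same LangChain4j version; the prompt is illustrative:

```java
import dev.langchain4j.community.model.dashscope.QwenLanguageModel;
import dev.langchain4j.model.language.LanguageModel;
import dev.langchain4j.model.output.Response;

LanguageModel model = QwenLanguageModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .build();

// generate returns a Response wrapping the completion text
Response<String> response = model.generate("Write a haiku about autumn");
System.out.println(response.content());
```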

Examples

Plain Java

You can initialize QwenChatModel using the following code:

ChatLanguageModel qwenModel = QwenChatModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .build();

Or, to customize other parameters:

ChatLanguageModel qwenModel = QwenChatModel.builder()
        .apiKey("Your API key here")
        .modelName("qwen-max")
        .enableSearch(true)
        .temperature(0.7)
        .maxTokens(4096)
        .stops(List.of("Hello"))
        .build();
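
Once built, the model can be called like any other ChatLanguageModel. A minimal sketch, assuming the generate(String) convenience method is available on the interface; the prompt is illustrative:

```java
String answer = qwenModel.generate("What is the capital of France?");
System.out.println(answer);
```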

Spring Boot

After introducing the langchain4j-community-dashscope-spring-boot-starter dependency, you can register a QwenChatModel bean with the following configuration:

langchain4j.community.dashscope.api-key=<Your API key here>
langchain4j.community.dashscope.model-name=qwen-max
# The properties are the same as `QwenChatModel`
# e.g.
# langchain4j.community.dashscope.temperature=0.7
# langchain4j.community.dashscope.max-tokens=4096
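
The auto-configured model can then be injected like any other Spring bean. A minimal sketch; the service class and prompt are illustrative, and generate(String) is assumed to be available on the configured model:

```java
import dev.langchain4j.community.model.dashscope.QwenChatModel;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final QwenChatModel qwenChatModel;

    public ChatService(QwenChatModel qwenChatModel) {
        // Injected by the Spring Boot starter's auto-configuration
        this.qwenChatModel = qwenChatModel;
    }

    public String chat(String userMessage) {
        return qwenChatModel.generate(userMessage);
    }
}
```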

More Examples

You can find more examples in the LangChain4j Community repository.