aliyun-qwen-generation

Category: provider

Model Studio Qwen Text Generation

Validation

bash
mkdir -p output/aliyun-qwen-generation
python -m py_compile skills/ai/text/aliyun-qwen-generation/scripts/prepare_generation_request.py && echo "py_compile_ok" > output/aliyun-qwen-generation/validate.txt
Pass criteria: the command exits 0 and output/aliyun-qwen-generation/validate.txt is generated.

Output and evidence

  • Save prompt templates, normalized request payloads, and response summaries under output/aliyun-qwen-generation/.
  • Keep one reproducible request example with model name, region, and key parameters.

Use this skill for general text generation, reasoning, tool-calling, and long-context chat on Alibaba Cloud Model Studio.

Critical model names

Prefer the current flagship families:
  • qwen3-max
  • qwen3-max-2026-01-23
  • qwen3.5-plus
  • qwen3.5-plus-2026-02-15
  • qwen3.5-flash
  • qwen3.5-flash-2026-02-23
Common related variants listed in the official model catalog:
  • qwen3.5-397b-a17b
  • qwen3.5-122b-a10b
  • qwen3.5-35b-a3b
  • qwen3.5-27b

Prerequisites

  • Install the SDK in a virtual environment:
bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
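As a sketch of that lookup order, the helper below checks the environment first and then falls back to the credentials file. The INI layout of ~/.alibabacloud/credentials (sections containing a dashscope_api_key entry) is an assumption here; adjust the parsing to match your actual file.

```python
import configparser
import os
from pathlib import Path

def resolve_api_key():
    """Return the DashScope API key, preferring the environment variable
    and falling back to ~/.alibabacloud/credentials (assumed INI format)."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    cred_path = Path.home() / ".alibabacloud" / "credentials"
    if cred_path.exists():
        parser = configparser.ConfigParser()
        parser.read(cred_path)
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    return None
```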

Normalized interface (text.generate)

Request

  • messages (array<object>, required): standard chat turns.
  • model (string, optional): defaults to qwen3.5-plus.
  • temperature (number, optional)
  • top_p (number, optional)
  • max_tokens (int, optional)
  • enable_thinking (bool, optional)
  • tools (array<object>, optional)
  • response_format (object, optional)
  • stream (bool, optional)
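The request fields above can be assembled into a payload with a small helper. This is an illustrative sketch only; build_generate_request is a hypothetical name, not part of any SDK.

```python
def build_generate_request(messages, model="qwen3.5-plus", **options):
    """Assemble a normalized text.generate payload. messages is required;
    the keyword options map one-to-one onto the optional fields listed
    above (temperature, top_p, max_tokens, enable_thinking, tools,
    response_format, stream)."""
    if not messages:
        raise ValueError("messages is required and must be non-empty")
    allowed = {"temperature", "top_p", "max_tokens", "enable_thinking",
               "tools", "response_format", "stream"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unsupported options: {sorted(unknown)}")
    payload = {"model": model, "messages": list(messages)}
    # Drop options explicitly passed as None so the payload stays minimal.
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload
```

Rejecting unknown option names early keeps typos (e.g. max_token) from being silently ignored by the endpoint.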

Response

  • text (string): assistant output.
  • finish_reason (string, optional)
  • usage (object, optional)
  • raw (object, optional)
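A response from the OpenAI-compatible endpoint can be mapped onto this normalized shape with a few dictionary lookups. A minimal sketch, assuming the standard chat-completion layout (choices[0].message.content, finish_reason, usage):

```python
def normalize_response(raw):
    """Map an OpenAI-compatible chat completion response dict onto the
    normalized shape: text, finish_reason, usage, raw."""
    choice = (raw.get("choices") or [{}])[0]
    return {
        "text": (choice.get("message") or {}).get("content", ""),
        "finish_reason": choice.get("finish_reason"),
        "usage": raw.get("usage"),
        "raw": raw,  # keep the untouched response for debugging/evidence
    }
```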

Quick start (OpenAI-compatible endpoint)

bash
curl -sS https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
  -H "Authorization: Bearer $DASHSCOPE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5-plus",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Summarize why object storage helps media pipelines."}
    ],
    "stream": false
  }'
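The same call can be issued from Python. The sketch below only constructs the URL, headers, and JSON body mirroring the curl example; POST it with urllib.request or any HTTP client of your choice.

```python
import json

COMPAT_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"

def build_http_request(api_key, payload):
    """Return (url, headers, body) for the OpenAI-compatible endpoint,
    matching the curl invocation above."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return COMPAT_URL, headers, json.dumps(payload).encode("utf-8")
```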

Local helper script

bash
python skills/ai/text/aliyun-qwen-generation/scripts/prepare_generation_request.py \
  --prompt "Draft a concise architecture summary for a media ingestion pipeline." \
  --model qwen3.5-plus

Operational guidance

  • Use snapshot IDs when reproducibility matters.
  • Prefer qwen3.5-flash for lower-latency simple tasks and qwen3-max for harder multi-step tasks.
  • Keep tool schemas minimal and explicit when enabling tool calls.
  • For multimodal input, route to dedicated VL or Omni skills unless the task is primarily text-centric.
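The first two points above can be sketched as a routing helper. The task labels are illustrative, and the snapshot mapping simply pins each family to the dated IDs listed under "Critical model names":

```python
def pick_model(task, needs_reproducibility=False):
    """Illustrative routing: coarse task label -> model name.
    With needs_reproducibility=True, return the dated snapshot ID instead
    of the rolling alias."""
    base = {"simple": "qwen3.5-flash",
            "general": "qwen3.5-plus",
            "hard": "qwen3-max"}[task]
    snapshots = {"qwen3.5-flash": "qwen3.5-flash-2026-02-23",
                 "qwen3.5-plus": "qwen3.5-plus-2026-02-15",
                 "qwen3-max": "qwen3-max-2026-01-23"}
    return snapshots[base] if needs_reproducibility else base
```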

Output location

  • Default output: output/aliyun-qwen-generation/requests/
  • Override the base directory with OUTPUT_DIR.
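A small sketch of that resolution order, assuming OUTPUT_DIR replaces the leading output/ segment of the default path:

```python
import os
from pathlib import Path

def request_output_dir():
    """Resolve the requests directory, honoring the OUTPUT_DIR override,
    and create it if missing."""
    base = Path(os.environ.get("OUTPUT_DIR", "output"))
    path = base / "aliyun-qwen-generation" / "requests"
    path.mkdir(parents=True, exist_ok=True)
    return path
```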

References

  • references/sources.md