aliyun-qwen-deep-research

Category: provider

Model Studio Qwen Deep Research


Validation


```bash
mkdir -p output/aliyun-qwen-deep-research
python -m py_compile skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py && echo "py_compile_ok" > output/aliyun-qwen-deep-research/validate.txt
```

Pass criteria: the command exits 0 and `output/aliyun-qwen-deep-research/validate.txt` is generated.
bash
mkdir -p output/aliyun-qwen-deep-research
python -m py_compile skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py && echo "py_compile_ok" > output/aliyun-qwen-deep-research/validate.txt
通过标准:命令退出码为0,且成功生成
output/aliyun-qwen-deep-research/validate.txt
文件。

Output And Evidence


  • Save research goals, confirmation answers, normalized request payloads, and final report snapshots under `output/aliyun-qwen-deep-research/`.
  • Keep the exact model, region, and `enable_feedback` setting with each saved run.

Use this skill when the user wants a deep, multi-stage research workflow rather than a single chat completion.

Critical model names


Use one of these exact model strings:
  • qwen-deep-research
  • qwen-deep-research-2025-12-15

Selection guidance:
  • Use `qwen-deep-research` for the current mainline model.
  • Use `qwen-deep-research-2025-12-15` when you need the snapshot with MCP tool-calling support and stronger reproducibility.

Prerequisites


  • Install the SDK in a virtual environment:

```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
```

  • Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
  • This model currently applies only to the China mainland (Beijing) region and uses its own API shape rather than the OpenAI-compatible mode.
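The two credential sources above can be checked in priority order: the environment variable first, then the credentials file. A minimal resolution sketch, assuming the `~/.alibabacloud/credentials` file is a simple INI-style `key = value` file (the exact file layout is an assumption; check your actual file):

```python
import configparser
import os
from pathlib import Path


def resolve_api_key(env=None, credentials_path="~/.alibabacloud/credentials"):
    """Return the DashScope API key, preferring DASHSCOPE_API_KEY from the environment."""
    env = os.environ if env is None else env
    key = env.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = Path(credentials_path).expanduser()
    if path.exists():
        parser = configparser.ConfigParser()
        parser.read(path)
        # Look for dashscope_api_key in any section, then in the DEFAULT section.
        for section in parser.sections():
            if "dashscope_api_key" in parser[section]:
                return parser[section]["dashscope_api_key"]
        if "dashscope_api_key" in parser.defaults():
            return parser.defaults()["dashscope_api_key"]
    return None
```

Failing closed (returning `None`) lets the caller decide whether a missing key is fatal or whether to prompt for one.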

Normalized interface (research.run)


Request


  • `topic` (string, required)
  • `model` (string, optional): default `qwen-deep-research`
  • `messages` (array&lt;object&gt;, optional)
  • `enable_feedback` (bool, optional): default `true`
  • `stream` (bool, optional): must be `true`
  • `attachments` (array&lt;object&gt;, optional): image URLs and related context
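The fields above can be assembled into a normalized payload before serialization, with `stream` forced on since the interface requires it. A sketch; the helper name is illustrative, and the defaults simply mirror the ones listed above:

```python
def build_research_request(topic, model="qwen-deep-research",
                           messages=None, enable_feedback=True,
                           attachments=None):
    """Build a normalized research.run request dict; stream is always True."""
    if not topic or not isinstance(topic, str):
        raise ValueError("topic is required and must be a non-empty string")
    request = {
        "topic": topic,
        "model": model,
        "enable_feedback": enable_feedback,
        "stream": True,  # the interface only supports streaming
    }
    # Optional fields are omitted entirely when unset, keeping saved payloads minimal.
    if messages:
        request["messages"] = messages
    if attachments:
        request["attachments"] = attachments
    return request
```

Saving the dict returned here alongside each run also satisfies the evidence requirement of recording the exact model and `enable_feedback` setting.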

Response


  • `status` (string): stage status such as `thinking`, `researching`, or `finished`
  • `text` (string, optional): streamed content chunk
  • `report` (string, optional): final structured research report
  • `raw` (object, optional)
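A consumer of this response shape can accumulate `text` chunks as they stream in, and capture the final `report` once `status` reaches `finished`. A sketch against the normalized fields only; the event list here is simulated rather than a real dashscope stream:

```python
def consume_research_stream(events):
    """Fold normalized research.run events into (transcript, report)."""
    transcript = []
    report = None
    for event in events:
        chunk = event.get("text")
        if chunk:
            transcript.append(chunk)
        if event.get("status") == "finished":
            report = event.get("report")
    return "".join(transcript), report


# Simulated stream using the documented stage statuses.
events = [
    {"status": "thinking", "text": "Scoping the topic. "},
    {"status": "researching", "text": "Gathering sources. "},
    {"status": "finished", "report": "# Final report"},
]
```

Keeping the running transcript separate from the final `report` makes it easy to save both as run snapshots under `output/aliyun-qwen-deep-research/`.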

Quick start


```bash
python skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py \
  --topic "Compare cloud video generation model trade-offs for marketing automation." \
  --disable-feedback
```

Operational guidance


  • Expect streaming output only.
  • Keep the initial topic concrete and bounded; broad topics can trigger long iterative search plans.
  • If the model asks follow-up questions and you already know the constraints, answer them explicitly to avoid wasted rounds.
  • Use the snapshot model when you need stable evaluation runs or MCP tool-calling support.
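When `enable_feedback` is on and the model asks clarifying questions, the explicit answers recommended above can be threaded back into the `messages` array for the next call. A hedged sketch of that bookkeeping; the `assistant`/`user` role names follow the common chat convention and are an assumption about the actual schema:

```python
def append_feedback_round(messages, question, answer):
    """Record a clarifying question and the user's explicit answer in the history."""
    history = list(messages)  # copy so the caller's history is not mutated
    history.append({"role": "assistant", "content": question})
    history.append({"role": "user", "content": answer})
    return history
```

Answering every open constraint in one round, as the guidance suggests, keeps the history short and avoids wasted iterations.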

Output location


  • Default output: `output/aliyun-qwen-deep-research/requests/`
  • Override the base directory with the `OUTPUT_DIR` environment variable.
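The override rule is simple to express: if `OUTPUT_DIR` is set, it replaces the default `output` base while the skill-specific subpath stays fixed. A sketch, assuming `OUTPUT_DIR` replaces only the top-level base segment:

```python
import os
from pathlib import Path


def request_output_dir(env=None):
    """Resolve the directory where prepared request payloads are saved."""
    env = os.environ if env is None else env
    base = Path(env.get("OUTPUT_DIR", "output"))
    return base / "aliyun-qwen-deep-research" / "requests"
```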

References


  • references/sources.md