aliyun-qwen-deep-research
Category: provider
Model Studio Qwen Deep Research
Validation
```bash
mkdir -p output/aliyun-qwen-deep-research
python -m py_compile skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py && echo "py_compile_ok" > output/aliyun-qwen-deep-research/validate.txt
```

Pass criteria: the command exits 0 and `output/aliyun-qwen-deep-research/validate.txt` is generated.

Output And Evidence
- Save research goals, confirmation answers, normalized request payloads, and final report snapshots under `output/aliyun-qwen-deep-research/`.
- Keep the exact model, region, and `enable_feedback` setting with each saved run.

Use this skill when the user wants a deep, multi-stage research workflow rather than a single chat completion.
Critical model names
Use one of these exact model strings:

- `qwen-deep-research`
- `qwen-deep-research-2025-12-15`

Selection guidance:

- Use `qwen-deep-research` for the current mainline model.
- Use `qwen-deep-research-2025-12-15` when you need the snapshot with MCP tool-calling support and stronger reproducibility.
Prerequisites
- Install the SDK in a virtual environment:

```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
```

- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
- This model currently applies to the China mainland (Beijing) region and uses its own API shape rather than OpenAI-compatible mode.
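A quick preflight check for the credential chain above can be scripted. This is a convenience sketch, not part of the skill: it only confirms that `DASHSCOPE_API_KEY` is set or that the credentials file exists, and leaves parsing the file to the SDK itself.

```python
import os
from pathlib import Path


def dashscope_credentials_present() -> bool:
    """Return True if a DashScope API key is discoverable.

    Checks the DASHSCOPE_API_KEY environment variable first, then falls
    back to the existence of ~/.alibabacloud/credentials. The SDK itself
    is responsible for parsing that file.
    """
    if os.environ.get("DASHSCOPE_API_KEY"):
        return True
    return (Path.home() / ".alibabacloud" / "credentials").exists()
```

Running this before submitting a research request avoids a failed run deep into a long workflow.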
Normalized interface (research.run)
Request
- `topic` (string, required)
- `model` (string, optional): default `qwen-deep-research`
- `messages` (array<object>, optional)
- `enable_feedback` (bool, optional): default `true`
- `stream` (bool, optional): must be `true`
- `attachments` (array<object>, optional): image URLs and related context
Response
- `status` (string): stage status such as `thinking`, `researching`, or `finished`
- `text` (string, optional): streamed content chunk
- `report` (string, optional): final structured research report
- `raw` (object, optional)
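Consuming the streamed response might look like the following sketch. It assumes each event is a dict carrying the fields above, which is a simplification of whatever the SDK actually yields:

```python
def collect_report(events) -> str:
    """Accumulate streamed text chunks; prefer the final report when present.

    `events` is any iterable of dicts shaped like the response fields
    above: {"status": ..., "text": ..., "report": ...}.
    """
    chunks: list[str] = []
    report = ""
    for event in events:
        # Intermediate stages stream partial content via "text".
        if event.get("text"):
            chunks.append(event["text"])
        # The terminal stage may carry the structured report.
        if event.get("status") == "finished" and event.get("report"):
            report = event["report"]
    return report or "".join(chunks)
```

If no `report` field ever arrives, the concatenated `text` chunks serve as a fallback transcript of the run.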
Quick start
```bash
python skills/ai/research/aliyun-qwen-deep-research/scripts/prepare_deep_research_request.py \
  --topic "Compare cloud video generation model trade-offs for marketing automation." \
  --disable-feedback
```

Operational guidance
- Expect streaming output only.
- Keep the initial topic concrete and bounded; broad topics can trigger long iterative search plans.
- If the model asks follow-up questions and you already know the constraints, answer them explicitly to avoid wasted rounds.
- Use the snapshot model when you need stable evaluation runs or MCP tool-calling support.
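One way to answer a follow-up round explicitly, per the guidance above, is to append the model's question and your answer to `messages` before resubmitting. The role names here are assumptions borrowed from common chat APIs, not confirmed by this document:

```python
def with_followup_answer(request: dict, question: str, answer: str) -> dict:
    """Return a copy of a normalized request with one Q/A turn appended.

    `question` is the clarifying question the model asked; `answer`
    states the constraints you already know, so no round is wasted.
    """
    updated = dict(request)
    turns = list(updated.get("messages", []))
    turns.append({"role": "assistant", "content": question})
    turns.append({"role": "user", "content": answer})
    updated["messages"] = turns
    return updated
```

The original request dict is left untouched, so each saved run under `output/aliyun-qwen-deep-research/` can snapshot both versions.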
Output location
- Default output: `output/aliyun-qwen-deep-research/requests/`
- Override the base directory with the `OUTPUT_DIR` environment variable.
References
references/sources.md