aliyun-wan-video
Category: provider
Model Studio Wan Video
Validation
```bash
mkdir -p output/aliyun-wan-video
python -m py_compile skills/ai/video/aliyun-wan-video/scripts/generate_video.py && echo "py_compile_ok" > output/aliyun-wan-video/validate.txt
```

Pass criteria: the command exits 0 and output/aliyun-wan-video/validate.txt is generated.

Output And Evidence
- Save task IDs, polling responses, and final video URLs to output/aliyun-wan-video/.
- Keep one end-to-end run log for troubleshooting.
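As a sketch of the evidence logging above (the helper name and the JSONL layout are assumptions, not part of the skill):

```python
import json
import os
import time


def save_evidence(record: dict, base_dir: str = "output/aliyun-wan-video") -> str:
    """Append one run record (task ID, polling responses, final URL) to a JSONL log."""
    os.makedirs(base_dir, exist_ok=True)
    path = os.path.join(base_dir, "run_log.jsonl")
    entry = {"ts": time.time(), **record}  # timestamp each record for troubleshooting
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return path
```

One line per run keeps the log greppable and safe to append from concurrent runs.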
Provide consistent video generation behavior for the video-agent pipeline by standardizing the video.generate inputs/outputs and using the DashScope SDK (Python) with the exact model name.
Critical model names
Use one of these exact model strings:

- wan2.6-t2v
- wan2.6-t2v-us
- wan2.2-t2v-plus
- wan2.2-t2v-flash
- wan2.6-i2v-flash
- wan2.6-i2v
- wan2.6-i2v-us
- wanx2.1-t2v-turbo
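Because the model string must match exactly, a small guard can reject typos before any API call. The allow-list mirrors the strings above; the constant and function names are assumptions:

```python
ALLOWED_WAN_MODELS = {
    "wan2.6-t2v", "wan2.6-t2v-us", "wan2.2-t2v-plus", "wan2.2-t2v-flash",
    "wan2.6-i2v-flash", "wan2.6-i2v", "wan2.6-i2v-us", "wanx2.1-t2v-turbo",
}


def check_model(model: str) -> str:
    """Fail fast on invented or misspelled model names."""
    if model not in ALLOWED_WAN_MODELS:
        raise ValueError(
            f"Unknown Wan model {model!r}; use one of {sorted(ALLOWED_WAN_MODELS)}"
        )
    return model
```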
Prerequisites
- Install the SDK (recommended in a venv to avoid PEP 668 limits):

```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
```

- Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials (env takes precedence).
Normalized interface (video.generate)
标准化接口(video.generate)
Request
- prompt (string, required)
- negative_prompt (string, optional)
- duration (number, required) seconds
- fps (number, required)
- size (string, required) e.g. 1280*720
- seed (int, optional)
- reference_image (string | bytes, optional for t2v, required for i2v family models)
- motion_strength (number, optional)
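A minimal validator for the request shape above, including the i2v rule on reference_image (the function name is an assumption):

```python
def validate_request(req: dict) -> dict:
    """Check required fields and the i2v-needs-an-image rule before calling the API."""
    for field in ("prompt", "duration", "fps", "size"):
        if field not in req:
            raise ValueError(f"missing required field: {field}")
    model = req.get("model", "")
    if "i2v" in model and not req.get("reference_image"):
        raise ValueError("reference_image is required for i2v family models")
    return req
```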
Response
- video_url (string)
- duration (number)
- fps (number)
- seed (int)
Quick start (Python + DashScope SDK)
Video generation is usually asynchronous. Expect a task ID and poll until completion.
Note: Wan i2v models require an input image; pure t2v models such as wan2.6-t2v can omit reference_image.

```python
import os
from dashscope import VideoSynthesis

# Prefer the env var for auth: export DASHSCOPE_API_KEY=...
# Or use ~/.alibabacloud/credentials with dashscope_api_key under [default].


def generate_video(req: dict) -> dict:
    payload = {
        "model": req.get("model", "wan2.6-i2v-flash"),
        "prompt": req["prompt"],
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration", 4),
        "fps": req.get("fps", 24),
        "size": req.get("size", "1280*720"),
        "seed": req.get("seed"),
        "motion_strength": req.get("motion_strength"),
        "api_key": os.getenv("DASHSCOPE_API_KEY"),
    }
    # Drop unset optional fields so they are not sent as explicit nulls.
    payload = {k: v for k, v in payload.items() if v is not None}
    if req.get("reference_image"):
        # DashScope expects img_url for i2v models; local files are auto-uploaded.
        payload["img_url"] = req["reference_image"]
    response = VideoSynthesis.call(**payload)
    # Some SDK versions require polling for the final result.
    # If a task_id is returned, poll until status is SUCCEEDED.
    result = response.output.get("results", [None])[0]
    return {
        "video_url": None if not result else result.get("url"),
        "duration": response.output.get("duration"),
        "fps": response.output.get("fps"),
        "seed": response.output.get("seed"),
    }
```

Async handling (polling)
```python
import os
from dashscope import VideoSynthesis

task = VideoSynthesis.async_call(
    model=req.get("model", "wan2.6-i2v-flash"),
    prompt=req["prompt"],
    img_url=req["reference_image"],
    duration=req.get("duration", 4),
    fps=req.get("fps", 24),
    size=req.get("size", "1280*720"),
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)
final = VideoSynthesis.wait(task)
video_url = final.output.get("video_url")
```
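If you poll the task status manually instead of using the SDK's blocking wait, space the checks out. A minimal exponential-backoff schedule (a pure helper; all names and defaults are assumptions):

```python
def poll_schedule(initial: float = 2.0, factor: float = 1.5,
                  cap: float = 30.0, max_total: float = 600.0):
    """Yield sleep intervals, growing by `factor` up to `cap`,
    until roughly `max_total` seconds have been scheduled."""
    total, delay = 0.0, initial
    while total < max_total:
        yield delay
        total += delay
        delay = min(delay * factor, cap)
```

Usage: `for delay in poll_schedule(): time.sleep(delay); ...check status, break on SUCCEEDED/FAILED...`.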
Operational guidance
- Video generation can take minutes; expose progress and allow cancel/retry.
- Cache by (prompt, negative_prompt, duration, fps, size, seed, reference_image hash, motion_strength).
- Store video assets in object storage and persist only URLs in metadata.
- reference_image can be a URL or local path; the SDK auto-uploads local files.
- If you get "Field required: input.img_url", the reference image is missing or not mapped.
- wan2.6-t2v and wan2.6-t2v-us add multi-shot narrative support and optional audio input according to the official docs.
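The cache key above can be computed deterministically; the hashing choices here are assumptions:

```python
import hashlib
import json


def cache_key(req: dict) -> str:
    """Stable key over the request fields; the reference image is hashed, not embedded."""
    ref = req.get("reference_image")
    if isinstance(ref, str):
        ref = ref.encode("utf-8")
    fields = {
        "prompt": req.get("prompt"),
        "negative_prompt": req.get("negative_prompt"),
        "duration": req.get("duration"),
        "fps": req.get("fps"),
        "size": req.get("size"),
        "seed": req.get("seed"),
        "reference_image": hashlib.sha256(ref).hexdigest() if ref else None,
        "motion_strength": req.get("motion_strength"),
    }
    # sort_keys makes the JSON serialization order-independent.
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode("utf-8")).hexdigest()
```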
Size notes
- Use the WxH format with * as the separator (e.g. 1280*720).
- Prefer common sizes; unsupported sizes can return 400.
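A shape-only check for the size string (the set of sizes each model accepts varies, so only the format is validated here; names are assumptions):

```python
import re


def check_size(size: str) -> tuple[int, int]:
    """Validate the W*H size string (e.g. 1280*720) and return (width, height)."""
    m = re.fullmatch(r"(\d+)\*(\d+)", size)
    if m is None:
        raise ValueError(f"size must look like 1280*720, got {size!r}")
    return int(m.group(1)), int(m.group(2))
```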
Output location
- Default output: output/aliyun-wan-video/videos/
- Override the base dir with OUTPUT_DIR.
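The override rule can be sketched as (the helper name is an assumption):

```python
import os


def resolve_output_dir() -> str:
    """Default base dir, overridable via OUTPUT_DIR, with videos/ appended."""
    base = os.getenv("OUTPUT_DIR", "output/aliyun-wan-video")
    path = os.path.join(base, "videos")
    os.makedirs(path, exist_ok=True)
    return path
```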
Anti-patterns
- Do not invent model names or aliases; use only the official Wan model IDs listed above.
- Do not block the UI without progress updates.
- Do not retry blindly on 4xx; handle validation failures explicitly.
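The no-blind-retry rule can be made explicit with a status-code classifier (a sketch; treating 429 as retryable is an added assumption):

```python
def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient failures (429 and 5xx), up to max_attempts.
    4xx validation errors are surfaced to the caller instead."""
    if attempt >= max_attempts:
        return False
    return status_code == 429 or 500 <= status_code < 600
```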
Workflow
- Confirm user intent, region, identifiers, and whether the operation is read-only or mutating.
- Run one minimal read-only query first to verify connectivity and permissions.
- Execute the target operation with explicit parameters and bounded scope.
- Verify results and save output/evidence files.
References
- See references/api_reference.md for DashScope SDK mapping and async handling notes.
- Source list: references/sources.md