# aliyun-animate-anyone
Category: provider
Model Studio AnimateAnyone
## Validation
```bash
mkdir -p output/aliyun-animate-anyone
python -m py_compile skills/ai/video/aliyun-animate-anyone/scripts/prepare_animate_anyone_request.py && echo "py_compile_ok" > output/aliyun-animate-anyone/validate.txt
```

Pass criteria: the command exits 0 and `output/aliyun-animate-anyone/validate.txt` is generated.

## Output And Evidence
- Save normalized request payloads, detection outputs, template IDs, and task polling snapshots under `output/aliyun-animate-anyone/`.
- Record whether the result should keep the reference image background or the source video background.

Use AnimateAnyone when the task needs motion transfer from a template video rather than plain talking-head animation.
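The evidence-saving step above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation; the snapshot file-naming scheme is an assumption:

```python
import json
import time
from pathlib import Path

def save_snapshot(resp: dict, out_dir: str = "output/aliyun-animate-anyone") -> Path:
    """Persist one task-polling snapshot as evidence.

    Filenames are timestamped (an assumed convention) so successive
    polls of the same task do not overwrite each other.
    """
    directory = Path(out_dir)
    directory.mkdir(parents=True, exist_ok=True)
    task_id = resp.get("task_id", "unknown")
    path = directory / f"task_{task_id}_{int(time.time())}.json"
    path.write_text(json.dumps(resp, ensure_ascii=False, indent=2))
    return path
```

Polling snapshots make it easy to audit afterwards which statuses a task went through before it finished.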
## Critical model names
Use these exact model strings:

- `animate-anyone-detect-gen2`
- `animate-anyone-template-gen2`
- `animate-anyone-gen2`

Selection guidance:

- Run image detection first.
- Run template generation on the source motion video.
- Use `animate-anyone-gen2` for the final video job.
## Prerequisites
- China mainland (Beijing) region only.
- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
- Input files must be public HTTP/HTTPS URLs.
## Normalized interface (video.animate_anyone)
### Detect Request

- `model` (string, optional): default `animate-anyone-detect-gen2`
- `image_url` (string, required)
### Template Request

- `model` (string, optional): default `animate-anyone-template-gen2`
- `video_url` (string, required)
### Generate Request

- `model` (string, optional): default `animate-anyone-gen2`
- `image_url` (string, required)
- `template_id` (string, required)
- `use_ref_img_bg` (bool, optional): whether to keep the input image background
### Response

- `task_id` (string)
- `task_status` (string)
- `video_url` (string, when finished)
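The three normalized requests above can be assembled as plain dictionaries. This sketch only mirrors the field contract listed in this section; it is not the actual `prepare_animate_anyone_request.py` implementation:

```python
def build_detect_request(image_url: str,
                         model: str = "animate-anyone-detect-gen2") -> dict:
    # Detect request: runs person detection on the reference image.
    return {"model": model, "image_url": image_url}

def build_template_request(video_url: str,
                           model: str = "animate-anyone-template-gen2") -> dict:
    # Template request: extracts a motion template from the source video.
    return {"model": model, "video_url": video_url}

def build_generate_request(image_url: str, template_id: str,
                           use_ref_img_bg: bool = False,
                           model: str = "animate-anyone-gen2") -> dict:
    # Generate request: reference image plus a template ID obtained
    # from the official template-generation API.
    if not image_url.startswith(("http://", "https://")):
        raise ValueError("input files must be public HTTP/HTTPS URLs")
    return {
        "model": model,
        "image_url": image_url,
        "template_id": template_id,
        "use_ref_img_bg": use_ref_img_bg,
    }
```

The URL check enforces the prerequisite that inputs are public HTTP/HTTPS links before any task is submitted.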
## Quick start
```bash
python skills/ai/video/aliyun-animate-anyone/scripts/prepare_animate_anyone_request.py \
  --image-url "https://example.com/dancer.png" \
  --template-id "tmpl_xxx" \
  --use-ref-img-bg
```

## Operational guidance
- The action template must come from the official template-generation API.
- Full-body images work best when `use_ref_img_bg=false`; half-body images are not recommended in that mode.
- This skill is best for dancing or large body-motion transfer, not generic talking-head tasks.
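Since generation runs as an asynchronous task (submit, then poll `task_status` until a result appears), the polling loop can be sketched generically. `fetch_status` is a caller-supplied placeholder for the real task-query call, and the terminal status names are assumptions, not confirmed by this document:

```python
import time
from typing import Callable

def poll_task(task_id: str, fetch_status: Callable[[str], dict],
              interval_s: float = 5.0, timeout_s: float = 600.0) -> dict:
    """Poll an async video task until it reaches a terminal status.

    `fetch_status` queries the task API and returns the normalized
    response ({"task_id", "task_status", and "video_url" when finished}).
    "SUCCEEDED"/"FAILED" as terminal statuses is an assumption here.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = fetch_status(task_id)
        if resp.get("task_status") in ("SUCCEEDED", "FAILED"):
            return resp
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish within {timeout_s}s")
```

Each intermediate response can also be passed to the evidence-saving step so the polling history lands under the output directory.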
## Output location
- Default output: `output/aliyun-animate-anyone/request.json`
- Override the base directory with the `OUTPUT_DIR` environment variable.
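The path resolution can be sketched as below; that `OUTPUT_DIR` replaces the default `output` base (with the skill subdirectory appended to it) is an assumption about the override semantics:

```python
import os
from pathlib import Path

def request_output_path() -> Path:
    """Resolve the request.json path, honoring the OUTPUT_DIR override.

    Assumes OUTPUT_DIR replaces the default `output` base directory,
    with `aliyun-animate-anyone/` still appended underneath it.
    """
    base = Path(os.environ.get("OUTPUT_DIR", "output"))
    out_dir = base / "aliyun-animate-anyone"
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir / "request.json"
```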
## References
references/sources.md