aliyun-emoji

Category: provider

Model Studio Emoji

Validation

```bash
mkdir -p output/aliyun-emoji
python -m py_compile skills/ai/video/aliyun-emoji/scripts/prepare_emoji_request.py && echo "py_compile_ok" > output/aliyun-emoji/validate.txt
```

Pass criteria: the command exits with code 0 and output/aliyun-emoji/validate.txt is generated.

Output and evidence

  • Save normalized request payloads, detected face boxes, the selected template ID, and task polling snapshots under output/aliyun-emoji/.
  • Record the exact portrait URL and whether detection passed.

Use Emoji when the user wants a fixed-template facial animation clip rather than open-ended video generation.

Critical model names

Use these exact model strings:
  • emoji-detect-v1
  • emoji-v1

Selection guidance:
  • Run emoji-detect-v1 first to obtain face_bbox and ext_bbox_face.
  • Use emoji-v1 only after detection succeeds.
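
The detect-before-generate contract can be sketched as a small driver. `call_detect` and `call_generate` are injected stand-ins for the real DashScope calls, so only the ordering rule is shown, not the wire protocol.

```python
MODEL_DETECT = "emoji-detect-v1"
MODEL_GENERATE = "emoji-v1"

def run_emoji_flow(image_url, template_id, call_detect, call_generate):
    """Run detection first; call the generate model only after it succeeds."""
    detect = call_detect({"model": MODEL_DETECT, "image_url": image_url})
    # Generation requires both boxes produced by detection.
    if "face_bbox" not in detect or "ext_bbox_face" not in detect:
        raise RuntimeError("face detection failed; do not call emoji-v1")
    return call_generate({
        "model": MODEL_GENERATE,
        "image_url": image_url,
        "face_bbox": detect["face_bbox"],
        "ext_bbox_face": detect["ext_bbox_face"],
        "template_id": template_id,
    })
```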

Prerequisites

  • China mainland (Beijing) only.
  • Set DASHSCOPE_API_KEY in your environment, or add dashscope_api_key to ~/.alibabacloud/credentials.
  • The input image must be a public HTTP/HTTPS URL.

Normalized interface (video.emoji)

Detect Request

  • model (string, optional): default emoji-detect-v1
  • image_url (string, required)
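
A minimal builder for the detect payload, enforcing the public HTTP/HTTPS requirement from the prerequisites. `build_detect_request` is an illustrative helper, not the skill's API.

```python
def build_detect_request(image_url: str, model: str = "emoji-detect-v1") -> dict:
    """Build the normalized video.emoji detect payload."""
    # The prerequisites require a public HTTP/HTTPS URL.
    if not image_url.startswith(("http://", "https://")):
        raise ValueError("image_url must be a public HTTP/HTTPS URL")
    return {"model": model, "image_url": image_url}
```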

Generate Request

  • model (string, optional): default emoji-v1
  • image_url (string, required)
  • face_bbox (array<int>, required)
  • ext_bbox_face (array<int>, required)
  • template_id (string, required)
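
The generate payload can be validated the same way. The 4-element [x1, y1, x2, y2] box shape is an assumption inferred from the quick-start example, not a documented contract.

```python
def build_generate_request(image_url, face_bbox, ext_bbox_face, template_id,
                           model="emoji-v1"):
    """Build the normalized video.emoji generate payload.

    Both boxes are assumed to be 4-element integer lists, as in the
    quick-start example (302,286,610,593 / 71,9,840,778)."""
    for name, box in (("face_bbox", face_bbox), ("ext_bbox_face", ext_bbox_face)):
        if len(box) != 4 or not all(isinstance(v, int) for v in box):
            raise ValueError(f"{name} must be four integers")
    return {
        "model": model,
        "image_url": image_url,
        "face_bbox": list(face_bbox),
        "ext_bbox_face": list(ext_bbox_face),
        "template_id": template_id,
    }
```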

Response

  • task_id (string)
  • task_status (string)
  • video_url (string, when finished)
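
A polling sketch over the normalized response fields. `fetch_status` is a stand-in for the real status call, and the SUCCEEDED/FAILED terminal statuses are assumptions about the task lifecycle.

```python
import time

def poll_task(fetch_status, task_id, interval=2.0, max_attempts=30):
    """Poll a task until it finishes and return its video_url."""
    for _ in range(max_attempts):
        resp = fetch_status(task_id)
        status = resp.get("task_status")
        if status == "SUCCEEDED":
            # video_url is only present when the task has finished.
            return resp["video_url"]
        if status == "FAILED":
            raise RuntimeError(f"task {task_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in time")
```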

Quick start

```bash
python skills/ai/video/aliyun-emoji/scripts/prepare_emoji_request.py \
  --image-url "https://example.com/portrait.png" \
  --face-bbox 302,286,610,593 \
  --ext-bbox-face 71,9,840,778 \
  --template-id emoji_001
```
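
For illustration, the comma-separated bbox flags could be parsed like this. `parse_bbox` is hypothetical and may not match the script's actual implementation.

```python
def parse_bbox(value: str) -> list[int]:
    """Parse a flag value like '302,286,610,593' into a list of four ints."""
    parts = [int(p) for p in value.split(",")]
    if len(parts) != 4:
        raise ValueError("expected four comma-separated integers")
    return parts
```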

Operational guidance

  • Use a single-person, front-facing portrait with no facial occlusion.
  • Template IDs come from the official template list or the console experience page; do not invent them in production calls.
  • Emoji output is a person video clip, not a sticker pack or a text-overlay asset.

Output location

  • Default output: output/aliyun-emoji/request.json
  • Override the base directory with OUTPUT_DIR.
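
Resolving the output path with the OUTPUT_DIR override might look like this (illustrative helper, not the skill's code):

```python
import os
from pathlib import Path

def request_output_path(env=os.environ) -> Path:
    """Default output path, with OUTPUT_DIR overriding the base directory."""
    base = Path(env.get("OUTPUT_DIR", "output"))
    return base / "aliyun-emoji" / "request.json"
```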

References

  • references/sources.md