# Wan 2.7 Image-to-Video (`aliyun-wan-i2v`)

Use when generating videos from images with the DashScope Wan 2.7 image-to-video model (`wan2.7-i2v`). Use when implementing first-frame video generation, first+last frame interpolation, video continuation, or audio-driven video synthesis via the async `video-synthesis` API.

Source: `cinience/alicloud-skills`
## Validation

```bash
mkdir -p output/aliyun-wan-i2v
python -m py_compile skills/ai/video/aliyun-wan-i2v/scripts/generate_i2v.py && echo "py_compile_ok" > output/aliyun-wan-i2v/validate.txt
```

Pass criteria: the command exits 0 and `output/aliyun-wan-i2v/validate.txt` is generated.

## Output and Evidence

- Save task IDs, polling responses, and final video URLs to `output/aliyun-wan-i2v/`.
- Keep at least one end-to-end run log for troubleshooting.
## Prerequisites

- Install the SDK (recommended in a venv):

```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install dashscope
```

- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
## Critical model names

- `wan2.7-i2v`: supports first-frame, first+last frame, video continuation, and audio-driven generation
## Capabilities

| Capability | Description | Required media types |
|---|---|---|
| First-frame video | Generate video from a single image | `first_frame` |
| First+last frame | Interpolate video between two images | `first_frame` + `last_frame` |
| Video continuation | Extend an existing video clip | `first_clip` |
| Audio-driven | Drive video with audio (lip-sync, rhythm) | `first_frame` + `driving_audio` |
## API endpoint (async only)

```
POST https://dashscope.aliyuncs.com/api/v1/services/aigc/video-generation/video-synthesis
```

Required headers:

```
Authorization: Bearer $DASHSCOPE_API_KEY
Content-Type: application/json
X-DashScope-Async: enable
```

Singapore endpoint: replace `dashscope.aliyuncs.com` with `dashscope-intl.aliyuncs.com`.

## Normalized interface
## Request

- `prompt` (string, optional): up to 5000 characters; describes the desired video content
- `negative_prompt` (string, optional): up to 500 characters
- `media` (array, required): media objects with `type` and `url` fields:
  - `type`: `first_frame` | `last_frame` | `driving_audio` | `first_clip`
  - `url`: public URL (HTTP/HTTPS) or OSS temporary URL
- `resolution` (string, optional): `720P` or `1080P` (default: `1080P`)
- `duration` (integer, optional): video length in seconds, range [2, 15] (default: 5)
- `prompt_extend` (boolean, optional): AI prompt rewriting (default: true)
- `watermark` (boolean, optional): add an "AI generated" watermark (default: false)
- `seed` (integer, optional): range [0, 2147483647]
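A minimal normalized request combining these fields might look like this (the prompt, URL, and parameter values are purely illustrative):

```python
request = {
    "prompt": "A paper boat drifts down a rain-soaked street at dusk",
    "media": [
        {"type": "first_frame", "url": "https://example.com/boat.jpg"},
    ],
    "resolution": "720P",  # 720P or 1080P
    "duration": 4,         # seconds, within [2, 15]
    "seed": 42,            # optional, for reproducibility
}
```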
## Media input limits

Images (`first_frame`, `last_frame`):
- Formats: JPEG, JPG, PNG (no transparency), BMP, WEBP
- Resolution: [240, 8000] pixels per side
- Aspect ratio: 1:8 to 8:1
- Max size: 20MB

Audio (`driving_audio`):
- Formats: wav, mp3
- Duration: 2-30s
- Max size: 15MB
- Auto-truncated to the `duration` value if longer

Video (`first_clip`):
- Formats: mp4, mov
- Duration: 2-10s
- Resolution: [240, 4096] pixels per side
- Aspect ratio: 1:8 to 8:1
- Max size: 100MB
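The format and size caps above can be checked locally before creating a task. A sketch of such a pre-flight check (a local convenience, not part of the API; resolution and aspect-ratio checks would require decoding the media and are omitted):

```python
import os

# Extension and byte limits copied from the lists above.
MEDIA_LIMITS = {
    "first_frame":   ({".jpeg", ".jpg", ".png", ".bmp", ".webp"}, 20 * 1024**2),
    "last_frame":    ({".jpeg", ".jpg", ".png", ".bmp", ".webp"}, 20 * 1024**2),
    "driving_audio": ({".wav", ".mp3"}, 15 * 1024**2),
    "first_clip":    ({".mp4", ".mov"}, 100 * 1024**2),
}


def check_media(media_type: str, filename: str, size_bytes: int) -> list[str]:
    """Return a list of violations (an empty list means the file passes)."""
    exts, max_bytes = MEDIA_LIMITS[media_type]
    problems = []
    ext = os.path.splitext(filename)[1].lower()
    if ext not in exts:
        problems.append(f"{ext or 'missing extension'} not allowed for {media_type}")
    if size_bytes > max_bytes:
        problems.append(f"{size_bytes} bytes exceeds the {max_bytes}-byte cap")
    return problems
```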
## Response (task creation)

- `output.task_id` (string): use for polling; valid for 24 hours
- `output.task_status` (string): PENDING | RUNNING | SUCCEEDED | FAILED | CANCELED
- `request_id` (string)
## Response (task result)

- `output.video_url` (string): generated video URL
- `output.orig_prompt` (string): original prompt
- `output.actual_prompt` (string): rewritten prompt (if `prompt_extend` is enabled)
- `usage.video_count` (integer)
- `usage.video_duration` (integer): duration in seconds
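Putting the documented fields together, a SUCCEEDED poll result has roughly this shape (all values below are made up for illustration):

```python
result = {
    "request_id": "7574ee8f-0000-0000-0000-000000000000",
    "output": {
        "task_id": "a1b2c3d4-0000-0000-0000-000000000000",
        "task_status": "SUCCEEDED",
        "video_url": "https://example.com/generated/video.mp4",
        "orig_prompt": "A cat stretches in the sun",
        "actual_prompt": "A fluffy cat slowly stretches in warm morning sunlight",
    },
    "usage": {"video_count": 1, "video_duration": 5},
}

# Extract the URL and download it within 24 hours of task completion.
video_url = result["output"]["video_url"]
```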
## Quick start (Python + HTTP)

```python
import os
import json
import time

import requests

API_KEY = os.getenv("DASHSCOPE_API_KEY")
BASE_URL = "https://dashscope.aliyuncs.com/api/v1"


def create_i2v_task(req: dict) -> str:
    """Create an image-to-video task and return task_id."""
    payload = {
        "model": "wan2.7-i2v",
        "input": {
            "prompt": req.get("prompt", ""),
            "media": req["media"],
        },
        "parameters": {
            "resolution": req.get("resolution", "1080P"),
            "duration": req.get("duration", 5),
            "prompt_extend": req.get("prompt_extend", True),
            "watermark": req.get("watermark", False),
        },
    }
    if req.get("negative_prompt"):
        payload["input"]["negative_prompt"] = req["negative_prompt"]
    if req.get("seed") is not None:
        payload["parameters"]["seed"] = req["seed"]
    resp = requests.post(
        f"{BASE_URL}/services/aigc/video-generation/video-synthesis",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
            "X-DashScope-Async": "enable",
        },
        json=payload,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["output"]["task_id"]


def poll_task(task_id: str, interval: int = 15) -> dict:
    """Poll until the task completes. Returns the final response."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/tasks/{task_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        resp.raise_for_status()
        data = resp.json()
        status = data["output"]["task_status"]
        if status in ("SUCCEEDED", "FAILED", "CANCELED"):
            return data
        time.sleep(interval)
```

## Media combination examples
```python
# First-frame only
media = [{"type": "first_frame", "url": "https://example.com/image.jpg"}]

# First + last frame interpolation
media = [
    {"type": "first_frame", "url": "https://example.com/start.jpg"},
    {"type": "last_frame", "url": "https://example.com/end.jpg"},
]

# Audio-driven from first frame
media = [
    {"type": "first_frame", "url": "https://example.com/face.jpg"},
    {"type": "driving_audio", "url": "https://example.com/speech.mp3"},
]

# Video continuation
media = [{"type": "first_clip", "url": "https://example.com/clip.mp4"}]
```

## Error handling
| Error | Likely cause | Action |
|---|---|---|
| 401/403 | Missing or invalid `DASHSCOPE_API_KEY` | Check env var or credentials file |
| 400 | Unsupported resolution, bad duration, missing media | Validate parameters |
| "does not support synchronous calls" | Missing `X-DashScope-Async: enable` | Add the required header |
| 429 | Rate limit or quota exceeded | Retry with backoff |
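For the 429 row, a simple exponential-backoff wrapper can be sketched as follows (retry count and delays are arbitrary choices; the `response.status_code` attribute matches what `requests.HTTPError` carries after `raise_for_status()`):

```python
import time


def with_backoff(fn, retries: int = 5, base_delay: float = 1.0):
    """Call fn(); on an exception carrying response.status_code == 429,
    sleep with exponential backoff and retry. Other errors propagate."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception as exc:
            status = getattr(getattr(exc, "response", None), "status_code", None)
            if status != 429 or attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Usage would look like `task_id = with_backoff(lambda: create_i2v_task(req))`, reusing the quick-start helper.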
## Output location

- Default output: `output/aliyun-wan-i2v/videos/`
- Override the base dir with `OUTPUT_DIR`.
## Anti-patterns

- Do not use model names other than `wan2.7-i2v`.
- Do not call this API synchronously; the async header is required.
- Do not pass duplicate media types (e.g., two `first_frame` entries).
- Video URLs expire after 24 hours; download and persist immediately.
- Do not use this API for video editing; use `aliyun-wan-videoedit` instead.
## Workflow

1. Confirm user intent: first-frame, first+last frame, video continuation, or audio-driven.
2. Prepare the media array with correct types and valid URLs.
3. Create the async task and poll for results.
4. Download and save the generated video before the URL expires.
## References

- See `references/api_reference.md` for full HTTP API details.
- See `references/sources.md` for source links.