giggle-generation-video
Giggle Video Generation
Source: giggle-official/skills · API: giggle.pro
Generates AI videos via giggle.pro's Generation API. Supports text-to-video and image-to-video. Submit a task, then query it when ready. No polling, no cron jobs, no file writes: all operations run via exec.
Installation Requirements
| Requirement | Value |
|---|---|
| Binary | |
| Environment | GIGGLE_API_KEY |
| Pip | |

Set GIGGLE_API_KEY before use. The script will prompt if it is not configured.

No Retry on Error: If script execution encounters an error, do not retry. Report the error to the user directly and stop.
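A wrapper that calls the script can check the configuration up front; a minimal sketch (the env var name GIGGLE_API_KEY is taken from the table above, the helper name is illustrative):

```shell
# check_api_key: succeed only when GIGGLE_API_KEY is set and non-empty,
# so the caller can stop before invoking the generation script
check_api_key() {
  [ -n "${GIGGLE_API_KEY:-}" ]
}

# Usage sketch:
#   check_api_key || { echo "GIGGLE_API_KEY is not set" >&2; exit 1; }
```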
Supported Models
| Model | Supported Durations (s) | Default | Description |
|---|---|---|---|
| grok | 6, 10 | 6 | Strong overall capability, recommended |
| grok-fast | 6, 10 | 6 | grok fast version |
| sora2 | 4, 8, 12 | 4 | OpenAI Sora 2 |
| sora2-pro | 4, 8, 12 | 4 | Sora 2 Pro |
| sora2-fast | 10, 15 | 10 | Sora 2 Fast |
| sora2-pro-fast | 10, 15 | 10 | Sora 2 Pro Fast |
| kling25 | 5, 10 | 5 | Kling video model |
| seedance15-pro | 4, 8, 12 | 4 | Seedance Pro (with audio) |
| seedance15-pro-no-audio | 4, 8, 12 | 4 | Seedance Pro (no audio) |
| veo31 | 4, 6, 8 | 4 | Google Veo 3.1 (with audio) |
| veo31-no-audio | 4, 6, 8 | 4 | Google Veo 3.1 (no audio) |
| minimax23 | 6 | 6 | MiniMax model |
| wan25 | 5, 10 | 0 | Wanxiang model |
Note: --duration must be chosen from the model's supported durations, otherwise the API will error.
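The duration constraint can be checked before submitting; a sketch with the table above transcribed into a lookup (the helper name is mine, not part of the skill):

```shell
# supported_durations: print the valid --duration values for a model,
# transcribed from the "Supported Models" table; unknown models print nothing
supported_durations() {
  case "$1" in
    grok|grok-fast)                          echo "6 10" ;;
    sora2|sora2-pro)                         echo "4 8 12" ;;
    sora2-fast|sora2-pro-fast)               echo "10 15" ;;
    kling25|wan25)                           echo "5 10" ;;
    seedance15-pro|seedance15-pro-no-audio)  echo "4 8 12" ;;
    veo31|veo31-no-audio)                    echo "4 6 8" ;;
    minimax23)                               echo "6" ;;
  esac
}
```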
Frame Reference (Image-to-Video)

For image-to-video, --start-frame and --end-frame support three mutually exclusive formats:

| Method | Format | Example |
|---|---|---|
| asset_id | asset_id:<id> | --start-frame "asset_id:lkllv0yv81" |
| url | url:<image-url> | --start-frame "url:https://example.com/img.jpg" |
| base64 | base64:<raw-base64> | --start-frame "base64:<raw-base64-string>" |

Each frame parameter can only use one of these methods.
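For the base64 method, the raw string can be produced from a local file; a sketch (the helper name and file path are illustrative, not part of the skill):

```shell
# encode_frame: turn a local image file into the base64:<raw> value that
# --start-frame/--end-frame expect (no data: URI prefix, newlines stripped)
encode_frame() {
  printf 'base64:%s' "$(base64 < "$1" | tr -d '\n')"
}

# Usage sketch: --start-frame "$(encode_frame start.jpg)"
```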
Execution Flow: Submit and Query
Video generation is asynchronous (typically 60–300 seconds). Submit a task to get a task_id, then query when the user wants to check status. All commands run via exec; the API key is read from the system environment.
Step 1: Submit Task
First send a message to the user: "Video generation submitted. Usually takes 1–5 minutes. You can ask me about the progress anytime."
Text-to-video (default grok-fast):

```bash
python3 scripts/generation_api.py \
  --prompt "Camera slowly pushes forward, person smiling in frame" \
  --model grok-fast --duration 6 \
  --aspect-ratio 16:9 --resolution 720p
```
Image-to-video - use asset_id as start frame:

```bash
python3 scripts/generation_api.py \
  --prompt "Person slowly turns around" \
  --start-frame "asset_id:lkllv0yv81" \
  --model grok-fast --duration 6 \
  --aspect-ratio 16:9 --resolution 720p
```
Image-to-video - use URL as start frame:

```bash
python3 scripts/generation_api.py \
  --prompt "Scenery from still to motion" \
  --start-frame "url:https://example.com/img.jpg" \
  --model grok-fast --duration 6
```
Image-to-video - both start and end frame:

```bash
python3 scripts/generation_api.py \
  --prompt "Scene transition" \
  --start-frame "asset_id:abc123" \
  --end-frame "url:https://example.com/end.jpg" \
  --model grok --duration 6
```

Response example:

```json
{"status": "started", "task_id": "55bf24ca-e92a-4d9b-a172-8f585a7c5969"}
```

Store the task_id in memory (addMemory):

```
giggle-generation-video task_id: xxx (submitted: YYYY-MM-DD HH:mm)
```
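If the task_id needs to be captured programmatically rather than copied from the message, the response can be parsed with python3's standard library; a sketch using the example response above:

```shell
# Parse task_id out of the submit response using only stdlib json
resp='{"status": "started", "task_id": "55bf24ca-e92a-4d9b-a172-8f585a7c5969"}'
task_id="$(printf '%s' "$resp" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["task_id"])')"
```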
Step 2: Query When User Asks
When the user asks about video progress (e.g. "is my video ready?", "progress?"), run:

```bash
python3 scripts/generation_api.py --query --task-id <task_id>
```

Output handling:

| stdout pattern | Action |
|---|---|
| Plain text with video links (video is ready) | Forward to user as-is |
| Plain text with error | Forward to user as-is |
| JSON | Tell user "Still in progress, please ask again in a moment" |

Link return rule: Video links in results must be full signed URLs (with Policy, Key-Pair-Id, Signature query params). Keep them as-is when forwarding.
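The output-handling table can be approximated with a simple heuristic: treat stdout that starts with { as the in-progress JSON case and forward everything else verbatim. A sketch (the helper name is mine, and the leading-brace check is an assumption about the script's JSON output):

```shell
# classify_output: decide what to relay based on the query script's stdout
classify_output() {
  case "$1" in
    '{'*) echo "Still in progress, please ask again in a moment" ;;  # JSON -> in progress
    *)    echo "$1" ;;  # ready links or error text -> forward as-is
  esac
}
```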
New Request vs Query Old Task
When the user initiates a new video generation request, you must run Step 1 to submit a new task. Do not reuse an old task_id from memory.
Only when the user explicitly asks about a previous task's progress should you query the old task_id from memory.
Parameter Reference
| Parameter | Default | Description |
|---|---|---|
| --prompt | required | Video description prompt |
| --model | grok | See "Supported Models" table |
| --duration | model default | Must choose from the model's supported durations |
| --aspect-ratio | 16:9 | 16:9, 9:16, 1:1, 3:4, 4:3 |
| --resolution | 720p | 480p, 720p, 1080p |
| --start-frame | - | Image-to-video start frame: asset_id:<id>, url:<url>, or base64:<raw> |
| --end-frame | - | Image-to-video end frame, same format as start |

Note: the base64 method accepts base64-encoded images. Pass the raw Base64 string directly; do not add the data:image/xxx;base64, prefix.

Interaction Guide
When the user's request is vague, guide them through the steps below. If the user has already provided enough information, run the command directly.
Step 1: Model Selection (required)
Before generating, you must introduce the available models and let the user choose. Display the model list from the "Supported Models" table. Wait for an explicit user choice before continuing.
Step 2: Video Duration
For the chosen model, show the supported duration options. Default to the model's default duration.
Step 3: Generation Mode
Question: "Do you need reference images as the start/end frame?"
Options: No - text-to-video only / Yes - image-to-video (set start/end frame)

Step 4: Aspect Ratio
Question: "What aspect ratio do you need?"
Options: 16:9 - Landscape (recommended) / 9:16 - Portrait / 1:1 - Square

Step 5: Execute and Display
Follow the flow: send message → Step 1 submit → user asks → Step 2 query. Forward exec stdout to the user as-is.