Expert guidance for Google Veo 3.1 video generation. Use when the user wants to (1) create text-to-video or image-to-video prompts, (2) optimize for cinematic quality and native audio syncing, (3) maintain character consistency via reference images, (4) structure multi-shot sequences with timestamp prompting, (5) use First/Last Frame interpolation, (6) select between standard and fast generation modes, or (7) troubleshoot physics, motion, or audio issues in generated video.
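A minimal sketch of what a text-to-video call behind this skill could look like through the google-genai Python SDK, which exposes Veo as a long-running operation that must be polled. The model ID below is a placeholder assumption; check the current Veo 3.1 identifier in the Gemini API docs.

```python
# Minimal Veo text-to-video sketch using the google-genai SDK (pip install google-genai).
# The model ID is an assumption; substitute the Veo 3.1 ID listed in your API docs.
import time
from google import genai

client = genai.Client()  # reads the API key from the environment

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed/placeholder model ID
    prompt="A slow dolly-in on a rain-soaked neon street at night, cinematic, 35mm",
)

# Video generation is a long-running operation; poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
```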
AI video generation with LTX-2.3 22B: text-to-video and image-to-video clips for video production. Use when generating video clips, animating images, creating b-roll, animated backgrounds, or motion content. Triggers include video generation, animate image, b-roll, motion, video clip, text-to-video, image-to-video.
Use when generating videos with Alibaba Cloud Model Studio PixVerse models (`pixverse/pixverse-v5.6-t2v`, `pixverse/pixverse-v5.6-it2v`, `pixverse/pixverse-v5.6-kf2v`, `pixverse/pixverse-v5.6-r2v`). Use when building non-Wan text-to-video, first-frame image-to-video, keyframe-to-video, or multi-image reference-to-video workflows on Model Studio.
Comprehensive image and video creation and editing via Xiaoyunque's AI capabilities. Covered scenarios include: Generation (text-to-image, text-to-video, image-to-video, animation creation, draw xxx, create xxx clip), Editing & Revision (replace xxx with yyy, remove xxx, add xxx, change to xxx, adjust xxx, local modification, lens adjustment), Style Transfer (style migration, repainting, style change), video continuation, video/TVC/promotional video replication, short drama/short comic drama generation, music MV creation, product advertisement/demo video production, storyboard design, and educational video/short video production. This skill should also be triggered when users mention Xiaoyunque, xyq, uploading reference images/videos, or checking generation progress. Key judgment: this skill must be triggered whenever the user's request involves AI video creation, generation, editing, or revision, regardless of the wording (e.g., "draw a cat", "make a poster", "create a video", "help me revise this video", "help me replicate this video", "make an MV with this song", "generate a short drama with one sentence").
Generate videos from text and image prompts via Together AI. 15+ models including Veo 2/3, Sora 2, Kling 2.1, Hailuo 02, Seedance, PixVerse, Vidu. Supports text-to-video, image-to-video, keyframe control, and reference images. Use when users want to generate videos, create video content, animate images, or work with any video generation task.
[QwenCloud] Generate videos using Wan models. Supports text-to-video, image-to-video, first+last frame, reference-based role-play, and video editing (VACE). TRIGGER when: user wants to create, generate, or edit video content, mentions video generation/animation/video clips/Wan models, or explicitly invokes this skill by name (e.g. use qwencloud-video-generation). DO NOT TRIGGER when: user wants to generate images (use qwencloud-image-generation), understand/analyze existing videos (use qwencloud-vision), text-only tasks.
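A sketch of the asynchronous create-then-poll pattern that DashScope-style video APIs use for Wan text-to-video, assuming a plain-HTTP client. The endpoint paths and model name are assumptions based on DashScope's async task convention, not confirmed values; check the Model Studio docs for the exact Wan endpoint available to your account.

```python
# Sketch of the asynchronous create-then-poll pattern used by DashScope-style video APIs.
# Endpoint paths and model name are assumptions; verify them against the Model Studio docs.
import os
import time
import requests

API_KEY = os.environ["DASHSCOPE_API_KEY"]
BASE = "https://dashscope.aliyuncs.com/api/v1"

# 1. Submit an asynchronous text-to-video task.
create = requests.post(
    f"{BASE}/services/aigc/video-generation/video-synthesis",  # assumed path
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-DashScope-Async": "enable",
        "Content-Type": "application/json",
    },
    json={
        "model": "wan2.2-t2v-plus",  # assumed model name
        "input": {"prompt": "A paper boat drifting down a rain gutter, macro shot"},
        "parameters": {"size": "1280*720"},
    },
)
task_id = create.json()["output"]["task_id"]

# 2. Poll the shared task endpoint until the task reaches a terminal state.
while True:
    status = requests.get(
        f"{BASE}/tasks/{task_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    ).json()
    if status["output"]["task_status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(status["output"].get("video_url"))
```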
Video creation skill. Combines images and audio to generate videos, with support for TTS dubbing, fade-in/fade-out transitions, subtitles, an outro, and BGM. Triggered when users mention phrases like 'generate video', 'make video', 'educational video', 'image-to-video', 'create video account content', 'dubbed video', 'image-text integrated video', 'ancient poetry video', 'story video'. Covers the full workflow of image generation → dubbing → video synthesis; there is no need to call image-service separately.
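A sketch of the final synthesis step in that image → dubbing → video workflow, assuming the moviepy 1.x API: each still image is held for the length of its narration track and the clips are concatenated. File names are hypothetical, and the subtitles, BGM, and outro mentioned above are omitted to keep the example short.

```python
# Sketch of the synthesis step: stitch narrated still images into one video (moviepy 1.x).
# File names are hypothetical; subtitles, BGM, and outro handling are omitted for brevity.
from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

scenes = [("scene1.png", "scene1_tts.mp3"), ("scene2.png", "scene2_tts.mp3")]

clips = []
for image_path, audio_path in scenes:
    narration = AudioFileClip(audio_path)
    # Hold each image for exactly as long as its narration, then attach the audio track.
    clip = ImageClip(image_path).set_duration(narration.duration).set_audio(narration)
    clips.append(clip)

final = concatenate_videoclips(clips, method="compose")
final.write_videofile("story.mp4", fps=24)
```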
Automatically generate AI videos using Jianying's (also called Xiaoyunque) Seedance 2.0 model. Supports three modes: Text to Video (T2V), Image to Video (I2V), and Reference Video to Video (V2V). This skill applies when users need to generate AI videos, create short films with the Seedance model, or perform style conversion based on reference images/videos. The cookies.json login credential must be configured in advance.
Generate images and videos using Kling AI API. Use when creating AI-generated images from text prompts, converting images to videos, or generating videos from text descriptions.
How to use the Seedance 2.0 and Seedance 2.0 fast video generation API (Volcengine Ark platform). Use this skill whenever the user wants to generate videos with Seedance, call the Seedance API, create video generation tasks, poll for video results, write code that uses Seedance/doubao-seedance models, or build anything involving AI video generation with the Ark API. Also trigger when the user mentions "seedance", "video generation API", "doubao-seedance", "ark video", "text to video API", or "image to video API".
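A sketch of the create-task-then-poll flow the Ark video generation API uses. The endpoint path, model ID, and payload shape follow Ark's general task pattern but should be treated as assumptions; verify them against the current Volcengine Ark documentation and your console's model list.

```python
# Sketch of the Ark video-generation flow: create a task, then poll for the result.
# Endpoint path, model ID, and payload shape are assumptions; confirm against the Ark docs.
import os
import time
import requests

ARK_API_KEY = os.environ["ARK_API_KEY"]
BASE = "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks"  # assumed path
HEADERS = {"Authorization": f"Bearer {ARK_API_KEY}", "Content-Type": "application/json"}

# 1. Create a text-to-video task.
create = requests.post(
    BASE,
    headers=HEADERS,
    json={
        "model": "doubao-seedance-pro",  # assumed model ID; use the one from your console
        "content": [
            {"type": "text", "text": "A lighthouse at dusk, waves crashing, drone orbit shot"}
        ],
    },
)
task_id = create.json()["id"]

# 2. Poll the task until it finishes, then read the result.
while True:
    task = requests.get(f"{BASE}/{task_id}", headers=HEADERS).json()
    if task["status"] in ("succeeded", "failed"):
        break
    time.sleep(10)

print(task.get("content", {}).get("video_url"))
```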
Generate AI videos, images, and audio with Runway API. Use when generating video from images, text-to-video, video-to-video, character performance, text-to-image, text-to-speech, sound effects, or voice processing with Runway.
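A sketch of an image-to-video call with the official runwayml Python SDK, which also uses a create-then-poll task model. The model identifier, input URL, and response fields are assumptions based on the SDK's documented image_to_video flow; confirm them against the current Runway API reference.

```python
# Sketch of an image-to-video call with the runwayml Python SDK (pip install runwayml).
# Model name, input URL, and response fields are assumptions; check the Runway docs.
import time
from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

task = client.image_to_video.create(
    model="gen3a_turbo",  # assumed model identifier
    prompt_image="https://example.com/keyframe.jpg",  # hypothetical input image URL
    prompt_text="The camera slowly pulls back to reveal a crowded market square",
)

# Poll the task until Runway reports a terminal state.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(task.output)  # list of generated video URLs on success
```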
Generate AI videos using Kling video generation models. Use when you need to: (1) create videos from text prompts, (2) animate images into videos, (3) transform existing videos with AI, or (4) create AI avatar videos with speech.