Found 46 Skills
Automatically generate AI videos using the Seedance 2.0 model in Jianying (also called Xiaoyunque). Supports three modes: Text to Video (T2V), Image to Video (I2V), and Reference Video to Video (V2V). Use this skill when the user needs to generate AI videos, create short films with the Seedance model, or perform style conversion based on reference images or videos. Requires the cookies.json login credential to be configured in advance.
Generate AI videos from text prompts using the HeyGen API. Use when: (1) Generating videos from text descriptions, (2) Creating AI-generated video clips for content production, (3) Image-to-video generation with a reference image, (4) Choosing between video generation providers (VEO, Kling, Sora, Runway, Seedance), (5) Working with HeyGen's /v1/workflows/executions endpoint for video generation.
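A minimal sketch of calling the /v1/workflows/executions endpoint named above. Only the endpoint path and the list of providers come from the description; the payload field names ("workflow", "inputs", "provider") and the X-Api-Key header are illustrative assumptions to be checked against the HeyGen API reference.

```python
import json
from urllib import request

# Endpoint path taken from the skill description; base host assumed.
HEYGEN_ENDPOINT = "https://api.heygen.com/v1/workflows/executions"

def build_execution_payload(prompt: str, provider: str = "seedance") -> dict:
    """Build a workflow-execution payload. Field names are assumptions."""
    return {
        "workflow": "text_to_video",  # hypothetical workflow name
        "inputs": {
            "prompt": prompt,
            # Provider choices listed in the skill description:
            "provider": provider,  # "veo" | "kling" | "sora" | "runway" | "seedance"
        },
    }

def submit_execution(api_key: str, payload: dict) -> dict:
    """POST the payload to the executions endpoint (needs a valid API key)."""
    req = request.Request(
        HEYGEN_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_execution_payload("a cat surfing at sunset", provider="kling")
```

Separating payload construction from the network call keeps the request shape easy to inspect and test before spending API credits.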
Use when generating videos with Model Studio DashScope SDK using Wan video generation models (wan2.6-t2v, wan2.6-i2v-flash, wan2.6-i2v and regional variants). Use when implementing or documenting video.generate requests/responses, mapping prompt/negative_prompt/duration/fps/size/seed/reference_image/motion_strength, or integrating video generation into the video-agent pipeline.
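The parameter mapping above (prompt/negative_prompt/duration/fps/size/seed/reference_image) can be sketched as a single request-building helper. The model names are from the description; the nesting into "input" and "parameters" objects and the exact key names are assumptions to verify against the DashScope SDK documentation.

```python
from typing import Optional

def build_wan_request(prompt: str, *, negative_prompt: str = "",
                      duration: int = 5, fps: int = 24, size: str = "1280*720",
                      seed: Optional[int] = None,
                      reference_image: Optional[str] = None,
                      model: str = "wan2.6-t2v") -> dict:
    """Map the documented parameters into one request dict.
    Wire-format key names are assumptions, not the confirmed DashScope schema."""
    req = {
        "model": model,  # e.g. "wan2.6-t2v", "wan2.6-i2v-flash", "wan2.6-i2v"
        "input": {"prompt": prompt},
        "parameters": {"duration": duration, "fps": fps, "size": size},
    }
    if negative_prompt:
        req["input"]["negative_prompt"] = negative_prompt
    if seed is not None:
        req["parameters"]["seed"] = seed  # for reproducible generations
    if reference_image:
        # Only meaningful for the i2v model variants; key name is hypothetical.
        req["input"]["reference_image"] = reference_image
    return req

req = build_wan_request("a fox running through snow", fps=30, seed=42)
```

Collecting all knobs in one place makes it straightforward to drop the dict into the video-agent pipeline or log it for debugging.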
Generate images and videos using Kling AI API. Use when creating AI-generated images from text prompts, converting images to videos, or generating videos from text descriptions.
Generate videos using Seedance models. Invoke when the user wants to create videos from text prompts, images, or reference materials.
How to use the Seedance 2.0 and Seedance 2.0 fast video generation API (Volcengine Ark platform). Use this skill whenever the user wants to generate videos with Seedance, call the Seedance API, create video generation tasks, poll for video results, write code that uses Seedance/doubao-seedance models, or build anything involving AI video generation with the Ark API. Also trigger when the user mentions "seedance", "video generation API", "doubao-seedance", "ark video", "text to video API", or "image to video API".
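The create-task-then-poll flow described above can be sketched as two small functions. The Ark base URL, the tasks endpoint shape, the model id, and the status values are assumptions drawn from the description; confirm them against the Volcengine Ark API reference before use.

```python
import json
import time
from urllib import request

ARK_BASE = "https://ark.cn-beijing.volces.com/api/v3"  # assumed regional base URL

def create_task(api_key: str, prompt: str,
                model: str = "doubao-seedance-2-0") -> str:
    """Submit a text-to-video generation task and return its task id.
    Endpoint path and body shape are assumptions."""
    body = json.dumps({
        "model": model,  # hypothetical model id; check your Ark console
        "content": [{"type": "text", "text": prompt}],
    }).encode()
    req = request.Request(
        f"{ARK_BASE}/contents/generations/tasks",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["id"]

def is_done(status: str) -> bool:
    """True once a task has reached a terminal state (status names assumed)."""
    return status not in ("queued", "running")

def poll_task(api_key: str, task_id: str, interval: float = 5.0) -> dict:
    """Poll the task until it finishes, then return the final task payload."""
    url = f"{ARK_BASE}/contents/generations/tasks/{task_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    while True:
        with request.urlopen(request.Request(url, headers=headers)) as resp:
            task = json.loads(resp.read())
        if is_done(task.get("status", "")):
            return task
        time.sleep(interval)
```

Video generation is asynchronous, so polling with a modest interval (rather than a tight loop) is the usual pattern for task-based APIs like this.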
Best practices for HeyGen - AI avatar video creation API. Use when creating AI avatar videos, generating talking head videos, or integrating HeyGen with Remotion.
End-to-end AI video generation - create videos from text prompts using image generation, video synthesis, voice-over, and editing. Supports OpenAI DALL-E, Replicate models, LumaAI, Runway, and FFmpeg editing.
Generate AI videos using Kling video generation models. Use when you need to: (1) create videos from text prompts, (2) animate images into videos, (3) transform existing videos with AI, or (4) create AI avatar videos with speech.
Generate videos using Sora via lnapi.com.