Help users integrate Runway's video generation APIs (text-to-video, image-to-video, video-to-video).
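For orientation, here is a minimal sketch of such an integration using Runway's official Python SDK (the runwayml package). The gen3a_turbo model id, the example image URL, and the polling interval are assumptions to verify against Runway's current API docs.

```python
# Hedged sketch: Runway image-to-video via the official Python SDK.
# Assumes `pip install runwayml` and a RUNWAYML_API_SECRET env var;
# the model id below is an assumption -- check Runway's current docs.
import time

from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/frame.jpg",  # hypothetical input image
    prompt_text="A slow dolly shot down a neon-lit street",
)

# Generation is asynchronous: poll the task until it settles.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(task.status, getattr(task, "output", None))
```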
Unified media generation via fal.ai MCP — image, video, and audio. Covers text-to-image (Nano Banana), text/image-to-video (Seedance, Kling, Veo 3), text-to-speech (CSM-1B), and video-to-audio (ThinkSound). Use when the user wants to generate images, videos, or audio with AI.
Use this skill for AI video generation. Triggers include: "generate video", "create video", "make video", "animate", "text to video", "video from image", "video of", "animate image", "bring to life", "make it move", "add motion", "video with audio", "video with dialogue". Supports text-to-video, image-to-video, and video with dialogue/audio using Google Veo 3.1 (default) or OpenAI Sora.
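As a rough illustration of the Veo path, generation through Google's google-genai Python SDK follows a long-running-operation pattern like the one below. The model id veo-3.1-generate-preview is an assumption; check Google's current model list.

```python
# Hedged sketch: text-to-video with Veo via the google-genai SDK.
# Assumes `pip install google-genai` and a GEMINI_API_KEY env var;
# the model id is an assumption -- verify against Google's docs.
import time

from google import genai

client = genai.Client()

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed model id
    prompt="A paper boat drifting down a rain-soaked gutter, cinematic",
)

# Video generation is a long-running operation: poll until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("out.mp4")
print("saved out.mp4")
```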
Triggered when users provide written dream accounts, diary fragments, or spoken dream descriptions and want to generate a video. Trigger phrases include: "dreamt of", "had a dream", "dream material", "help me generate a video", "convert to video", "dream to video". It also applies when a user pastes a dream description directly and expects a video file in return. This skill converts the text into video prompts, automatically submits them to the Jiemeng Platform for generation, and downloads the resulting video files.
Generate AI videos with Tavus replicas. Use when creating personalized videos from scripts or audio, adding custom backgrounds, watermarks, or generating videos at scale. Covers the video generation API, not real-time conversations.
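A minimal sketch of that script-to-video call, assuming Tavus's v2 REST endpoint and x-api-key header; the replica id is hypothetical, and the field names should be checked against Tavus's docs.

```python
# Hedged sketch: generate a Tavus replica video from a script.
# Assumes the v2 /videos endpoint, x-api-key auth, and these fields.
import os

import requests

resp = requests.post(
    "https://tavusapi.com/v2/videos",
    headers={"x-api-key": os.environ["TAVUS_API_KEY"]},
    json={
        "replica_id": "r_hypothetical123",  # hypothetical replica id
        "script": "Hi Alex, here is your weekly product update.",
        "video_name": "weekly-update",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # typically returns a video id to poll for status
```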
Generate images, videos, and audio with fal.ai serverless AI. Use when building AI image generation, video generation, image editing, or real-time AI features. Triggers on fal.ai, fal, AI image generation, Flux, SDXL, real-time AI, serverless AI.
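For context, a minimal sketch using fal's official fal_client Python package; fal-ai/flux/dev is a model id I believe exists, but treat it and the response shape as assumptions.

```python
# Hedged sketch: text-to-image on fal.ai with the official client.
# Assumes `pip install fal-client` and a FAL_KEY env var; the model
# id and response shape are assumptions -- check fal's model gallery.
import fal_client

result = fal_client.subscribe(
    "fal-ai/flux/dev",
    arguments={"prompt": "a watercolor fox in a snowy forest"},
    with_logs=True,
)
print(result["images"][0]["url"])  # assumed response shape
```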
Manga Drama Generator - a comic-style short drama tool based on Seedance. It automatically generates manga storyboard scripts and produces videos from an image of the protagonist. Use this skill when users want to create comic-style short video dramas, character stories, or animated shorts.
Automatically generate AI videos using the Seedance 2.0 model in Jianying (also called Xiaoyunque). Supports three modes: text-to-video (T2V), image-to-video (I2V), and reference-video-to-video (V2V). Use this skill when users need to generate AI videos, create short films with the Seedance model, or perform style conversion based on reference images or videos. The cookies.json login credential must be configured in advance.
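Since no public SDK is implied here, only the pre-configuration step can be sketched generically: loading cookies.json into an authenticated HTTP session. The file shape and the endpoint below are placeholders, not a real Jianying API.

```python
# Hypothetical sketch: load the pre-configured cookies.json credential
# into a requests session. The endpoint is a placeholder; Jianying's
# actual API surface is not public.
import json

import requests

session = requests.Session()
with open("cookies.json") as f:
    for cookie in json.load(f):  # assumed: list of {"name": ..., "value": ...}
        session.cookies.set(cookie["name"], cookie["value"])

# resp = session.post("https://example.invalid/seedance/t2v", json={...})
```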
Create videos from a text prompt using HeyGen's Video Agent. Use when: (1) Creating a video from a description or idea, (2) Generating explainer, demo, or marketing videos from a prompt, (3) Making a video without specifying exact avatars, voices, or scenes, (4) Quick video prototyping or drafts, (5) One-shot prompt-to-video generation, (6) User says "make me a video" or "create a video about X".
Supports text-to-video and image-to-video (start/end frame). Use when the user needs to generate video, create short videos, or convert text to video. Use cases: (1) Generate video from text description, (2) Use reference images as start/end frame for image-to-video, (3) Customize model, aspect ratio, duration, resolution. Triggers: generate video, text-to-video, image-to-video, AI video.
A natural-language workflow that converts literary works (novels, stories, scripts, one-sentence concepts, etc.) into complete videos by orchestrating multiple skills in sequence. Use this skill when users need to turn novels, stories, or other literary works into video.
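Purely to illustrate "orchestrating multiple skills in sequence", a hypothetical pipeline could chain stages like this; every function name below is invented for the sketch, with each stub standing in for one skill invocation.

```python
# Hypothetical orchestration sketch for novel-to-video. Each stub stands
# in for invoking one skill (script writing, storyboarding, clip
# generation, assembly); none of these helpers exists as named.
def write_screenplay(novel_text: str) -> str:
    return f"SCENE 1. Adapted from: {novel_text[:60]}"

def storyboard(script: str) -> list[str]:
    return [line for line in script.splitlines() if line.strip()]

def generate_clip(shot: str) -> str:
    return f"{shot[:20]}.mp4"  # would call a text-to-video skill

def assemble(clips: list[str]) -> str:
    return f"final_cut_{len(clips)}_clips.mp4"  # would call an editing skill

def novel_to_video(novel_text: str) -> str:
    script = write_screenplay(novel_text)
    shots = storyboard(script)
    clips = [generate_clip(shot) for shot in shots]
    return assemble(clips)

print(novel_to_video("The lighthouse keeper woke to silence."))
```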
Generate HeyGen presenter videos via the v3 Video Agent pipeline — handles Frame Check (aspect ratio correction), prompt engineering, avatar resolution, and voice selection. Required for any HeyGen video generation. Replaces deprecated endpoints with v3. Use when: (1) generating any HeyGen video (via API or otherwise), (2) sending a personalized video message (outreach, update, announcement, pitch, knowledge), (3) creating a HeyGen presenter-led explainer, tutorial, or product demo with a human face, (4) "make a video of me saying...", "send a video to my leads", "record an update for my team", "create a video pitch", "make a loom-style message", "I want to appear in this video", "generate a HeyGen video", "make a talking head video". Accepts avatar_id from heygen-avatar for identity-first HeyGen videos, or uses a stock presenter. Returns video share URL + HeyGen session URL for iteration. Chain signal: when the user wants to create/design an avatar AND make a video in the same request, run heygen-avatar first, then return here. Conjunctions to watch: "and then", "and immediately", "first...then", "X and make a video", "design [presenter] and record" = always CHAIN. If the user provides a photo AND wants a video, route to heygen-avatar first. NOT for: avatar creation or identity setup (use heygen-avatar first), cinematic footage or b-roll without a presenter, translating videos, TTS-only, or streaming avatars.
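The entry above targets a v3 pipeline; for comparison, here is a hedged sketch against HeyGen's public v2 generate endpoint as commonly documented. The endpoint, header, avatar and voice ids, and payload shape are all assumptions to verify, and the v3 Video Agent flow may differ.

```python
# Hedged sketch: HeyGen presenter video via the public v2 REST API.
# Assumes X-Api-Key auth and this payload shape; ids are hypothetical
# and the v3 Video Agent pipeline may expose a different surface.
import os

import requests

resp = requests.post(
    "https://api.heygen.com/v2/video/generate",
    headers={"X-Api-Key": os.environ["HEYGEN_API_KEY"]},
    json={
        "video_inputs": [{
            "character": {"type": "avatar", "avatar_id": "avatar_hypothetical"},
            "voice": {
                "type": "text",
                "input_text": "Hi team, quick update for this week.",
                "voice_id": "voice_hypothetical",
            },
        }],
        "dimension": {"width": 1280, "height": 720},
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # typically returns a video_id to poll for the share URL
```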