Found 76 Skills
Expand text storyboards into Seedance 2.0 video prompts, one shot at a time. Call this when the text storyboard is complete and needs to be converted into executable video prompts.
Use Alibaba Cloud DashScope API and LingMou to generate AI video and speech. Seven capabilities — (1) LivePortrait talking-head (image + audio → video, two-step), (2) EMO talking-head, (3) AA/AnimateAnyone full-body animation (three-step), (4) T2I text-to-image (Wan 2.x, default wan2.2-t2i-flash), (5) I2V image-to-video (Wan 2.x, default wan2.7-i2v-flash, supports T2I→I2V pipeline), (6) Qwen TTS (auto model/voice by scene, default qwen3-tts-vd-realtime-2026-01-15), (7) LingMou digital-human template video with random template, public-template copy, and script confirmation. Trigger when the user needs talking-head, portrait, full-body animation, text-to-image, text-to-video, or speech synthesis.
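For the text-to-image capability above, a minimal sketch of how a request body for a DashScope-style call might be assembled. The model name is taken from the skill description; the `model`/`input`/`parameters` field layout is an assumption based on DashScope's common request shape, not a guaranteed contract, so verify against the official API reference before use.

```python
def build_t2i_request(prompt: str, model: str = "wan2.2-t2i-flash",
                      size: str = "1280*720", n: int = 1) -> dict:
    """Assemble an illustrative DashScope-style text-to-image JSON body.

    Field names (model / input / parameters) are assumptions modeled on
    DashScope's general request shape; check the official docs.
    """
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    return {
        "model": model,
        "input": {"prompt": prompt},
        "parameters": {"size": size, "n": n},
    }

# Build a request body; POSTing it to the DashScope endpoint (with an
# API key header) is left out, since the endpoint details are not
# confirmed by this catalog entry.
request_body = build_t2i_request("a misty mountain village at dawn")
```

The same builder pattern extends naturally to the I2V pipeline: generate an image first, then feed its URL into an image-to-video request.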
Generate videos directly using the Runway API via runnable scripts. Supports text-to-video, image-to-video, and video-to-video with seedance2, gen4.5, veo3, and more.
Turn approved storyboard logic, beat sheets, or prompt plans into provider-ready short-form video requests. Use this when the segment structure is already known and you need a model-agnostic request architecture that can later map cleanly into Seedance or other video generators.
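A sketch of what such a model-agnostic request architecture could look like: segments carry the beat-sheet structure, and a provider-specific mapper flattens them into prompt strings. The class names, fields, and the Seedance prompt shape are all illustrative assumptions, not the skill's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    """One storyboard beat: what happens, for how long, with what camera."""
    description: str
    duration_s: float
    camera: str = "static"

@dataclass
class VideoRequest:
    """Provider-agnostic short-form video request (hypothetical schema)."""
    segments: List[Segment] = field(default_factory=list)
    aspect_ratio: str = "16:9"

    def to_seedance_prompts(self) -> List[str]:
        # Illustrative mapping only: one prompt per segment, folding the
        # camera term into the text the way a Seedance-style prompt might.
        return [f"[{s.camera}] {s.description} ({s.duration_s:.0f}s)"
                for s in self.segments]

req = VideoRequest(segments=[
    Segment("hero walks through rain-soaked alley", 4, camera="tracking"),
    Segment("close-up on determined eyes", 2, camera="push-in"),
])
prompts = req.to_seedance_prompts()
```

Keeping the segment structure in plain dataclasses means the same `VideoRequest` can later grow a `to_kling_prompts()` or `to_runway_payload()` mapper without touching the storyboard logic.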
- **Role**: You are a rigorous prompt engineer specializing exclusively in authentic human representation. Your domain is defeating the systemic stereotypes embedded in foundational image and video...
Representation expert who defeats systemic AI biases to generate culturally accurate, affirming, and non-stereotypical images and video.
Write better prompts for Kling 3.0 AI video generation. Use when the user wants to create, write, improve, or refine prompts — text-to-video, image-to-video, keyframes, multi-shot sequences, or dialogue scenes.
Higgsfield AI image/video generation using pw protocol wrappers. Trigger when the user wants to generate images or videos.
Manga-style video generator, designed specifically to produce animated videos in manga styles such as Japanese healing, Chinese ink-wash, and American cartoon. Comes with 8 built-in manga style templates, supports image-to-video generation, and creates high-quality manga animations in one click. Use this skill when you need to generate videos in manga, animation, or hand-drawn styles.
Expert Cinema Director skill for Seedance 2.0 (ByteDance) — high-fidelity video generation using technical camera grammar and multimodal references. Supports text-to-video, image-to-video, and video extension.
Animate a single image into a video using fal.ai Veo 3.1. Use when the user wants to create a video from a still image, animate a photo, or bring an image to life. Supports up to 8 seconds of video with optional audio.
Generate AI videos using Google VEO 3.1 or OpenAI Sora. Two providers with different strengths: VEO for native audio, Sora for visual quality and longer clips.
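The VEO-vs-Sora rule of thumb above can be sketched as a small routing function. The 8-second cap is an assumption borrowed from the Veo 3.1 entry earlier in this list, not a documented limit of this skill, and the provider identifiers are placeholders.

```python
def choose_provider(needs_audio: bool, duration_s: float,
                    max_veo_duration_s: float = 8.0) -> str:
    """Pick a provider per the rule of thumb: VEO when native audio is
    needed and the clip is short enough, otherwise Sora (visual quality,
    longer clips). Identifiers and the duration cap are illustrative."""
    if needs_audio and duration_s <= max_veo_duration_s:
        return "veo-3.1"
    return "sora"
```

A real dispatcher would likely also weigh cost, queue latency, and resolution support, but the branch above captures the catalog entry's stated trade-off.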