Found 94 Skills
Build consistent character designs and character media with genmedia. Use this for original characters, reference sheets, expression sheets, outfit variations, identity-preserving edits, and character-to-video workflows.
Plan and run commercial image or video production with genmedia. Use this for product photography, ads, e-commerce batches, product reveals, lifestyle commercials, background replacement, social formats, and brand-safe prompt work.
[QianWen] Generate videos using Wan models. Supports text-to-video, image-to-video, first+last frame, reference-based role-play, and video editing (VACE). TRIGGER when: the user wants to create, generate, or edit video content, mentions video generation/animation/video clips/Wan models, or explicitly invokes this skill by name (e.g. use qianwen-video-generation). DO NOT TRIGGER when: the user wants to generate images (use qianwen-image-generation), wants to understand/analyze existing videos (use qianwen-vision), or the task is text-only.
Write better prompts for Kling 3.0 AI video generation. Use when the user wants to create, write, improve, or refine prompts — text-to-video, image-to-video, keyframes, multi-shot sequences, or dialogue scenes.
Higgsfield AI image/video generation using pw protocol wrappers. Trigger when the user wants to generate images or videos.
Guides users through AI video production on the Seedance platform — from creative ideation and asset preparation through storyboarding to production-ready prompts. Triggers on keywords such as Seedance, AI video, storyboard, camera movement, video extension, one-shot take.
Manga-style video generator that produces animated videos in manga styles such as Japanese healing style, Chinese ink wash style, and American cartoon style. Includes 8 built-in manga style templates, supports image-to-video generation, and creates high-quality manga animations with one click. Use this skill when you need to generate videos in manga, animation, or hand-drawn styles.
Expert Cinema Director skill for Seedance 2.0 (ByteDance) — high-fidelity video generation using technical camera grammar and multimodal references. Supports text-to-video, image-to-video, and video extension.
Professional AI Video Storyboard Designer. Use this skill when users want to create videos, write storyboard scripts, generate AI video prompts, or plan video content structure. Covers all video types: short videos, commercials, educational content, brand videos, vlogs, micro-films, and more. Trigger even on vague requests such as "Help me make a video" or "I want to create content on the theme of XX". Outputs professional storyboard designs plus prompts that can be used directly in mainstream AI video tools such as Seedance 2.0 (Jimeng), Sora, Kling, Runway, and Veo; Seedance 2.0 additionally supports specialized output with multimodal @ reference syntax.
Animate a single image into a video using fal.ai Veo 3.1. Use when the user wants to create a video from a still image, animate a photo, or bring an image to life. Supports up to 8 seconds of video with optional audio.
UGC video format templates for mobile app brands. Contains 15 TikTok-native, lofi video formats with shot-by-shot structure and AI video generation prompts. Focuses on the person and their physical relationship with their phone — reactions, rituals, and real moments — never screen content or app UI. Use when the user wants to create mobile app UGC content.
UGC lifestyle b-roll video templates for brands. Contains 20 TikTok-native, lofi b-roll formats with shot-by-shot structure and AI video generation prompts. Use when the user wants to create lifestyle b-roll content — aesthetic scene-setting, product-in-context shots, mood pieces, or ambient footage for ads and social.