Found 94 Skills
Analyze local or downloaded social video files with the official Gemini API, especially for TikTok/Reels benchmark breakdowns, script decomposition, and structured JSON outputs. Use this when you need video-level analysis beyond metadata, including uploading video files, prompting Gemini 3.1 Pro Preview, and linking results back to source metadata.
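A minimal sketch of that flow with the google-genai Python SDK, assuming GEMINI_API_KEY is set in the environment; the model string is copied from the blurb, and the prompt and JSON fields are illustrative:

```python
import time
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Upload the local clip, then poll until the Files API finishes processing it.
video = client.files.upload(file="downloads/reel_01.mp4")
while video.state and video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

# Ask for structured JSON so the result can be joined back to source metadata.
response = client.models.generate_content(
    model="gemini-3.1-pro-preview",  # model name as quoted in the skill blurb
    contents=[
        video,
        "Decompose this clip scene by scene. Return a JSON array with "
        "'start_s', 'shot_type', and 'spoken_text' for each scene.",
    ],
    config={"response_mime_type": "application/json"},
)
print(response.text)
```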
Senior Specialist in Remotion v4.0+, React 19, and Next.js 16. Expert in programmatic video generation, sub-frame animation precision, and AI-driven video workflows for 2026.
PixVerse CLI — generate AI videos and images from the command line. Supports PixVerse, Veo, Sora, Kling, Hailuo, Wan, and more video models; Nano Banana (Gemini), Seedream, Qwen image models; and PixVerse's rich effect template library. Start here.
Use when generating videos from images with the DashScope Wan 2.7 image-to-video model (wan2.7-i2v). Use when implementing first-frame video generation, first+last frame interpolation, video continuation, or audio-driven video synthesis via the video-synthesis async API.
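A minimal sketch of the first-frame flow, assuming the dashscope Python SDK's VideoSynthesis wrapper covers this model; the model name comes from the blurb, and the prompt and image URL are placeholders:

```python
from http import HTTPStatus
from dashscope import VideoSynthesis  # needs DASHSCOPE_API_KEY in the environment

# Submit the async video-synthesis task, then block until it settles.
task = VideoSynthesis.async_call(
    model="wan2.7-i2v",  # model ID as given in the blurb
    prompt="Slow dolly-in as the subject turns toward the window light.",
    img_url="https://example.com/first_frame.png",  # placeholder first frame
)
result = VideoSynthesis.wait(task)

if result.status_code == HTTPStatus.OK:
    print(result.output.video_url)  # downloadable MP4 URL
else:
    print(result.code, result.message)
```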
Expand a completed text storyboard into Seedance 2.0 video prompts, one entry at a time. Call this when the text storyboard is finished and needs to be converted into executable video prompts.
Use Alibaba Cloud DashScope API and LingMou to generate AI video and speech. Seven capabilities:
1. LivePortrait talking-head (image + audio → video, two-step)
2. EMO talking-head
3. AA/AnimateAnyone full-body animation (three-step)
4. T2I text-to-image (Wan 2.x, default wan2.2-t2i-flash)
5. I2V image-to-video (Wan 2.x, default wan2.7-i2v-flash; supports the T2I→I2V pipeline sketched after this list)
6. Qwen TTS (auto model/voice by scene, default qwen3-tts-vd-realtime-2026-01-15)
7. LingMou digital-human template video with random template, public-template copy, and script confirmation

Trigger when the user needs talking-head, portrait, full-body animation, text-to-image, text-to-video, or speech synthesis.
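A sketch of the T2I→I2V pipeline from capability (5), assuming the dashscope Python SDK exposes both steps; the model names are the defaults quoted above, and the prompts are illustrative:

```python
from http import HTTPStatus
from dashscope import ImageSynthesis, VideoSynthesis

# Step 1: text-to-image with the default Wan 2.x model.
img = ImageSynthesis.call(
    model="wan2.2-t2i-flash",  # default T2I model per the blurb
    prompt="A lantern-lit night market, cinematic wide shot",
    n=1,
)
assert img.status_code == HTTPStatus.OK, img.message
first_frame_url = img.output.results[0].url

# Step 2: animate the generated still with the default I2V model.
task = VideoSynthesis.async_call(
    model="wan2.7-i2v-flash",  # default I2V model per the blurb
    prompt="Gentle crowd motion, flickering lantern light, slow push-in",
    img_url=first_frame_url,
)
video = VideoSynthesis.wait(task)
print(video.output.video_url)
```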
Adjust video speed using each::sense AI. Create slow motion, time-lapse, hyperlapse, speed ramps, reverse effects, and cinematic slow-mo with frame interpolation for smooth playback.
Remove, replace, or modify video backgrounds using each::sense AI. Create transparent backgrounds, virtual offices, green screen effects without green screens, blur effects, and professional video compositing.
Generate videos directly using the Runway API via runnable scripts. Supports text-to-video, image-to-video, and video-to-video with seedance2, gen4.5, veo3, and more.
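A minimal image-to-video sketch, assuming the runwayml Python SDK with RUNWAYML_API_SECRET set; the model ID and ratio below are assumptions, so match them to the models the skill actually exposes (seedance2, gen4.5, veo3, ...):

```python
import time
from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

# Start an async image-to-video task.
task = client.image_to_video.create(
    model="gen4_turbo",  # assumed ID; swap in a model the skill supports
    prompt_image="https://example.com/keyframe.png",
    prompt_text="Slow push-in, soft rim light, shallow depth of field",
    ratio="1280:720",  # assumed to be accepted by the chosen model
)

# Poll until the task leaves the queue, then print the status and output URL(s).
while (status := client.tasks.retrieve(task.id)).status not in ("SUCCEEDED", "FAILED"):
    time.sleep(5)
print(status.status, status.output)
```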
- **Role**: You are a rigorous prompt engineer specializing exclusively in authentic human representation. Your domain is defeating the systemic stereotypes embedded in foundational image and video models.
Representation expert who defeats systemic AI biases to generate culturally accurate, affirming, and non-stereotypical images and video.
Short-form video generation skill: 3-10 second clips for product reveals, motion teasers, and ambient loops. Defaults to Seedance 2 but works the same with Kling 3/4, Veo 3, or Sora 2. Output is one MP4 saved to the project folder. When the workspace also ships an interactive-video / hyperframes skill, prefer composing several short shots into a single timeline rather than one long monolithic clip.