Generate images, videos, audio, and 3D models via the RunningHub API (170+ endpoints) and run any RunningHub AI Application (custom ComfyUI workflow) by webappId. Covers text-to-image, image-to-video, text-to-speech, music generation, 3D modeling, image upscaling, AI apps, and more.
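As a rough sketch of how a skill like this might drive RunningHub, the pattern is submit-then-poll: post a task for an AI Application by webappId, then poll until the workflow returns asset URLs. The base URL, endpoint paths, field names, and webappId below are illustrative assumptions, not the documented API; consult the RunningHub API reference for the real contract.

```python
import time
import requests

API_KEY = "YOUR_RUNNINGHUB_KEY"
BASE = "https://api.runninghub.example"  # hypothetical base URL

# Submit a job to an AI Application (custom ComfyUI workflow) by webappId.
resp = requests.post(
    f"{BASE}/task/run",  # hypothetical path
    json={"apiKey": API_KEY, "webappId": "12345", "prompt": "a watercolor fox"},
    timeout=30,
)
task_id = resp.json()["taskId"]  # hypothetical response field

# Poll until the workflow finishes and returns output asset URLs.
while True:
    status = requests.post(
        f"{BASE}/task/status",  # hypothetical path
        json={"apiKey": API_KEY, "taskId": task_id},
        timeout=30,
    ).json()
    if status.get("status") == "SUCCESS":
        print(status["outputs"])  # e.g. image/video/audio URLs
        break
    time.sleep(5)
```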
Generate batches of coherent images and videos from a user's creative or story idea and present them as a professional storyboard. Supports advanced features such as single-shot regeneration, image-to-video conversion, and video generation from first and last frames. Suited to short-video scripts, animation storyboards, advertising concepts, and story visualization.
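As one possible illustration of the storyboard structure such a skill manages, a shot list could be modeled like this; the Shot fields and helper names are invented for this sketch, not part of any documented interface.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    index: int
    description: str           # what happens in this shot
    image_url: str | None = None
    video_url: str | None = None

def build_storyboard(shot_descriptions: list[str]) -> list[Shot]:
    """Turn a story idea, already broken into shots, into an ordered board."""
    return [Shot(i, d) for i, d in enumerate(shot_descriptions, start=1)]

def regenerate_shot(board: list[Shot], index: int, new_description: str) -> None:
    """Single-shot regeneration: replace one shot without touching the rest."""
    board[index - 1] = Shot(index, new_description)
```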
Create and edit videos using Google's Veo 2 and Veo 3 models. Supports Text-to-Video, Image-to-Video, Reference-to-Video, Inpainting, and Video Extension. Available parameters: prompt, image, mask, mode, duration, aspect-ratio. Always confirm parameters with the user or explicitly state defaults before running.
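A minimal text-to-video sketch with the google-genai Python SDK, assuming GEMINI_API_KEY is set in the environment; the model ID and config fields are assumptions to verify against the current Veo docs.

```python
import time
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # model ID is an assumption; check current docs
    prompt="Aerial shot of a coastline at golden hour, slow push-in",
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation is a long-running operation; poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_output.mp4")
```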
Use this skill for AI video generation. Triggers include: "generate video", "create video", "make video", "animate", "text to video", "video from image", "video of", "animate image", "bring to life", "make it move", "add motion", "video with audio", "video with dialogue". Supports text-to-video, image-to-video, and video with dialogue/audio using Google Veo 3.1 (default) or OpenAI Sora.
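For the Sora path, a hedged sketch using the OpenAI Python SDK's videos endpoints as I understand the Sora 2 API; the model ID, status values, and download call should be verified against the current reference.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

video = client.videos.create(
    model="sora-2",  # model ID is an assumption; check current docs
    prompt="A dog surfing a small wave, handheld camera, ambient beach audio",
)

# Generation is asynchronous; poll the video object until it finishes.
while video.status in ("queued", "in_progress"):
    time.sleep(10)
    video = client.videos.retrieve(video.id)

if video.status == "completed":
    content = client.videos.download_content(video.id)
    content.write_to_file("sora_output.mp4")
```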
Write better prompts for Kling 3.0 AI video generation. Use when the user wants to create, write, improve, or refine prompts — text-to-video, image-to-video, keyframes, multi-shot sequences, or dialogue scenes.
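By way of illustration, a structured prompt of the kind this skill produces might be assembled from discrete cinematic fields; the field breakdown below is a common prompting pattern, not an official Kling schema.

```python
# An illustrative text-to-video prompt built from discrete cinematic fields.
fields = {
    "subject": "an elderly clockmaker with round spectacles",
    "action": "leans over a workbench, fitting a tiny gear with tweezers",
    "setting": "cluttered workshop, dust motes in a shaft of window light",
    "camera": "slow dolly-in from medium shot to close-up on his hands",
    "style": "warm tungsten lighting, shallow depth of field, 35mm film grain",
}
prompt = ", ".join(fields.values())
print(prompt)
```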
Generate videos using ByteDance's Seedance model. Supports text-to-video and image-to-video, calling the API through the volcengine-ark SDK. Activated when users need to generate videos, create video content, or produce videos from text or images.
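A minimal sketch using the volcengine-ark SDK's asynchronous content-generation tasks, assuming ARK_API_KEY is set in the environment; the model ID is an assumption and should be replaced with your provisioned model or endpoint ID.

```python
import os
import time
from volcenginesdkarkruntime import Ark

client = Ark(api_key=os.environ["ARK_API_KEY"])

# Submit an async text-to-video task.
task = client.content_generation.tasks.create(
    model="doubao-seedance-1-0-pro-250528",  # assumption; use your model/endpoint ID
    content=[{"type": "text", "text": "A paper boat drifting down a rainy street --ratio 16:9"}],
)

# Poll the task until it succeeds or fails.
while True:
    result = client.content_generation.tasks.get(task_id=task.id)
    if result.status == "succeeded":
        print(result.content.video_url)
        break
    if result.status == "failed":
        raise RuntimeError(f"generation failed: {result.error}")
    time.sleep(5)
```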
Expert guidance for Google Veo 3.1 video generation. Use when the user wants to (1) create text-to-video or image-to-video prompts, (2) optimize for cinematic quality and native audio syncing, (3) maintain character consistency via reference images, (4) structure multi-shot sequences with timestamp prompting, (5) use First/Last Frame interpolation, (6) select between standard and fast generation modes, or (7) troubleshoot physics, motion, or audio issues in generated video.
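To make the multi-shot technique in (4) concrete, a timestamped prompt might be segmented like this; the bracket syntax follows the timestamp-prompting pattern the skill describes, though exact formatting conventions may vary.

```python
# Illustrative multi-shot prompt using timestamp segments.
prompt = (
    "[00:00-00:03] Wide establishing shot: a lighthouse on a cliff at dusk, "
    "waves crashing below, wind audible. "
    "[00:03-00:06] Cut to interior: the keeper lights a lamp, glass rattles "
    "in the wind. "
    "[00:06-00:08] Close-up on the rotating beam sweeping out over the sea."
)
```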
Generate AI videos using the Volcengine Jimeng Video 3.0 Pro API. Use when users request video generation from text prompts or images, including text-to-video, image-to-video, or any AI-powered video creation. Triggers include "generate video", "create video", "AI video", "Jimeng video", "text to video", "image to video", or any request involving AI-powered video generation from descriptions.
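A rough sketch using the volcengine Python SDK's VisualService async task flow; the req_key value and form/response fields here are assumptions to check against the Jimeng API docs.

```python
import time
from volcengine.visual.VisualService import VisualService

vs = VisualService()
vs.set_ak("YOUR_ACCESS_KEY")
vs.set_sk("YOUR_SECRET_KEY")

# Submit an async text-to-video task; req_key is a hypothetical value here.
submit = vs.cv_sync2async_submit_task({
    "req_key": "jimeng_vgfm_t2v_l30",  # assumption; check the Jimeng docs
    "prompt": "Neon-lit street market at night, light rain, cinematic",
})
task_id = submit["data"]["task_id"]

# Poll for the result; status/field names are assumptions as well.
while True:
    result = vs.cv_sync2async_get_result({
        "req_key": "jimeng_vgfm_t2v_l30",
        "task_id": task_id,
    })
    if result["data"].get("status") == "done":
        print(result["data"].get("video_url"))
        break
    time.sleep(5)
```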
AI video generation patterns using Sora, Runway, Pika, and other AI video tools. Use when generating videos from text prompts, image-to-video conversion, AI video editing, or integrating AI video APIs.
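As one concrete instance of the image-to-video pattern, a sketch with the runwayml Python SDK, assuming RUNWAYML_API_SECRET is set in the environment; the model ID should be checked against Runway's current docs.

```python
import time
from runwayml import RunwayML

client = RunwayML()  # reads RUNWAYML_API_SECRET from the environment

task = client.image_to_video.create(
    model="gen3a_turbo",  # model ID is an assumption; check current docs
    prompt_image="https://example.com/first_frame.png",
    prompt_text="Slow pan right, fog rolling over the hills",
)

# Tasks are asynchronous; poll until the render finishes.
while True:
    result = client.tasks.retrieve(task.id)
    if result.status == "SUCCEEDED":
        print(result.output)  # list of output video URLs
        break
    if result.status == "FAILED":
        raise RuntimeError("render failed")
    time.sleep(5)
```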
Create and edit videos using Google's Veo 2 and Veo 3 models. Supports Text-to-Video, Image-to-Video, Inpainting, and Advanced Controls.
Generate videos from text and image prompts via Together AI. 15+ models including Veo 2/3, Sora 2, Kling 2.1, Hailuo 02, Seedance, PixVerse, Vidu. Supports text-to-video, image-to-video, keyframe control, and reference images. Use when users want to generate videos, create video content, animate images, or work with any video generation task.
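A hedged sketch of what a request through Together AI could look like; the /v1/videos path, payload fields, and polling shape are assumptions for illustration only, so consult Together's video API reference for the real contract.

```python
import time
import requests

API_KEY = "YOUR_TOGETHER_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Hypothetical endpoint and payload; verify against Together's docs.
job = requests.post(
    "https://api.together.xyz/v1/videos",
    headers=HEADERS,
    json={"model": "google/veo-3.0", "prompt": "A hummingbird in slow motion"},
    timeout=30,
).json()

# Hypothetical polling loop over a job id returned by the create call.
while job.get("status") not in ("completed", "failed"):
    time.sleep(10)
    job = requests.get(
        f"https://api.together.xyz/v1/videos/{job['id']}",
        headers=HEADERS,
        timeout=30,
    ).json()
print(job)
```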
Video creation skill. Combine images and audio to generate videos, with support for TTS dubbing, fade-in/fade-out transitions, subtitles, outros, and BGM. Triggered when users mention phrases like 'generate video', 'make video', 'educational video', 'image-to-video', 'create video account content', 'dubbed video', 'image-text integrated video', 'ancient poetry video', or 'story video'. Includes the full workflow of image generation → dubbing → video synthesis; there is no need to call image-service separately.
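The synthesis step at the end of that workflow can be done with ffmpeg; a minimal sketch that loops one still image over a narration track and adds a one-second fade-out, with illustrative paths and timings.

```python
import subprocess

# Loop a still image for the duration of the audio, add a fade-out,
# and mux both into an MP4. Paths and timings are illustrative.
subprocess.run([
    "ffmpeg", "-y",
    "-loop", "1", "-i", "scene.png",   # still image as the video track
    "-i", "narration.mp3",             # TTS dubbing as the audio track
    "-vf", "scale=1280:720,fade=t=out:st=9:d=1",
    "-c:v", "libx264", "-tune", "stillimage", "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "-shortest",                       # stop when the audio ends
    "dubbed_video.mp4",
], check=True)
```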