Found 9 Skills
Generate ComfyUI workflow JSON from natural language descriptions. Validates against installed models/nodes before output. Use when building custom ComfyUI workflows from scratch or modifying existing ones.
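The blurb above describes emitting and validating ComfyUI workflow JSON. A minimal sketch of what that output looks like in ComfyUI's API ("prompt") format, plus the kind of link check such a skill would run before output; the checkpoint filename, prompt text, and node ids are placeholders:

```python
import json

# Minimal text-to-image workflow in ComfyUI's API ("prompt") format:
# each key is a node id, each value names a class_type and its inputs.
# A link is a two-element list: [source_node_id, source_output_index].
# The checkpoint filename below is a placeholder; a real generator would
# validate it against the server's installed models first.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a red fox in snow", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "fox"}},
}

def dangling_links(wf):
    """Return (node_id, input_name) pairs whose link targets a missing node."""
    bad = []
    for nid, node in wf.items():
        for name, val in node["inputs"].items():
            if isinstance(val, list) and val[0] not in wf:
                bad.append((nid, name))
    return bad

print(json.dumps(workflow)[:40])
print(dangling_links(workflow))  # → []
```

The structural check only catches dangling node references; validating `class_type` names and model filenames requires querying the running ComfyUI server.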
Master AI-powered game asset pipelines using ComfyUI, Stable Diffusion, FLUX, ControlNet, and IP-Adapter. Creates production-ready sprites, textures, UI, and environments with consistency, proper licensing, and game engine integration. Use when "AI game art", "generate game assets", "ComfyUI game", "stable diffusion sprites", "AI texture generation", "character consistency AI", "procedural art generation", "SDXL game assets", "FLUX textures", "train LoRA game", "AI tileable texture", or "spritesheet generation" is mentioned.
Expert guidance for integrating the ViewComfy API into web applications using Python and FastAPI.
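As a rough sketch of the request shape a FastAPI route for such an integration would forward: the endpoint URL and the `params` body field below are assumptions, not the real ViewComfy API (check the ViewComfy documentation for actual paths and parameter names); only stdlib is used here to keep the example self-contained.

```python
import json
import urllib.request

# Hypothetical ViewComfy endpoint -- an assumption for illustration only.
VIEWCOMFY_URL = "https://example.viewcomfy.app/api/workflow"

def build_payload(params: dict) -> bytes:
    """Serialize workflow parameter overrides as a JSON request body."""
    return json.dumps({"params": params}).encode("utf-8")

def call_viewcomfy(params: dict) -> dict:
    """POST the overrides and parse the response (defined, not run here).

    Inside a FastAPI app this body would live in an async route handler,
    typically using an async HTTP client instead of urllib.
    """
    req = urllib.request.Request(
        VIEWCOMFY_URL,
        data=build_payload(params),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call, illustrative
        return json.load(resp)

body = build_payload({"prompt": "a red fox", "steps": 20})
print(body.decode())
```

The payload builder is kept separate from the network call so it can be unit-tested without a live ViewComfy deployment.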
Generate images, videos, audio, and 3D models via RunningHub API (170+ endpoints) and run any RunningHub AI Application (custom ComfyUI workflow) by webappId. Covers text-to-image, image-to-video, text-to-speech, music generation, 3D modeling, image upscaling, AI apps, and more.
Use when developing or modifying visual elements (components, pages, layouts, styling). Covers Playwright screenshots, Compodium component preview, responsive/dark-mode verification, and the iteration loop.
Build identity-preserving character generation workflows and pipelines in ComfyUI. Selects the optimal identity method (InfiniteYou, FLUX Kontext, PuLID, InstantID, IP-Adapter) based on use case requirements. Handles face preservation, likeness transfer, cross-domain conversion (3D to photo), multi-reference consistency, iterative character editing, and character variation generation. Triggers on requests to generate consistent characters, preserve identity across images, create face-swapping workflows, or convert 3D renders to photorealistic portraits. Does NOT cover general image generation without identity preservation, model training/LoRA fine-tuning, animation, technical explanations, or workflow debugging.
Use when editing ComfyUI workflow JSON, adding nodes, wiring connections, modifying workflows, adding ControlNet/LoRA/upscaling to a workflow, or submitting workflows to ComfyUI.
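One concrete editing operation the skill above covers, wiring a LoRA into an existing workflow, can be sketched as splicing a `LoraLoader` between the checkpoint loader and every consumer of its MODEL/CLIP outputs in an API-format workflow dict. The workflow below is trimmed to the relevant nodes and the LoRA filename is a placeholder:

```python
# Trimmed API-format workflow: checkpoint feeds a text encoder and sampler.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a castle", "clip": ["1", 1]}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0]}},
}

def add_lora(wf, ckpt_id, lora_name):
    """Splice a LoraLoader after the checkpoint node; return its new id."""
    new_id = str(max(int(k) for k in wf) + 1)
    # Rewire consumers of the checkpoint's MODEL (output 0) and CLIP
    # (output 1) to read from the LoraLoader instead. Other outputs
    # (e.g. VAE at index 2) must stay wired to the checkpoint.
    for node in wf.values():
        for key, val in node["inputs"].items():
            if isinstance(val, list) and val[0] == ckpt_id and val[1] in (0, 1):
                node["inputs"][key] = [new_id, val[1]]
    wf[new_id] = {"class_type": "LoraLoader",
                  "inputs": {"model": [ckpt_id, 0], "clip": [ckpt_id, 1],
                             "lora_name": lora_name,
                             "strength_model": 1.0, "strength_clip": 1.0}}
    return new_id

lora_id = add_lora(workflow, "1", "my_style.safetensors")
print(workflow["3"]["inputs"]["model"])  # → ['4', 0]
```

Rewiring happens before the new node is inserted so the LoraLoader's own links back to the checkpoint are not redirected to itself.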
Use when: building new UI from scratch and need design workflow (layout → theme → animation → code). Don't use when: extracting design from existing code (use frontend-design-extractor) or need full production site (use frontend-design-ultimate). Routing tree: "Building new site from scratch?" → frontend-design-ultimate; "Need UX critique or design tokens?" → ui-ux-pro-max; "Redesigning existing frontend with quantified eval?" → human-optimized-frontend; "Need design workflow (wireframe → theme → code)?" → frontend-design (superdesign); "Extracting design from existing codebase?" → frontend-design-extractor
Automated UI development loop: dev server + browser + implement + verify + fix. Launches the dev server, implements via the frontend-design skill, checks for errors (console, TypeScript, network), iterates up to 5 times, and reports completion or escalates with a detailed report. Triggers: "implement next feature", "implement [description]", "implement the hero section", "verify the UI", "verify this implementation", "check the UI", "fix the errors", "iterate on this", "start dev server", "manage server". NOT for one-off design/code tasks; use frontend-design directly for those.
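The implement → verify → fix loop this last skill describes can be sketched as follows, with the checkers stubbed out; real checkers would collect browser console output, TypeScript diagnostics, and failed network requests, and the toy `implement` below just simulates a bug count shrinking with each fix attempt:

```python
MAX_ITERATIONS = 5

def run_loop(implement, checkers, max_iterations=MAX_ITERATIONS):
    """Run implement(), then repeat while any checker reports errors."""
    errors = []
    for attempt in range(1, max_iterations + 1):
        implement()
        errors = [e for check in checkers for e in check()]
        if not errors:
            return {"ok": True, "attempts": attempt}
    # Out of attempts: escalate with whatever the checkers last reported.
    return {"ok": False, "attempts": max_iterations, "errors": errors}

# Toy demonstration: a "bug" that disappears after two fix attempts.
state = {"bugs": 2}

def implement():
    state["bugs"] = max(0, state["bugs"] - 1)

def console_check():
    return ["console error"] * state["bugs"]

print(run_loop(implement, [console_check]))  # → {'ok': True, 'attempts': 2}
```

The cap of 5 iterations matches the skill description; on exhaustion the loop returns the outstanding errors so the caller can escalate with a detailed report.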