Guides research ideation through a 5-step goal-driven workflow: define long-term goal, build literature tree (novelty + challenge-insight), select a problem (well-established solution check), design a solution (cross-domain transfer + decomposition), validate and iterate. Also covers structured paper reading (3 depth levels). Use when: user wants to find a research direction, brainstorm ideas, build field vision, do a literature review, evaluate idea novelty, or read papers systematically. Do NOT use for comparing/ranking existing ideas (use idea-tournament) or planning a paper (use paper-planning).
Monitor LLMs and agentic apps: performance, token/cost, response quality, and workflow orchestration. Use when the user asks about LLM monitoring, GenAI observability, or AI cost/quality.
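A minimal sketch of the per-call telemetry this kind of monitoring collects, in TypeScript. callModel, recordMetric, and the flat per-token cost are assumed placeholders for illustration, not part of any specific monitoring product; a real setup would plug in a provider client and a metrics backend.

```typescript
// Sketch: wrap an LLM call to capture latency, token usage, and estimated cost.
// callModel and recordMetric are assumed placeholders, not a specific SDK.
interface LlmMetrics {
  model: string;
  latencyMs: number;
  promptTokens: number;
  completionTokens: number;
  estimatedCostUsd: number;
}

interface ModelResponse {
  text: string;
  promptTokens: number;
  completionTokens: number;
}

async function monitoredCall(
  model: string,
  prompt: string,
  callModel: (model: string, prompt: string) => Promise<ModelResponse>,
  recordMetric: (m: LlmMetrics) => void,
  costPer1kTokensUsd = 0.002, // assumed flat rate, for illustration only
): Promise<ModelResponse> {
  const start = Date.now();
  const res = await callModel(model, prompt);
  const totalTokens = res.promptTokens + res.completionTokens;
  recordMetric({
    model,
    latencyMs: Date.now() - start,
    promptTokens: res.promptTokens,
    completionTokens: res.completionTokens,
    estimatedCostUsd: (totalTokens / 1000) * costPer1kTokensUsd,
  });
  return res;
}
```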
Regression testing strategies for AI-assisted development. Covers sandbox-mode API testing without database dependencies, automated bug-check workflows, and patterns for catching AI blind spots when the same model both writes and reviews code.
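A minimal sketch of the sandbox-mode idea, assuming the handler under test is written against a repository interface so tests can inject an in-memory fake instead of a real database. UserRepo, InMemoryUserRepo, and createUserHandler are illustrative names, not a real framework's API.

```typescript
import { randomUUID } from "node:crypto";

interface User {
  id: string;
  email: string;
}

// The handler depends only on this interface, so a test can swap in a fake.
interface UserRepo {
  insert(user: User): Promise<void>;
  findByEmail(email: string): Promise<User | undefined>;
}

// In-memory fake used in sandbox-mode tests; no database connection needed.
class InMemoryUserRepo implements UserRepo {
  private users = new Map<string, User>();
  async insert(user: User): Promise<void> {
    this.users.set(user.email, user);
  }
  async findByEmail(email: string): Promise<User | undefined> {
    return this.users.get(email);
  }
}

// Handler under regression test: rejects duplicate signups.
async function createUserHandler(repo: UserRepo, email: string): Promise<string> {
  if (await repo.findByEmail(email)) {
    throw new Error("email already registered");
  }
  const user: User = { id: randomUUID(), email };
  await repo.insert(user);
  return user.id;
}

// Sandbox-mode check: the duplicate-signup rule holds without touching a database.
async function regressionCheck(): Promise<void> {
  const repo = new InMemoryUserRepo();
  await createUserHandler(repo, "a@example.com");
  let rejected = false;
  try {
    await createUserHandler(repo, "a@example.com");
  } catch {
    rejected = true;
  }
  if (!rejected) throw new Error("regression: duplicate signup was accepted");
}
```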
Use when building any system where email content triggers actions — AI agent inboxes, automated support handlers, email-to-task pipelines, or any workflow processing untrusted inbound email. Always use this skill when the user wants to receive emails and act on them programmatically, even if they don't mention "agent" — the skill contains critical security patterns (sender allowlists, content filtering, sandboxed processing) that prevent untrusted email from controlling your system.
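A minimal sketch of the sender-allowlist and content-filter gate described above, assuming a generic InboundEmail shape; the field names, addresses, and patterns are illustrative, and accepted mail should still be handled in a sandboxed, least-privilege process.

```typescript
interface InboundEmail {
  from: string;
  subject: string;
  body: string;
}

// Only these senders may trigger actions (illustrative addresses).
const ALLOWED_SENDERS = new Set(["ops@example.com", "alerts@example.com"]);

// Patterns that suggest the message is trying to steer the system
// rather than report information (a deliberately small, illustrative list).
const SUSPICIOUS = [
  /ignore (all|previous) instructions/i,
  /run this (command|script)/i,
];

function shouldProcess(email: InboundEmail): boolean {
  if (!ALLOWED_SENDERS.has(email.from.trim().toLowerCase())) {
    return false; // sender allowlist
  }
  const text = `${email.subject}\n${email.body}`;
  if (SUSPICIOUS.some((re) => re.test(text))) {
    return false; // content filter
  }
  return true; // still process downstream in a sandboxed handler
}
```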
Query local documentation for NVIDIA PTX ISA 9.1, CUDA Runtime API 13.1, Driver API 13.1, the Programming Guide v13.1, the Best Practices Guide, Nsight Compute, and Nsight Systems. Debug and optimize GPU kernels with nsys/ncu/compute-sanitizer workflows. Use when writing, debugging, or optimizing CUDA code, GPU kernels, PTX instructions, inline PTX, TensorCore operations (WMMA, WGMMA, TMA, tcgen05), or when the user mentions CUDA API functions, error codes, device properties, memory management, profiling, GPU performance, compute capabilities, CUDA Graphs, Cooperative Groups, Unified Memory, dynamic parallelism, or CUDA programming model concepts.
Full research pipeline: Workflow 1 (idea discovery) → implementation → Workflow 2 (auto review loop). Goes from a broad research direction all the way to a submission-ready paper. Use when the user says "全流程" (full pipeline), "full pipeline", "从找idea到投稿" (from finding an idea to submission), "end-to-end research", or wants the complete autonomous research lifecycle.
Workflow 3: Full paper writing pipeline. Orchestrates paper-plan → paper-figure → paper-write → paper-compile → auto-paper-improvement-loop to go from a narrative report to a polished, submission-ready PDF. Use when the user says "写论文全流程" (full paper-writing pipeline), "write paper pipeline", "从报告到PDF" (from report to PDF), "paper writing", or wants the complete paper generation workflow.
React Native and Expo patterns for building performant mobile apps. Covers list performance, animations with Reanimated, navigation, UI patterns, state management, platform-specific code, and Expo workflows. Use when building or reviewing React Native code. Triggers: 'react native', 'expo', 'mobile app', 'react native performance', 'flatlist', 'reanimated', 'expo router', 'mobile development', 'ios app', 'android app'.
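A minimal sketch of the FlatList performance pattern this entry mentions (stable keys, a memoized row component, and getItemLayout for fixed-height rows); the component names and row height are illustrative.

```tsx
import React, { memo } from "react";
import { FlatList, Text, View } from "react-native";

type Item = { id: string; title: string };

const ROW_HEIGHT = 48; // assumed fixed row height

// Memoized row so unchanged items skip re-rendering.
const Row = memo(function Row({ title }: { title: string }) {
  return (
    <View style={{ height: ROW_HEIGHT, justifyContent: "center" }}>
      <Text>{title}</Text>
    </View>
  );
});

export function ItemList({ items }: { items: Item[] }) {
  return (
    <FlatList
      data={items}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => <Row title={item.title} />}
      // Fixed-height rows can be positioned without measuring each one.
      getItemLayout={(_, index) => ({
        length: ROW_HEIGHT,
        offset: ROW_HEIGHT * index,
        index,
      })}
      windowSize={7}
      removeClippedSubviews
    />
  );
}
```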
Plan user journeys, onboarding flows, engagement strategies, and day-to-day workflows for digital health products.
Generates custom design system rules for the user's codebase. Use when user says "create design system rules", "generate rules for my project", "set up design rules", "customize design system guidelines", or wants to establish project-specific conventions for Figma-to-code workflows. Requires Figma MCP server connection.
Implement the Syncfusion React Inline AI Assist component. Use this skill to add inline AI suggestions, integrate AI services such as OpenAI, Gemini, Lite-LLM, or Ollama, configure command and response actions, customize toolbars, handle events, and support real-time prompt-response workflows in React.
Use MiniMax multimodal models for speech, music, video, and image generation. Create voice, music, video, and images with MiniMax AI: TTS (text-to-speech, voice cloning, voice design, multi-segment), music (songs, instrumentals), video (text-to-video, image-to-video, start-end frame, subject reference, templates, long-form multi-scene), image (text-to-image, image-to-image with character reference), and media processing (convert, concat, trim, extract). Use when the user mentions MiniMax, multimodal generation, or wants speech/music/video/image AI, MiniMax APIs, or FFmpeg workflows alongside MiniMax outputs.
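A minimal sketch of the FFmpeg post-processing step mentioned above (trimming a generated clip), assuming Node.js with an ffmpeg binary on the PATH; the file names are placeholders, and the MiniMax API calls themselves are not shown.

```typescript
import { execFile } from "node:child_process";

// Trim a clip with ffmpeg: seek to startSec, keep durationSec, stream-copy codecs.
function trimClip(
  input: string,
  output: string,
  startSec: number,
  durationSec: number,
): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile(
      "ffmpeg",
      ["-ss", String(startSec), "-i", input, "-t", String(durationSec), "-c", "copy", output],
      (err) => (err ? reject(err) : resolve()),
    );
  });
}

// Example: keep the first 10 seconds of a generated video (placeholder file names).
// trimClip("minimax_video.mp4", "minimax_video_trimmed.mp4", 0, 10);
```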