Found 9 Skills
Write, debug, and optimize Triton and Gluon GPU kernels using local source code, tutorials, and kernel references. Use when the user mentions Triton, Gluon, tl.load, tl.store, tl.dot, triton.jit, gluon.jit, wgmma, tcgen05, TMA, tensor descriptor, persistent kernel, warp specialization, fused attention, matmul kernel, kernel fusion, tl.program_id, triton autotune, MXFP, FP8, FP4, block-scaled matmul, SwiGLU, top-k, or asks about writing GPU kernels in Python.
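To ground the Triton APIs this skill names (triton.jit, tl.program_id, tl.load, tl.store), here is a minimal vector-add kernel sketch. The function names and BLOCK size are illustrative, not part of any skill; the kernel itself needs a CUDA GPU, so the Triton import is guarded and a NumPy reference path is included for environments without Triton installed.

```python
import numpy as np

def vector_add_reference(x, y):
    # Plain NumPy reference for checking kernel output elementwise.
    return x + y

try:
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-sized chunk of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offsets < n_elements          # guard the ragged last block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)
except ImportError:
    add_kernel = None  # Triton not installed; the NumPy reference above still works
```

On a GPU machine the kernel would be launched over a 1D grid, e.g. `add_kernel[(triton.cdiv(n, BLOCK),)](x, y, out, n, BLOCK=1024)` with CUDA tensors; the masked load/store pattern shown here is the standard way Triton handles sizes that are not a multiple of the block size.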
Full optimization workflow, sub-agent launch templates, agent communication contracts, default configurations, tuning strategy, and knowledge base update protocol. Use when: (1) starting an optimization cycle, (2) launching a Profiler or Designer sub-agent, (3) interpreting or formatting agent communication, (4) updating the knowledge base after a profiling or implementation iteration, (5) deciding default configurations or tuning strategy for a kernel.
System tuning.
CUDA/GPU computing guardrails, patterns, and best practices for AI-assisted development. Use when working with CUDA files (.cu, .cuh), or when the user mentions CUDA/GPU programming. Provides kernel design patterns, memory hierarchy guidelines, and occupancy optimization specific to this project's coding standards.
Migrate simple vector-type Triton operators from GPU to Ascend NPU. Use when the user needs to migrate Triton code to an NPU, or mentions GPU-to-NPU migration, Triton migration, or Ascend adaptation. Note: operators with existing compilation problems cannot be migrated automatically.
Shared optimization guidance plus cuTile Python DSL-specific overlays. Use when: (1) selecting optimizations for a cuTile Python DSL kernel, (2) checking cuTile-specific implementation traps, (3) deciding whether a profiling finding belongs in shared knowledge or a cuTile overlay, (4) updating cuTile Python DSL optimization docs, (5) reviewing how a shared pattern maps to cuTile.
Shared optimization guidance plus CuTe Python DSL overlays. Use when: (1) selecting optimizations for a CuTe Python DSL kernel, (2) deciding whether a finding is shared or cute-dsl-specific, (3) recording CuTe Python DSL implementation notes, (4) reviewing the knowledge layout for cute-dsl work, (5) mapping shared patterns to a CuTe Python DSL implementation surface.
cuTile Python DSL kernel implementation patterns, CtKernel runtime wrapper, suitability gate, and cuTile-specific pitfalls. Use when: (1) creating or modifying a cuTile Python DSL kernel version, (2) implementing an optimization that still fits within cuTile's exposed control surface, (3) deciding whether cuTile is still the right DSL, (4) reviewing cuTile-specific runtime patterns. Always also load /design-kernel for shared naming, versioning, and workflow.
Compatibility router for the shared optimization knowledge base and the language-specific optimization catalog skills. Use when: (1) selecting which optimization catalog skill to load, (2) the implementation language is not fixed yet, (3) a workflow still references the legacy optimization-catalog skill name, (4) deciding whether a finding is shared or language-specific, (5) updating the generalized knowledge-base structure.