Found 164 Skills
When the user wants to plan growth using the AARRR framework, diagnose growth bottlenecks, or map actions across the customer lifecycle. Also use when the user mentions "growth funnel," "AARRR," "pirate metrics," "acquisition activation retention," "customer lifecycle metrics," or "growth framework."
JVM performance profiling with Java Flight Recorder (JFR), jcmd, and GC analysis. Use for identifying bottlenecks and memory issues. USE WHEN: the user mentions "Java profiling", "JFR", "JVM performance", or asks about "Java Flight Recorder", "jcmd", "heap dump", "GC tuning", "thread dump", or "Java memory leak". DO NOT USE FOR: Node.js or Python profiling; use the respective skills instead.
Analyze Huawei Ascend NPU profiling data to discover hidden performance anomalies and produce a detailed model architecture report reverse-engineered from profiling. Trigger on Ascend profiling traces, NPU bottlenecks, device idle gaps, host-device issues, or files such as kernel_details.csv, trace_view.json, op_summary, and communication.json. Also trigger on "profiling", "step time", "device bubble", "underfeed", "host bound", "device bound", "AICPU", "wait anchor", "kernel gap", "Ascend performance", "model architecture", "layer structure", "forward pass", "model structure". Runs anomaly discovery (bubble detection, wait-anchor, AICPU exposure) alongside model architecture analysis (layer classification, per-layer sub-structure, communication pipeline), and outputs a separate Markdown architecture report alongside the anomaly analysis.
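At its core, the bubble-detection pass described above is a scan over a sorted kernel timeline looking for idle gaps. A minimal sketch in Python, assuming a hypothetical two-column schema (`start_us`, `dur_us`); real kernel_details.csv exports carry many more fields and different column names:

```python
import csv
import io

# Hypothetical kernel_details.csv excerpt: per-kernel start time and
# duration in microseconds (illustrative schema, not the real export).
SAMPLE = """start_us,dur_us
0,100
100,50
400,200
650,25
"""

def device_bubbles(rows, min_gap_us=50):
    """Return (gap_start, gap_len) pairs where the device sat idle longer than min_gap_us."""
    gaps = []
    prev_end = None
    for row in rows:
        start, dur = float(row["start_us"]), float(row["dur_us"])
        if prev_end is not None and start - prev_end > min_gap_us:
            gaps.append((prev_end, start - prev_end))
        prev_end = max(prev_end or 0.0, start + dur)
    return gaps

rows = csv.DictReader(io.StringIO(SAMPLE))
print(device_bubbles(rows))  # → [(150.0, 250.0)]: device idle from 150 µs to 400 µs
```

A real implementation would additionally attribute each bubble to its cause (host-bound launch gap, communication wait, AICPU fallback), which is where the wait-anchor analysis comes in.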
Build and maintain an LLM-curated personal knowledge base — the "LLM Wiki" pattern from Andrej Karpathy's April 2026 gist. Use this skill whenever the user wants to ingest a source (paper, article, transcript, PDF, notes) into a persistent compounding knowledge base, ask a question against accumulated notes, lint or audit such a base, or initialize a new one. Trigger on phrases like "add this to my wiki", "ingest this paper", "compile this into the knowledge base", "what does my wiki say about X", "lint the wiki", "build a knowledge base from these documents", "research notes", "second brain", "personal knowledge base", or any reference to LLM Wiki / OmegaWiki. Trigger even when the user does not say "wiki" — if they are accumulating sources over time and want them organized, this applies. The skill scales — sharded indexes, atomic pages, YAML frontmatter, and a bundled search script keep the wiki from becoming a context bottleneck at hundreds or thousands of pages.
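The atomic-page layer of this pattern is easy to sketch: each page is a Markdown body under a YAML frontmatter block, and an index or search script only needs the frontmatter fields. A minimal parser sketch with illustrative field names (not the exact schema from the gist):

```python
import re

# Hypothetical atomic wiki page: YAML frontmatter above a Markdown body.
PAGE = """---
title: Speculative Decoding
tags: [llm, inference]
sources: [arxiv-2302.01318]
---
Draft tokens with a small model, verify in one pass with the large model.
"""

def frontmatter(text):
    """Parse the frontmatter block into a flat dict (minimal: no nesting, values kept as raw strings)."""
    m = re.match(r"---\n(.*?)\n---\n", text, re.S)
    fields = {}
    if m:
        for line in m.group(1).splitlines():
            key, _, val = line.partition(":")
            fields[key.strip()] = val.strip()
    return fields

meta = frontmatter(PAGE)
print(meta["title"])  # → Speculative Decoding
print(meta["tags"])   # → [llm, inference] (kept as a raw string in this sketch)
```

Keeping pages this small and uniformly structured is what lets a sharded index answer "what does my wiki say about X" without loading every page into context.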
Specializes in analyzing Lynx trace data to diagnose performance issues and provide actionable optimization strategies. Key scenarios:
- Loading performance: diagnosing slow startup metrics (FCP, FMP, TTI) and white-screen issues.
- Smoothness analysis: investigating root causes of scroll jank, frame drops, and interaction lag.
- Regression detection: comparing traces to identify performance degradation or verify optimization gains between versions.
- Pipeline deep dive: pinpointing bottlenecks in specific rendering stages such as Layout, Paint, JS execution, and background threads.
- Native module analysis: investigating performance issues related to native module calls.
Aspire platform help — word-of-mouth commerce for influencer marketing, product seeding, affiliate tracking, UGC sourcing, and paid social. Covers Discovery (170M+ profiles, Quickmatch AI, image recognition), Campaign Management (lifecycle tracking, content approval, term sheets), Product Seeding (Shopify gifting, shipping), Affiliate Tracking (promo codes, attribution), UGC & Content (library, repurposing for ads), Paid Social (TikTok Spark Ads, Meta whitelisting), and Creator Payments (free processing). Integrates with Shopify, WooCommerce, Meta, TikTok, Pinterest, Klaviyo, CJ, Impact, and ShareASale/Awin. Use when Aspire discovery isn't surfacing the right creators, product seeding orders aren't syncing with Shopify, affiliate tracking isn't attributing sales, content approvals are bottlenecked, the user is unsure which Aspire plan fits, or integrations aren't connecting properly. Do NOT use for influencer strategy across platforms (use /sales-influencer-marketing) or affiliate program design (use /sales-affiliate-program).
Use this skill to troubleshoot performance bottlenecks, analyze query execution plans, identify resource-heavy processes, or monitor system-level metrics via PromQL.
Takes a manual business workflow description and designs the automated version. Maps current steps, handoffs, decision points, and bottlenecks. Designs automated flow with triggers, conditions, actions, and error handling. Outputs workflow-automation.md with before/after Mermaid diagrams, tool recommendations, implementation steps, and time savings estimate.
[Hyper] Optimize an existing codebase through baseline-first experiments, binary evaluation, and one-mutation-at-a-time iteration. Use for codebase autoresearch, measured bottleneck reduction, benchmarked code optimization, and evidence-backed refactors.
End-to-end prospect research pipeline: Apollo enrichment → personalized email + call scripts → draft review → Apollo sequence load. Eliminates manual research bottleneck. Use when: 'research prospect', 'prospect [company]', 'build cadence for', 'outreach for [company]', 'research-to-cadence', 'enrich and sequence', 'new prospect batch'.
Operations leadership for scaling companies. Process design, OKR execution, operational cadence, and scaling playbooks. Use when designing operations, setting up OKRs, building processes, scaling teams, analyzing bottlenecks, planning operational cadence, or when user mentions COO, operations, process improvement, OKRs, scaling, operational efficiency, or execution.
Use when build times are slow, when investigating build performance, analyzing the Build Timeline, identifying type-checking bottlenecks, enabling compilation caching, or optimizing incremental builds. Provides comprehensive build-optimization workflows, including Xcode 26 compilation caching.