Guides rollout configuration for experiments: variant splits, overall rollout percentage, and the critical disambiguation when a user mentions a specific percentage. Covers both initial setup and mid-experiment changes. TRIGGER when: user mentions a rollout percentage, asks about variant splits, wants to change distribution on a running experiment, or asks 'who sees what variant?' DO NOT TRIGGER when: user is asking about metrics, analytics, or experiment results.
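The percentage ambiguity this entry flags is presumably between the overall rollout percentage and a single variant's share of enrolled traffic. A minimal sketch of that distinction, using made-up field names rather than the skill's actual configuration schema:

```python
# Hypothetical rollout config; field names are illustrative, not the skill's real schema.
from dataclasses import dataclass, field

@dataclass
class RolloutConfig:
    rollout_pct: float  # fraction of ALL users enrolled in the experiment
    variant_splits: dict[str, float] = field(default_factory=dict)  # share of ENROLLED users per variant

    def effective_exposure(self, variant: str) -> float:
        """Fraction of the whole user base that actually sees a given variant."""
        return self.rollout_pct * self.variant_splits[variant]

cfg = RolloutConfig(rollout_pct=0.20,
                    variant_splits={"control": 0.5, "treatment": 0.5})

# "Set it to 20%" could mean rollout_pct=0.20 (what cfg shows) or
# variant_splits={"control": 0.8, "treatment": 0.2} -- two very different experiments.
print(cfg.effective_exposure("treatment"))  # 0.1 -> 10% of all users see the treatment
```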
Investigate LLM analytics clusters — understand usage patterns in AI/LLM traffic, compare cluster behavior, compute cost/latency metrics, and drill into individual traces within clusters.
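As an illustration of the per-cluster cost/latency metrics this entry refers to, a small aggregation sketch; the column names (cluster, cost_usd, latency_ms) are assumptions for the example, not the skill's actual trace schema:

```python
# Illustrative aggregation of per-cluster cost and latency from trace records.
import pandas as pd

traces = pd.DataFrame([
    {"cluster": "summarization", "cost_usd": 0.012, "latency_ms": 840},
    {"cluster": "summarization", "cost_usd": 0.015, "latency_ms": 910},
    {"cluster": "code-gen",      "cost_usd": 0.031, "latency_ms": 2100},
    {"cluster": "code-gen",      "cost_usd": 0.028, "latency_ms": 1950},
])

per_cluster = traces.groupby("cluster").agg(
    trace_count=("cost_usd", "size"),
    total_cost_usd=("cost_usd", "sum"),
    median_latency_ms=("latency_ms", "median"),
)
print(per_cluster)
```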
Auto-generates weekly KPI reports from multiple data sources including Supabase analytics, CRM data, financial spreadsheets, and email metrics. Produces executive-ready reports with dashboards, trends, highlights, concerns, and action items.
Sync verified experiment results from the code repo or a code worktree into the paper's daily experiments log and project memory. Use when results in code/docs/results, code/docs/reports, code/docs/runs, worktree docs, logs, or user-confirmed metrics should be promoted into paper-facing evidence.
Write structured experiment report documents from ML/research experiment notes, configs, logs, metrics, tables, and figures. Use this skill whenever the user asks to write an experiment report, research update, mentor update, weekly experiment summary, result analysis document, or presentation-ready experiment writeup, especially when the output should explain motivation, setup, algorithms, metrics, results, figures, interpretation, conclusions, limitations, and next steps.
Design hypothesis-driven ML/AI experiments before running them. Use this skill whenever the user wants to plan experiments, ablations, baselines, metrics, controls, seeds, logging, stop conditions, reviewer-proof evidence, or an experiment matrix for a paper claim before using run-experiment or writing results.
Review ML or AI experiment figures, tables, plots, captions, result narratives, and paper visual style before they are shown in a paper, advisor meeting, report, slide deck, rebuttal, or submission. Use this skill whenever the user has experimental results, plots, tables, metrics, screenshots, captions, draft result sections, or wants to audit figure style choices such as color, typography, markers, symbols, line widths, sizing, and venue-consistent visual conventions.
Diagnose surprising, negative, unstable, or ambiguous ML/AI experiment results and decide whether to debug implementation, rerun experiments, change metrics or baselines, revise the algorithm, narrow the paper claim, park, or kill a direction. Use this skill whenever results do not match expectations, a method fails, metrics conflict, seeds vary, baselines beat the method, plots look suspicious, or the user asks what to do next after experimental results.
Aggregate and display system metrics for a given time period, with anomaly detection.
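One way to read "anomaly detection" here is a simple deviation rule over the period's samples; this is only a sketch under that assumption, not the skill's actual method or thresholds:

```python
# Minimal sketch assuming a 2-sigma deviation rule over one period's samples.
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of samples more than `threshold` standard deviations from the mean."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

cpu_pct = [41, 43, 40, 44, 42, 97, 41, 43]  # hypothetical samples for one time period
print(flag_anomalies(cpu_pct))              # [5] -> the 97% spike
```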
Design and execute customer onboarding playbooks with milestones, success metrics, and automated touchpoints.
Design, optimize, and communicate SaaS pricing — tier structure, value metrics, pricing pages, and price increase strategy. Use when building a pricing model from scratch, redesigning existing pricing, planning a price increase, or improving a pricing page. Trigger keywords: pricing tiers, pricing page, price increase, packaging, value metric, per seat pricing, usage-based pricing, freemium, good-better-best, pricing strategy, monetization, pricing page conversion, Van Westendorp. NOT for broader product strategy — use product-strategist for that. NOT for customer success or renewals — use customer-success-manager for expansion revenue.
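To illustrate the "value metric" choice this entry mentions (per-seat versus usage-based), a toy comparison; all prices and usage figures below are made up for illustration:

```python
# Toy comparison of two value metrics for the same hypothetical customer.
def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

def usage_based_revenue(events: int, price_per_1k_events: float) -> float:
    return events / 1000 * price_per_1k_events

seats, monthly_events = 25, 180_000
print(per_seat_revenue(seats, 49.0))             # 1225.0 per month
print(usage_based_revenue(monthly_events, 8.0))  # 1440.0 per month
```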
SEO intelligence toolkit covering the full lifecycle via live web data: keyword research, rank tracking, site audits, content gap analysis, competitor keyword reverse-engineering, AI visibility across five platforms (ChatGPT, Perplexity, Google AI, Gemini, Grok), and GitHub repo SEO. Crawls real sites and SERPs via Nimble CLI — no fabricated metrics. Triggers: "SEO", "keywords", "rank tracker", "site audit", "content gap", "competitor keywords", "AI visibility", "GitHub SEO", "SERP analysis", "keyword research", "technical SEO", "keyword difficulty", "topic clusters", "ranking delta", "on-page SEO", "AI citation audit". Do NOT use for competitor business signals — use `competitor-intel` instead. Do NOT use for competitor messaging — use `competitor-positioning` instead. Do NOT use for general web scraping — use `nimble-web-expert` instead.