Force critical evaluation of proposals, requirements, or decisions by analyzing them from multiple adversarial perspectives. Triggers on: accepting a proposal without pushback, 'sounds good', 'let's go with', design decisions with unstated tradeoffs, unchallenged assumptions, premature consensus. Invoke with /challenge-that.
Advisory fee billing: fee calculation (tiered, flat, breakpoint), billing cycles, AUM valuation, direct-debit/invoice collection, GAAP revenue recognition, ADV Part 2A/Reg BI/ERISA 408(b)(2) disclosure, billing exceptions, system migration, revenue forecasting, concentration analysis.
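As a worked illustration of the breakpoint math this entry covers, here is a minimal sketch of a tiered fee computed on AUM and prorated to a quarterly billing cycle. The tier boundaries and rates are hypothetical, not drawn from any real fee schedule:

```python
# Minimal sketch of a tiered (breakpoint) advisory fee calculation.
# The schedule below is hypothetical; real schedules come from the
# advisory agreement and ADV Part 2A disclosure.
TIERS = [  # (upper bound of tier in USD, annual rate)
    (1_000_000, 0.0100),    # first $1M at 1.00%
    (5_000_000, 0.0075),    # next $4M at 0.75%
    (float("inf"), 0.0050), # balance above $5M at 0.50%
]

def quarterly_fee(aum: float) -> float:
    """Annual tiered fee on AUM, prorated to a quarterly billing cycle."""
    fee, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if aum <= lower:
            break
        fee += (min(aum, upper) - lower) * rate  # charge only this tier's slice
        lower = upper
    return fee / 4  # simple quarterly proration

print(f"${quarterly_fee(2_500_000):,.2f}")  # $5,312.50 for $2.5M AUM
```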
Evaluate Figma designs from operator persona perspectives through design critique and user experience evaluation. Use when reviewing UX for specific user roles (e.g., air-surveillance-tech, weapons-director), conducting design reviews, or evaluating operator interfaces. Analyzes cognitive load, communication patterns, pain points, and system visibility. Works with Figma MCP (desktop/URL) and Outline docs.
Deep-dive analysis of GitHub projects. Use when the user mentions a GitHub repo/project name and wants to understand it — triggered by phrases like "help me look at this project", "learn about XXX", "how is this project", "analyze the repo", or any request to explore/evaluate a GitHub project. Covers architecture, community health, competitive landscape, and cross-platform knowledge sources.
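For the community-health side, a hedged sketch of pulling basic repo signals from the public GitHub REST API; the endpoint and field names are from GitHub's documented `/repos/{owner}/{repo}` resource, and the example repo is arbitrary:

```python
# Pull a few community-health signals for one repository.
# Unauthenticated requests are rate-limited; pass a token for real use.
import json
from urllib.request import urlopen

def repo_snapshot(owner: str, repo: str) -> dict:
    with urlopen(f"https://api.github.com/repos/{owner}/{repo}") as resp:
        data = json.load(resp)
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "last_push": data["pushed_at"],
        "license": (data.get("license") or {}).get("spdx_id"),
    }

print(repo_snapshot("python", "cpython"))
```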
Evaluate every produced output (code, report, plan, data, API response) against type-specific quality criteria, score 1-10, make accept/reject decisions, and provide actionable improvement suggestions. Triggers on "evaluate", "check", "review", "quality control", "is this good enough", "score it", or before passing output to the next step in an agentic workflow.
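A minimal sketch of the gate this describes: weighted, type-specific criteria rolled into a 1-10 score with an accept/reject threshold. The criteria, weights, and 7.0 threshold are illustrative assumptions, not the skill's actual rubric:

```python
# Type-specific quality gating: weighted criteria -> 1-10 score -> verdict.
# Criteria, weights, and the accept threshold here are illustrative only.
CRITERIA = {
    "code":   {"correctness": 0.5, "readability": 0.3, "tests": 0.2},
    "report": {"accuracy": 0.5, "structure": 0.25, "clarity": 0.25},
}

def evaluate(output_type: str, scores: dict[str, float], threshold: float = 7.0):
    weights = CRITERIA[output_type]
    total = sum(scores[c] * w for c, w in weights.items())  # weighted 1-10 score
    verdict = "accept" if total >= threshold else "reject"
    return round(total, 1), verdict

print(evaluate("code", {"correctness": 9, "readability": 7, "tests": 5}))
# (7.6, 'accept')
```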
Evaluate complex requests from 3 independent perspectives (Creative, Pragmatic, Comprehensive), reach consensus, then produce complete outputs. Use for architecture decisions, creative content, analysis, and any task where multiple valid approaches exist.
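One way the consensus step could work, sketched under assumptions: each persona votes for an option and a split defers to the Pragmatic view. The tie-break rule is a guess for illustration, not the skill's documented behavior:

```python
# Three-perspective pattern: each persona recommends an option,
# then a simple majority (or an assumed tie-break) picks the consensus.
from collections import Counter

def consensus(recommendations: dict[str, str]) -> str:
    """recommendations maps persona name -> its preferred option."""
    option, votes = Counter(recommendations.values()).most_common(1)[0]
    # Assumed tie-break: defer to the Pragmatic persona on a three-way split.
    return option if votes > 1 else recommendations["Pragmatic"]

views = {"Creative": "event-driven", "Pragmatic": "monolith", "Comprehensive": "monolith"}
print(consensus(views))  # monolith
```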
Turn rough ideas into structured, validated idea documents through collaborative dialogue. Explores context, asks clarifying questions one at a time, proposes alternative approaches with feasibility evaluation, and produces documents ready for requirements definition. Use when: "ideation", "brainstorm", "new idea", "explore an idea", "I want to build", "what if we", "let's think about", "propose approaches", "evaluate this idea", "idea document", "アイデア出し" (ideation), "案出し" (idea generation), "ブレスト" (brainstorm), "アイデアを整理" (organize my ideas), "検討したい" (I want to think this through).
Evaluate Clojure code via nREPL using clj-nrepl-eval. Use this when you need to test code, check if edited files compile, verify function behavior, or interact with a running REPL session.
This skill is used when users explicitly request "review NSFC proposals", "simulate expert review", or "evaluate NSFC applications". It simulates the perspective of domain experts to conduct a multi-dimensional review of an NSFC proposal, outputting graded issues and actionable revision suggestions. ⚠️ Not applicable when the user only wants to write or modify a specific section of a proposal (use the nsfc-*-writer series skills instead), only wants to understand the review criteria (answer directly), or has no clear "review/evaluate" intent.
Score how well a creator fits a brand's niche on a 1-10 scale with detailed written rationale. This skill should be used when evaluating creator-brand fit, scoring niche alignment, checking if an influencer matches a brand, assessing creator relevance, rating a creator's fit for a campaign, vetting a creator for niche match, deciding whether a creator is right for a brand, comparing creators by brand fit, or reviewing an influencer's profile against campaign requirements. For full creator vetting beyond niche fit (brand safety, rates, compliance), see creator-vetting-scorecard. For writing outreach to creators who pass vetting, see outreach-writer.
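For a flavor of the scoring, a deliberately simple keyword-overlap heuristic mapped onto the 1-10 scale; the real skill weighs far more signal than topic overlap, and the topics below are made-up examples:

```python
# Illustrative niche-fit heuristic: overlap between a creator's content
# topics and the brand's niche keywords, scaled to 1-10. A real vet would
# also weigh audience data, past brand work, and engagement quality.
def niche_fit(creator_topics: set[str], brand_niche: set[str]) -> int:
    overlap = len(creator_topics & brand_niche) / len(brand_niche)
    return max(1, round(overlap * 10))

creator = {"trail running", "hiking", "gear reviews", "vanlife"}
brand = {"trail running", "gear reviews", "ultralight hiking"}
print(niche_fit(creator, brand))  # 7
```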
MantaBase T3 Hardware Audit System (Chinese Version). Objectively classifies hardware products against design theory using brand blinding, specialized scoring by three auditors (Tool/Toy/Trash), and peer review. Triggers: product link, T3 audit, Tool/Toy/Trash classification, hardware evaluation, VC investment advice.
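A sketch of how the three auditors' blinded votes might be aggregated, assuming majority rule with escalation to peer review on a three-way split; the escalation rule is an assumption, not the documented process:

```python
# Three-auditor vote: each blinded auditor assigns Tool/Toy/Trash;
# a majority verdict stands, and an assumed three-way split escalates.
from collections import Counter

def t3_verdict(votes: list[str]) -> str:
    """votes: three labels from {'Tool', 'Toy', 'Trash'}."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else "escalate to peer review"

print(t3_verdict(["Tool", "Tool", "Toy"]))   # Tool
print(t3_verdict(["Tool", "Toy", "Trash"]))  # escalate to peer review
```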
Benchmark compensation against market data. Trigger with "what should we pay", "comp benchmark", "market rate for", "salary range for", "is this offer competitive", or when the user needs help evaluating or setting compensation levels.
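A minimal sketch of placing one offer against market data by percentile; the salary figures are placeholder values for illustration, not real survey numbers:

```python
# Rank an offer against a sample of market salaries by percentile.
# The market figures below are placeholders, not actual survey data.
from bisect import bisect_left

def market_percentile(offer: float, market_salaries: list[float]) -> float:
    ranked = sorted(market_salaries)
    return 100 * bisect_left(ranked, offer) / len(ranked)

market = [140_000, 152_000, 160_000, 168_000, 175_000, 183_000, 195_000, 210_000]
print(f"P{market_percentile(178_000, market):.0f}")  # P62 of this sample
```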