Plan and build an RLM (Recursive Language Model) with predict-rlm. Interactively defines inputs, outputs, skills, and architecture from a goal, then implements the code. Use when the user wants to create a new RLM or explore whether one is feasible.
Resolve ambiguities in spec.md through targeted Q&A before planning
Universal task dispatcher. Start, route, and execute any task through the development workflow (Steps 0-9). Invoke on every task via /task <description> or bare /task.
Run an autonomous /loop iteration: check progress, work on the next task, schedule the next wake.
Interview the user and inspect coding-agent skill trigger counts to recommend unused K-skills for removal.
Initialize a Harness Engineering framework in the current project. Use when user says 'harness', 'init harness', 'initialize framework', 'setup harness engineering', '/harness', or wants to set up a Plan-Build-Verify development workflow with specialized agents (planner, generator, evaluator). Creates CLAUDE.md, agent definitions, command templates, hooks, and documentation structure for autonomous AI-driven development.
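A minimal sketch of the scaffold such an init step might lay down. Only CLAUDE.md and the three agent roles (planner, generator, evaluator) are named in the description; the directory names, file extensions, and hook layout below are assumptions for illustration, not the skill's actual output.

```shell
# Hypothetical Harness Engineering scaffold (layout is an assumption)
mkdir -p .claude/agents .claude/commands .claude/hooks docs

# Top-level project instructions for the agent
touch CLAUDE.md

# One definition per specialized agent in the Plan-Build-Verify loop
touch .claude/agents/planner.md \
      .claude/agents/generator.md \
      .claude/agents/evaluator.md

# Command templates and hooks referenced by the workflow
touch .claude/commands/task.md .claude/hooks/pre-build.sh
```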
Prime a codebase by reading every source file in full. Use when starting work on a new or unfamiliar project, or when the user asks to "learn the codebase", "read the codebase", "prime", or "get up to speed".
Generate a cost report showing token usage and USD costs by agent and model
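The aggregation behind such a report could look roughly like this. The record fields, model name, and per-million-token prices are illustrative assumptions, not the skill's actual schema or pricing.

```python
from collections import defaultdict

# Hypothetical (input, output) USD prices per 1M tokens; real values
# would come from the provider's price sheet.
PRICES = {"model-a": (3.00, 15.00)}

def cost_report(records):
    """Aggregate token usage and USD cost by (agent, model).

    Each record is assumed to carry: agent, model, in (input tokens),
    out (output tokens). Returns {(agent, model): [in, out, usd]}.
    """
    totals = defaultdict(lambda: [0, 0, 0.0])
    for r in records:
        key = (r["agent"], r["model"])
        in_price, out_price = PRICES[r["model"]]
        totals[key][0] += r["in"]
        totals[key][1] += r["out"]
        totals[key][2] += (r["in"] / 1e6) * in_price + (r["out"] / 1e6) * out_price
    return dict(totals)
```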
Set up a recurring research watch on a topic, company, paper area, or product surface. Use when the user asks to monitor a field, track new papers, watch for updates, or set up alerts on a research area.
Run a literature review using paper search and primary-source synthesis. Use when the user asks for a lit review, paper survey, state of the art, or academic landscape summary on a research topic.
Plan or execute a replication of a paper, claim, or benchmark. Use when the user asks to replicate results, reproduce an experiment, verify a claim empirically, or build a replication package.
Compare multiple sources on a topic and produce a grounded comparison matrix. Use when the user asks to compare papers, tools, approaches, frameworks, or claims across multiple sources.
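A grounded comparison matrix of this kind can be sketched as a sources-by-criteria grid; the extract callback and the "not addressed" placeholder are hypothetical design choices, not the skill's implementation.

```python
def comparison_matrix(sources, criteria, extract):
    """Build a sources-by-criteria grid of grounded claims.

    extract(source, criterion) returns the claim found in that source
    for that criterion, or None when the source is silent, so every
    cell is either a sourced claim or an explicit gap.
    """
    return {
        s: {c: extract(s, c) or "not addressed" for c in criteria}
        for s in sources
    }
```

Keeping the silent cells explicit ("not addressed") matters for groundedness: an empty cell would be ambiguous between "no claim made" and "not checked".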