Found 26 Skills
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
Meta-skill for analyzing PRs, issues, and user interactions to improve Cursor rules and skills automatically
CI-only self-improvement workflow using gh-aw (GitHub Agentic Workflows). Captures recurring failure patterns and quality signals from pull request checks, emits structured learning candidates, and proposes durable prevention rules without interactive prompts. Use when: you want automated learning capture in CI/headless pipelines.
Self-improvement skill that helps Claude learn from user interactions, corrections, and preferences
Orchestrate the full ToolUniverse self-improvement cycle: discover APIs, create tools, test with researcher personas, fix issues, optimize skills, and push via git. References and dispatches to all other devtu skills. Use when asked to: run the self-improvement loop, do a debug/test round, expand tool coverage, improve tool quality, or evolve ToolUniverse.
Create, improve, and manage Droid skills. Use when the user wants to:
- Create new skills from scratch or from session learnings
- Improve existing skills based on user preferences
- Analyze sessions to identify patterns worth codifying
- Understand best practices for agentic skill design
This is a meta-skill for self-improvement and continuous learning.
CRITICAL: Use for makepad-skills self-evolution and contribution. Triggers on: evolve, evolution, contribute, contribution, self-improve, self-improvement, add pattern, new pattern, capture learning, document solution, hooks, hook system, auto-trigger, skill routing, template, pattern template, shader template, troubleshooting template, 演进 (evolve), 贡献 (contribute), 自我改进 (self-improve), 添加模式 (add pattern), 记录学习 (capture learning), 文档化解决方案 (document solution)
Autonomously optimize an existing AI skill by running it repeatedly against binary evals, mutating one instruction at a time, and keeping only changes that improve pass rate. Based on Karpathy-style autoresearch, but applied to SKILL.md iteration instead of ML training. Use when optimizing a skill, benchmarking prompt quality, building evals for a skill, or running self-improvement loops on reusable agent instructions. Triggers on: skill-autoresearch, optimize this skill, improve this skill, benchmark this skill, eval my skill, run autoresearch on this skill, self-improve skill.
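At its core, the loop this skill describes is a hill climb over an instruction list. Below is a minimal, self-contained sketch under toy assumptions: `run_eval`, `mutate_instruction`, and the word pool are hypothetical stand-ins for real model runs and graders, not the skill's actual interface.

```python
import random

WORD_POOL = ["verify outputs", "cite sources", "ask before deleting"]

def run_eval(instructions: list[str], case: dict) -> bool:
    # Toy binary eval: pass if any instruction mentions the required phrase.
    # A real harness would run the agent with these instructions and grade it.
    return any(case["must_contain"] in line for line in instructions)

def mutate_instruction(line: str) -> str:
    # Toy mutation: extend a single instruction with one candidate phrase.
    return f"{line}; also {random.choice(WORD_POOL)}"

def pass_rate(instructions: list[str], evals: list[dict]) -> float:
    return sum(run_eval(instructions, case) for case in evals) / len(evals)

def optimize(instructions: list[str], evals: list[dict], rounds: int = 50) -> list[str]:
    best, best_rate = instructions, pass_rate(instructions, evals)
    for _ in range(rounds):
        candidate = best.copy()
        i = random.randrange(len(candidate))   # mutate exactly one instruction
        candidate[i] = mutate_instruction(candidate[i])
        rate = pass_rate(candidate, evals)
        if rate > best_rate:                   # keep only changes that improve pass rate
            best, best_rate = candidate, rate
    return best

skill = ["Summarize the diff", "Flag risky changes"]
evals = [{"must_contain": "verify outputs"}, {"must_contain": "cite sources"}]
print(optimize(skill, evals))
```

The key invariant, per the description, is that rejected mutations are discarded entirely, so pass rate can only move up between rounds.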
Start a repo-local OptimizeSpec self-improvement change. Use when the user wants to create evals, optimize an agent with GEPA, define an agent self-improvement loop, or begin an ASI-first evaluation workflow.
HOWL v2 — Hunt, Optimize, Win, Learn. Nightly self-improvement loop for the WOLF autonomous trading strategy. Runs once per day (via cron) to review all trades from the last 24 hours, compute win rates, analyze signal quality correlation, evaluate DSL tier performance, identify missed opportunities, and produce concrete improvement suggestions for the wolf-strategy skill. v2 adds fee drag ratio (FDR) analysis, holding period bucketing, LONG vs SHORT regime detection, rotation cost tracking, cumulative drift detection, and gross vs net profit factor separation. Use when setting up daily trade review automation, analyzing trading performance, or improving an autonomous trading strategy through data-driven feedback loops. Requires Senpi MCP connection, mcporter CLI, and OpenClaw cron system.
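To make the v2 metrics concrete, here is a minimal sketch of fee drag ratio and gross vs net profit factor, assuming a simple trade-record shape; the field names and the FDR formula (total fees over gross profit) are illustrative assumptions, not WOLF's actual schema or definitions.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    gross_pnl: float  # profit/loss before fees
    fees: float       # total fees paid on the trade

def fee_drag_ratio(trades: list[Trade]) -> float:
    # Assumed definition: fees as a fraction of gross profit on winning trades.
    gross_profit = sum(t.gross_pnl for t in trades if t.gross_pnl > 0)
    total_fees = sum(t.fees for t in trades)
    return total_fees / gross_profit if gross_profit else float("inf")

def profit_factor(trades: list[Trade], net: bool = False) -> float:
    # Gross profit factor ignores fees; net subtracts them first, which is
    # what separates the two numbers the skill tracks.
    pnl = [(t.gross_pnl - t.fees) if net else t.gross_pnl for t in trades]
    wins = sum(p for p in pnl if p > 0)
    losses = -sum(p for p in pnl if p < 0)
    return wins / losses if losses else float("inf")
```

A strategy can show a healthy gross profit factor while the net one sits below 1.0; comparing the two is what surfaces fee drag as the culprit.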
Anthropic's Constitutional AI method for training harmless AI through self-improvement. Two-phase approach: supervised learning with self-critique/revision, then RLAIF (RL from AI Feedback). Use for safety alignment, reducing harmful outputs without human labels. Powers Claude's safety system.
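A minimal sketch of the phase-1 critique/revision loop, assuming a generic `complete(prompt) -> str` model call; the principle wording and prompt format are illustrative, not the published templates.

```python
PRINCIPLES = [
    "Identify ways the response is harmful, unethical, or misleading.",
]

def complete(prompt: str) -> str:
    # Placeholder for any chat-completion call; wire up a real model here.
    raise NotImplementedError

def critique_and_revise(question: str, rounds: int = 1) -> str:
    response = complete(question)
    for principle in PRINCIPLES * rounds:
        critique = complete(
            f"Question: {question}\nResponse: {response}\n"
            f"Critique request: {principle}"
        )
        response = complete(
            f"Question: {question}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response to fully address the critique."
        )
    return response  # revised responses become supervised fine-tuning data
```

Phase 2 (RLAIF) then replaces human preference labels with AI comparisons between candidate responses, trained against the same principles.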
Enables continuous self-improvement through learning from failures, user corrections, and capability gaps. Integrates with QAVR for learned memory ranking.