Found 17 Skills
Intelligent Douyin viral-copy generator. Automatically triggered when the user says "generate new copy", "create Douyin content", or "write short video copy". It then automatically executes: (1) reads and analyzes historical data; (2) optimizes with 9 viral factors; (3) runs multi-dimensional scoring; (4) iterates automatically until the copy meets the 5-star standard; (5) estimates play (view) volume; (6) outputs the fully optimized copy. Fully automated; no need to manually specify any analysis or optimization tasks.
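The six-step flow above can be sketched as a simple pipeline. Everything below is a hypothetical stub: the function names, the scoring rule, and the play-volume formula are assumptions for illustration, not the skill's actual internals.

```python
def analyze_history(history):
    # (1) Read and analyze historical data: here, just join past copy.
    return " ".join(history) if history else "draft"

def apply_viral_factors(draft):
    # (2) Optimize with 9 viral factors (stubbed as a marker suffix).
    return draft + " [optimized]"

def score_copy(draft):
    # (3) Multi-dimensional scoring, stubbed: each optimization pass
    # pushes the score toward the 5-star ceiling.
    return min(5, draft.count("[optimized]") + 2)

def estimate_plays(score):
    # (5) Play-volume estimate, stubbed as a function of the score.
    return score * 10_000

def generate_copy(history, max_iters=5):
    draft = apply_viral_factors(analyze_history(history))
    score = score_copy(draft)
    # (4) Iterate automatically until the 5-star standard is met.
    while score < 5 and max_iters > 0:
        draft = apply_viral_factors(draft)
        score = score_copy(draft)
        max_iters -= 1
    # (6) Output the optimized copy with its score and estimate.
    return {"copy": draft, "score": score, "plays": estimate_plays(score)}
```

The point of the sketch is the control flow: scoring gates the loop, and the play estimate is derived only after the copy clears the 5-star bar.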
Autonomous multi-round research review loop. Repeatedly reviews via Codex MCP, implements fixes, and re-reviews until positive assessment or max rounds reached. Use when user says "auto review loop", "review until it passes", or wants autonomous iterative improvement.
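The review-fix-re-review loop above reduces to a small bounded iteration. In this sketch, `review` and `apply_fixes` are hypothetical callables standing in for the Codex MCP review and the fix-implementation step; they are not a real API.

```python
def review_loop(code, review, apply_fixes, max_rounds=3):
    """Review, fix, and re-review until a positive assessment
    or until max_rounds is exhausted."""
    for round_no in range(1, max_rounds + 1):
        verdict, issues = review(code)
        if verdict == "positive":
            return code, round_no      # stop early on a positive review
        code = apply_fixes(code, issues)
    return code, max_rounds            # give up at the round cap
```

A usage example with toy callables: a reviewer that turns positive after two fixes terminates the loop on round three.

```python
def review(code):
    return ("positive", []) if code.count("fix") >= 2 else ("negative", ["bug"])

def apply_fixes(code, issues):
    return code + " fix"

final, rounds = review_loop("draft", review, apply_fixes)
```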
Generate or improve a company-specific data analysis skill by extracting tribal knowledge from analysts. BOOTSTRAP MODE (triggers: "Create a data context skill", "Set up data analysis for our warehouse", "Help me create a skill for our database", "Generate a data skill for [company]"): discovers schemas, asks key questions, and generates an initial skill with reference files. ITERATION MODE (triggers: "Add context about [domain]", "The skill needs more info about [topic]", "Update the data skill with [metrics/tables/terminology]", "Improve the [domain] reference"): loads the existing skill, asks targeted questions, and appends to or updates reference files. Use when data analysts want Claude to understand their company's specific data warehouse, terminology, metric definitions, and common query patterns.
Trigger: invoke when you have proposed a solution, hypothesis, or judgment that needs to be verified through practice, iterated via trial and error, or used to upgrade cognition through review. Common signals include experiment, prototype, validate, iterate, and feedback loop. Use this skill whenever an idea, hypothesis, or plan must be tested in practice and improved through iteration, moving from action to understanding and back to action in a spiral learning loop.
Apply action research through Plan-Act-Observe-Reflect cycles and Participatory Action Research (PAR) to generate knowledge while improving practice. Use this skill when the user needs to design practitioner research that integrates inquiry with intervention, facilitate participatory research with stakeholders, structure iterative improvement cycles, or when they ask 'how do I research my own practice', 'how do I involve participants as co-researchers', or 'how do I combine research with practical change'.
A method for iteratively improving text instructions for agents (skills / slash commands / task prompts / CLAUDE.md sections / code generation prompts) by having unbiased executors run them, then evaluating from both perspectives (executor self-report + instruction-side metrics). Repeat until improvement plateaus. Use immediately after creating or significantly revising a prompt or skill, or when you suspect the reason an agent isn't behaving as expected is due to ambiguity in the instructions.
Review and improve documentation with parallel evaluation and an iterative improvement loop.
Autonomously optimize code for performance using CodSpeed benchmarks, flamegraph analysis, and iterative improvement. Use this skill whenever the user wants to make code faster, reduce CPU usage, optimize memory, improve throughput, find performance bottlenecks, or asks to 'optimize', 'speed up', 'make faster', 'reduce latency', 'improve performance', or points at a CodSpeed benchmark result wanting improvements. Also trigger when the user mentions a slow function, a regression, or wants to understand where time is spent in their code.
[BETA] Start the dev server, open the feature in a browser, and iterate on improvements together.
Reflects on the previous response and output using a self-refinement framework for iterative improvement, with complexity triage and verification.
Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.
Asks for user feedback after each task or cron job completion and runs a recursive learning flow. If output is good, asks what was good until 10 approvals; if needs improvement, asks why/how/what via multiple choice plus optional examples, uses web search and iterative thinking to resolve, and caps iterations by severity (slight 5, medium 10, severe 20). Keeps feedback non-intrusive. Use when completing discrete tasks or cron jobs for the user.
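The severity-based iteration caps above (slight 5, medium 10, severe 20) can be sketched as a bounded resolution loop. The cap mapping is taken directly from the description; the surrounding flow and the `attempt_fix` callable are hypothetical simplifications.

```python
# Iteration caps by severity, as stated in the skill description.
SEVERITY_CAPS = {"slight": 5, "medium": 10, "severe": 20}

def run_improvement_loop(severity, attempt_fix):
    """Try to resolve an issue, bounded by a severity-specific cap.

    attempt_fix(attempt) stands in for one round of web search and
    iterative thinking; it returns True once the issue is resolved.
    """
    cap = SEVERITY_CAPS[severity]
    for attempt in range(1, cap + 1):
        if attempt_fix(attempt):
            return attempt     # resolved within the cap
    return cap                 # stop at the severity-based cap
```

For example, a "slight" issue resolved on the third attempt returns 3, while an unresolvable "medium" issue stops at 10 rather than looping forever.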