Writing Skill
Overview
The Writing Skill applies Test-Driven Development (TDD) to process documentation.
Personal skills are stored in agent-specific directories (one directory for Claude Code, another for Codex).
You write test cases (stress scenarios for sub-agents), observe them fail (baseline behavior), write the skill (documentation), observe the tests pass (agent compliance), then refactor (close loopholes).
Core Principle: If you don't see the agent fail without the skill, you don't know if the skill teaches the right thing.
Required Background: Before using this skill, you must understand superpowers:test-driven-development. That skill defines the basic red-green-refactor cycle; this skill adapts TDD for documentation.
Official Guidance: For Anthropic's official skill creation best practices, refer to anthropic-best-practices.md. This document provides additional patterns and guidelines to complement the TDD-centered approach in that skill.
What is a Skill?
A Skill is a reference guide for validated techniques, patterns, or tools. Skills help future Claude instances find and apply effective methods.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
TDD to Skill Creation Mapping
| TDD Concept | Skill Creation |
|---|---|
| Test Case | Stress scenario with sub-agent |
| Production Code | Skill documentation (SKILL.md) |
| Test Failure (Red) | Agent violates rules without the skill (baseline) |
| Test Pass (Green) | Agent complies with the existing skill |
| Refactor | Close loopholes while maintaining compliance |
| Write Test First | Run baseline scenarios before writing the skill |
| Watch It Fail | Record the exact rationalizations the agent uses |
| Minimal Code | Write the skill to address these specific violations |
| Watch It Pass | Verify the agent now complies |
| Refactor Cycle | Look for new rationalizations → block → re-verify |
The entire skill creation process follows red-green-refactor.
When to Create a Skill
Create when:
- The technique isn't intuitively obvious to you
- You would reference it again in a project
- The pattern applies broadly (not project-specific)
- Others would benefit
Don't create for:
- One-off solutions
- Standard practices already well-documented elsewhere
- Project-specific conventions (put in CLAUDE.md)
- Mechanical constraints (automate if you can enforce via regex/validation - save documentation for judgment calls)
Skill Types
Technique
Specific methods and steps (condition-based waiting, root cause tracing)
Pattern
Ways of thinking about problems (flatten with flags, test invariants)
Reference
API docs, syntax guides, tool documentation (office docs)
Directory Structure
```
skills/
  skill-name/
    SKILL.md          # Main reference (required)
    supporting-file.* # Only if needed
```
Flat Namespace - All skills live in a single searchable namespace
Separate Files:
- Heavy Reference (100+ lines) - API docs, comprehensive syntax
- Reusable Tools - Scripts, utilities, templates
Keep Inline:
- Principles and concepts
- Code patterns (<50 lines)
- Everything else
SKILL.md Structure
Front Matter (YAML):
- Only two fields supported: `name` and `description`
- Maximum 1024 characters total
- `name`: Use only letters, numbers, and hyphens (no brackets, special characters)
- `description`: Third person, only describes when to use (not what it does)
- Start with "Use when...", focus on trigger conditions
- Include specific symptoms, situations, and context
- Never summarize the skill's process or workflow (see CSO section for why)
- Keep under 500 characters when possible
```markdown
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
---

# Skill Name

## Overview
What is this? Core principle in 1-2 sentences.

## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use

## Core Pattern (for techniques/patterns)
Before/after code comparison

## Quick Reference
Table or bullets for scanning common operations

## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools

## Common Mistakes
What goes wrong + fixes

## Real-World Impact (optional)
Concrete results
```
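Since the front-matter rules are mechanical, they can be linted before deployment. A minimal sketch — the function name and the interpretation of the 1024-character limit as covering both fields are assumptions, not official tooling:

```python
import re

def lint_front_matter(name: str, description: str) -> list:
    """Return a list of rule violations for a skill's front matter."""
    problems = []
    if not re.fullmatch(r"[A-Za-z0-9-]+", name):
        problems.append("name: use only letters, numbers, and hyphens")
    if not description.startswith("Use when"):
        problems.append('description: start with "Use when..."')
    if len(name) + len(description) > 1024:  # assuming the limit spans both fields
        problems.append("front matter exceeds 1024 characters")
    return problems

# A compliant skill yields no problems.
print(lint_front_matter(
    "Writing-Skills",
    "Use when creating or editing skill documentation",
))  # → []
```

Run it over each SKILL.md's front matter as part of the deployment checklist.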
Claude Search Optimization (CSO)
Key to Discovery: Future Claude instances need to find your skill
1. Rich Description Field
Purpose: Claude reads descriptions to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
Format: Start with "Use when...", focus on trigger conditions
Key: Description = When to use, not what the skill does
The description should only describe trigger conditions. Do NOT summarize the skill's process or workflow in the description.
Why This Matters: Testing shows that when descriptions summarize the workflow, Claude may follow the description instead of reading the full skill content. In one test, even though the skill's flowchart clearly showed two reviews (spec compliance, then code quality), a description of "code review between tasks" led Claude to perform only one review.
When the description was changed to "Use when executing implementation plans with independent tasks in the current session" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.
Pitfalls: Descriptions that summarize workflows create shortcuts Claude will take. The skill body becomes documentation Claude skips.
```yaml
# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks

# ❌ BAD: Too much process detail
description: Use for TDD - write test first, watch it fail, write minimal code, refactor

# ✅ GOOD: Just triggering conditions, no workflow summary
description: Use when executing implementation plans with independent tasks in the current session

# ✅ GOOD: Triggering conditions only
description: Use when implementing any feature or bugfix, before writing implementation code
```
Content:
- Use specific triggers, symptoms, and situations that show the skill applies
- Describe the problem (race conditions, inconsistent behavior) not language-specific symptoms (setTimeout, sleep)
- Keep triggers technology-agnostic unless the skill itself is technology-specific
- If the skill is technology-specific, explicitly state that in the trigger
- Write in third person (for injection into system prompts)
- Never summarize the skill's process or workflow
```yaml
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing

# ❌ BAD: First person
description: I can help you with async tests when they're flaky

# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky

# ✅ GOOD: Starts with "Use when", describes problem, no workflow
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently

# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects
```
2. Keyword Coverage
Use words Claude will search for:
- Error messages: "hook timeout", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: Actual commands, library names, file types
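To see why keyword coverage matters, a toy overlap score is enough — a hypothetical illustration of discovery by shared vocabulary, not Claude's actual selection mechanism:

```python
import re

def keyword_overlap(problem: str, description: str) -> int:
    """Count distinct words shared by a problem statement and a description."""
    words = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    return len(words(problem) & words(description))

problem = "tests are flaky with race conditions"
rich = "Use when tests have race conditions, timing dependencies, or pass/fail inconsistently"
vague = "For async testing"

# The symptom-rich description shares words with the problem; the vague one shares none.
print(keyword_overlap(problem, rich), keyword_overlap(problem, vague))  # → 3 0
```

A description with zero overlap against the symptoms an agent actually types is effectively invisible.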
3. Descriptive Naming
Use active voice, verb first:
4. Token Efficiency (Critical)
Problem: Onboarding and frequently referenced skills are included in every conversation. Every token counts.
Target Word Counts:
- Onboarding workflows: <150 words each
- Frequently used skills: <200 total words
- Other skills: <500 words (still be concise)
Techniques:
Move Details to Tool Help:
```bash
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N

# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
```
Use Cross-References:
```markdown
# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]

# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.
```
Compress Examples:
```markdown
# ❌ BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]

# ✅ GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]
```
Eliminate Redundancy:
- Don't repeat content from cross-referenced skills
- Don't explain what's obvious from the command
- Don't include multiple examples of the same pattern
Verification:
```bash
wc -w skills/path/SKILL.md
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
```
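The same verification can be expressed as a budget check — a sketch whose thresholds mirror the targets above; the function name and `kind` labels are made up for illustration:

```python
def over_budget(text: str, kind: str = "frequent") -> bool:
    """Check a skill's word count against the targets above."""
    budgets = {"onboarding": 150, "frequent": 200, "other": 500}
    return len(text.split()) > budgets[kind]

print(over_budget("word " * 250))           # → True  (blows the 200-word budget)
print(over_budget("word " * 250, "other"))  # → False (fits the 500-word budget)
```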
Name by what you do or by the core insight (e.g. data-structure-refactoring).
Gerunds (-ing) work for processes: active names that describe the action you're taking.
5. Cross-Reference Other Skills
When writing docs that reference other skills:
Only use the skill name with explicit requirement markers:
- ✅ GOOD: **REQUIRED SUB-SKILL:** Use superpowers:test-driven-development
- ✅ GOOD: **REQUIRED BACKGROUND:** You MUST understand superpowers:systematic-debugging
- ❌ BAD: See skills/testing/test-driven-development (unclear if required)
- ❌ BAD: @skills/testing/test-driven-development/SKILL.md (heavy cognitive load, burns context)
Why no @ links: The `@` syntax forces immediate loading of the document, consuming 200k+ context before it's needed.
Flowchart Usage
```dot
digraph when_flowchart {
    "Need to show information?" [shape=diamond];
    "Decision where I might go wrong?" [shape=diamond];
    "Use markdown" [shape=box];
    "Small inline flowchart" [shape=box];
    "Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
    "Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
    "Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
Flowcharts are only for:
- Non-obvious decision points
- Process loops where you might stop early
- "When to use A vs B" decisions
Never use flowcharts for:
- Reference material → tables, lists
- Code examples → Markdown blocks
- Linear instructions → numbered lists
- Non-semantic labels (step1, helper2)
For Graphviz style rules, see @graphviz-conventions.dot.
Visualize for your human partner: Use `render-graphs.js` in this directory to render skill flowcharts as SVG:

```bash
./render-graphs.js ../some-skill           # Each diagram separately
./render-graphs.js ../some-skill --combine # All diagrams in one SVG
```
Code Examples
One excellent example beats many mediocre examples
Choose the most relevant language:
- Testing techniques → TypeScript/JavaScript
- System debugging → Shell/Python
- Data processing → Python
Good Examples:
- Complete and runnable
- Well-commented, explains why
- From real scenarios
- Clearly shows the pattern
- Ready to adapt (not generic templates)
Don't:
- Implement in 5+ languages
- Create fill-in-the-blank templates
- Write contrived examples
You're good at porting - one great example is enough.
Document Organization
Self-Contained Skill
```
defense-in-depth/
  SKILL.md   # Everything inline
```
When: Everything fits, no heavy reference needed
Skill with Reusable Tools
```
condition-based-waiting/
  SKILL.md     # Overview + patterns
  example.ts   # Working helpers to adapt
```
When: Tools are reusable code, not just narrative
Skill with Heavy Reference
```
pptx/
  SKILL.md      # Overview + workflows
  pptxgenjs.md  # 600-line API reference
  ooxml.md      # 500-line XML structure
  scripts/      # Executable tools
```
When: Reference material is too large for inline
Iron Rule (Same as TDD)
NO SKILL WITHOUT A FAILING TEST FIRST
This applies to new skills and edits to existing skills.
Wrote the skill before testing? Delete it. Start over.
Edited a skill without testing? Same violation.
No exceptions:
- Not for "simple additions"
- Not for "just adding a section"
- Not for "documentation updates"
- Don't keep untested changes as "reference"
- Don't "adapt" while running tests
- Delete means delete
Required Background: The superpowers:test-driven-development skill explains why this matters. The same principles apply to documentation.
Testing All Skill Types
Different skill types require different testing approaches:
Discipline Enforcement Skills (Rules/Requirements)
Examples: TDD, validate before finish, design before coding
Test with:
- Academic questions: Do they understand the rules?
- Stress scenarios: Do they comply under pressure?
- Stacked stressors: time + sunk cost + fatigue
- Identify rationalizations and add explicit counters
Success Criteria: Agent follows rules under maximum pressure
Technique Skills (How-To Guides)
Examples: Condition-based waiting, root cause tracing, defensive programming
Test with:
- Application scenarios: Can they apply the technique correctly?
- Variation scenarios: Do they handle edge cases?
- Missing information tests: Do they indicate gaps?
Success Criteria: Agent successfully applies the technique to new scenarios
Pattern Skills (Mental Models)
Examples: Reduce complexity, information hiding concepts
Test with:
- Recognition scenarios: Can they identify when the pattern applies?
- Application scenarios: Can they use the mental model?
- Counterexamples: Do they know when not to apply it?
Success Criteria: Agent correctly identifies when/how to apply the pattern
Reference Skills (Documentation/API)
Examples: API docs, command references, library guides
Test with:
- Retrieval scenarios: Can they find the correct information?
- Application scenarios: Can they correctly use what they found?
- Gap testing: Are common use cases covered?
Success Criteria: Agent finds and correctly applies reference information
Common Excuses for Skipping Tests
| Excuse | Reality |
|---|---|
| "The skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have problems. Always. A 15-minute test saves hours. |
| "I'll test if there's a problem" | Problems = agents can't use the skill. Test before deployment. |
| "Testing is too tedious" | Testing is simpler than debugging bad skills in production. |
| "I'm confident it's good" | Overconfidence causes problems. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skills wastes more time fixing them later. |
All of this means: Test before deployment. No exceptions.
Bulletproofing Skills Against Rationalization
Discipline-enforcing skills like TDD need to resist rationalization. Agents are smart and will find loopholes under pressure.
Psychology Note: Understanding why persuasion techniques work helps apply them systematically. For research on authority, commitment, scarcity, social proof, and unity principles (Cialdini, 2021; Meincke et al., 2025), see persuasion-principles.md.
Explicitly Block Every Loophole
Don't just state the rule - prohibit specific workarounds:
<bad>
```markdown
Write code before test? Delete it.
```
</bad>
<good>
```markdown
Write code before test? Delete it. Start over.
No exceptions:
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</good>
### Address "Spirit vs Letter" Arguments
Add a core principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```

This cuts off an entire class of "I followed the spirit" rationalizations.
Build a Rationalization Table
Capture rationalizations from baseline tests (see Testing section below). Every excuse the agent makes goes in the table:
```markdown
| Rationalization | Counter |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
```
Build a Red Flags List
Give agents an easy self-check for rationalization:
```markdown
## Red Flags - STOP and Start Over
- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."

**All of these mean: Delete code. Start over with TDD.**
```
Update CSO with Violation Symptoms
Add to description: Symptoms that you're about to violate the rule:
```yaml
description: Use when implementing any feature or bugfix, before writing implementation code
```
Skill Red-Green-Refactor
Follow the TDD cycle:
Red: Write Failing Test (Baseline)
Run stress scenarios with sub-agents without the skill. Record exact behavior:
- What choices did they make?
- What rationalizations did they use (verbatim)?
- Which pressures triggered violations?
This is "watch the test fail" - you must understand what the agent would naturally do before writing the skill.
Green: Write Minimal Skill
Write the skill to address these specific rationalizations. Don't add extra content for hypothetical cases.
Run the same scenarios with the skill. The agent should now comply.
Refactor: Close Loopholes
Agent found new rationalizations? Add explicit counters. Retest until bulletproof.
Testing Methods: See @testing-skills-with-subagents.md for complete testing methodology:
- How to write stress scenarios
- Types of pressure (time, sunk cost, authority, fatigue)
- Systematically closing loopholes
- Meta-testing techniques
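Baseline runs are easier to turn into a rationalization table if you record them in a structured form. A sketch of one possible record shape — the field names are hypothetical, not prescribed by the testing methodology doc:

```python
from dataclasses import dataclass, field

@dataclass
class BaselineRun:
    """One stress scenario run without the skill (the red phase)."""
    scenario: str
    pressures: list                # e.g. ["time", "sunk cost"]
    violated: bool
    rationalizations: list = field(default_factory=list)  # verbatim quotes

def rationalization_rows(runs):
    """Markdown rows for the rationalization table; counters filled in by hand."""
    quotes = sorted({q for r in runs if r.violated for q in r.rationalizations})
    return [f'| "{q}" | TODO: counter |' for q in quotes]

runs = [BaselineRun("feature under deadline", ["time"], True, ["Too simple to test"])]
print(rationalization_rows(runs))  # → ['| "Too simple to test" | TODO: counter |']
```

Capturing quotes verbatim matters: the counters you write in the green phase must answer the agent's actual words, not a paraphrase.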
Anti-Patterns
❌ Narrative Examples
"In the October 3, 2025 session, we discovered empty projectDir caused..."
Why bad: Too specific, not reusable
❌ Multi-Language Dilution
example-js.js, example-py.py, example-go.go
Why bad: Mediocre quality, high maintenance burden
❌ Code in Flowcharts
```dot
step1 [label="import fs"];
step2 [label="read file"];
```
Why bad: Can't copy-paste, hard to read
❌ Generic Labels
Helper 1, Helper 2, Step 3, Pattern 4
Why bad: Labels should be semantically meaningful
STOP: Before Moving to Next Skill
After writing any skill, you must stop and complete the deployment process.
Don't:
- Batch-create multiple skills without testing each one
- Move to the next skill before validating the current one
- Skip testing because "batching is more efficient"
The deployment checklist below is mandatory for every skill.
Deploying untested skills = deploying untested code. It's a violation of quality standards.
Skill Creation Checklist (TDD Adapted)
Important: Use TodoWrite to create todos for each checklist item below.
Red Phase - Write Failing Test:
Green Phase - Write Minimal Skill:
Refactor Phase - Close Loopholes:
Quality Check:
Deployment:
Discovery Workflow
How future Claude instances find your skill:
- Encounter problem ("tests are flaky")
- Find skill (description matches)
- Scan overview (is this relevant?)
- Read pattern (quick reference table)
- Load example (only when implementing)
Optimize for this workflow - place searchable terms early and often.
Bottom Line
Skill creation is TDD for process documentation.
Same iron rule: No skill without a failing test first.
Same cycle: Red (baseline) → Green (write skill) → Refactor (close loopholes).
Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow TDD for skills. It's the same rules applied to documentation.