review-docs


Review documentation


This skill runs an evaluation and improvement loop on a documentation file.
Target: $ARGUMENTS
Relevant skills:
write-docs

Workflow overview


┌──────────────────────────────────────────────────────────────┐
│  INITIALIZE: Create state file to track issues               │
└──────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│  EVALUATE (parallel)                                         │
│  ┌─────────────────────┐    ┌─────────────────────────────┐  │
│  │ Style Agent         │    │ Content Agent               │  │
│  │ (readability+voice) │    │ (completeness+accuracy)     │  │
│  └─────────────────────┘    └─────────────────────────────┘  │
└──────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│  UPDATE STATE: Add new issues, verify fixed issues           │
└──────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────┐
│  SUMMARIZE: Present findings, ask user for next step         │
└──────────────────────────────────────────────────────────────┘
           ┌──────────────────┼──────────────────┐
           ↓                  ↓                  ↓
    [User: improve]   [User: complete]    [User: done]
           ↓                  ↓                  ↓
┌──────────────────┐  ┌──────────────────┐    EXIT
│  IMPROVE         │  │  COMPLETE        │
│  (fix issues)    │  │  (fix all, exit) │
└──────────────────┘  └──────────────────┘
           ↓                  ↓
  LOOP → EVALUATE          EXIT

State file


Create a state file in the scratchpad directory to track all issues across rounds. This prevents re-discovering the same issues and allows verification of fixes.
Path: `<scratchpad>/review-<filename>.md`

Format:

```markdown
# Review tracker: [filename]

## Issue tracker

Status values: pending | fixed | verified-fixed | not-fixed | wont-fix

| ID | Issue | Type | Status | Round | Notes |
|----|-------|------|--------|-------|-------|
| 1 | [description] | Style/Accuracy/Completeness | pending | 1 | [details] |
| 2 | [description] | Accuracy | verified-fixed | 1 | Fixed in round 1 |
| 3 | [description] | Completeness | wont-fix | 2 | Out of scope |

## Round history

### Round 1

- Style: X/10, Voice: X/10, Completeness: X/10, Accuracy: X/10
- Total: X/40
```

**Status definitions**:

- `pending`: Issue discovered, not yet addressed
- `fixed`: Improvement agent claims to have fixed it, needs verification
- `verified-fixed`: Evaluation confirmed the fix was applied correctly
- `not-fixed`: Evaluation found the fix wasn't applied correctly
- `wont-fix`: False alarm, out of scope, or intentional (e.g., completeness issues that require documentation expansion)

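The improvement prompt in Step 4 pastes pending rows from this table. As an illustration only, a minimal sketch of extracting them with standard-library Python (the `parse_pending` helper and the exact column layout are assumptions based on the format above, not part of the skill):

```python
def parse_pending(state_md: str) -> list[dict]:
    """Extract rows with status 'pending' from the issue tracker table.

    Assumes the six-column layout shown above:
    | ID | Issue | Type | Status | Round | Notes |
    """
    rows = []
    for line in state_md.splitlines():
        if not line.strip().startswith("|"):
            continue
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 6 or cells[0] == "ID":
            continue
        if set(cells[0]) <= set("-: "):  # skip the separator row
            continue
        id_, issue, type_, status, round_, notes = cells
        if status == "pending":
            rows.append({"id": id_, "issue": issue, "type": type_, "notes": notes})
    return rows
```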

Step 1: Initial evaluation


For the first round, launch two subagents in parallel using the Task tool:

```
// Single message with two Task tool calls:
Task(subagent_type="general-purpose", model="opus", prompt="Style evaluation...")
Task(subagent_type="general-purpose", model="opus", prompt="Content evaluation...")
```

Style agent prompt (round 1)


```
Evaluate documentation style for: $ARGUMENTS

Read these files:
1. .claude/skills/shared/writing-guide.md
2. .claude/skills/shared/docs-guide.md
3. $ARGUMENTS

Score these dimensions (0-10):

READABILITY - How clear and easy to understand is the writing?
- Clear, direct sentences
- Logical flow between sections
- Appropriate use of code snippets and links
- No unnecessary jargon

VOICE - How well does it follow the writing guide?
- Confident assertions (no hedging)
- Active voice, present tense
- No AI writing tells (hollow importance, trailing gerunds, formulaic transitions)
- Appropriate tone (expert-to-developer)
- Sentence case headings

Important! Include as many high-priority fixes as needed.

Return in this exact format:

STYLE REPORT: [filename]

READABILITY: [score]/10
- [specific issue or strength]
- [specific issue or strength]

VOICE: [score]/10
- [specific issue or strength]
- [specific issue or strength]

PRIORITY FIXES:
1. [Most important style issue]
2. [Second most important]
3. [Third most important]
4. ...
```

Content agent prompt (round 1)


```
Evaluate documentation content for: $ARGUMENTS

Read $ARGUMENTS, then verify claims against the source code in packages/editor/ and packages/tldraw/.

Score these dimensions (0-10):

COMPLETENESS - How thorough is the coverage?
- Overview establishes purpose before mechanism
- Key concepts explained with enough depth
- Illustrative code snippets where needed
- Links to relevant examples in apps/examples (if applicable)

ACCURACY - Is the technical content correct?
- Code snippets are syntactically correct and use valid APIs
- API references match actual implementation
- Described behavior matches the code
- No outdated information

For accuracy issues, include file:line references to the source code.

Important! Include as many high-priority fixes as needed. Make sure that all accuracy issues are flagged.

Return in this exact format:

CONTENT REPORT: [filename]

COMPLETENESS: [score]/10
- [specific issue or strength]
- [specific issue or strength]

ACCURACY: [score]/10
- [specific issue with file:line reference if inaccurate]
- [specific issue or strength]

PRIORITY FIXES:
1. [Most important content issue]
2. [Second most important]
3. [Third most important]
4. ...
```
After round 1, create the state file with all discovered issues.

Step 2: Summarize and prompt user


After both agents return, synthesize their reports into a summary:

```markdown
# Evaluation: [filename]

| Dimension | Score | Key issue |
|--------------|-------|-------------|
| Readability | X/10 | [one-liner] |
| Voice | X/10 | [one-liner] |
| Completeness | X/10 | [one-liner] |
| Accuracy | X/10 | [one-liner] |
| Total | X/40 | |

## Priority fixes

1. [Combined priority 1 from both reports]
2. [Combined priority 2]
3. [Combined priority 3]
4. [Combined priority 4]
5. [Combined priority 5]
6. ...
```

Then ask the user using AskUserQuestion:

- **Improve**: Make improvements based on findings, then re-evaluate
- **Complete and finish**: Fix all remaining issues and exit (no re-evaluation)
- **Done**: Exit the loop without making changes

Step 3: Triage (before improvement)


Before running the improvement agent, review the pending issues with the user. Mark completeness issues that require adding new sections as `wont-fix` - these are documentation expansion, not review fixes.

Per CLAUDE.md guidance: "Do what has been asked; nothing more, nothing less." "Don't add features, refactor code, or make 'improvements' beyond what was asked."

The review skill improves existing content. Adding new sections is a separate task.

Step 4: Improve

步骤4:优化

Launch a single improvement agent targeting only pending issues:

```
Task(subagent_type="general-purpose", model="opus", prompt="Improve documentation...")
```

Improvement agent prompt


```
Improve documentation based on specific tracked issues: $ARGUMENTS

Fix ONLY these pending issues:

| ID | Issue | Type | Notes |
|----|-------|------|-------|
[paste pending issues from state file]

Instructions:
1. Read .claude/skills/shared/writing-guide.md
2. Read .claude/skills/shared/docs-guide.md
3. Read $ARGUMENTS

4. For each accuracy fix:
   - Read the source file referenced in the notes
   - Verify the correct API/behavior from the source
   - Apply the fix based on what the source code actually shows

5. Apply style fixes

6. Run prettier: yarn prettier --write $ARGUMENTS

DO NOT:
- Add new sections
- Expand the document
- Fix issues not in the list above

Return a summary:

CHANGES MADE:

| ID | Fix applied | Verification |
|----|-------------|--------------|
| X | [description] | [source file:line checked] |
| Y | [description] | n/a |
```

After improvement, update the state file to mark issues as `fixed`.
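The status update itself is a simple table edit. A hedged sketch, assuming the six-column tracker layout from the state file section (the `mark_fixed` helper is hypothetical, for illustration only):

```python
def mark_fixed(state_md: str, fixed_ids: set[str]) -> str:
    """Rewrite the Status cell to 'fixed' for the given issue IDs.

    Assumes the tracker table: | ID | Issue | Type | Status | Round | Notes |
    """
    out = []
    for line in state_md.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if line.strip().startswith("|") and len(cells) == 6 and cells[0] in fixed_ids:
            cells[3] = "fixed"  # Status is the fourth column
            line = "| " + " | ".join(cells) + " |"
        out.append(line)
    return "\n".join(out)
```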

Step 4b: Complete and finish (alternative to Step 4)


If the user selects "Complete and finish", fix all remaining pending issues without re-evaluating. This is useful when the evaluation is satisfactory and the user wants to apply fixes and move on.

Workflow:

1. Run triage (same as Step 3) to mark out-of-scope items as `wont-fix`
2. Launch the improvement agent (same prompt as Step 4)
3. Update state file to mark issues as `fixed`
4. Exit the loop - do not re-evaluate

This path trusts the improvement agent to apply fixes correctly and skips the verification cycle. Use when:

- The issues are straightforward style fixes
- Time is limited and re-evaluation isn't worth the cost
- Scores are already acceptable and only minor polish remains

Step 5: Verification evaluation


For subsequent rounds, evaluation agents verify fixes AND find new issues:

Style agent prompt (verification)


```
Verify fixes and evaluate documentation: $ARGUMENTS

Read the state file first: [path to state file]

Then read:
1. .claude/skills/shared/writing-guide.md
2. .claude/skills/shared/docs-guide.md
3. $ARGUMENTS

Your job:
1. VERIFY fixes marked as "fixed" in the state file - confirm they were actually applied
2. Score style dimensions (do NOT re-flag wont-fix issues)
3. Flag only NEW issues not already in the state file

VERIFY THESE FIXES:
[paste fixed style issues from state file]

Return in this format:

VERIFICATION REPORT:

| ID | Status | Notes |
|----|--------|-------|
| X | verified-fixed / not-fixed | [what you found] |

STYLE SCORES:
READABILITY: [score]/10
VOICE: [score]/10

NEW ISSUES (not already in state file):
- [issue] or "None found"
```

Content agent prompt (verification)


```
Verify fixes and evaluate documentation content: $ARGUMENTS

Read the state file first: [path to state file]

Then read $ARGUMENTS and verify claims against source code in packages/tldraw/.

Your job:
1. VERIFY accuracy fixes marked as "fixed" in the state file
2. Score content dimensions (do NOT re-flag wont-fix issues)
3. Flag only NEW accuracy issues not already in the state file

VERIFY THESE FIXES:
[paste fixed accuracy issues from state file]

Return in this format:

VERIFICATION REPORT:

| ID | Status | Notes |
|----|--------|-------|
| X | verified-fixed / not-fixed | [what you found in doc AND source] |

CONTENT SCORES:
COMPLETENESS: [score]/10 (score existing content only, ignore wont-fix items)
ACCURACY: [score]/10

NEW ACCURACY ISSUES (not already in state file):
- [issue with source file:line] or "None found"
```
After verification, update the state file with new statuses and any new issues.

Step 6: Loop


Continue the loop until:

- User chooses "Done" (exit without changes)
- User chooses "Complete and finish" (apply fixes, then exit)
- Scores reach acceptable levels (32/40 or higher)
- All issues are `verified-fixed` or `wont-fix`
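The exit conditions above can be sketched as a small predicate. The function name and the choice strings are illustrative assumptions; the 32/40 threshold and the terminal statuses come from this section:

```python
DONE_STATUSES = {"verified-fixed", "wont-fix"}

def should_exit(user_choice: str, total_score: int, statuses: list[str]) -> bool:
    """Return True when the review loop should stop.

    total_score is the sum of the four 0-10 dimension scores (max 40).
    statuses holds the current status of every tracked issue.
    """
    if user_choice in ("done", "complete"):  # "Done" or "Complete and finish"
        return True
    if total_score >= 32:  # acceptable score threshold
        return True
    return all(s in DONE_STATUSES for s in statuses)
```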

Notes


- The state file prevents re-discovering the same issues across rounds
- Evaluation agents verify previous fixes before scoring
- `wont-fix` is appropriate for completeness issues requiring new sections
- Accuracy verification is critical: the improvement agent must read actual source code before applying any accuracy fix
- Style and content evaluations always run in parallel for efficiency