do-parallel


Parallel Multi-Perspective Analysis

并行多视角分析

$ARGUMENTS - Target agent/skill name + source material file path

$ARGUMENTS - 目标Agent/Skill名称 + 源材料文件路径

Operator Context

操作者上下文

This skill operates as an operator for intensive multi-perspective analysis, configuring Claude's behavior for true parallel independence across 10 analytical agents. It implements the Fan-Out / Fan-In architectural pattern -- dispatch independent agents in parallel, collect results, synthesize into unified recommendations -- with Domain Intelligence embedded in each perspective's focus constraints.
本Skill作为密集型多视角分析的操作者,配置Claude的行为以实现10个分析Agent的真正并行独立性。它采用Fan-Out / Fan-In架构模式——并行调度独立Agent、收集结果、整合成统一建议——每个视角的聚焦约束中都嵌入了领域智能。

Hardcoded Behaviors (Always Apply)

硬编码行为(始终生效)

  • CLAUDE.md Compliance: Read and follow repository CLAUDE.md before execution
  • Over-Engineering Prevention: Apply only Priority 1 and Priority 2 synthesized rules. Do not invent improvements beyond what the source material supports. No speculative enhancements.
  • True Parallel Independence: All 10 Task invocations MUST be in a single message. Each agent receives ONLY its assigned perspective with zero cross-contamination.
  • Artifact Persistence: Save synthesis document and completion report to files. Context is ephemeral; artifacts persist.
  • Token Budget Awareness: Estimate total cost before execution. Warn if source material exceeds 5,000 words (drives cost above 50,000 tokens).
  • Validate Inputs First: Verify target agent/skill exists and source material is readable before spawning any agents.
  • No Behavior Changes: Synthesized rules ADD depth. They NEVER remove or significantly alter existing working patterns in the target.
  • CLAUDE.md合规性:执行前阅读并遵循仓库中的CLAUDE.md
  • 过度设计预防:仅应用整合后的1级和2级优先级规则。不得在源材料支持范围外凭空创造优化方案,禁止推测性增强。
  • 真正并行独立性:所有10个任务调用必须在单条消息中完成。每个Agent仅接收其分配的视角,完全避免交叉干扰。
  • 工件持久化:将整合文档和完成报告保存到文件中。上下文是临时的,工件需永久留存。
  • Token预算感知:执行前估算总成本。如果源材料超过5000字(会导致Token成本超过50000),需发出警告。
  • 先验证输入:在生成任何Agent之前,验证目标Agent/Skill是否存在,源材料是否可读。
  • 不修改现有行为:整合后的规则仅用于增加深度,绝不删除或大幅修改目标中已有的有效模式。

Default Behaviors (ON unless disabled)

默认行为(除非禁用否则生效)

  • 10 Perspectives: Use all 10 analytical frameworks (see references/perspective-prompts.md)
  • Priority-Based Application: Apply Must-Have rules first, then Should-Have. Skip Nice-to-Have unless user requests.
  • Synthesis Before Application: Collect all 10 reports and synthesize before making any changes to the target.
  • Completion Report: Generate detailed report showing impact, changes, and perspective contributions.
  • Graceful Degradation: If agents time out, proceed with available results (3+ of 10 sufficient).
  • Git Commit: Commit improvements with descriptive message after application.
  • 10个视角:使用全部10种分析框架(详见 references/perspective-prompts.md)
  • 基于优先级的应用:先应用必备规则,再应用推荐规则。除非用户要求,否则跳过可选规则。
  • 先整合再应用:收集所有10份报告并完成整合后,再对目标进行任何修改。
  • 完成报告:生成详细报告,展示影响、变更内容以及各视角的贡献。
  • 优雅降级:如果Agent超时,使用已有结果继续执行(10个中完成3个及以上即可)。
  • Git提交:应用优化后,使用描述性消息提交变更。

Optional Behaviors (OFF unless enabled)

可选行为(除非启用否则禁用)

  • Reduced Perspectives: Use 5 perspectives instead of 10 to halve token cost
  • Dry Run Mode: Generate synthesis without applying changes to target
  • Compare Mode: Analyze two sources and extract differences
  • 减少视角数量:使用5个而非10个视角,将Token成本减半
  • 试运行模式:生成整合结果但不应用到目标
  • 对比模式:分析两个源材料并提取差异

What This Skill CAN Do

本Skill能实现的功能

  • Extract comprehensive insights from complex source material through 10 independent lenses
  • Synthesize cross-perspective patterns into prioritized improvement recommendations
  • Apply synthesized rules to enhance an existing agent or skill
  • Produce detailed reports showing which perspectives contributed to each improvement
  • Detect patterns that single-threaded analysis misses due to cognitive anchoring
  • 通过10个独立视角从复杂源材料中提取全面洞察
  • 将跨视角模式整合成优先级明确的优化建议
  • 应用整合后的规则来增强现有Agent或Skill
  • 生成详细报告,展示各视角对每项优化的贡献
  • 发现单线程分析因认知锚定而遗漏的模式

What This Skill CANNOT Do

本Skill不能实现的功能

  • Replace inline analysis for simple or straightforward material (use /do-perspectives for single-target improvements)
  • Operate without sufficient token budget (requires 25,000-63,000 tokens)
  • Guarantee all 10 agents complete (network/timeout issues may reduce count)
  • Generate value from poor source material (marketing fluff, auto-generated docs)
  • Skip the synthesis phase and apply raw per-perspective rules directly

  • 替代单线程分析处理简单或直白的材料(单目标优化请使用 /do-perspectives)
  • 在Token预算不足的情况下运行(需要25000-63000个Token)
  • 保证10个Agent全部完成(网络/超时问题可能减少完成数量)
  • 从质量低下的源材料中产生价值(如营销话术、自动生成的文档)
  • 跳过整合阶段直接应用各视角的原始规则

Instructions

操作步骤

Phase 1: VALIDATE INPUTS

阶段1:验证输入

Goal: Confirm target exists and source material is suitable before spending tokens.
Step 1: Parse arguments
  • Extract target agent/skill name (first argument)
  • Extract source material path (second argument)
  • If either argument is missing, report usage:
    /do-parallel <target-name> <source-path>
Step 2: Validate target

```bash
# For agents
ls agents/{target_name}.md

# For skills
ls skills/{target_name}/SKILL.md
```

- Determine if target is an agent or skill based on which path exists
- Read target file to understand current state, capture line count
- If target does not exist in either location, stop and report error
目标:在消耗Token前,确认目标存在且源材料合适。
步骤1:解析参数
  • 提取目标Agent/Skill名称(第一个参数)
  • 提取源材料路径(第二个参数)
  • 如果任一参数缺失,报告使用方法:
    /do-parallel <target-name> <source-path>
步骤2:验证目标

```bash
# 验证Agent
ls agents/{target_name}.md

# 验证Skill
ls skills/{target_name}/SKILL.md
```

- 根据路径存在情况判断目标是Agent还是Skill
- 读取目标文件以了解当前状态,记录行数
- 如果目标在任一位置都不存在,停止并报告错误

**Step 3: Validate source material**

**步骤3:验证源材料**

Source Material Assessment

源材料评估

File: [path]
Word count: [N] words
Estimated token cost: [N x 10 x 3] = [total] tokens
Quality indicators:
  • Contains concrete examples (not just abstract claims)
  • Has systematic structure (sections, progression)
  • Demonstrates expertise (technical depth, nuanced explanations)
  • Sufficient length (500+ words minimum)
Assessment: SUITABLE / UNSUITABLE

- Read source file, confirm it is non-empty
- Estimate word count. If over 5,000 words, warn about elevated token cost.
- If material fails 2+ quality indicators, recommend inline analysis instead and ask user to confirm

**Step 4: Estimate token budget**

| Component | Estimated Tokens |
|-----------|-----------------|
| 10 agents x source material | source_words x 10 x ~3 |
| 10 agent outputs | ~5,000 (500 words each) |
| Synthesis | ~3,000 |
| Application | ~5,000 |
| **Total** | **sum of above** |

If total exceeds 60,000 tokens, warn user and request confirmation before proceeding.

**Gate**: Target exists and is readable. Source material is present and substantive. Token estimate is acceptable. Proceed only when gate passes.
文件:[路径]
字数:[N]字
估算Token成本:[N × 10 × 3] = [总计]个Token
质量指标:
  • 包含具体示例(而非仅抽象声明)
  • 具有系统结构(章节、递进逻辑)
  • 体现专业度(技术深度、细致解释)
  • 长度足够(至少500字)
评估结果:适合 / 不适合

- 读取源文件,确认非空
- 估算字数。如果超过5000字,警告Token成本过高。
- 如果材料未满足2个及以上质量指标,建议使用单线程分析并请求用户确认

**步骤4:估算Token预算**

| 组件 | 估算Token数量 |
|-----------|-----------------|
| 10个Agent × 源材料 | 源文件字数 × 10 × ~3 |
| 10个Agent输出结果 | ~5000(每个500字) |
| 整合阶段 | ~3000 |
| 应用阶段 | ~5000 |
| **总计** | **以上各项之和** |

如果总计超过60000个Token,警告用户并请求确认后再继续。

**准入条件**:目标存在且可读,源材料存在且有实质内容,Token估算在可接受范围内。仅当所有条件满足时才可继续。
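The Phase 1 checks above can be sketched in Python. The paths mirror Step 2's `ls` commands and the 500/5,000-word thresholds come from Step 3; the function names (`locate_target`, `assess_source`) are illustrative, not part of the skill itself.

```python
import os

def locate_target(name):
    """Step 2: return (kind, path) for an agent or skill, or (None, None) if absent."""
    agent_path = f"agents/{name}.md"
    skill_path = f"skills/{name}/SKILL.md"
    if os.path.isfile(agent_path):
        return "agent", agent_path
    if os.path.isfile(skill_path):
        return "skill", skill_path
    return None, None

def assess_source(text):
    """Step 3: word count plus the coarse suitability signals named above."""
    words = len(text.split())
    issues = []
    if words == 0:
        issues.append("source is empty")
    elif words < 500:
        issues.append("under 500 words: likely too thin for 10 perspectives")
    if words > 5_000:
        issues.append("over 5,000 words: elevated token cost, warn the user")
    return words, issues
```

A failed lookup or a non-empty issue list means the gate does not pass and no agents are spawned.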

Phase 2: MULTI-PERSPECTIVE ANALYSIS (TRUE PARALLEL)

阶段2:多视角分析(真正并行)

Goal: Spawn 10 independent agents to analyze source material from distinct frameworks.
Step 1: Launch all 10 agents in a SINGLE message
Each agent receives:
  1. The FULL source material
  2. ONE assigned perspective (from references/perspective-prompts.md)
  3. The target name for contextualized recommendations
  4. Instructions to produce 200-500 words of focused analysis
The 10 perspectives are:
  1. Structural Analysis
  2. Clarity and Precision
  3. Technical Explanation Patterns
  4. Audience Assumption Patterns
  5. Evidence and Citation Strategy
  6. Narrative Progression
  7. Paragraph and Sentence Architecture
  8. Header and Signposting Strategy
  9. Complexity Management
  10. Limitation and Nuance Handling
Step 2: Collect results with timeout awareness
Wait for all agents to complete. Monitor using this decision tree:
Agent running > 5 minutes?
    |
    +-- YES --> Check progress (non-blocking)
    |           |
    |           +-- Making progress? --> Wait 2 more minutes
    |           |
    |           +-- Stuck on web fetch? --> Mark as timed out, proceed
    |
    +-- NO --> Continue waiting
Step 3: Assess completeness
| Agents Completed | Action |
|------------------|--------|
| 8-10 of 10 | Full pipeline, excellent coverage |
| 5-7 of 10 | Proceed, note gaps in report |
| 3-4 of 10 | Proceed with caution, synthesis will be thinner |
| 1-2 of 10 | Abort parallel approach, fall back to inline analysis |
| 0 of 10 | Critical failure, investigate cause |
Gate: At least 3 of 10 perspectives have returned results. Proceed only when gate passes.
目标:生成10个独立Agent,从不同框架分析源材料。
步骤1:在单条消息中启动全部10个Agent
每个Agent会收到:
  1. 完整的源材料
  2. 一个分配的视角(来自 references/perspective-prompts.md)
  3. 目标名称,用于生成上下文相关的建议
  4. 生成200-500字聚焦分析内容的指令
10个视角分别是:
  1. 结构分析
  2. 清晰度与精准性
  3. 技术解释模式
  4. 受众假设模式
  5. 证据与引用策略
  6. 叙事递进
  7. 段落与句子架构
  8. 标题与标识策略
  9. 复杂度管理
  10. 局限性与细微差别处理
步骤2:带超时感知的结果收集
等待所有Agent完成。使用以下决策树监控:
Agent运行超过5分钟?
    |
    +-- 是 --> 检查进度(非阻塞)
    |           |
    |           +-- 正在推进? --> 再等待2分钟
    |           |
    |           +-- 卡在网络请求? --> 标记为超时,继续执行
    |
    +-- 否 --> 继续等待
步骤3:评估完成度
| 完成的Agent数量 | 操作 |
|----------------|------|
| 8-10个 | 完整流程,覆盖全面 |
| 5-7个 | 继续执行,在报告中说明缺口 |
| 3-4个 | 谨慎继续,整合结果会较简略 |
| 1-2个 | 终止并行方式,回退到单线程分析 |
| 0个 | 严重故障,调查原因 |
准入条件:10个视角中至少有3个返回结果。仅当满足条件时才可继续。
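A minimal sketch of the fan-out / collect / assess flow above, using a thread pool as a stand-in for the single-message Task dispatch (the real mechanism is Claude's Task tool, not Python threads). The perspective list and the completeness thresholds come from this phase; `analyze` is a placeholder for one agent call.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError as FuturesTimeout

PERSPECTIVES = [
    "Structural Analysis", "Clarity and Precision", "Technical Explanation Patterns",
    "Audience Assumption Patterns", "Evidence and Citation Strategy",
    "Narrative Progression", "Paragraph and Sentence Architecture",
    "Header and Signposting Strategy", "Complexity Management",
    "Limitation and Nuance Handling",
]

def fan_out(analyze, source, target, timeout_s=420):
    """Dispatch all 10 perspectives at once; collect whatever finishes in time."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as pool:
        futures = {pool.submit(analyze, source, p, target): p for p in PERSPECTIVES}
        try:
            for fut in as_completed(futures, timeout=timeout_s):
                results[futures[fut]] = fut.result()
        except FuturesTimeout:
            pass  # graceful degradation: keep only the perspectives that returned
    return results

def assess_completeness(completed):
    """Step 3 table: map completed-agent count to the prescribed action."""
    if completed >= 8:
        return "full pipeline, excellent coverage"
    if completed >= 5:
        return "proceed, note gaps in report"
    if completed >= 3:
        return "proceed with caution"
    if completed >= 1:
        return "abort parallel, fall back to inline analysis"
    return "critical failure, investigate cause"
```

The gate check is then just `len(results) >= 3` before moving to synthesis.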

Phase 3: SYNTHESIZE

阶段3:整合

Goal: Merge 10 independent analyses into prioritized, unified recommendations.
Step 1: Create cross-reference matrix
For each rule extracted by any perspective, track which perspectives identified it:
| Rule | Struct | Clarity | Tech | Audience | Evidence | Narrative | Para | Header | Complex | Nuance | Count |
|------|--------|---------|------|----------|----------|-----------|------|--------|---------|--------|-------|
| [Rule A] | X | X | | X | | X | | | X | | 5 |
| [Rule B] | | | X | | X | | X | X | | X | 4 |
Step 2: Identify common themes
  • Patterns appearing in 4+ perspectives are high-confidence findings
  • Patterns appearing in 7+ perspectives are near-certain insights
  • Group related rules into themes (e.g., "Progressive Disclosure" may appear in Audience, Complexity, and Structure)
Step 3: Extract unique insights
  • Single-perspective findings that are high-value despite low frequency
  • These often represent the unique value of parallel independence
  • Example: Only Narrative Progression spots a "hook-payoff" pattern that would strengthen an agent's introduction section
Step 4: Prioritize rules
目标:将10个独立分析结果合并为优先级明确的统一建议。
步骤1:创建交叉引用矩阵
对于任一视角提取的每条规则,记录哪些视角识别出了该规则:
| 规则 | 结构 | 清晰度 | 技术 | 受众 | 证据 | 叙事 | 段落 | 标题 | 复杂度 | 细微差别 | 计数 |
|------|--------|---------|------|----------|----------|-----------|------|--------|---------|--------|-------|
| [规则A] | X | X | | X | | X | | | X | | 5 |
| [规则B] | | | X | | X | | X | X | | X | 4 |
步骤2:识别共同主题
  • 在4个及以上视角中出现的模式是高可信度发现
  • 在7个及以上视角中出现的模式是近乎确定的洞察
  • 将相关规则分组为主题(例如,「渐进式披露」可能出现在受众、复杂度和结构视角中)
步骤3:提取独特洞察
  • 尽管出现频率低,但价值高的单视角发现
  • 这些通常体现了并行独立性的独特价值
  • 示例:只有叙事递进视角发现了「钩子-回报」模式,可用于强化Agent的介绍部分
步骤4:规则优先级排序

Priority Rules for [Target]

[目标]的优先级规则

Must-Have (Priority 1)

必备(优先级1)

Rules present in 7+ perspectives OR critical impact:
  1. [Rule] - Found in: [list of perspectives]
  2. [Rule] - Found in: [list of perspectives]
出现在7个及以上视角中,或影响重大的规则:
  1. [规则] - 来自:[视角列表]
  2. [规则] - 来自:[视角列表]

Should-Have (Priority 2)

推荐(优先级2)

Rules present in 4-6 perspectives OR high impact:
  1. [Rule] - Found in: [list of perspectives]
  2. [Rule] - Found in: [list of perspectives]
出现在4-6个视角中,或影响较大的规则:
  1. [规则] - 来自:[视角列表]
  2. [规则] - 来自:[视角列表]

Nice-to-Have (Priority 3)

可选(优先级3)

Rules present in 1-3 perspectives OR moderate impact:
  1. [Rule] - Found in: [perspective]
  2. [Rule] - Found in: [perspective]

**Step 5: Save synthesis document**
- Write to `skills/do-parallel/artifacts/synthesis-{target}-{date}.md`
- Include the cross-reference matrix, themes, and prioritized rules
- This artifact persists for future reference and can inform later analyses

**Gate**: Synthesis document exists with at least 3 Must-Have and 3 Should-Have rules. Proceed only when gate passes.
出现在1-3个视角中,或影响中等的规则:
  1. [规则] - 来自:[视角]
  2. [规则] - 来自:[视角]

**步骤5:保存整合文档**
- 写入`skills/do-parallel/artifacts/synthesis-{target}-{date}.md`
- 包含交叉引用矩阵、主题和优先级规则
- 该工件将永久留存,用于未来参考和后续分析

**准入条件**:整合文档存在,且包含至少3条必备规则和3条推荐规则。仅当满足条件时才可继续。
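The cross-reference matrix (Step 1) and the frequency buckets (Step 4) can be sketched as below. This is frequency-only: the "critical impact" escalation mentioned above would need a manual override, and the function names are illustrative.

```python
from collections import defaultdict

def cross_reference(reports):
    """Step 1: {perspective: [rule, ...]} -> {rule: sorted perspectives that found it}."""
    matrix = defaultdict(list)
    for perspective, rules in reports.items():
        for rule in rules:
            matrix[rule].append(perspective)
    return {rule: sorted(ps) for rule, ps in matrix.items()}

def prioritize(matrix):
    """Step 4 buckets by frequency: 7+ Must-Have, 4-6 Should-Have, 1-3 Nice-to-Have."""
    tiers = {"must": [], "should": [], "nice": []}
    for rule, perspectives in matrix.items():
        count = len(perspectives)
        if count >= 7:
            tiers["must"].append(rule)
        elif count >= 4:
            tiers["should"].append(rule)
        else:
            tiers["nice"].append(rule)
    return tiers
```

Only the `must` and `should` tiers flow into Phase 4; `nice` is recorded in the synthesis document but skipped unless requested.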

Phase 4: APPLY

阶段4:应用

Goal: Improve the target agent/skill using synthesized recommendations.
Step 1: Read current target state
目标:使用整合后的建议优化目标Agent或Skill。
步骤1:读取目标当前状态

Before State

优化前状态

Target: [name]
Type: [agent/skill]
Lines: [N]
Sections: [list of H2/H3 sections]
Version: [current version]

**Step 2: Plan application**

Map each Priority 1 and Priority 2 rule to a specific location in the target:

目标:[名称]
类型:[Agent/Skill]
行数:[N]
章节:[H2/H3章节列表]
版本:[当前版本]

**步骤2:规划应用方案**

将每个优先级1和优先级2的规则映射到目标中的具体位置:


Application Plan

应用规划

| Rule | Action | Target Section | Risk |
|------|--------|----------------|------|
| [Rule 1] | Add subsection | Operator Context | LOW |
| [Rule 2] | Enhance existing | Instructions Phase 2 | LOW |
| [Rule 3] | Add new section | After Anti-Patterns | MEDIUM |

**Step 3: Apply Priority 1 rules**
- Add or enhance sections based on Must-Have recommendations
- Preserve all existing working patterns
- After each rule application, verify target file is still valid markdown

**Step 4: Apply Priority 2 rules**
- Add Should-Have enhancements where they integrate naturally
- Do NOT force rules that conflict with existing patterns
- If a Should-Have rule conflicts with an existing pattern, document the conflict in the report and skip

**Step 5: Commit changes**
- Create descriptive git commit explaining what was improved and from what source
- Bump version if target is a skill (e.g., 1.0.0 to 1.1.0)

**Gate**: Target file has been modified. Changes preserve existing behavior. Before/after diff shows additions only (no deletions of existing content). Proceed only when gate passes.
| 规则 | 操作 | 目标章节 | 风险 |
|------|------|----------|------|
| [规则1] | 添加子章节 | 操作者上下文 | 低 |
| [规则2] | 增强现有内容 | 操作步骤阶段2 | 低 |
| [规则3] | 添加新章节 | 反模式之后 | 中 |

**步骤3:应用优先级1规则**
- 根据必备建议添加或增强章节
- 保留所有现有有效模式
- 每条规则应用后,验证目标文件仍是有效的Markdown

**步骤4:应用优先级2规则**
- 在自然整合的地方添加推荐增强内容
- 绝不强制应用与现有模式冲突的规则
- 如果推荐规则与现有模式冲突,在报告中记录冲突并跳过该规则

**步骤5:提交变更**
- 创建描述性Git提交信息,说明优化内容及来源
- 如果目标是Skill,升级版本(例如从1.0.0到1.1.0)

**准入条件**:目标文件已修改,变更保留了现有行为,前后对比显示仅添加了内容(未删除现有内容)。仅当满足条件时才可继续。
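The additions-only gate and the Step 5 version bump can be checked mechanically. A rough sketch, assuming a plain line-level comparison is enough (a moved line would register as a deletion here, so reordering would need a manual look):

```python
import difflib

def additions_only(before_text, after_text):
    """Gate check: the new text may add lines but must keep every original line."""
    diff = difflib.ndiff(before_text.splitlines(), after_text.splitlines())
    return not any(line.startswith("- ") for line in diff)

def bump_minor(version):
    """Skill version bump per Step 5, e.g. 1.0.0 -> 1.1.0."""
    major, minor, _patch = version.split(".")
    return f"{major}.{int(minor) + 1}.0"
```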

Phase 5: VERIFY AND REPORT

阶段5:验证与报告

Goal: Confirm improvements are sound and document the full analysis.
Step 1: Verify target integrity
目标:确认优化合理,并记录完整分析过程。
步骤1:验证目标完整性

Integrity Check

完整性检查

YAML frontmatter valid: [YES/NO]
Sections preserved: [list any missing sections]
Before lines: [N]
After lines: [M]
Net change: +[M-N] lines
Verification:
  • All original H2 sections still present
  • All original H3 sections still present
  • No content was deleted (only additions)
  • Markdown renders correctly

If any check fails, revert the problematic change and re-apply.

**Step 2: Generate completion report**

Use template from `references/perspective-prompts.md`. The report MUST include:
- Per-perspective key insights (one sentence each)
- Cross-reference showing which perspectives contributed to each improvement
- Before/after comparison (line counts, section counts)
- Estimated token usage breakdown
- Recommendations for future improvements

**Step 3: Save completion report**
- Write to `skills/do-parallel/artifacts/report-{target}-{date}.md`
- Present summary to user in conversation

**Step 4: Present results**

YAML前置元数据有效:[是/否]
章节保留情况:[列出缺失的章节]
优化前行数:[N]
优化后行数:[M]
净变化:+[M-N]行
验证项:
  • 所有原始H2章节仍存在
  • 所有原始H3章节仍存在
  • 未删除任何内容(仅添加)
  • Markdown渲染正常

如果任何检查项失败,回滚问题变更并重新应用。

**步骤2:生成完成报告**

使用`references/perspective-prompts.md`中的模板。报告必须包含:
- 每个视角的关键洞察(各一句话)
- 交叉引用,展示各视角对每项优化的贡献
- 前后对比(行数、章节数)
- 估算Token使用明细
- 未来优化建议

**步骤3:保存完成报告**
- 写入`skills/do-parallel/artifacts/report-{target}-{date}.md`
- 在对话中向用户展示摘要

**步骤4:呈现结果**


Parallel Analysis Complete

并行分析完成

Target: [name]
Source: [source path]
Perspectives completed: [N] of 10
Rules extracted: [total across all perspectives]
Rules applied: [Priority 1 count] Must-Have + [Priority 2 count] Should-Have
Lines added: +[count]
New sections: [count]
Full report: skills/do-parallel/artifacts/report-{target}-{date}.md
Synthesis: skills/do-parallel/artifacts/synthesis-{target}-{date}.md

**Gate**: Completion report exists. Target file is valid. All phases documented.

---
目标:[名称]
源材料:[源路径]
完成的视角数量:[N]/10
提取的规则总数:[所有视角的规则总数]
应用的规则数量:[优先级1数量]条必备 + [优先级2数量]条推荐
新增行数:+[数量]
新增章节数:[数量]
完整报告:skills/do-parallel/artifacts/report-{target}-{date}.md
整合文档:skills/do-parallel/artifacts/synthesis-{target}-{date}.md

**准入条件**:完成报告存在,目标文件有效,所有阶段均已记录。
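The H2/H3 preservation check in Step 1 can be sketched with a simple heading scan. This assumes ATX-style `##`/`###` headings and does not account for headings inside code fences; the function names are illustrative.

```python
import re

def headings(md_text):
    """Collect H2/H3 heading titles in document order."""
    return [title for _, title in re.findall(r"^(#{2,3})\s+(.+)$", md_text, flags=re.M)]

def integrity_check(before_text, after_text):
    """All original H2/H3 sections must survive; report any that went missing."""
    kept = set(headings(after_text))
    missing = [t for t in headings(before_text) if t not in kept]
    net = len(after_text.splitlines()) - len(before_text.splitlines())
    return {"missing_sections": missing, "net_lines": net}
```

A non-empty `missing_sections` list means a change deleted content and must be reverted and re-applied.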

---

Examples

示例

Example 1: Improve Agent from Article

示例1:通过文章优化Agent

User says:
/do-parallel technical-journalist-writer expert-writing-guide.md
Actions:
  1. Validate target agent exists, read source article (VALIDATE)
  2. Spawn 10 agents analyzing article from 10 perspectives (ANALYZE)
  3. Synthesize: 5 Must-Have rules, 7 Should-Have rules (SYNTHESIZE)
  4. Apply Priority 1 and 2 rules to agent file (APPLY)
  5. Generate report showing +180 lines added (VERIFY)
Result: Agent enhanced with synthesized writing patterns from 10 independent analyses
用户输入:
/do-parallel technical-journalist-writer expert-writing-guide.md
操作:
  1. 验证目标Agent存在,读取源文章(验证阶段)
  2. 生成10个Agent从10个视角分析文章(分析阶段)
  3. 整合:5条必备规则,7条推荐规则(整合阶段)
  4. 应用优先级1和2规则到Agent文件(应用阶段)
  5. 生成报告,显示新增180行(验证阶段)
结果:Agent通过10个独立分析整合出的写作模式得到增强

Example 2: Improve Skill from Documentation

示例2:通过文档优化Skill

User says:
/do-parallel systematic-debugging postgres-debugging-guide.md
Actions:
  1. Validate skill exists, assess documentation quality (VALIDATE)
  2. Launch 10 parallel analyses of PostgreSQL debugging patterns (ANALYZE)
  3. Synthesize database-specific debugging rules (SYNTHESIZE)
  4. Add new patterns to debugging skill references (APPLY)
  5. Report: 8 of 10 perspectives contributed new rules (VERIFY)
Result: Debugging skill gains domain-specific PostgreSQL patterns

用户输入:
/do-parallel systematic-debugging postgres-debugging-guide.md
操作:
  1. 验证Skill存在,评估文档质量(验证阶段)
  2. 启动10个并行分析,研究PostgreSQL调试模式(分析阶段)
  3. 整合数据库特定的调试规则(整合阶段)
  4. 将新模式添加到调试Skill的参考内容中(应用阶段)
  5. 报告:10个视角中有8个贡献了新规则(验证阶段)
结果:调试Skill获得了领域特定的PostgreSQL模式

Token Budget Management

Token预算管理

This is a high-cost skill. Understanding and managing token usage is essential.
本Skill的Token成本较高,理解并管理Token使用至关重要。

Cost Breakdown

成本明细

| Phase | Token Range | Notes |
|-------|-------------|-------|
| Phase 1: Validate | 500-1,000 | Reading target + source |
| Phase 2: Analysis | 20,000-50,000 | 10 agents x (source + output) |
| Phase 3: Synthesize | 2,000-5,000 | Cross-reference + prioritization |
| Phase 4: Apply | 3,000-8,000 | Reading target + modifications |
| Phase 5: Verify | 1,000-2,000 | Integrity checks + report |
| **Total** | **26,500-66,000** | 3-5x inline analysis cost |

| 阶段 | Token范围 | 说明 |
|------|-----------|------|
| 阶段1:验证 | 500-1000 | 读取目标和源材料 |
| 阶段2:分析 | 20000-50000 | 10个Agent ×(源材料 + 输出) |
| 阶段3:整合 | 2000-5000 | 交叉引用 + 优先级排序 |
| 阶段4:应用 | 3000-8000 | 读取目标 + 修改 |
| 阶段5:验证 | 1000-2000 | 完整性检查 + 报告 |
| **总计** | **26500-66000** | 单线程分析成本的3-5倍 |

When Cost Is Justified

成本合理的场景

Use do-parallel when:
  • Source material is difficult and hard to grasp from a single reading
  • Multiple independent interpretations could reveal hidden patterns
  • The target agent/skill is high-impact and warrants deep investment
  • Token budget has room for 30,000-60,000 tokens
Use inline analysis (2,000-10,000 tokens) when:
  • Source material is straightforward with obvious patterns
  • A single reading captures the key insights
  • Token budget is constrained
  • Routine incremental improvement is the goal
当满足以下条件时使用do-parallel:
  • 源材料难以理解,单次阅读无法掌握
  • 多个独立解读可能揭示隐藏模式
  • 目标Agent/Skill影响重大,值得深度投入
  • Token预算允许消耗30000-60000个Token
当满足以下条件时使用单线程分析(2000-10000个Token):
  • 源材料简单,模式明显
  • 单次阅读即可获取关键洞察
  • Token预算紧张
  • 目标是常规增量优化

Cost Estimation Formula

成本估算公式

Estimated tokens = (source_words * 3 * 10) + 15,000

Example: 2,000-word article
  = (2,000 * 3 * 10) + 15,000
  = 60,000 + 15,000
  = ~75,000 tokens (HIGH - consider trimming source or reducing perspectives)

Example: 800-word article
  = (800 * 3 * 10) + 15,000
  = 24,000 + 15,000
  = ~39,000 tokens (ACCEPTABLE)

估算Token数 = (源文件字数 × 3 × 10) + 15000

示例:2000字文章
  = (2000 × 3 × 10) + 15000
  = 60000 + 15000
  = ~75000个Token(高成本 - 考虑精简源材料或减少视角)

示例:800字文章
  = (800 × 3 × 10) + 15000
  = 24000 + 15000
  = ~39000个Token(可接受)
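The formula and both worked examples above, as a small helper. The 60,000-token HIGH cutoff is taken from Phase 1's confirmation gate (this section itself labels 75,000 HIGH and 39,000 ACCEPTABLE without stating the exact threshold), and the parameter defaults mirror the formula's constants.

```python
def estimate_cost(source_words, perspectives=10, tokens_per_word=3, overhead=15_000):
    """Cost Estimation Formula: (words x 3 x 10) + 15,000 fixed overhead."""
    analysis = source_words * tokens_per_word * perspectives
    total = analysis + overhead
    verdict = "HIGH" if total > 60_000 else "ACCEPTABLE"
    return total, verdict
```

Dropping `perspectives` to 5 (the Reduced Perspectives option) halves the analysis term, which is why it is the first lever when an estimate comes back HIGH.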

Error Handling

错误处理

Error: "Target Agent/Skill Not Found"

错误:「目标Agent/Skill未找到」

Cause: Name mismatch or typo in first argument
Solution:
  1. List available agents with `ls agents/*.md`
  2. List available skills with `ls skills/*/SKILL.md`
  3. Retry with the exact name from the repository
原因:第一个参数名称不匹配或拼写错误
解决方案:
  1. 使用 `ls agents/*.md` 列出可用Agent
  2. 使用 `ls skills/*/SKILL.md` 列出可用Skill
  3. 使用仓库中的准确名称重试

Error: "Source Material Too Short or Empty"

错误:「源材料过短或为空」

Cause: File path wrong, file empty, or material lacks depth
Solution:
  1. Verify file path is absolute and file exists
  2. If material is under 500 words, it likely lacks sufficient patterns
  3. Consider using inline analysis instead of parallel (lower cost, similar value for thin material)
原因:文件路径错误、文件为空或材料缺乏深度
解决方案:
  1. 验证文件路径为绝对路径且文件存在
  2. 如果材料不足500字,可能缺乏足够的分析模式
  3. 考虑使用单线程分析替代并行分析(成本更低,对薄材料的价值相当)

Error: "Agents Timing Out"

错误:「Agent超时」

Cause: Source material too large, network issues, or agent stuck on web fetch
Solution:
  1. Check if source exceeds 10,000 words (reduce or split)
  2. After 5 minutes, check agent progress with non-blocking query
  3. Proceed with completed perspectives if 3+ have returned
  4. See graceful degradation table in Phase 2, Step 3
原因:源材料过大、网络问题或Agent卡在网络请求
解决方案:
  1. 检查源材料是否超过10000字(精简或拆分)
  2. 5分钟后,使用非阻塞查询检查Agent进度
  3. 如果完成3个及以上视角,使用已有结果继续
  4. 参考阶段2步骤3的优雅降级表格

Error: "Synthesis Has Insufficient Rules"

错误:「整合结果规则不足」

Cause: Source material lacked depth, or perspectives returned shallow analysis
Solution:
  1. Review agent outputs for quality (are they 200-500 words with concrete patterns?)
  2. If most outputs are thin, the source material is unsuitable for parallel analysis
  3. Consider switching to inline analysis with a focused prompt
  4. Report to user: "Source material did not yield sufficient patterns for 10-perspective analysis"

原因:源材料缺乏深度,或视角返回的分析内容浅显
解决方案:
  1. 检查Agent输出质量(是否为200-500字的具体模式分析?)
  2. 如果多数输出内容单薄,说明源材料不适合并行分析
  3. 考虑切换到带聚焦提示的单线程分析
  4. 告知用户:「源材料无法为10视角分析提供足够的模式」

Anti-Patterns

反模式

Anti-Pattern 1: Using Parallel for Simple Material

反模式1:对简单材料使用并行分析

What it looks like: Running 10 agents on a 200-word README
Why wrong: Token cost of 25,000+ for material that inline analysis handles in 2,000 tokens. No depth to analyze from 10 angles.
Do instead: Use `/do-perspectives` for single-target improvements or simpler inline analysis. Reserve do-parallel for complex, hard-to-grasp material.
表现:对200字的README运行10个Agent
问题:消耗25000+Token处理单线程分析仅需2000Token的材料,10个视角没有深度可分析
正确做法:单目标优化或简单分析使用 `/do-perspectives`,仅对复杂、难以理解的材料使用do-parallel

Anti-Pattern 2: Applying All Rules Without Prioritization

反模式2:不做优先级排序就应用所有规则

What it looks like: Dumping all 30-50 extracted rules into the target without filtering
Why wrong: Low-frequency rules may conflict with existing patterns. Quantity overwhelms quality. Target becomes bloated.
Do instead: Apply Priority 1 first, then Priority 2. Skip Priority 3 unless explicitly requested.
表现:将提取的30-50条规则全部导入目标,不做筛选
问题:低频率规则可能与现有模式冲突,数量大于质量,导致目标臃肿
正确做法:先应用优先级1规则,再应用优先级2规则,除非明确要求否则跳过优先级3规则

Anti-Pattern 3: Skipping Synthesis Phase

反模式3:跳过整合阶段

What it looks like: Reading each agent report and applying rules one perspective at a time
Why wrong: Cross-perspective patterns are the primary value. Applying per-perspective rules misses common themes and introduces contradictions.
Do instead: Always collect all reports, identify common themes, then create unified recommendations before touching the target.
表现:逐个读取Agent报告并逐视角应用规则
问题:跨视角模式是并行分析的核心价值,逐视角应用会错过共同主题并引入矛盾
正确做法:收集所有报告,识别共同主题,创建统一建议后再修改目标

Anti-Pattern 4: Running Without Budget Awareness

反模式4:无预算意识地运行

What it looks like: Launching 10 agents on a 15,000-word document without estimating cost
Why wrong: Could consume 80,000+ tokens. Session may exhaust budget mid-execution, leaving work incomplete.
Do instead: Estimate cost in Phase 1. Source words x 10 agents x ~3 tokens/word = rough estimate. Warn if over 50,000.

表现:未估算成本就对15000字文档启动10个Agent
问题:可能消耗80000+Token,会话可能因预算耗尽而中途终止,导致工作未完成
正确做法:在阶段1估算成本,源文件字数 × 10个Agent × ~3Token/字 = 粗略估算,超过50000则发出警告

References

参考

This skill uses these shared patterns:
  • Anti-Rationalization - Prevents shortcut rationalizations
  • Verification Checklist - Pre-completion checks
  • Pipeline Architecture - Phase-gated pipeline design
  • Gate Enforcement - Phase transition rules
本Skill使用以下共享模式:
  • Anti-Rationalization - 防止捷径式合理化
  • Verification Checklist - 完成前检查
  • Pipeline Architecture - 阶段门控的流水线设计
  • Gate Enforcement - 阶段转换规则

Domain-Specific Anti-Rationalization

领域特定的反合理化

| Rationalization | Why It's Wrong | Required Action |
|-----------------|----------------|-----------------|
| "Source is simple, 10 perspectives overkill" | Simple source = use inline analysis instead | Check material depth in Phase 1, downgrade if thin |
| "3 perspectives returned, close enough" | 3 is the minimum for synthesis, not the ideal | Wait for the timeout threshold, then proceed with available results |
| "I can synthesize as I go" | Per-perspective application misses cross-cutting themes | Complete all collection before ANY synthesis |
| "Existing patterns in target are outdated" | Existing patterns may work; new rules ADD, never replace | Preserve all existing content, add depth only |

| 合理化借口 | 问题所在 | 要求的操作 |
|-----------|---------|-----------|
| 「源材料简单,10个视角是过度设计」 | 简单材料应使用单线程分析 | 在阶段1检查材料深度,若单薄则降级为单线程分析 |
| 「3个视角返回结果,足够了」 | 3个是整合的最低要求,而非理想状态 | 等待超时阈值后,使用已有结果继续 |
| 「我可以边收集边整合」 | 逐视角应用会错过跨视角主题 | 收集完所有结果后再开始整合 |
| 「目标中的现有模式已过时」 | 现有模式可能仍有效,新规则仅用于补充,绝不替换 | 保留所有现有内容,仅添加深度 |

Reference Files

参考文件

  • ${CLAUDE_SKILL_DIR}/references/perspective-prompts.md: All 10 perspective templates, synthesis format, completion report template, and source material guidance
  • ${CLAUDE_SKILL_DIR}/references/perspective-prompts.md:包含所有10个视角模板、整合格式、完成报告模板以及源材料指南