octocode-documentaion-writer

Compare original and translation side by side: 🇺🇸 English (original) · 🇨🇳 Chinese (translation)

Repository Documentation Generator

代码仓库文档生成器

Production-ready 6-phase pipeline with intelligent orchestration, research-first validation, and conflict-free file ownership.
<what> This command orchestrates specialized AI agents in 6 phases to analyze your code repository and generate comprehensive documentation: </what> <steps> <phase_1> **Discovery+Analysis** (Phase 1) Agent: Opus Parallel: 4 parallel agents What: Analyze language, architecture, flows, and APIs Input: Repository path Output: `analysis.json` </phase_1>
<phase_2> Engineer Questions (Phase 2) Agent: Opus What: Generates comprehensive questions based on the analysis Input: `analysis.json` Output: `questions.json` </phase_2>
<phase_3> Research Agent (Phase 3) 🆕 Agent: Sonnet Parallel: Dynamic (based on question volume) What: Deep-dive code forensics to ANSWER the questions with evidence Input: `questions.json` Output: `research.json` </phase_3>
<phase_4> Orchestrator (Phase 4) Agent: Opus What: Groups questions by file target and assigns exclusive file ownership to writers Input: `questions.json` + `research.json` Output: `work-assignments.json` (file-based assignments for parallel writers) </phase_4>
<phase_5> Documentation Writers (Phase 5) Agent: Sonnet Parallel: 1-8 parallel agents (dynamic based on workload) What: Synthesize research and write comprehensive documentation with exclusive file ownership Input: `analysis.json` + `questions.json` + `research.json` + `work-assignments.json` Output: `documentation/*.md` (16 core docs, 5 required + supplementary files) </phase_5>
<phase_6> QA Validator (Phase 6) Agent: Sonnet What: Validates documentation quality using LSP-powered verification Input: `documentation/*.md` + `analysis.json` + `questions.json` Output: `qa-results.json` + `QA-SUMMARY.md` </phase_6> </steps>
<subagents> Use spawn explore opus/sonnet/haiku subagents to explore code with MCP tools (localSearchCode, lspGotoDefinition, lspCallHierarchy, lspFindReferences) </subagents>
Documentation Flow: analysis.json → questions.json → research.json → work-assignments.json → documentation (conflict-free!)

可投入生产的6阶段流水线,具备智能编排、研究优先验证和无冲突文件所有权机制。
<what> 该命令通过6个阶段编排专用AI Agent,分析你的代码仓库并生成全面的文档: </what> <steps> <phase_1> **发现与分析(Phase 1)** Agent: Opus 并行:4个并行Agent 任务:分析语言、架构、流程与API 输入:仓库路径 输出:`analysis.json` </phase_1>
<phase_2> 生成工程化问题(Phase 2) Agent: Opus 任务:基于分析结果生成全面的问题 输入:`analysis.json` 输出:`questions.json` </phase_2>
<phase_3> 研究Agent(Phase 3)🆕 Agent: Sonnet 并行:动态调整(基于问题数量) 任务:深度代码取证,为问题提供有证据支撑的答案 输入:`questions.json` 输出:`research.json` </phase_3>
<phase_4> 编排器(Phase 4) Agent: Opus 任务:按目标文件分组问题,并为撰写者分配专属文件所有权 输入:`questions.json` + `research.json` 输出:`work-assignments.json`(为并行撰写者分配的基于文件的任务) </phase_4>
<phase_5> 文档撰写者(Phase 5) Agent: Sonnet 并行:1-8个并行Agent(基于工作负载动态调整) 任务:整合研究结果,撰写全面的文档,且每个撰写者拥有专属文件所有权 输入:`analysis.json` + `questions.json` + `research.json` + `work-assignments.json` 输出:`documentation/*.md`(16份核心文档,含5份必填文档及补充文件) </phase_5>
<phase_6> QA验证器(Phase 6) Agent: Sonnet 任务:使用LSP驱动的验证机制验证文档质量 输入:`documentation/*.md` + `analysis.json` + `questions.json` 输出:`qa-results.json` + `QA-SUMMARY.md` </phase_6> </steps>
<subagents> 使用spawn指令调用opus/sonnet/haiku子Agent,结合MCP工具(localSearchCode、lspGotoDefinition、lspCallHierarchy、lspFindReferences)探索代码 </subagents>
文档生成流程: analysis.json → questions.json → research.json → work-assignments.json → 文档(无冲突!)

⚠️ CRITICAL: Parallel Agent Execution

⚠️ 重要提示:并行Agent执行

<parallel_execution_critical importance="maximum">
STOP. READ THIS TWICE.
<parallel_execution_critical importance="maximum">
停!请通读两遍。

1. THE RULE

1. 规则

You MUST spawn parallel agents in a SINGLE message with multiple Task tool calls.
你必须在单条消息中通过多个Task工具调用启动所有并行Agent。

2. FORBIDDEN BEHAVIOR

2. 禁止行为

FORBIDDEN: Calling `Task` sequentially (one per response). REASON: Sequential calls defeat parallelism and slow down execution by 4x-8x.
禁止: 按顺序调用 `Task`(每次响应调用一个)。 原因: 顺序调用会破坏并行性,导致执行速度降低4-8倍。

3. REQUIRED CONFIRMATION

3. 必要确认

Before launching any parallel phase (1, 3, 5), you MUST verify:
  • All Task calls are prepared for a SINGLE response
  • No dependencies exist between these parallel agents
  • Each agent has exclusive scope (no file conflicts)
<correct_pattern title="✅ CORRECT: Single response launches all agents concurrently">
// In ONE assistant message, include ALL Task tool invocations:
Task(description="Discovery 1A-language", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1B-components", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1C-dependencies", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1D-flows", subagent_type="general-purpose", prompt="...", model="opus")
// ↑ All 4 execute SIMULTANEOUSLY
</correct_pattern>
<wrong_pattern title="❌ WRONG: Sequential calls lose parallelism">
// DON'T DO THIS - Each waits for previous to complete
Message 1: Task(description="Discovery 1A") → wait for result
Message 2: Task(description="Discovery 1B") → wait for result
Message 3: Task(description="Discovery 1C") → wait for result
Message 4: Task(description="Discovery 1D") → wait for result
// ↑ 4x slower! No parallelism achieved
</wrong_pattern>
</parallel_execution_critical>

在启动任何并行阶段(1、3、5)之前,你必须验证:
  • 所有Task调用已准备好,可在单条响应中发送
  • 这些并行Agent之间不存在依赖关系
  • 每个Agent拥有专属的作用范围(无文件冲突)
<correct_pattern title="✅ 正确:单条响应同时启动所有Agent">
// 在一条助手消息中,包含所有Task工具调用:
Task(description="Discovery 1A-language", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1B-components", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1C-dependencies", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1D-flows", subagent_type="general-purpose", prompt="...", model="opus")
// ↑ 所有4个Agent同时执行
</correct_pattern>
<wrong_pattern title="❌ 错误:顺序调用失去并行性">
// 不要这么做 - 每个调用都要等待前一个完成
Message 1: Task(description="Discovery 1A") → 等待结果
Message 2: Task(description="Discovery 1B") → 等待结果
Message 3: Task(description="Discovery 1C") → 等待结果
Message 4: Task(description="Discovery 1D") → 等待结果
// ↑ 速度慢4倍!无法实现并行性
</wrong_pattern>
</parallel_execution_critical>

Execution Flow Diagram

执行流程图

```mermaid
flowchart TB
    Start([/octocode-documentaion-writer PATH]) --> Validate[Pre-Flight Validation]
    Validate --> Init[Initialize Workspace]

    Init --> P1[Phase 1: Discovery+Analysis]

    subgraph P1_Parallel["🔄 RUN IN PARALLEL (4 agents)"]
        P1A[Agent 1A:<br/>Language & Manifests]
        P1B[Agent 1B:<br/>Components]
        P1C[Agent 1C:<br/>Dependencies]
        P1D[Agent 1D:<br/>Flows & APIs]
    end

    P1 --> P1_Parallel
    P1_Parallel --> P1Agg[Aggregation:<br/>Merge into analysis.json]
    P1Agg --> P1Done[✅ analysis.json created]

    P1Done -->|Reads analysis.json| P2[Phase 2: Engineer Questions<br/>Single Agent - Opus]
    P2 --> P2Done[✅ questions.json created]

    P2Done -->|Reads questions.json| P3[Phase 3: Research 🆕<br/>Parallel Agents - Sonnet]

    subgraph P3_Parallel["🔄 RUN IN PARALLEL"]
       P3A[Researcher 1]
       P3B[Researcher 2]
       P3C[Researcher 3]
    end

    P3 --> P3_Parallel
    P3_Parallel --> P3Agg[Aggregation:<br/>Merge into research.json]
    P3Agg --> P3Done[✅ research.json created<br/>Evidence-backed answers]

    P3Done -->|Reads questions + research| P4[Phase 4: Orchestrator<br/>Single Agent - Opus]
    P4 --> P4Group[Group questions<br/>by file target]
    P4 --> P4Assign[Assign file ownership<br/>to writers]
    P4Assign --> P4Done[✅ work-assignments.json]

    P4Done --> P5[Phase 5: Documentation Writers]
    P5 --> P5Input[📖 Input:<br/>work-assignments.json<br/>+ research.json]
    P5Input --> P5Dist[Each writer gets<br/>exclusive file ownership]

    subgraph P5_Parallel["🔄 RUN IN PARALLEL (1-8 agents)"]
        P5W1[Writer 1]
        P5W2[Writer 2]
        P5W3[Writer 3]
        P5W4[Writer 4]
    end

    P5Dist --> P5_Parallel
    P5_Parallel --> P5Verify[Verify Structure]
    P5Verify --> P5Done[✅ documentation/*.md created]

    P5Done --> P6[Phase 6: QA Validator<br/>Single Agent - Sonnet]
    P6 --> P6Done[✅ qa-results.json +<br/>QA-SUMMARY.md]

    P6Done --> Complete([✅ Documentation Complete])

    style P1_Parallel fill:#e1f5ff
    style P3_Parallel fill:#e1f5ff
    style P5_Parallel fill:#ffe1f5
    style P4 fill:#fff3cd
    style Complete fill:#28a745,color:#fff
```

Parallel Execution Rules

并行执行规则

<execution_rules> <phase name="1-discovery" type="parallel" critical="true" spawn="single_message"> <gate> STOP. Verify parallel spawn requirements. REQUIRED: Spawn 4 agents in ONE message. FORBIDDEN: Sequential Task calls. </gate> <agent_count>4</agent_count> <description>Discovery and Analysis</description> <spawn_instruction>⚠️ Launch ALL 4 Task calls in ONE response</spawn_instruction> <rules> <rule>All 4 agents start simultaneously via single-message spawn</rule> <rule>Wait for ALL 4 to complete before aggregation</rule> <rule>Must aggregate 4 partial JSONs into analysis.json</rule> </rules> </phase>
<phase name="2-questions" type="single" critical="true" spawn="sequential">
    <agent_count>1</agent_count>
    <description>Engineer Questions Generation</description>
    <spawn_instruction>Single agent, wait for completion</spawn_instruction>
</phase>

<phase name="3-research" type="parallel" critical="true" spawn="single_message">
    <gate>
    **STOP.** Verify parallel spawn requirements.
    **REQUIRED:** Spawn N researchers in ONE message.
    **FORBIDDEN:** Sequential Task calls.
    </gate>
    <agent_count_logic>
        <case condition="questions &lt; 10">1 agent</case>
        <case condition="questions &gt;= 10">Ceil(questions / 15)</case>
    </agent_count_logic>
    <description>Evidence Gathering</description>
    <spawn_instruction>⚠️ Launch ALL researcher Task calls in ONE response</spawn_instruction>
    <rules>
        <rule>Split questions into batches BEFORE spawning</rule>
        <rule>All researchers start simultaneously</rule>
        <rule>Aggregate findings into research.json</rule>
    </rules>
</phase>

<phase name="4-orchestrator" type="single" critical="true" spawn="sequential">
    <agent_count>1</agent_count>
    <description>Orchestration and Assignment</description>
    <spawn_instruction>Single agent, wait for completion</spawn_instruction>
    <rules>
        <rule>Assign EXCLUSIVE file ownership to writers</rule>
        <rule>Distribute research findings to relevant writers</rule>
    </rules>
</phase>

<phase name="5-writers" type="dynamic_parallel" critical="false" spawn="single_message">
    <gate>
    **STOP.** Verify parallel spawn requirements.
    **REQUIRED:** Spawn all writers in ONE message.
    **FORBIDDEN:** Sequential Task calls.
    </gate>
    <agent_count_logic>
        <case condition="questions &lt; 20">1 agent</case>
        <case condition="questions 20-99">2-4 agents</case>
        <case condition="questions &gt;= 100">4-8 agents</case>
    </agent_count_logic>
    <spawn_instruction>⚠️ Launch ALL writer Task calls in ONE response</spawn_instruction>
    <rules>
        <rule>Each writer owns EXCLUSIVE files - no conflicts possible</rule>
        <rule>All writers start simultaneously via single-message spawn</rule>
        <rule>Use provided research.json as primary source</rule>
    </rules>
</phase>

<phase name="6-qa" type="single" critical="false" spawn="sequential">
    <agent_count>1</agent_count>
    <description>Quality Validation</description>
    <spawn_instruction>Single agent, wait for completion</spawn_instruction>
</phase>
</execution_rules>
<execution_rules> <phase name="1-discovery" type="parallel" critical="true" spawn="single_message"> <gate> 停。 验证并行启动要求。 必须: 在单条消息中启动4个Agent。 禁止: 顺序调用Task。 </gate> <agent_count>4</agent_count> <description>发现与分析</description> <spawn_instruction>⚠️ 在单条响应中启动所有4个Task调用</spawn_instruction> <rules> <rule>所有4个Agent通过单条消息启动,同时开始执行</rule> <rule>等待所有4个Agent完成后再进行结果聚合</rule> <rule>必须将4个部分JSON结果合并为analysis.json</rule> </rules> </phase>
<phase name="2-questions" type="single" critical="true" spawn="sequential">
    <agent_count>1</agent_count>
    <description>工程化问题生成</description>
    <spawn_instruction>单个Agent,等待执行完成</spawn_instruction>
</phase>

<phase name="3-research" type="parallel" critical="true" spawn="single_message">
    <gate>
    **停。** 验证并行启动要求。
    **必须:** 在单条消息中启动N个研究Agent。
    **禁止:** 顺序调用Task。
    </gate>
    <agent_count_logic>
        <case condition="questions &lt; 10">1 agent</case>
        <case condition="questions &gt;= 10">Ceil(questions / 15)</case>
    </agent_count_logic>
    <description>证据收集</description>
    <spawn_instruction>⚠️ 在单条响应中启动所有研究Agent的Task调用</spawn_instruction>
    <rules>
        <rule>在启动前将问题拆分为多个批次</rule>
        <rule>所有研究Agent同时开始执行</rule>
        <rule>将研究结果聚合为research.json</rule>
    </rules>
</phase>

<phase name="4-orchestrator" type="single" critical="true" spawn="sequential">
    <agent_count>1</agent_count>
    <description>编排与任务分配</description>
    <spawn_instruction>单个Agent,等待执行完成</spawn_instruction>
    <rules>
        <rule>为撰写者分配专属的文件所有权</rule>
        <rule>将研究结果分发给对应的撰写者</rule>
    </rules>
</phase>

<phase name="5-writers" type="dynamic_parallel" critical="false" spawn="single_message">
    <gate>
    **停。** 验证并行启动要求。
    **必须:** 在单条消息中启动所有撰写Agent。
    **禁止:** 顺序调用Task。
    </gate>
    <agent_count_logic>
        <case condition="questions &lt; 20">1 agent</case>
        <case condition="questions 20-99">2-4 agents</case>
        <case condition="questions &gt;= 100">4-8 agents</case>
    </agent_count_logic>
    <spawn_instruction>⚠️ 在单条响应中启动所有撰写Agent的Task调用</spawn_instruction>
    <rules>
        <rule>每个撰写者拥有专属文件 - 无冲突可能</rule>
        <rule>所有撰写Agent通过单条消息启动,同时开始执行</rule>
        <rule>以提供的research.json为主要数据源</rule>
    </rules>
</phase>

<phase name="6-qa" type="single" critical="false" spawn="sequential">
    <agent_count>1</agent_count>
    <description>质量验证</description>
    <spawn_instruction>单个Agent,等待执行完成</spawn_instruction>
</phase>
</execution_rules>
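Phase 3's agent-count rule above (1 researcher under 10 questions, otherwise `Ceil(questions / 15)`) pairs with the "split questions into batches BEFORE spawning" rule. A minimal sketch of both steps together — `planResearchers` is an illustrative name, not part of the command spec:

```javascript
// Sketch only: sizes the researcher pool per the Phase 3 rules above and
// pre-splits the question list into batches before any agents are spawned.
function planResearchers(questions) {
  const count = questions.length < 10 ? 1 : Math.ceil(questions.length / 15);
  const batchSize = Math.ceil(questions.length / count);
  const batches = [];
  for (let i = 0; i < questions.length; i += batchSize) {
    batches.push(questions.slice(i, i + batchSize));
  }
  return { count, batches };
}
```

For example, 32 questions yield `Ceil(32 / 15) = 3` researchers, each handling a batch of at most 11 questions.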

Pre-Flight Checks

预启动检查

<pre_flight_gate> HALT. Complete these requirements before proceeding:
<pre_flight_gate> 停。在继续前请完成以下要求:

Required Checks

必要检查项

  1. Verify Path Existence
    • IF `repository_path` missing → THEN ERROR & EXIT
  2. Verify Directory Status
    • IF not a directory → THEN ERROR & EXIT
  3. Source Code Check
    • IF < 3 source files → THEN WARN & Ask User (Exit if no)
  4. Build Directory Check
    • IF path contains `node_modules` or `dist` → THEN ERROR & EXIT
  5. Size Estimation
    • IF > 200k LOC → THEN WARN & Ask User (Exit if no)
FORBIDDEN until gate passes:
  • Any agent spawning
  • Workspace initialization </pre_flight_gate>
<instruction> Before starting, validate the repository path and check for edge cases.
  1. Verify Path Existence
    • Ensure `repository_path` exists.
    • If not, raise an ERROR: "Repository path does not exist: " + path and EXIT.
  2. Verify Directory Status
    • Confirm `repository_path` is a directory.
    • If not, raise an ERROR: "Path is not a directory: " + path and EXIT.
  3. Source Code Check
    • Count files ending in `.ts`, `.js`, `.py`, `.go`, or `.rs`.
    • Exclude directories: `node_modules`, `.git`, `dist`, `build`.
    • If fewer than 3 source files are found:
      • WARN: "Very few source files detected ({count}). This may not be a code repository."
      • Ask user: "Continue anyway? [y/N]"
      • If not confirmed, EXIT.
  4. Build Directory Check
    • Ensure the path does not contain `node_modules`, `dist`, or `build`.
    • If it does, raise an ERROR: "Repository path appears to be a build directory. Please specify the project root." and EXIT.
  5. Size Estimation
    • Estimate the repository size.
    • If larger than 200,000 LOC:
      • WARN: "Large repository detected (~{size} LOC)."
      • Ask user: "Continue anyway? [y/N]"
      • If not confirmed, EXIT.
</instruction>
  1. 验证路径存在性
    • 如果 `repository_path` 缺失 → 报错并退出
  2. 验证目录状态
    • 如果不是目录 → 报错并退出
  3. 源代码检查
    • 如果源代码文件数量 < 3 → 发出警告并询问用户(若用户选择否则退出)
  4. 构建目录检查
    • 如果目录包含 `node_modules` 或 `dist` → 报错并退出
  5. 大小估算
    • 如果代码行数 > 200k → 发出警告并询问用户(若用户选择否则退出)
在通过预启动检查前禁止:
  • 启动任何Agent
  • 初始化工作区 </pre_flight_gate>
<instruction> 在开始前,验证仓库路径并检查边缘情况。
  1. 验证路径存在性
    • 确保 `repository_path` 存在。
    • 若不存在,抛出错误:"仓库路径不存在:" + 路径 并退出。
  2. 验证目录状态
    • 确认 `repository_path` 是目录。
    • 若不是,抛出错误:"路径不是目录:" + 路径 并退出。
  3. 源代码检查
    • 统计后缀为 `.ts`、`.js`、`.py`、`.go`、`.rs` 的文件数量。
    • 排除目录:`node_modules`、`.git`、`dist`、`build`。
    • 如果发现的源代码文件少于3个:
      • 警告:"检测到极少的源代码文件({count}个)。这可能不是代码仓库。"
      • 询问用户:"是否继续?[y/N]"
      • 若未得到确认,退出。
  4. 构建目录检查
    • 确保路径不包含 `node_modules`、`dist`、`build`。
    • 若包含,抛出错误:"仓库路径似乎是构建目录。请指定项目根目录。" 并退出。
  5. 大小估算
    • 估算仓库大小。
    • 如果代码行数超过200,000行:
      • 警告:"检测到大仓库(约{size}行代码)。"
      • 询问用户:"是否继续?[y/N]"
      • 若未得到确认,退出。
</instruction>

Initialize Workspace

初始化工作区

<init_gate> STOP. Verify state before initialization.
<init_gate> 停。在初始化前验证状态。

Required Actions

必要操作

  1. Define Directories (`CONTEXT_DIR`, `DOC_DIR`)
  2. Handle Existing State
    • IF `state.json` exists → THEN Prompt User to Resume
    • IF User says NO → THEN Reset state
  3. Create Directories
  4. Initialize New State (if not resuming)
FORBIDDEN:
  • Starting Phase 1 before state is initialized. </init_gate>
<instruction>
  1. 定义目录(`CONTEXT_DIR`、`DOC_DIR`)
  2. 处理现有状态
    • 如果 `state.json` 存在 → 提示用户是否恢复
    • 如果用户选择否 → 重置状态
  3. 创建目录
  4. 初始化新状态(如果不恢复)
禁止:
  • 在状态初始化前启动阶段1。 </init_gate>
<instruction>

Workspace Initialization

工作区初始化

Before starting the pipeline, set up the working environment and handle any existing state.
  1. Define Directories
    • Context Directory (
      CONTEXT_DIR
      ):
      ${REPOSITORY_PATH}/.context
    • Documentation Directory (
      DOC_DIR
      ):
      ${REPOSITORY_PATH}/documentation
  2. Handle Existing State
    • Check if
      ${CONTEXT_DIR}/state.json
      exists.
    • If it exists and the phase is NOT "complete" or "failed":
      • Prompt User: "Found existing documentation generation in progress (phase: [PHASE]). Resume from last checkpoint? [Y/n]"
      • If User Confirms (Yes):
        • Set
          RESUME_MODE = true
        • Set
          START_PHASE
          from the saved state.
      • If User Declines (No):
        • WARN: "Restarting from beginning. Previous progress will be overwritten."
        • Set
          RESUME_MODE = false
        • Set
          START_PHASE = "initialized"
    • If
      state.json
      does not exist or previous run finished/failed, start fresh (
      RESUME_MODE = false
      ).
  3. Create Directories
    • Ensure
      CONTEXT_DIR
      exists (create if missing).
    • Ensure
      DOC_DIR
      exists (create if missing).
  4. Initialize New State (If NOT Resuming)
    • Create a new
      state.json
      using the schema defined in
      schemas/state-schema.json
      .
</instruction>
在启动流水线前,设置工作环境并处理任何现有状态。
  1. 定义目录
    • 上下文目录(`CONTEXT_DIR`):`${REPOSITORY_PATH}/.context`
    • 文档目录(`DOC_DIR`):`${REPOSITORY_PATH}/documentation`
  2. 处理现有状态
    • 检查 `${CONTEXT_DIR}/state.json` 是否存在。
    • 如果存在且当前阶段不是"complete"或"failed":
      • 提示用户:"发现正在进行的文档生成任务(阶段:[PHASE])。是否从上次检查点恢复?[Y/n]"
      • 如果用户确认(是):
        • 设置 `RESUME_MODE = true`
        • 从保存的状态中设置 `START_PHASE`
      • 如果用户拒绝(否):
        • 警告:"将从头开始。之前的进度将被覆盖。"
        • 设置 `RESUME_MODE = false`
        • 设置 `START_PHASE = "initialized"`
    • 如果 `state.json` 不存在或之前的任务已完成/失败,则从头开始(`RESUME_MODE = false`)。
  3. 创建目录
    • 确保 `CONTEXT_DIR` 存在(若不存在则创建)。
    • 确保 `DOC_DIR` 存在(若不存在则创建)。
  4. 初始化新状态(如果不恢复)
    • 使用 `schemas/state-schema.json` 中定义的 schema 创建新的 `state.json`。
</instruction>

Progress Tracker

进度跟踪器

Display real-time progress:
📊 Documentation Generation Progress v3.1
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Repository: {REPOSITORY_PATH}
Mode: {RESUME_MODE ? "Resume" : "New"}

{if RESUME_MODE}
Resuming from: {START_PHASE}
{end}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
显示实时进度:
📊 文档生成进度 v3.1
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

仓库:{REPOSITORY_PATH}
模式:{RESUME_MODE ? "恢复" : "新建"}

{if RESUME_MODE}
从以下阶段恢复:{START_PHASE}
{end}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Agent Pipeline Execution

Agent流水线执行

Phase 1: Discovery+Analysis Agent

阶段1:发现与分析Agent

<phase_1_gate> GATE: START Phase 1 REQUIRED: Spawn 4 agents in ONE message. FORBIDDEN: Sequential calls. </phase_1_gate>
Agent Spec: `references/agent-discovery-analysis.md`
Task Config: `schemas/discovery-tasks.json`

| Property | Value |
| --- | --- |
| Parallel Agents | 4 (1a-language, 1b-components, 1c-dependencies, 1d-flows-apis) |
| Critical | Yes |
| Output | `.context/analysis.json` |

See `references/agent-discovery-analysis.md`, Orchestrator Execution Logic section, for the full implementation.
<phase_1_gate> 检查点:启动阶段1 必须:单条消息中启动4个Agent。 禁止: 顺序调用。 </phase_1_gate>
Agent规格:`references/agent-discovery-analysis.md`
任务配置:`schemas/discovery-tasks.json`

| 属性 | 值 |
| --- | --- |
| 并行Agent数量 | 4个(1a-language、1b-components、1c-dependencies、1d-flows-apis) |
| 关键性 | 是 |
| 输出 | `.context/analysis.json` |

详见 `references/agent-discovery-analysis.md` 的「编排器执行逻辑」章节获取完整实现细节。

Phase 2: Engineer Questions Agent

阶段2:工程化问题生成Agent

Agent Spec: `references/agent-engineer-questions.md`

| Property | Value |
| --- | --- |
| Agent Type | Single (Opus) |
| Critical | Yes |
| Input | `.context/analysis.json` |
| Output | `.context/questions.json` |

See `references/agent-engineer-questions.md`, Orchestrator Execution Logic section, for the full implementation.
Agent规格:`references/agent-engineer-questions.md`

| 属性 | 值 |
| --- | --- |
| Agent类型 | 单个(Opus) |
| 关键性 | 是 |
| 输入 | `.context/analysis.json` |
| 输出 | `.context/questions.json` |

详见 `references/agent-engineer-questions.md` 的「编排器执行逻辑」章节获取完整实现细节。

Phase 3: Research Agent 🆕

阶段3:研究Agent 🆕

<phase_3_gate> GATE: START Phase 3 REQUIRED: Spawn N agents in ONE message. FORBIDDEN: Sequential calls. </phase_3_gate>
Agent Spec: `references/agent-researcher.md`

| Property | Value |
| --- | --- |
| Agent Type | Parallel (Sonnet) |
| Critical | Yes |
| Input | `.context/questions.json` |
| Output | `.context/research.json` |

See `references/agent-researcher.md`, Orchestrator Execution Logic section, for the full implementation.
<phase_3_gate> 检查点:启动阶段3 必须:单条消息中启动N个Agent。 禁止: 顺序调用。 </phase_3_gate>
Agent规格:`references/agent-researcher.md`

| 属性 | 值 |
| --- | --- |
| Agent类型 | 并行(Sonnet) |
| 关键性 | 是 |
| 输入 | `.context/questions.json` |
| 输出 | `.context/research.json` |

详见 `references/agent-researcher.md` 的「编排器执行逻辑」章节获取完整实现细节。

Phase 4: Orchestrator Agent

阶段4:编排器Agent

Agent Spec: `references/agent-orchestrator.md`

| Property | Value |
| --- | --- |
| Agent Type | Single (Opus) |
| Critical | Yes |
| Input | `.context/analysis.json`, `.context/questions.json`, `.context/research.json` |
| Output | `.context/work-assignments.json` |

See `references/agent-orchestrator.md`, Orchestrator Execution Logic section, for the full implementation.
Agent规格:`references/agent-orchestrator.md`

| 属性 | 值 |
| --- | --- |
| Agent类型 | 单个(Opus) |
| 关键性 | 是 |
| 输入 | `.context/analysis.json`、`.context/questions.json`、`.context/research.json` |
| 输出 | `.context/work-assignments.json` |

详见 `references/agent-orchestrator.md` 的「编排器执行逻辑」章节获取完整实现细节。
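The orchestrator's core move — grouping questions by target file, then handing whole files to writers so ownership stays exclusive — can be sketched as follows. The `{ id, targetFile }` question shape and the round-robin distribution are assumptions of this sketch, not taken from the schemas.

```javascript
// Illustrative Phase 4 grouping: bucket questions by the documentation file
// they target, then assign each bucket to exactly one writer (no shared files).
function buildAssignments(questions, writerCount) {
  const byFile = new Map();
  for (const q of questions) {
    if (!byFile.has(q.targetFile)) byFile.set(q.targetFile, []);
    byFile.get(q.targetFile).push(q.id);
  }
  // Round-robin whole files across writers so ownership stays exclusive.
  const writers = Array.from({ length: writerCount }, () => ({ files: {} }));
  let i = 0;
  for (const [file, ids] of byFile) {
    writers[i % writerCount].files[file] = ids;
    i += 1;
  }
  return writers;
}
```

Because distribution happens at file granularity, no two writers can ever touch the same output file — which is what makes Phase 5 conflict-free.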

Phase 5: Documentation Writers

阶段5:文档撰写者

<phase_5_gate> GATE: START Phase 5 REQUIRED: Spawn all writers in ONE message. FORBIDDEN: Sequential calls. </phase_5_gate>
Agent Spec: `references/agent-documentation-writer.md`

| Property | Value |
| --- | --- |
| Agent Type | Parallel (1-8 Sonnet writers) |
| Primary Writer | Writer 1 (Critical) |
| Non-Primary | Partial failure allowed |
| Retry Logic | Up to 2 retries per failed writer |
| Input | `.context/analysis.json`, `.context/research.json`, `.context/work-assignments.json` |
| Output | `documentation/*.md` (16 core, 5 required + supplementary) |
| File Ownership | Exclusive (no conflicts) |
<phase_5_gate> 检查点:启动阶段5 必须:单条消息中启动所有撰写Agent。 禁止: 顺序调用。 </phase_5_gate>
Agent规格:`references/agent-documentation-writer.md`

| 属性 | 值 |
| --- | --- |
| Agent类型 | 并行(1-8个Sonnet撰写Agent) |
| 主撰写者 | 撰写者1(关键) |
| 非主撰写者 | 允许部分失败 |
| 重试逻辑 | 每个失败的撰写者最多重试2次 |
| 输入 | `.context/analysis.json`、`.context/research.json`、`.context/work-assignments.json` |
| 输出 | `documentation/*.md`(16份核心文档,含5份必填文档及补充文件) |
| 文件所有权 | 专属(无冲突) |

Writer Scaling Strategy

撰写Agent扩容策略

| Strategy | Agent Count | When Used |
| --- | --- | --- |
| `sequential` | 1 | < 20 questions |
| `parallel-core` | 2-4 | 20-99 questions |
| `parallel-all` | 4-8 | >= 100 questions |

See `references/agent-documentation-writer.md`, Orchestrator Execution Logic section, for the full implementation.
| 策略 | Agent数量 | 使用场景 |
| --- | --- | --- |
| `sequential` | 1 | 问题数量 < 20 |
| `parallel-core` | 2-4 | 问题数量 20-99 |
| `parallel-all` | 4-8 | 问题数量 >= 100 |

详见 `references/agent-documentation-writer.md` 的「编排器执行逻辑」章节获取完整实现细节。
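The scaling table above fixes the strategy bands but leaves the exact writer count inside the 2-4 and 4-8 ranges open. One way to sketch it — the `questionCount / 25` heuristic for picking a point inside each range is an assumption of this sketch; the real orchestrator may choose differently:

```javascript
// Sketch of the writer-scaling table: strategy bands are from the spec,
// the per-25-questions sizing inside each band is an illustrative heuristic.
function writerStrategy(questionCount) {
  if (questionCount < 20) return { strategy: "sequential", writers: 1 };
  if (questionCount < 100) {
    return { strategy: "parallel-core", writers: Math.min(4, Math.max(2, Math.ceil(questionCount / 25))) };
  }
  return { strategy: "parallel-all", writers: Math.min(8, Math.max(4, Math.ceil(questionCount / 25))) };
}
```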

Phase 6: QA Validator

阶段6:QA验证器

Agent Spec: `references/agent-qa-validator.md`

| Property | Value |
| --- | --- |
| Agent Type | Single (Sonnet) |
| Critical | No (failure produces warning) |
| Input | `.context/analysis.json`, `.context/questions.json`, `documentation/*.md` |
| Output | `.context/qa-results.json`, `documentation/QA-SUMMARY.md` |
| Score Range | 0-100 |
| Quality Ratings | `excellent` (≥90), `good` (≥75), `fair` (≥60), `needs-improvement` (<60) |

See `references/agent-qa-validator.md`, Orchestrator Execution Logic section, for the full implementation.
Agent规格:`references/agent-qa-validator.md`

| 属性 | 值 |
| --- | --- |
| Agent类型 | 单个(Sonnet) |
| 关键性 | 否(失败仅产生警告) |
| 输入 | `.context/analysis.json`、`.context/questions.json`、`documentation/*.md` |
| 输出 | `.context/qa-results.json`、`documentation/QA-SUMMARY.md` |
| 评分范围 | 0-100 |
| 质量评级 | `excellent`(≥90)、`good`(≥75)、`fair`(≥60)、`needs-improvement`(<60) |

详见 `references/agent-qa-validator.md` 的「编排器执行逻辑」章节获取完整实现细节。
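The rating thresholds above map directly to a small helper, shown here as a sketch:

```javascript
// Maps the 0-100 QA score to the quality ratings listed above.
function qualityRating(score) {
  if (score >= 90) return "excellent";
  if (score >= 75) return "good";
  if (score >= 60) return "fair";
  return "needs-improvement";
}
```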

Completion

完成

```javascript
update_state({
  phase: "complete",
  completed_at: new Date().toISOString(),
  current_agent: null
})

DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
DISPLAY: "✅ Documentation Complete!"
DISPLAY: ""
DISPLAY: "📁 Location: {DOC_DIR}/"
DISPLAY: "📊 QA Report: {DOC_DIR}/QA-SUMMARY.md"
DISPLAY: ""

if (parsed_qa && parsed_qa.overall_score):
  DISPLAY: "Quality Score: {parsed_qa.overall_score}/100 ({parsed_qa.quality_rating})"

  if (parsed_qa.overall_score >= 90):
    DISPLAY: "Status: Excellent ✅ - Ready for release"
  else if (parsed_qa.overall_score >= 75):
    DISPLAY: "Status: Good ✅ - Minor improvements recommended"
  else if (parsed_qa.overall_score >= 60):
    DISPLAY: "Status: Fair ⚠️ - Address gaps before release"
  else:
    DISPLAY: "Status: Needs Work ⚠️ - Major improvements required"

  if (parsed_qa.gaps && parsed_qa.gaps.length > 0):
    DISPLAY: ""
    DISPLAY: "Next Steps:"
    for (i = 0; i < Math.min(3, parsed_qa.gaps.length); i++):
      gap = parsed_qa.gaps[i]
      DISPLAY: "  {i+1}. {gap.fix}"

DISPLAY: ""
DISPLAY: "📝 Documentation Coverage:"
DISPLAY: "   {parsed_questions.summary.total_questions} questions researched"
DISPLAY: "   {parsed_qa.question_coverage.answered} questions answered in docs"
DISPLAY: ""
DISPLAY: "View documentation: {DOC_DIR}/index.md"
DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

EXIT code 0
```
```javascript
update_state({
  phase: "complete",
  completed_at: new Date().toISOString(),
  current_agent: null
})

DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
DISPLAY: "✅ 文档生成完成!"
DISPLAY: ""
DISPLAY: "📁 位置:{DOC_DIR}/"
DISPLAY: "📊 QA报告:{DOC_DIR}/QA-SUMMARY.md"
DISPLAY: ""

if (parsed_qa && parsed_qa.overall_score):
  DISPLAY: "质量评分:{parsed_qa.overall_score}/100({parsed_qa.quality_rating})"

  if (parsed_qa.overall_score >= 90):
    DISPLAY: "状态:优秀 ✅ - 可发布"
  else if (parsed_qa.overall_score >= 75):
    DISPLAY: "状态:良好 ✅ - 建议进行小幅优化"
  else if (parsed_qa.overall_score >= 60):
    DISPLAY: "状态:一般 ⚠️ - 发布前需填补空白"
  else:
    DISPLAY: "状态:需要改进 ⚠️ - 需进行重大优化"

  if (parsed_qa.gaps && parsed_qa.gaps.length > 0):
    DISPLAY: ""
    DISPLAY: "下一步:"
    for (i = 0; i < Math.min(3, parsed_qa.gaps.length); i++):
      gap = parsed_qa.gaps[i]
      DISPLAY: "  {i+1}. {gap.fix}"

DISPLAY: ""
DISPLAY: "📝 文档覆盖范围:"
DISPLAY: "   研究的问题总数:{parsed_questions.summary.total_questions}个"
DISPLAY: "   文档中已解答的问题:{parsed_qa.question_coverage.answered}个"
DISPLAY: ""
DISPLAY: "查看文档:{DOC_DIR}/index.md"
DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

EXIT code 0
```

Error Recovery

错误恢复

If any agent fails critically:
```javascript
function handle_critical_failure(phase, error):
  DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  DISPLAY: "❌ Documentation Generation Failed"
  DISPLAY: ""
  DISPLAY: "Phase: {phase}"
  DISPLAY: "Error: {error.message}"
  DISPLAY: ""

  if (error.recoverable):
    DISPLAY: "This error is recoverable. Run /octocode-documentaion-writer again to resume."
    DISPLAY: "State saved in: {CONTEXT_DIR}/state.json"
  else:
    DISPLAY: "This error is not recoverable. Please check the error and try again."
    DISPLAY: "You may need to fix the issue before retrying."

  DISPLAY: ""
  DISPLAY: "Logs: {CONTEXT_DIR}/state.json"
  DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

  EXIT code 1
```
如果任何Agent发生严重失败:
```javascript
function handle_critical_failure(phase, error):
  DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  DISPLAY: "❌ 文档生成失败"
  DISPLAY: ""
  DISPLAY: "阶段:{phase}"
  DISPLAY: "错误:{error.message}"
  DISPLAY: ""

  if (error.recoverable):
    DISPLAY: "该错误可恢复。再次运行/octocode-documentaion-writer即可恢复。"
    DISPLAY: "状态已保存至:{CONTEXT_DIR}/state.json"
  else:
    DISPLAY: "该错误无法恢复。请检查错误后重试。"
    DISPLAY: "你可能需要先修复问题再重试。"

  DISPLAY: ""
  DISPLAY: "日志:{CONTEXT_DIR}/state.json"
  DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

  EXIT code 1
```

Helper Functions

辅助函数

IMPORTANT: State Synchronization. Only the main orchestrator process should update `state.json`. Individual parallel agents (Discovery 1A-1D, Researchers, Writers) must NOT directly modify `state.json`, to avoid race conditions. Parallel agents should only write to their designated partial result files in `partials/<phase>/<task_id>.json`. The orchestrator aggregates these results and updates `state.json` after all parallel agents complete.
javascript
// NOTE: This function should ONLY be called by the main orchestrator process,
// never by parallel sub-agents. Parallel agents use save_partial_result() instead.
function update_state(updates):
  current_state = Read(CONTEXT_DIR + "/state.json")
  parsed = JSON.parse(current_state)

  for key, value in updates:
    parsed[key] = value

  Write(CONTEXT_DIR + "/state.json", JSON.stringify(parsed, null, 2))

function estimate_repo_size(path):
  // Quick estimate: count source files
  files = count_files(path, ["*.ts", "*.js", "*.py", "*.go", "*.rs", "*.java"], excludeDir=["node_modules", ".git", "dist", "build"])
  // Assume ~200 LOC per file average
  return files * 200

function count_files(path, patterns, excludeDir):
  // Use localFindFiles MCP tool (mcp__octocode__localFindFiles)
  // Return count of matching files
重要提示:状态同步。只有主编排器进程可以更新 `state.json`。单个并行Agent(发现阶段1A-1D、研究Agent、撰写Agent)不得直接修改 `state.json`,以避免竞争条件。并行Agent应仅将结果写入 `partials/<phase>/<task_id>.json` 中的指定部分结果文件。编排器会在所有并行Agent完成后聚合这些结果并更新 `state.json`。
javascript
// 注意:此函数仅应由主编排器进程调用,
// 绝不能由并行子Agent调用。并行Agent应使用save_partial_result()替代。
function update_state(updates):
  current_state = Read(CONTEXT_DIR + "/state.json")
  parsed = JSON.parse(current_state)

  for key, value in updates:
    parsed[key] = value

  Write(CONTEXT_DIR + "/state.json", JSON.stringify(parsed, null, 2))

function estimate_repo_size(path):
  // 快速估算:统计源代码文件数量
  files = count_files(path, ["*.ts", "*.js", "*.py", "*.go", "*.rs", "*.java"], excludeDir=["node_modules", ".git", "dist", "build"])
  // 假设每个文件平均约200行代码
  return files * 200

function count_files(path, patterns, excludeDir):
  // 使用localFindFiles MCP工具(mcp__octocode__localFindFiles)
  // 返回匹配文件的数量

Retry & Data Preservation Logic

重试与数据保留逻辑

CRITICAL: Never lose partial work. All agents support retry with state preservation.
javascript
const RETRY_CONFIG = {
  discovery_analysis: { max_attempts: 3, backoff_ms: 2000 },
  engineer_questions: { max_attempts: 3, backoff_ms: 2000 },
  research:           { max_attempts: 3, backoff_ms: 3000 },
  orchestrator:       { max_attempts: 3, backoff_ms: 2000 },
  documentation:      { max_attempts: 3, backoff_ms: 5000 },  // per writer
  qa:                 { max_attempts: 2, backoff_ms: 1000 }
}

// === RETRY WRAPPER FOR ALL AGENTS ===
function retry_agent(phase_name, agent_fn, options = {}):
  config = RETRY_CONFIG[phase_name]
  state = get_retry_state(phase_name)

  while (state.attempts < config.max_attempts):
    state.attempts++
    update_retry_state(phase_name, state)

    DISPLAY: `${phase_name} attempt ${state.attempts}/${config.max_attempts}`

    try:
      result = agent_fn(options)

      // Success - clear retry state
      clear_retry_state(phase_name)
      return { success: true, result }

    catch (error):
      state.last_error = error.message
      update_retry_state(phase_name, state)

      DISPLAY: `⚠️ ${phase_name} failed: ${error.message}`

      if (state.attempts < config.max_attempts):
        DISPLAY: `   Retrying in ${config.backoff_ms}ms...`
        sleep(config.backoff_ms * state.attempts)  // Exponential backoff
      else:
        DISPLAY: `${phase_name} exhausted all ${config.max_attempts} attempts`
        return { success: false, error, attempts: state.attempts }

  return { success: false, error: state.last_error, attempts: state.attempts }

// === PARALLEL AGENT RETRY (for Discovery, Research, Writers) ===
function retry_parallel_agents(phase_name, agent_tasks, options = {}):
  config = RETRY_CONFIG[phase_name]
  results = {}
  failed_tasks = []

  // First attempt - run all in parallel
  parallel_results = Task_Parallel(agent_tasks)

  for (task_id, result) in parallel_results:
    if (result.success):
      results[task_id] = result
      save_partial_result(phase_name, task_id, result)
    else:
      failed_tasks.push({ id: task_id, task: agent_tasks[task_id], attempts: 1 })

  // Retry failed tasks individually
  for failed in failed_tasks:
    while (failed.attempts < config.max_attempts):
      failed.attempts++
      DISPLAY: `⟳ Retrying ${phase_name}/${failed.id} (attempt ${failed.attempts}/${config.max_attempts})`

      try:
        result = Task(failed.task)
        if (result.success):
          results[failed.id] = result
          save_partial_result(phase_name, failed.id, result)
          break
      catch (error):
        DISPLAY: `⚠️ ${phase_name}/${failed.id} failed: ${error.message}`
        if (failed.attempts < config.max_attempts):
          sleep(config.backoff_ms * failed.attempts)

    if (failed.attempts >= config.max_attempts && !results[failed.id]):
      DISPLAY: `${phase_name}/${failed.id} failed after ${config.max_attempts} attempts`
      // Load any partial result saved during attempts
      results[failed.id] = load_partial_result(phase_name, failed.id) || { success: false, partial: true }

  return results

// === PARTIAL RESULT PRESERVATION ===
// Uses atomic writes to prevent corruption from concurrent access
function save_partial_result(phase_name, task_id, result):
  partial_dir = CONTEXT_DIR + "/partials/" + phase_name
  mkdir_p(partial_dir)

  target_path = partial_dir + "/" + task_id + ".json"
  temp_path = partial_dir + "/" + task_id + ".json.tmp." + random_uuid()

  // Atomic write: write to temp file, then rename (rename is atomic on POSIX)
  Write(temp_path, JSON.stringify(result))
  rename(temp_path, target_path)  // Atomic operation

function load_partial_result(phase_name, task_id):
  path = CONTEXT_DIR + "/partials/" + phase_name + "/" + task_id + ".json"
  if (exists(path)):
    return JSON.parse(Read(path))
  return null

function load_all_partial_results(phase_name):
  partial_dir = CONTEXT_DIR + "/partials/" + phase_name
  if (!exists(partial_dir)):
    return {}
  files = list_files(partial_dir, "*.json")
  results = {}
  for file in files:
    task_id = file.replace(".json", "")
    results[task_id] = JSON.parse(Read(partial_dir + "/" + file))
  return results

// === RETRY STATE MANAGEMENT ===
function get_retry_state(phase_name):
  state = Read(CONTEXT_DIR + "/state.json")
  parsed = JSON.parse(state)
  return parsed.retry_state?.[phase_name] || { attempts: 0 }

function update_retry_state(phase_name, retry_state):
  // Read current state first: current_state is not otherwise in scope here
  current_state = JSON.parse(Read(CONTEXT_DIR + "/state.json"))
  update_state({
    retry_state: {
      ...current_state.retry_state,
      [phase_name]: retry_state
    }
  })

function clear_retry_state(phase_name):
  state = JSON.parse(Read(CONTEXT_DIR + "/state.json"))
  if (state.retry_state):
    delete state.retry_state[phase_name]
    Write(CONTEXT_DIR + "/state.json", JSON.stringify(state, null, 2))
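The `retry_agent()` pattern above can be condensed into a small runnable sketch. This illustration keeps the retry state in memory instead of `state.json`, and uses the same linearly growing backoff (`backoff_ms * attempt`); the function names are illustrative:

```javascript
// Minimal runnable sketch of retry_agent(): retry with growing backoff,
// in-memory retry state instead of state.json persistence.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function retryAgent(phaseName, agentFn, { maxAttempts = 3, backoffMs = 10 } = {}) {
  const state = { attempts: 0, lastError: null };
  while (state.attempts < maxAttempts) {
    state.attempts++;
    try {
      const result = await agentFn();
      // Success: in the full pipeline this is where clear_retry_state() runs
      return { success: true, result, attempts: state.attempts };
    } catch (error) {
      state.lastError = error.message;
      if (state.attempts < maxAttempts) {
        await sleep(backoffMs * state.attempts); // delay grows with each attempt
      }
    }
  }
  return { success: false, error: state.lastError, attempts: state.attempts };
}
```

A transient failure that clears up within the attempt budget is retried to success; a persistent one exhausts the budget and surfaces the last error.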
重要提示: 绝不能丢失部分工作成果。所有Agent都支持带状态保留的重试机制。
javascript
const RETRY_CONFIG = {
  discovery_analysis: { max_attempts: 3, backoff_ms: 2000 },
  engineer_questions: { max_attempts: 3, backoff_ms: 2000 },
  research:           { max_attempts: 3, backoff_ms: 3000 },
  orchestrator:       { max_attempts: 3, backoff_ms: 2000 },
  documentation:      { max_attempts: 3, backoff_ms: 5000 },  // 每个撰写者
  qa:                 { max_attempts: 2, backoff_ms: 1000 }
}

// === 所有Agent的重试包装器 ===
function retry_agent(phase_name, agent_fn, options = {}):
  config = RETRY_CONFIG[phase_name]
  state = get_retry_state(phase_name)

  while (state.attempts < config.max_attempts):
    state.attempts++
    update_retry_state(phase_name, state)

    DISPLAY: `${phase_name} 尝试次数 ${state.attempts}/${config.max_attempts}`

    try:
      result = agent_fn(options)

      // 成功 - 清除重试状态
      clear_retry_state(phase_name)
      return { success: true, result }

    catch (error):
      state.last_error = error.message
      update_retry_state(phase_name, state)

      DISPLAY: `⚠️ ${phase_name} 失败:${error.message}`

      if (state.attempts < config.max_attempts):
        DISPLAY: `   ${config.backoff_ms}ms后重试...`
        sleep(config.backoff_ms * state.attempts)  // 指数退避
      else:
        DISPLAY: `${phase_name} 已用完${config.max_attempts}次尝试机会`
        return { success: false, error, attempts: state.attempts }

  return { success: false, error: state.last_error, attempts: state.attempts }

// === 并行Agent重试(针对发现、研究、撰写阶段) ===
function retry_parallel_agents(phase_name, agent_tasks, options = {}):
  config = RETRY_CONFIG[phase_name]
  results = {}
  failed_tasks = []

  // 第一次尝试 - 并行运行所有任务
  parallel_results = Task_Parallel(agent_tasks)

  for (task_id, result) in parallel_results:
    if (result.success):
      results[task_id] = result
      save_partial_result(phase_name, task_id, result)
    else:
      failed_tasks.push({ id: task_id, task: agent_tasks[task_id], attempts: 1 })

  // 单独重试失败的任务
  for failed in failed_tasks:
    while (failed.attempts < config.max_attempts):
      failed.attempts++
      DISPLAY: `⟳ 重试 ${phase_name}/${failed.id}(尝试次数 ${failed.attempts}/${config.max_attempts})`

      try:
        result = Task(failed.task)
        if (result.success):
          results[failed.id] = result
          save_partial_result(phase_name, failed.id, result)
          break
      catch (error):
        DISPLAY: `⚠️ ${phase_name}/${failed.id} 失败:${error.message}`
        if (failed.attempts < config.max_attempts):
          sleep(config.backoff_ms * failed.attempts)

    if (failed.attempts >= config.max_attempts && !results[failed.id]):
      DISPLAY: `${phase_name}/${failed.id} 在${config.max_attempts}次尝试后失败`
      // 加载尝试过程中保存的任何部分结果
      results[failed.id] = load_partial_result(phase_name, failed.id) || { success: false, partial: true }

  return results

// === 部分结果保留 ===
// 使用原子写入防止并发访问导致的损坏
function save_partial_result(phase_name, task_id, result):
  partial_dir = CONTEXT_DIR + "/partials/" + phase_name
  mkdir_p(partial_dir)

  target_path = partial_dir + "/" + task_id + ".json"
  temp_path = partial_dir + "/" + task_id + ".json.tmp." + random_uuid()

  // 原子写入:先写入临时文件,再重命名(重命名在POSIX系统中是原子操作)
  Write(temp_path, JSON.stringify(result))
  rename(temp_path, target_path)  // 原子操作

function load_partial_result(phase_name, task_id):
  path = CONTEXT_DIR + "/partials/" + phase_name + "/" + task_id + ".json"
  if (exists(path)):
    return JSON.parse(Read(path))
  return null

function load_all_partial_results(phase_name):
  partial_dir = CONTEXT_DIR + "/partials/" + phase_name
  if (!exists(partial_dir)):
    return {}
  files = list_files(partial_dir, "*.json")
  results = {}
  for file in files:
    task_id = file.replace(".json", "")
    results[task_id] = JSON.parse(Read(partial_dir + "/" + file))
  return results

// === 重试状态管理 ===
function get_retry_state(phase_name):
  state = Read(CONTEXT_DIR + "/state.json")
  parsed = JSON.parse(state)
  return parsed.retry_state?.[phase_name] || { attempts: 0 }

function update_retry_state(phase_name, retry_state):
  // 先读取当前状态:current_state在此作用域中并无定义
  current_state = JSON.parse(Read(CONTEXT_DIR + "/state.json"))
  update_state({
    retry_state: {
      ...current_state.retry_state,
      [phase_name]: retry_state
    }
  })

function clear_retry_state(phase_name):
  state = JSON.parse(Read(CONTEXT_DIR + "/state.json"))
  if (state.retry_state):
    delete state.retry_state[phase_name]
    Write(CONTEXT_DIR + "/state.json", JSON.stringify(state, null, 2))

Phase-Specific Retry Behavior

阶段特定的重试行为

| Phase | Retry Strategy | Partial Data Preserved |
|---|---|---|
| Discovery | Retry failed sub-agents (1A-1D) individually | `partials/discovery/*.json` |
| Questions | Retry entire phase | Previous `questions.json` kept until success |
| Research | Retry failed batches only | `partials/research/batch-*.json` |
| Orchestrator | Retry entire phase | Previous `work-assignments.json` kept |
| Writers | Retry failed writers only | `partials/writers/writer-*.json` + completed files |
| QA | Retry once, then warn | `partials/qa/partial-results.json` |
| 阶段 | 重试策略 | 保留的部分数据 |
|---|---|---|
| 发现阶段 | 单独重试失败的子Agent(1A-1D) | `partials/discovery/*.json` |
| 问题生成阶段 | 重试整个阶段 | 保留之前的 `questions.json` 直到成功 |
| 研究阶段 | 仅重试失败的批次 | `partials/research/batch-*.json` |
| 编排器阶段 | 重试整个阶段 | 保留之前的 `work-assignments.json` |
| 撰写阶段 | 仅重试失败的撰写Agent | `partials/writers/writer-*.json` + 已完成的文件 |
| QA阶段 | 重试一次,然后发出警告 | `partials/qa/partial-results.json` |

Critical Data Protection Rules

关键数据保护规则

javascript
// RULE 1: Never overwrite successful output until new output is validated
function safe_write_output(path, content):
  backup_path = path + ".backup"
  if (exists(path)):
    copy(path, backup_path)

  try:
    Write(path, content)
    validate_json(path)  // Ensure valid JSON
    delete(backup_path)  // Only delete backup after validation
  catch (error):
    // Restore from backup
    if (exists(backup_path)):
      copy(backup_path, path)
    throw error

// RULE 2: Aggregate partial results even on failure
// Uses file locking to prevent race conditions during aggregation
function aggregate_with_partials(phase_name, new_results):
  lock_file = CONTEXT_DIR + "/partials/" + phase_name + "/.aggregate.lock"

  // Acquire exclusive lock before aggregation
  lock_fd = acquire_file_lock(lock_file, timeout_ms=5000)
  if (!lock_fd):
    throw new Error("Failed to acquire lock for aggregation: " + phase_name)

  try:
    existing = load_all_partial_results(phase_name)
    merged = { ...existing, ...new_results }
    return merged
  finally:
    release_file_lock(lock_fd)
    delete(lock_file)

// RULE 3: Resume-aware execution
function should_skip_task(phase_name, task_id):
  partial = load_partial_result(phase_name, task_id)
  return partial?.success === true

javascript
// 规则1:在新输出验证通过前,绝不能覆盖成功的输出
function safe_write_output(path, content):
  backup_path = path + ".backup"
  if (exists(path)):
    copy(path, backup_path)

  try:
    Write(path, content)
    validate_json(path)  // 确保是有效的JSON
    delete(backup_path)  // 仅在验证通过后删除备份
  catch (error):
    // 从备份恢复
    if (exists(backup_path)):
      copy(backup_path, path)
    throw error

// 规则2:即使失败也要聚合部分结果
// 使用文件锁防止聚合过程中的竞争条件
function aggregate_with_partials(phase_name, new_results):
  lock_file = CONTEXT_DIR + "/partials/" + phase_name + "/.aggregate.lock"

  // 在聚合前获取排他锁
  lock_fd = acquire_file_lock(lock_file, timeout_ms=5000)
  if (!lock_fd):
    throw new Error("无法获取聚合锁:" + phase_name)

  try:
    existing = load_all_partial_results(phase_name)
    merged = { ...existing, ...new_results }
    return merged
  finally:
    release_file_lock(lock_fd)
    delete(lock_file)

// 规则3:支持恢复的执行逻辑
function should_skip_task(phase_name, task_id):
  partial = load_partial_result(phase_name, task_id)
  return partial?.success === true

Key Features

核心特性

<key_features>
| # | Feature | Description |
|---|---|---|
| 1 | True Parallel Execution | Phases 1, 3, 5 spawn ALL agents in ONE message for concurrent execution |
| 2 | Single-Message Spawn | ⚠️ Critical: Multiple Task calls in one response = true parallelism |
| 3 | Evidence-Based | Research agent proves answers with code traces before writing |
| 4 | Engineer-Driven Questions | Phase 2 generates comprehensive questions |
| 5 | Conflict-Free Writing | Orchestrator assigns exclusive file ownership per writer |
| 6 | LSP-Powered | Intelligent verification with semantic analysis |
| 7 | State Recovery | Resume from any phase if interrupted |
| 8 | Unified Toolset | All agents use octocode local + LSP tools |
| 9 | Dynamic Scaling | Agent count scales based on question volume |
</key_features>
<efficiency_summary>
<key_features>
| 序号 | 特性 | 描述 |
|---|---|---|
| 1 | 真正的并行执行 | 阶段1、3、5在单条消息中启动所有Agent,实现并发执行 |
| 2 | 单消息启动 | ⚠️ 关键:单条响应中包含多个Task调用 = 真正的并行性 |
| 3 | 基于证据 | 研究Agent在撰写前通过代码追踪为答案提供证据 |
| 4 | 工程化驱动的问题生成 | 阶段2生成全面的问题 |
| 5 | 无冲突撰写 | 编排器为每个撰写者分配专属的文件所有权 |
| 6 | LSP驱动 | 基于语义分析的智能验证 |
| 7 | 状态恢复 | 中断后可从任意阶段恢复 |
| 8 | 统一工具集 | 所有Agent使用octocode本地工具 + LSP工具 |
| 9 | 动态扩容 | Agent数量基于问题数量动态调整 |
</key_features>
<efficiency_summary>

Efficiency Maximization

效率最大化

Phase 1: 4 agents × parallel = ~4x faster than sequential
Phase 3: N agents × parallel = ~Nx faster than sequential
Phase 5: M agents × parallel = ~Mx faster than sequential

Total speedup: Significant when `spawn="single_message"` is followed.
Remember: `spawn="single_message"` phases MUST have all Task calls in ONE response. </efficiency_summary>

阶段1:4个Agent × 并行 = 比顺序执行快约4倍
阶段3:N个Agent × 并行 = 比顺序执行快约N倍
阶段5:M个Agent × 并行 = 比顺序执行快约M倍

总提速:严格遵循 `spawn="single_message"` 时,提速效果显著。
请记住:`spawn="single_message"` 的阶段必须在单条响应中包含所有Task调用。 </efficiency_summary>
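The speedup figures above follow from a simple wall-clock model: running N equal-cost agents concurrently costs the time of the slowest agent rather than the sum of all of them. A back-of-envelope check, with hypothetical per-agent times:

```javascript
// Ideal parallel speedup: sequential total divided by the slowest agent's time.
function speedup(agentTimesMs) {
  const sequential = agentTimesMs.reduce((a, b) => a + b, 0);
  const parallel = Math.max(...agentTimesMs); // wall-clock time when fully parallel
  return sequential / parallel;
}
```

Four equally loaded Discovery agents give the ~4x figure; uneven workloads give less, which is why the orchestrator balances question batches across researchers and writers.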