octocode-documentaion-writer
Repository Documentation Generator
Production-ready 6-phase pipeline with intelligent orchestration, research-first validation, and conflict-free file ownership.
<what>
This command orchestrates specialized AI agents in 6 phases to analyze your code repository and generate comprehensive documentation:
</what>
<steps>
<phase_1>
**Discovery+Analysis** (Phase 1)
Agent: Opus
Parallel: 4 agents
What: Analyze language, architecture, flows, and APIs
Input: Repository path
Output: `analysis.json`
</phase_1>
<phase_2>
Engineer Questions (Phase 2)
Agent: Opus
What: Generates comprehensive questions based on the analysis
Input: analysis.json
Output: questions.json
</phase_2>
<phase_3>
Research Agent (Phase 3) 🆕
Agent: Sonnet
Parallel: Dynamic (based on question volume)
What: Deep-dive code forensics to ANSWER the questions with evidence
Input: questions.json
Output: research.json
</phase_3>
<phase_4>
Orchestrator (Phase 4)
Agent: Opus
What: Groups questions by file target and assigns exclusive file ownership to writers
Input: questions.json + research.json
Output: work-assignments.json (file-based assignments for parallel writers)
</phase_4>
<phase_5>
Documentation Writers (Phase 5)
Agent: Sonnet
Parallel: 1-8 parallel agents (dynamic based on workload)
What: Synthesize research and write comprehensive documentation with exclusive file ownership
Input: analysis.json + questions.json + research.json + work-assignments.json
Output: documentation/*.md (16 core docs, 5 required + supplementary files)
</phase_5>
<phase_6>
QA Validator (Phase 6)
Agent: Sonnet
What: Validates documentation quality using LSP-powered verification
Input: documentation/*.md + analysis.json + questions.json
Output: qa-results.json + QA-SUMMARY.md
</phase_6>
</steps>
<subagents>
Spawn explore subagents (opus/sonnet/haiku) to explore code with MCP tools (localSearchCode, lspGotoDefinition, lspCallHierarchy, lspFindReferences)
</subagents>
Documentation Flow: analysis.json → questions.json → research.json → work-assignments.json → documentation (conflict-free!)
⚠️ CRITICAL: Parallel Agent Execution
<parallel_execution_critical importance="maximum">
STOP. READ THIS TWICE.
1. THE RULE
You MUST spawn parallel agents in a SINGLE message with multiple Task tool calls.
2. FORBIDDEN BEHAVIOR
FORBIDDEN: Calling Task sequentially (one per response).
REASON: Sequential calls defeat parallelism and slow down execution by 4x-8x.
3. REQUIRED CONFIRMATION
Before launching any parallel phase (1, 3, 5), you MUST verify:
- All Task calls are prepared for a SINGLE response
- No dependencies exist between these parallel agents
- Each agent has exclusive scope (no file conflicts)
<correct_pattern title="✅ CORRECT: Single response launches all agents concurrently">
// In ONE assistant message, include ALL Task tool invocations:
Task(description="Discovery 1A-language", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1B-components", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1C-dependencies", subagent_type="general-purpose", prompt="...", model="opus")
Task(description="Discovery 1D-flows", subagent_type="general-purpose", prompt="...", model="opus")
// ↑ All 4 execute SIMULTANEOUSLY</correct_pattern>
<wrong_pattern title="❌ WRONG: Sequential calls lose parallelism">
// DON'T DO THIS - Each waits for previous to complete
Message 1: Task(description="Discovery 1A") → wait for result
Message 2: Task(description="Discovery 1B") → wait for result
Message 3: Task(description="Discovery 1C") → wait for result
Message 4: Task(description="Discovery 1D") → wait for result
// ↑ 4x slower! No parallelism achieved</wrong_pattern>
</parallel_execution_critical>
Execution Flow Diagram
mermaid
flowchart TB
Start([/octocode-documentaion-writer PATH]) --> Validate[Pre-Flight Validation]
Validate --> Init[Initialize Workspace]
Init --> P1[Phase 1: Discovery+Analysis]
subgraph P1_Parallel["🔄 RUN IN PARALLEL (4 agents)"]
P1A[Agent 1A:<br/>Language & Manifests]
P1B[Agent 1B:<br/>Components]
P1C[Agent 1C:<br/>Dependencies]
P1D[Agent 1D:<br/>Flows & APIs]
end
P1 --> P1_Parallel
P1_Parallel --> P1Agg[Aggregation:<br/>Merge into analysis.json]
P1Agg --> P1Done[✅ analysis.json created]
P1Done -->|Reads analysis.json| P2[Phase 2: Engineer Questions<br/>Single Agent - Opus]
P2 --> P2Done[✅ questions.json created]
P2Done -->|Reads questions.json| P3[Phase 3: Research 🆕<br/>Parallel Agents - Sonnet]
subgraph P3_Parallel["🔄 RUN IN PARALLEL"]
P3A[Researcher 1]
P3B[Researcher 2]
P3C[Researcher 3]
end
P3 --> P3_Parallel
P3_Parallel --> P3Agg[Aggregation:<br/>Merge into research.json]
P3Agg --> P3Done[✅ research.json created<br/>Evidence-backed answers]
P3Done -->|Reads questions + research| P4[Phase 4: Orchestrator<br/>Single Agent - Opus]
P4 --> P4Group[Group questions<br/>by file target]
P4 --> P4Assign[Assign file ownership<br/>to writers]
P4Assign --> P4Done[✅ work-assignments.json]
P4Done --> P5[Phase 5: Documentation Writers]
P5 --> P5Input[📖 Input:<br/>work-assignments.json<br/>+ research.json]
P5Input --> P5Dist[Each writer gets<br/>exclusive file ownership]
subgraph P5_Parallel["🔄 RUN IN PARALLEL (1-8 agents)"]
P5W1[Writer 1]
P5W2[Writer 2]
P5W3[Writer 3]
P5W4[Writer 4]
end
P5Dist --> P5_Parallel
P5_Parallel --> P5Verify[Verify Structure]
P5Verify --> P5Done[✅ documentation/*.md created]
P5Done --> P6[Phase 6: QA Validator<br/>Single Agent - Sonnet]
P6 --> P6Done[✅ qa-results.json +<br/>QA-SUMMARY.md]
P6Done --> Complete([✅ Documentation Complete])
style P1_Parallel fill:#e1f5ff
style P3_Parallel fill:#e1f5ff
style P5_Parallel fill:#ffe1f5
style P4 fill:#fff3cd
style Complete fill:#28a745,color:#fff
Parallel Execution Rules
<execution_rules>
<phase name="1-discovery" type="parallel" critical="true" spawn="single_message">
<gate>
STOP. Verify parallel spawn requirements.
REQUIRED: Spawn 4 agents in ONE message.
FORBIDDEN: Sequential Task calls.
</gate>
<agent_count>4</agent_count>
<description>Discovery and Analysis</description>
<spawn_instruction>⚠️ Launch ALL 4 Task calls in ONE response</spawn_instruction>
<rules>
<rule>All 4 agents start simultaneously via single-message spawn</rule>
<rule>Wait for ALL 4 to complete before aggregation</rule>
<rule>Must aggregate 4 partial JSONs into analysis.json</rule>
</rules>
</phase>
<phase name="2-questions" type="single" critical="true" spawn="sequential">
<agent_count>1</agent_count>
<description>Engineer Questions Generation</description>
<spawn_instruction>Single agent, wait for completion</spawn_instruction>
</phase>
<phase name="3-research" type="parallel" critical="true" spawn="single_message">
<gate>
**STOP.** Verify parallel spawn requirements.
**REQUIRED:** Spawn N researchers in ONE message.
**FORBIDDEN:** Sequential Task calls.
</gate>
<agent_count_logic>
<case condition="questions < 10">1 agent</case>
<case condition="questions >= 10">Ceil(questions / 15)</case>
</agent_count_logic>
<description>Evidence Gathering</description>
<spawn_instruction>⚠️ Launch ALL researcher Task calls in ONE response</spawn_instruction>
<rules>
<rule>Split questions into batches BEFORE spawning</rule>
<rule>All researchers start simultaneously</rule>
<rule>Aggregate findings into research.json</rule>
</rules>
</phase>
<phase name="4-orchestrator" type="single" critical="true" spawn="sequential">
<agent_count>1</agent_count>
<description>Orchestration and Assignment</description>
<spawn_instruction>Single agent, wait for completion</spawn_instruction>
<rules>
<rule>Assign EXCLUSIVE file ownership to writers</rule>
<rule>Distribute research findings to relevant writers</rule>
</rules>
</phase>
<phase name="5-writers" type="dynamic_parallel" critical="false" spawn="single_message">
<gate>
**STOP.** Verify parallel spawn requirements.
**REQUIRED:** Spawn all writers in ONE message.
**FORBIDDEN:** Sequential Task calls.
</gate>
<agent_count_logic>
<case condition="questions < 20">1 agent</case>
<case condition="questions 20-99">2-4 agents</case>
<case condition="questions >= 100">4-8 agents</case>
</agent_count_logic>
<spawn_instruction>⚠️ Launch ALL writer Task calls in ONE response</spawn_instruction>
<rules>
<rule>Each writer owns EXCLUSIVE files - no conflicts possible</rule>
<rule>All writers start simultaneously via single-message spawn</rule>
<rule>Use provided research.json as primary source</rule>
</rules>
</phase>
<phase name="6-qa" type="single" critical="false" spawn="sequential">
<agent_count>1</agent_count>
<description>Quality Validation</description>
<spawn_instruction>Single agent, wait for completion</spawn_instruction>
</phase></execution_rules>
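The `agent_count_logic` rules above can be sketched as a small helper. This is an illustrative sketch only; `researcherCount` is a hypothetical name, not part of the command's actual implementation.

```javascript
// Phase 3 scaling per agent_count_logic: fewer than 10 questions gets a
// single researcher; otherwise one researcher per batch of 15 questions.
function researcherCount(questions) {
  return questions < 10 ? 1 : Math.ceil(questions / 15);
}
```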
Pre-Flight Checks
<pre_flight_gate>
HALT. Complete these requirements before proceeding:
Required Checks
- Verify Path Existence
  - IF repository_path missing → THEN ERROR & EXIT
- Verify Directory Status
  - IF not a directory → THEN ERROR & EXIT
- Source Code Check
  - IF < 3 source files → THEN WARN & Ask User (Exit if no)
- Build Directory Check
  - IF contains node_modules or dist → THEN ERROR & EXIT
- Size Estimation
  - IF > 200k LOC → THEN WARN & Ask User (Exit if no)
FORBIDDEN until gate passes:
- Any agent spawning
- Workspace initialization
</pre_flight_gate>
- Verify Path Existence
  - Ensure repository_path exists.
  - If not, raise an ERROR: "Repository path does not exist: " + path and EXIT.
- Verify Directory Status
  - Confirm repository_path is a directory.
  - If not, raise an ERROR: "Path is not a directory: " + path and EXIT.
- Source Code Check
  - Count files ending in .ts, .js, .py, .go, or .rs.
  - Exclude directories: node_modules, .git, dist, build.
  - If fewer than 3 source files are found:
    - WARN: "Very few source files detected ({count}). This may not be a code repository."
    - Ask user: "Continue anyway? [y/N]"
    - If not confirmed, EXIT.
- Build Directory Check
  - Ensure the path does not contain node_modules, dist, or build.
  - If it does, raise an ERROR: "Repository path appears to be a build directory. Please specify the project root." and EXIT.
- Size Estimation
  - Estimate the repository size.
  - If larger than 200,000 LOC:
    - WARN: "Large repository detected (~{size} LOC)."
    - Ask user: "Continue anyway? [y/N]"
    - If not confirmed, EXIT.
Initialize Workspace
<init_gate>
STOP. Verify state before initialization.
Required Actions
- Define Directories (CONTEXT_DIR, DOC_DIR)
- Handle Existing State
  - IF state.json exists → THEN Prompt User to Resume
  - IF User says NO → THEN Reset state
- Create Directories
- Initialize New State (if not resuming)
FORBIDDEN:
- Starting Phase 1 before state is initialized.
</init_gate>
Workspace Initialization
Before starting the pipeline, set up the working environment and handle any existing state.
- Define Directories
  - Context Directory (CONTEXT_DIR): ${REPOSITORY_PATH}/.context
  - Documentation Directory (DOC_DIR): ${REPOSITORY_PATH}/documentation
- Handle Existing State
  - Check if ${CONTEXT_DIR}/state.json exists.
  - If it exists and the phase is NOT "complete" or "failed":
    - Prompt User: "Found existing documentation generation in progress (phase: [PHASE]). Resume from last checkpoint? [Y/n]"
    - If User Confirms (Yes):
      - Set RESUME_MODE = true
      - Set START_PHASE from the saved state.
    - If User Declines (No):
      - WARN: "Restarting from beginning. Previous progress will be overwritten."
      - Set RESUME_MODE = false
      - Set START_PHASE = "initialized"
  - If state.json does not exist or the previous run finished/failed, start fresh (RESUME_MODE = false).
- Create Directories
  - Ensure CONTEXT_DIR exists (create if missing).
  - Ensure DOC_DIR exists (create if missing).
- Initialize New State (If NOT Resuming)
  - Create a new state.json using the schema defined in schemas/state-schema.json.
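The directory layout and resume decision above can be sketched as two small helpers. Both names (`workspacePaths`, `shouldResume`) are illustrative, assuming state.json carries a "phase" field as described.

```javascript
// Derive the two working directories from the repository path.
function workspacePaths(repositoryPath) {
  return {
    contextDir: repositoryPath + "/.context",
    docDir: repositoryPath + "/documentation",
  };
}

// Resume only when a prior state exists and did not finish or fail.
function shouldResume(state) {
  return state !== null && state.phase !== "complete" && state.phase !== "failed";
}
```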
Progress Tracker
Display real-time progress:
📊 Documentation Generation Progress v3.1
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Repository: {REPOSITORY_PATH}
Mode: {RESUME_MODE ? "Resume" : "New"}
{if RESUME_MODE}
Resuming from: {START_PHASE}
{end}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Agent Pipeline Execution
Phase 1: Discovery+Analysis Agent
<phase_1_gate>
GATE: START Phase 1
REQUIRED: Spawn 4 agents in ONE message.
FORBIDDEN: Sequential calls.
</phase_1_gate>
Agent Spec: references/agent-discovery-analysis.md
Task Config: schemas/discovery-tasks.json
| Property | Value |
|---|---|
| Parallel Agents | 4 (1a-language, 1b-components, 1c-dependencies, 1d-flows-apis) |
| Critical | Yes |
| Output | analysis.json |
See references/agent-discovery-analysis.md → Orchestrator Execution Logic section for full implementation.
Phase 2: Engineer Questions Agent
Agent Spec: references/agent-engineer-questions.md
| Property | Value |
|---|---|
| Agent Type | Single (Opus) |
| Critical | Yes |
| Input | analysis.json |
| Output | questions.json |
See references/agent-engineer-questions.md → Orchestrator Execution Logic section for full implementation.
Phase 3: Research Agent 🆕
<phase_3_gate>
GATE: START Phase 3
REQUIRED: Spawn N agents in ONE message.
FORBIDDEN: Sequential calls.
</phase_3_gate>
Agent Spec: references/agent-researcher.md
| Property | Value |
|---|---|
| Agent Type | Parallel (Sonnet) |
| Critical | Yes |
| Input | questions.json |
| Output | research.json |
See references/agent-researcher.md → Orchestrator Execution Logic section for full implementation.
Phase 4: Orchestrator Agent
Agent Spec: references/agent-orchestrator.md
| Property | Value |
|---|---|
| Agent Type | Single (Opus) |
| Critical | Yes |
| Input | questions.json + research.json |
| Output | work-assignments.json |
See references/agent-orchestrator.md → Orchestrator Execution Logic section for full implementation.
Phase 5: Documentation Writers
<phase_5_gate>
GATE: START Phase 5
REQUIRED: Spawn all writers in ONE message.
FORBIDDEN: Sequential calls.
</phase_5_gate>
Agent Spec: references/agent-documentation-writer.md
| Property | Value |
|---|---|
| Agent Type | Parallel (1-8 Sonnet writers) |
| Primary Writer | Writer 1 (Critical) |
| Non-Primary | Partial failure allowed |
| Retry Logic | Up to 2 retries per failed writer |
| Input | analysis.json + questions.json + research.json + work-assignments.json |
| Output | documentation/*.md |
| File Ownership | Exclusive (no conflicts) |
Writer Scaling Strategy
| Agent Count | When Used |
|---|---|
| 1 | < 20 questions |
| 2-4 | 20-99 questions |
| 4-8 | >= 100 questions |
See references/agent-documentation-writer.md → Orchestrator Execution Logic section for full implementation.
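One plausible reading of the writer scaling table above, sketched as code. The table only specifies ranges, so the `Math.ceil(questions / 25)` formula and the `writerCount` name are assumptions for illustration, clamped to the documented bounds.

```javascript
// Scale writers with workload, clamped to the ranges in the table:
// <20 → 1 writer; 20-99 → 2-4 writers; >=100 → 4-8 writers.
function writerCount(questions) {
  if (questions < 20) return 1;
  const scaled = Math.ceil(questions / 25); // assumed growth rate
  if (questions < 100) return Math.min(4, Math.max(2, scaled));
  return Math.min(8, Math.max(4, scaled));
}
```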
Phase 6: QA Validator
Agent Spec: references/agent-qa-validator.md
| Property | Value |
|---|---|
| Agent Type | Single (Sonnet) |
| Critical | No (failure produces warning) |
| Input | documentation/*.md + analysis.json + questions.json |
| Output | qa-results.json + QA-SUMMARY.md |
| Score Range | 0-100 |
| Quality Ratings | Excellent / Good / Fair / Needs Work |
See references/agent-qa-validator.md → Orchestrator Execution Logic section for full implementation.
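The score bands that the completion summary uses to pick a rating can be sketched as a small helper; `qualityRating` is a hypothetical name, with thresholds taken from the display logic in the Completion section.

```javascript
// Map a 0-100 QA score to its rating band:
// >=90 Excellent, >=75 Good, >=60 Fair, else Needs Work.
function qualityRating(score) {
  if (score >= 90) return "Excellent";
  if (score >= 75) return "Good";
  if (score >= 60) return "Fair";
  return "Needs Work";
}
```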
Completion
javascript
update_state({
phase: "complete",
completed_at: new Date().toISOString(),
current_agent: null
})
DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
DISPLAY: "✅ Documentation Complete!"
DISPLAY: ""
DISPLAY: "📁 Location: {DOC_DIR}/"
DISPLAY: "📊 QA Report: {DOC_DIR}/QA-SUMMARY.md"
DISPLAY: ""
if (parsed_qa && parsed_qa.overall_score):
DISPLAY: "Quality Score: {parsed_qa.overall_score}/100 ({parsed_qa.quality_rating})"
if (parsed_qa.overall_score >= 90):
DISPLAY: "Status: Excellent ✅ - Ready for release"
else if (parsed_qa.overall_score >= 75):
DISPLAY: "Status: Good ✅ - Minor improvements recommended"
else if (parsed_qa.overall_score >= 60):
DISPLAY: "Status: Fair ⚠️ - Address gaps before release"
else:
DISPLAY: "Status: Needs Work ⚠️ - Major improvements required"
if (parsed_qa.gaps && parsed_qa.gaps.length > 0):
DISPLAY: ""
DISPLAY: "Next Steps:"
for (i = 0; i < Math.min(3, parsed_qa.gaps.length); i++):
gap = parsed_qa.gaps[i]
DISPLAY: " {i+1}. {gap.fix}"
DISPLAY: ""
DISPLAY: "📝 Documentation Coverage:"
DISPLAY: " {parsed_questions.summary.total_questions} questions researched"
DISPLAY: " {parsed_qa.question_coverage.answered} questions answered in docs"
DISPLAY: ""
DISPLAY: "View documentation: {DOC_DIR}/index.md"
DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
EXIT code 0
Error Recovery
If any agent fails critically:
javascript
function handle_critical_failure(phase, error):
DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
DISPLAY: "❌ Documentation Generation Failed"
DISPLAY: ""
DISPLAY: "Phase: {phase}"
DISPLAY: "Error: {error.message}"
DISPLAY: ""
if (error.recoverable):
DISPLAY: "This error is recoverable. Run /octocode-documentaion-writer again to resume."
DISPLAY: "State saved in: {CONTEXT_DIR}/state.json"
else:
DISPLAY: "This error is not recoverable. Please check the error and try again."
DISPLAY: "You may need to fix the issue before retrying."
DISPLAY: ""
DISPLAY: "Logs: {CONTEXT_DIR}/state.json"
DISPLAY: "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
EXIT code 1
Helper Functions
IMPORTANT: State Synchronization. Only the main orchestrator process should update state.json. Individual parallel agents (Discovery 1A-1D, Researchers, Writers) must NOT directly modify state.json to avoid race conditions. Parallel agents should only write to their designated partial result files in partials/<phase>/<task_id>.json. The orchestrator aggregates these results and updates state.json after all parallel agents complete.
javascript
// NOTE: This function should ONLY be called by the main orchestrator process,
// never by parallel sub-agents. Parallel agents use save_partial_result() instead.
function update_state(updates):
current_state = Read(CONTEXT_DIR + "/state.json")
parsed = JSON.parse(current_state)
for key, value in updates:
parsed[key] = value
Write(CONTEXT_DIR + "/state.json", JSON.stringify(parsed, null, 2))
function estimate_repo_size(path):
// Quick estimate: count source files
files = count_files(path, ["*.ts", "*.js", "*.py", "*.go", "*.rs", "*.java"], excludeDir=["node_modules", ".git", "dist", "build"])
// Assume ~200 LOC per file average
return files * 200
function count_files(path, patterns, excludeDir):
// Use localFindFiles MCP tool (mcp__octocode__localFindFiles)
// Return count of matching files
Retry & Data Preservation Logic
CRITICAL: Never lose partial work. All agents support retry with state preservation.
javascript
const RETRY_CONFIG = {
discovery_analysis: { max_attempts: 3, backoff_ms: 2000 },
engineer_questions: { max_attempts: 3, backoff_ms: 2000 },
research: { max_attempts: 3, backoff_ms: 3000 },
orchestrator: { max_attempts: 3, backoff_ms: 2000 },
documentation: { max_attempts: 3, backoff_ms: 5000 }, // per writer
qa: { max_attempts: 2, backoff_ms: 1000 }
}
// === RETRY WRAPPER FOR ALL AGENTS ===
function retry_agent(phase_name, agent_fn, options = {}):
config = RETRY_CONFIG[phase_name]
state = get_retry_state(phase_name)
while (state.attempts < config.max_attempts):
state.attempts++
update_retry_state(phase_name, state)
DISPLAY: `⟳ ${phase_name} attempt ${state.attempts}/${config.max_attempts}`
try:
result = agent_fn(options)
// Success - clear retry state
clear_retry_state(phase_name)
return { success: true, result }
catch (error):
state.last_error = error.message
update_retry_state(phase_name, state)
DISPLAY: `⚠️ ${phase_name} failed: ${error.message}`
if (state.attempts < config.max_attempts):
DISPLAY: ` Retrying in ${config.backoff_ms}ms...`
sleep(config.backoff_ms * state.attempts) // Exponential backoff
else:
DISPLAY: `❌ ${phase_name} exhausted all ${config.max_attempts} attempts`
return { success: false, error, attempts: state.attempts }
return { success: false, error: state.last_error, attempts: state.attempts }
// === PARALLEL AGENT RETRY (for Discovery, Research, Writers) ===
function retry_parallel_agents(phase_name, agent_tasks, options = {}):
config = RETRY_CONFIG[phase_name]
results = {}
failed_tasks = []
// First attempt - run all in parallel
parallel_results = Task_Parallel(agent_tasks)
for (task_id, result) in parallel_results:
if (result.success):
results[task_id] = result
save_partial_result(phase_name, task_id, result)
else:
failed_tasks.push({ id: task_id, task: agent_tasks[task_id], attempts: 1 })
// Retry failed tasks individually
for failed in failed_tasks:
while (failed.attempts < config.max_attempts):
failed.attempts++
DISPLAY: `⟳ Retrying ${phase_name}/${failed.id} (attempt ${failed.attempts}/${config.max_attempts})`
try:
result = Task(failed.task)
if (result.success):
results[failed.id] = result
save_partial_result(phase_name, failed.id, result)
break
catch (error):
DISPLAY: `⚠️ ${phase_name}/${failed.id} failed: ${error.message}`
if (failed.attempts < config.max_attempts):
sleep(config.backoff_ms * failed.attempts)
if (failed.attempts >= config.max_attempts && !results[failed.id]):
DISPLAY: `❌ ${phase_name}/${failed.id} failed after ${config.max_attempts} attempts`
// Load any partial result saved during attempts
results[failed.id] = load_partial_result(phase_name, failed.id) || { success: false, partial: true }
return results
// === PARTIAL RESULT PRESERVATION ===
// Uses atomic writes to prevent corruption from concurrent access
function save_partial_result(phase_name, task_id, result):
partial_dir = CONTEXT_DIR + "/partials/" + phase_name
mkdir_p(partial_dir)
target_path = partial_dir + "/" + task_id + ".json"
temp_path = partial_dir + "/" + task_id + ".json.tmp." + random_uuid()
// Atomic write: write to temp file, then rename (rename is atomic on POSIX)
Write(temp_path, JSON.stringify(result))
rename(temp_path, target_path) // Atomic operation
function load_partial_result(phase_name, task_id):
path = CONTEXT_DIR + "/partials/" + phase_name + "/" + task_id + ".json"
if (exists(path)):
return JSON.parse(Read(path))
return null
function load_all_partial_results(phase_name):
partial_dir = CONTEXT_DIR + "/partials/" + phase_name
if (!exists(partial_dir)):
return {}
files = list_files(partial_dir, "*.json")
results = {}
for file in files:
task_id = file.replace(".json", "")
results[task_id] = JSON.parse(Read(partial_dir + "/" + file))
return results
// === RETRY STATE MANAGEMENT ===
function get_retry_state(phase_name):
state = Read(CONTEXT_DIR + "/state.json")
parsed = JSON.parse(state)
return parsed.retry_state?.[phase_name] || { attempts: 0 }
function update_retry_state(phase_name, retry_state):
update_state({
retry_state: {
...current_state.retry_state,
[phase_name]: retry_state
}
})
function clear_retry_state(phase_name):
state = JSON.parse(Read(CONTEXT_DIR + "/state.json"))
if (state.retry_state):
delete state.retry_state[phase_name]
Write(CONTEXT_DIR + "/state.json", JSON.stringify(state, null, 2))
Phase-Specific Retry Behavior
| Phase | Retry Strategy | Partial Data Preserved |
|---|---|---|
| Discovery | Retry failed sub-agents (1A-1D) individually | `partials/discovery_analysis/*.json` |
| Questions | Retry entire phase | Previous `questions.json` |
| Research | Retry failed batches only | `partials/research/*.json` |
| Orchestrator | Retry entire phase | Previous `work-assignments.json` |
| Writers | Retry failed writers only | `partials/documentation/*.json` |
| QA | Retry once, then warn | |
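The per-phase strategies above all reduce to the same retry core. Here is a runnable reduction of that core — a sketch only: the config values are illustrative, the `agentFn` shape is assumed, and persistence of retry state to `state.json` is omitted:

```javascript
// Generic retry wrapper: try up to maxAttempts, sleeping backoffMs * attempt
// between failures (the same linear scaling as the pseudocode's
// sleep(config.backoff_ms * state.attempts)).
async function retryAgent(agentFn, { maxAttempts = 3, backoffMs = 2000 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = await agentFn(attempt);
      return { success: true, result, attempts: attempt };
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, backoffMs * attempt));
      }
    }
  }
  return { success: false, error: lastError, attempts: maxAttempts };
}
```

A phase that succeeds on attempt 3 of 3 returns `{ success: true, attempts: 3, ... }`; one that exhausts its budget returns `{ success: false, ... }` so the orchestrator can fall back to any saved partial results.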
Critical Data Protection Rules
javascript
// RULE 1: Never overwrite successful output until new output is validated
function safe_write_output(path, content):
backup_path = path + ".backup"
if (exists(path)):
copy(path, backup_path)
try:
Write(path, content)
validate_json(path) // Ensure valid JSON
delete(backup_path) // Only delete backup after validation
catch (error):
// Restore from backup
if (exists(backup_path)):
copy(backup_path, path)
throw error
// RULE 2: Aggregate partial results even on failure
// Uses file locking to prevent race conditions during aggregation
function aggregate_with_partials(phase_name, new_results):
lock_file = CONTEXT_DIR + "/partials/" + phase_name + "/.aggregate.lock"
// Acquire exclusive lock before aggregation
lock_fd = acquire_file_lock(lock_file, timeout_ms=5000)
if (!lock_fd):
throw new Error("Failed to acquire lock for aggregation: " + phase_name)
try:
existing = load_all_partial_results(phase_name)
merged = { ...existing, ...new_results }
return merged
finally:
release_file_lock(lock_fd)
delete(lock_file)
// RULE 3: Resume-aware execution
function should_skip_task(phase_name, task_id):
partial = load_partial_result(phase_name, task_id)
return partial?.success === true
Key Features
<key_features>
| # | Feature | Description |
|---|---|---|
| 1 | True Parallel Execution | Phases 1, 3, 5 spawn ALL agents in ONE message for concurrent execution |
| 2 | Single-Message Spawn | ⚠️ Critical: Multiple Task calls in one response = true parallelism |
| 3 | Evidence-Based | Research agent proves answers with code traces before writing |
| 4 | Engineer-Driven Questions | Phase 2 generates comprehensive questions |
| 5 | Conflict-Free Writing | Orchestrator assigns exclusive file ownership per writer |
| 6 | LSP-Powered | Intelligent verification with semantic analysis |
| 7 | State Recovery | Resume from any phase if interrupted |
| 8 | Unified Toolset | All agents use octocode local + LSP tools |
| 9 | Dynamic Scaling | Agent count scales based on question volume |
</key_features>
<efficiency_summary>
Efficiency Maximization
Phase 1: 4 agents × parallel = ~4x faster than sequential
Phase 3: N agents × parallel = ~Nx faster than sequential
Phase 5: M agents × parallel = ~Mx faster than sequential
Total speedup: significant when `spawn="single_message"` is followed.
Remember: `spawn="single_message"` phases MUST have all Task calls in ONE response.
</efficiency_summary>
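The single-message spawn rule is, at bottom, ordinary fan-out concurrency: start every task before awaiting any of them. A minimal illustration (the helper name and task timings are invented for the example):

```javascript
// Launch all agent tasks before awaiting any of them: wall-clock time is
// roughly max(task duration) instead of the sum — which is exactly why
// all Task calls must go out in one response.
async function runAllInParallel(taskFns) {
  const inFlight = taskFns.map((fn) => fn()); // all launched "in one message"
  return Promise.all(inFlight);
}
```

Three 50ms tasks finish in about 50ms this way; awaiting each one before starting the next would take about 150ms.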