wave-plan-pipeline


Auto Mode


When `--yes` or `-y` is passed: auto-confirm decomposition, skip interactive validation, use defaults.

Wave Plan Pipeline


Usage


```bash
$wave-plan-pipeline "Implement user authentication with OAuth, JWT, and 2FA"
$wave-plan-pipeline -c 4 "Refactor payment module with Stripe and PayPal"
$wave-plan-pipeline -y "Build notification system with email and SMS"
$wave-plan-pipeline --continue "auth-20260228"
```

Flags:
  • `-y, --yes`: Skip all confirmations (auto mode)
  • `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
  • `--continue`: Resume an existing session

Output directory: `.workflow/.wave-plan/{session-id}/`


Overview


Explore-first, wave-based pipeline built on `spawn_agents_on_csv`. Two-stage CSV execution: explore.csv (codebase discovery) → tasks.csv (implementation), with cross-phase context propagation via `context_from` linking (`E*`/`T*`).

Core workflow: Decompose → Wave Explore → Synthesize & Plan → Wave Execute → Aggregate

┌──────────────────────────────────────────────────────────────────────┐
│                    WAVE PLAN PIPELINE                                │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Phase 1: Requirement → explore.csv                                  │
│     ├─ Analyze complexity → select exploration angles (1-4)          │
│     ├─ Generate explore.csv (1 row per angle)                        │
│     └─ User validates (skip if -y)                                   │
│                                                                      │
│  Phase 2: Wave Explore (spawn_agents_on_csv)                         │
│     ├─ For each explore wave:                                        │
│     │   ├─ Build wave CSV from explore.csv                           │
│     │   ├─ spawn_agents_on_csv(explore instruction template)         │
│     │   └─ Merge findings/key_files into explore.csv                 │
│     └─ discoveries.ndjson shared across agents                       │
│                                                                      │
│  Phase 3: Synthesize & Plan → tasks.csv                              │
│     ├─ Read all explore findings → cross-reference                   │
│     ├─ Resolve conflicts between angles                              │
│     ├─ Decompose into execution tasks with context_from: E*;T*       │
│     ├─ Compute dependency waves (topological sort)                   │
│     └─ User validates (skip if -y)                                   │
│                                                                      │
│  Phase 4: Wave Execute (spawn_agents_on_csv)                         │
│     ├─ For each task wave:                                           │
│     │   ├─ Build prev_context from explore.csv + tasks.csv           │
│     │   ├─ Build wave CSV with prev_context column                   │
│     │   ├─ spawn_agents_on_csv(execute instruction template)         │
│     │   └─ Merge results into tasks.csv                              │
│     └─ discoveries.ndjson carries across all waves                   │
│                                                                      │
│  Phase 5: Aggregate                                                  │
│     ├─ Export results.csv                                            │
│     ├─ Generate context.md with all findings                         │
│     └─ Display summary                                               │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘


Context Flow


explore.csv             tasks.csv
┌──────────┐           ┌──────────┐
│ E1: arch │──────────→│ T1: setup│ context_from: E1;E2
│ findings │           │ prev_ctx │← E1+E2 findings
├──────────┤           ├──────────┤
│ E2: deps │──────────→│ T2: impl │ context_from: E1;T1
│ findings │           │ prev_ctx │← E1+T1 findings
├──────────┤           ├──────────┤
│ E3: test │──┐   ┌───→│ T3: test │ context_from: E3;T2
│ findings │  └───┘    │ prev_ctx │← E3+T2 findings
└──────────┘           └──────────┘

Two context channels:
1. Directed: context_from → prev_context (CSV findings lookup)
2. Broadcast: discoveries.ndjson (append-only shared board)

context_from prefix: E* → explore.csv lookup, T* → tasks.csv lookup


CSV Schemas


explore.csv


```csv
id,angle,description,focus,deps,wave,status,findings,key_files,error
"E1","architecture","Explore codebase architecture for: auth system","architecture","","1","pending","","",""
"E2","dependencies","Explore dependency landscape for: auth system","dependencies","","1","pending","","",""
"E3","testing","Explore test infrastructure for: auth system","testing","","1","pending","","",""
```
Columns:

| Column | Phase | Description |
|---|---|---|
| id | Input | Exploration ID: E1, E2, ... |
| angle | Input | Exploration angle name |
| description | Input | What to explore from this angle |
| focus | Input | Keywords and focus areas |
| deps | Input | Semicolon-separated dep IDs (usually empty; all wave 1) |
| wave | Computed | Wave number (usually 1 for all explorations) |
| status | Output | pending / completed / failed |
| findings | Output | Discoveries (max 800 chars) |
| key_files | Output | Relevant files (semicolon-separated) |
| error | Output | Error message if failed |
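Since agents consume these rows blindly, a quick sanity check against the schema before spawning can catch drift early. A sketch of such a validator (hypothetical helper, assuming rows are parsed into plain objects keyed by the header above):

```javascript
const EXPLORE_STATUSES = ['pending', 'completed', 'failed', 'skipped']

// Returns a list of human-readable problems; an empty list means the rows conform.
function validateExploreRows(rows) {
  const problems = []
  const ids = new Set(rows.map(r => r.id))
  for (const r of rows) {
    if (!/^E\d+$/.test(r.id)) problems.push(`${r.id}: id must match E<n>`)
    if (!/^\d+$/.test(r.wave)) problems.push(`${r.id}: wave must be an integer`)
    if (!EXPLORE_STATUSES.includes(r.status)) problems.push(`${r.id}: unknown status "${r.status}"`)
    if ((r.findings || '').length > 800) problems.push(`${r.id}: findings exceed 800 chars`)
    for (const d of (r.deps || '').split(';').filter(Boolean)) {
      if (!ids.has(d)) problems.push(`${r.id}: unknown dep "${d}"`)
    }
  }
  return problems
}
```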

tasks.csv


```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error
"T1","Setup types","Create type definitions","Verify types compile with tsc","All interfaces exported","src/types/**","Follow existing patterns || src/types/index.ts","tsc --noEmit","","E1;E2","1","pending","","","","",""
"T2","Implement core","Implement core auth logic","Unit test: login returns token","Login flow works end-to-end","src/auth/**","Reuse BaseService || src/services/Base.ts","npm test -- --grep auth","T1","E1;E2;T1","2","pending","","","","",""
```
Columns:

| Column | Phase | Description |
|---|---|---|
| id | Input | Task ID: T1, T2, ... |
| title | Input | Short task title |
| description | Input | Self-contained task description: what to implement |
| test | Input | Test cases: what tests to write and how to verify (unit/integration/edge) |
| acceptance_criteria | Input | Measurable conditions that define "done" |
| scope | Input | Target file/directory glob; constrains the agent's write area, prevents cross-task file conflicts |
| hints | Input | Implementation tips + reference files. Format: `tips text \|\| file1;file2`. Either part is optional |
| execution_directives | Input | Execution constraints: commands to run for verification, tool restrictions |
| deps | Input | Dependency task IDs: T1;T2 (semicolon-separated) |
| context_from | Input | Context source IDs: E1;E2;T1. `E*` lookups in explore.csv, `T*` in tasks.csv |
| wave | Computed | Wave number (computed by topological sort, 1-based) |
| status | Output | pending / completed / failed / skipped |
| findings | Output | Execution findings (max 500 chars) |
| files_modified | Output | Semicolon-separated file paths |
| tests_passed | Output | Whether all defined test cases passed (true/false) |
| acceptance_met | Output | Summary of which acceptance criteria were met/unmet |
| error | Output | Error message if failed (empty if success) |
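The wave column follows directly from deps: a task's wave is one more than the largest wave among its dependencies, which is what the topological sort in Phase 3 computes. A sketch of that computation (function name is illustrative):

```javascript
// Assign 1-based wave numbers: wave(t) = 1 + max(wave of each dep),
// or 1 when the task has no dependencies.
// Throws on cycles and on unknown dependency IDs.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const waves = new Map()
  const visiting = new Set()

  function waveOf(id) {
    if (waves.has(id)) return waves.get(id)
    if (visiting.has(id)) throw new Error(`dependency cycle at ${id}`)
    const task = byId.get(id)
    if (!task) throw new Error(`unknown dependency: ${id}`)
    visiting.add(id)
    const deps = (task.deps || '').split(';').filter(Boolean)
    const wave = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(waveOf))
    visiting.delete(id)
    waves.set(id, wave)
    return wave
  }

  tasks.forEach(t => waveOf(t.id))
  return waves
}
```

All tasks sharing a wave number have no dependency path between them, so each wave can run its tasks concurrently.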

Per-Wave CSV (Temporary)


Each wave generates a temporary CSV with an extra `prev_context` column.

Explore wave: `explore-wave-{N}.csv` has the same columns as explore.csv (no prev_context; explorations are independent).

Execute wave: `task-wave-{N}.csv` has all task columns plus `prev_context`:

```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context
"T2","Implement core","Implement core auth logic","Unit test: login returns token","Login flow works end-to-end","src/auth/**","Reuse BaseService || src/services/Base.ts","npm test -- --grep auth","T1","E1;E2;T1","2","[Explore architecture] Found BaseService pattern in src/services/\n[Task T1] Created types at src/types/auth.ts"
```

The `prev_context` column is built from `context_from` by looking up completed rows' findings in both explore.csv (`E*`) and tasks.csv (`T*`).
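That lookup can be sketched as a pair of resolutions: E* IDs against explore rows, T* IDs against task rows, keeping only completed rows' findings. The helper name is illustrative; the label format mirrors the example row above:

```javascript
// Build the prev_context cell for one task from its context_from field.
// E* IDs resolve against explore rows, T* IDs against task rows.
// Failed or missing sources contribute nothing rather than erroring.
function buildPrevContext(contextFrom, exploreRows, taskRows) {
  const lines = []
  for (const id of (contextFrom || '').split(';').filter(Boolean)) {
    const source = id.startsWith('E') ? exploreRows : taskRows
    const row = source.find(r => r.id === id && r.status === 'completed')
    if (!row || !row.findings) continue
    const label = id.startsWith('E') ? `Explore ${row.angle || id}` : `Task ${id}`
    lines.push(`[${label}] ${row.findings}`)
  }
  return lines.join('\n')
}
```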


Output Artifacts


| File | Purpose | Lifecycle |
|---|---|---|
| explore.csv | Exploration state: angles with findings/key_files | Updated after Phase 2 |
| tasks.csv | Execution state: tasks with results | Updated after each wave in Phase 4 |
| explore-wave-{N}.csv | Per-wave explore input (temporary) | Created before wave, deleted after |
| task-wave-{N}.csv | Per-wave execute input (temporary) | Created before wave, deleted after |
| results.csv | Final results export | Created in Phase 5 |
| discoveries.ndjson | Shared discovery board (all agents, all phases) | Append-only |
| context.md | Human-readable execution report | Created in Phase 5 |


Session Structure


.workflow/.wave-plan/{session-id}/
├── explore.csv              # Exploration state
├── tasks.csv                # Execution state
├── results.csv              # Final results export
├── discoveries.ndjson       # Shared discovery board
├── context.md               # Full context summary
├── explore-wave-{N}.csv     # Temporary per-wave explore input (cleaned up)
└── task-wave-{N}.csv        # Temporary per-wave execute input (cleaned up)


Implementation


Session Initialization


```javascript
// UTC+8 timestamp via an offset hack: shifts the clock, keeps the Z suffix
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 4

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
// let (not const): continue mode may rebind these to an existing session
let sessionId = `wpp-${slug}-${dateStr}`
let sessionFolder = `.workflow/.wave-plan/${sessionId}`

// Continue mode: find existing session
if (continueMode) {
  const existing = Bash(`ls -t .workflow/.wave-plan/ 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing
    sessionFolder = `.workflow/.wave-plan/${sessionId}`
    // Check which phase to resume: if tasks.csv exists → Phase 4, else → Phase 2
  }
}

Bash(`mkdir -p ${sessionFolder}`)
```


Phase 1: Requirement → explore.csv


Objective: Analyze requirement complexity, select exploration angles, generate explore.csv.

Steps:

1. **Analyze & Decompose**

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Analyze requirement complexity and select 1-4 exploration angles for codebase discovery before implementation.
TASK: • Classify requirement type (feature/bugfix/refactor/security/performance) • Assess complexity (Low: 1 angle, Medium: 2-3, High: 3-4) • Select exploration angles from: architecture, dependencies, integration-points, testing, patterns, security, performance, state-management, error-handling, edge-cases • For each angle, define focus keywords and what to discover MODE: analysis CONTEXT: @**/* EXPECTED: JSON object: {type: string, complexity: string, angles: [{id: string, angle: string, description: string, focus: string}]}. Each angle id = E1, E2, etc. CONSTRAINTS: 1-4 angles | Angles must be distinct | Each angle must have clear focus
REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
  run_in_background: true
})
// Wait for CLI completion via hook callback
// Parse JSON from CLI output → { type, complexity, angles[] }
```

2. **Generate explore.csv**

```javascript
const header = 'id,angle,description,focus,deps,wave,status,findings,key_files,error'
const rows = angles.map(a =>
  [a.id, a.angle, a.description, a.focus, '', '1', 'pending', '', '', '']
    .map(v => `"${String(v).replace(/"/g, '""')}"`)
    .join(',')
)

Write(`${sessionFolder}/explore.csv`, [header, ...rows].join('\n'))
```

3. **User Validation** (skip if AUTO_YES)

```javascript
if (!AUTO_YES) {
  console.log(`\n## Exploration Plan (${angles.length} angles)\n`)
  angles.forEach(a => console.log(`  - [${a.id}] ${a.angle}: ${a.focus}`))

  const answer = AskUserQuestion({
    questions: [{
      question: "Approve exploration angles?",
      header: "Validation",
      multiSelect: false,
      options: [
        { label: "Approve", description: "Proceed with wave exploration" },
        { label: "Modify", description: `Edit ${sessionFolder}/explore.csv manually, then --continue` },
        { label: "Cancel", description: "Abort" }
      ]
    }]
  })

  if (answer.Validation === "Modify") {
    console.log(`Edit: ${sessionFolder}/explore.csv\nResume: $wave-plan-pipeline --continue`)
    return
  } else if (answer.Validation === "Cancel") {
    return
  }
}
```

Success Criteria:
  • explore.csv created with 1-4 exploration angles
  • User approved (or AUTO_YES)


Phase 2: Wave Explore (spawn_agents_on_csv)


Objective: Execute exploration via `spawn_agents_on_csv`. Each angle produces findings and key_files.

Steps:

1. **Explore Wave Loop**

```javascript
const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
const maxExploreWave = Math.max(...exploreCSV.map(r => parseInt(r.wave, 10)))

for (let wave = 1; wave <= maxExploreWave; wave++) {
  const waveTasks = exploreCSV.filter(r =>
    parseInt(r.wave, 10) === wave && r.status === 'pending'
  )
  if (waveTasks.length === 0) continue

  // Skip rows with failed dependencies
  const executableTasks = []
  for (const task of waveTasks) {
    const deps = (task.deps || '').split(';').filter(Boolean)
    if (deps.some(d => {
      const dep = exploreCSV.find(r => r.id === d)
      return !dep || dep.status !== 'completed'
    })) {
      task.status = 'skipped'
      task.error = 'Dependency failed/skipped'
      continue
    }
    executableTasks.push(task)
  }

  if (executableTasks.length === 0) continue

  // Write explore wave CSV
  const waveHeader = 'id,angle,description,focus,deps,wave'
  const waveRows = executableTasks.map(t =>
    [t.id, t.angle, t.description, t.focus, t.deps, t.wave]
      .map(v => `"${String(v).replace(/"/g, '""')}"`)
      .join(',')
  )
  Write(`${sessionFolder}/explore-wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

  // Execute explore wave
  console.log(`  Exploring ${executableTasks.length} angles (wave ${wave})...`)

  spawn_agents_on_csv({
    csv_path: `${sessionFolder}/explore-wave-${wave}.csv`,
    id_column: "id",
    instruction: buildExploreInstruction(sessionFolder),
    max_concurrency: maxConcurrency,
    max_runtime_seconds: 300,
    output_csv_path: `${sessionFolder}/explore-wave-${wave}-results.csv`,
    output_schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        status: { type: "string", enum: ["completed", "failed"] },
        findings: { type: "string" },
        key_files: { type: "array", items: { type: "string" } },
        error: { type: "string" }
      },
      required: ["id", "status", "findings"]
    }
  })

  // Merge results into explore.csv
  const waveResults = parseCsv(Read(`${sessionFolder}/explore-wave-${wave}-results.csv`))
  for (const result of waveResults) {
    updateMasterCsvRow(`${sessionFolder}/explore.csv`, result.id, {
      status: result.status,
      findings: result.findings || '',
      key_files: Array.isArray(result.key_files) ? result.key_files.join(';') : (result.key_files || ''),
      error: result.error || ''
    })
  }

  // Cleanup temporary wave CSV
  Bash(`rm -f "${sessionFolder}/explore-wave-${wave}.csv" "${sessionFolder}/explore-wave-${wave}-results.csv"`)
}
```
2. **Explore Instruction Template**

```javascript
function buildExploreInstruction(sessionFolder) {
  return `
EXPLORATION ASSIGNMENT

MANDATORY FIRST STEPS
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if it exists; skip if not)
2. Read project context: .workflow/project-tech.json (if it exists)

Your Exploration
Exploration ID: {id}
Angle: {angle}
Description: {description}
Focus: {focus}

Exploration Protocol
1. Read discoveries: load ${sessionFolder}/discoveries.ndjson for shared findings
2. Explore: search the codebase from the {angle} perspective
3. Discover: find relevant files, patterns, integration points, constraints
4. Share discoveries: append findings to the shared board:
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
5. Report result: return JSON via report_agent_job_result

What to Look For
- Existing patterns and conventions to follow
- Integration points and module boundaries
- Dependencies and constraints
- Test infrastructure and coverage
- Risks and potential blockers

Discovery Types to Share
- code_pattern: {name, file, description} (reusable patterns found)
- integration_point: {file, description, exports[]} (module connection points)
- convention: {naming, imports, formatting} (code style conventions)
- tech_stack: {framework, version, config} (technology stack details)

Output (report_agent_job_result)
Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Concise summary of {angle} discoveries (max 800 chars)",
  "key_files": ["relevant/file1.ts", "relevant/file2.ts"],
  "error": ""
}
`
}
```

**Success Criteria**:
- All explore angles executed
- explore.csv updated with findings and key_files
- discoveries.ndjson accumulated

---

Phase 3: Synthesize & Plan → tasks.csv


Objective: Read exploration findings, cross-reference, resolve conflicts, generate tasks.csv with context_from linking to E* rows.
Steps:
  1. Synthesize Exploration Findings
    javascript
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const completed = exploreCSV.filter(r => r.status === 'completed')
    
    // Cross-reference: find shared files across angles
    const fileRefs = {}
    completed.forEach(r => {
      (r.key_files || '').split(';').filter(Boolean).forEach(f => {
        if (!fileRefs[f]) fileRefs[f] = []
        fileRefs[f].push({ angle: r.angle, id: r.id })
      })
    })
    const sharedFiles = Object.entries(fileRefs).filter(([_, refs]) => refs.length > 1)
    
    // Build synthesis context for task decomposition
    const synthesisContext = completed.map(r =>
      `[${r.id}: ${r.angle}] ${r.findings}\n  Key files: ${r.key_files || 'none'}`
    ).join('\n\n')
    
    const sharedFilesContext = sharedFiles.length > 0
      ? `\nShared files (referenced by multiple angles):\n${sharedFiles.map(([f, refs]) =>
          `  ${f}: ${refs.map(r => r.id).join(', ')}`
        ).join('\n')}`
      : ''
  2. Decompose into Tasks
    javascript
    Bash({
      command: `ccw cli -p "PURPOSE: Based on exploration findings, decompose requirement into 3-10 atomic execution tasks. Each task must include test cases, acceptance criteria, and link to relevant exploration findings.
TASK: • Use exploration findings to inform task decomposition • Each task must be self-contained with specific implementation instructions • Link tasks to exploration rows via context_from (E1, E2, etc.) • Define dependencies between tasks (T1 must finish before T2, etc.) • For each task: define test cases, acceptance criteria, scope, hints, and execution directives • Ensure same-wave tasks have non-overlapping scopes MODE: analysis CONTEXT: @**/* EXPECTED: JSON object with tasks array. Each task: {id: string, title: string, description: string, test: string, acceptance_criteria: string, scope: string, hints: string, execution_directives: string, deps: string[], context_from: string[]}.
  • id: T1, T2, etc.
  • description: what to implement (specific enough for an agent)
  • test: what tests to write (e.g. 'Unit test: X returns Y')
  • acceptance_criteria: what defines done (e.g. 'API returns 200')
  • scope: target glob (e.g. 'src/auth/**') — non-overlapping within same wave
  • hints: tips + ref files (format: 'tips || file1;file2')
  • execution_directives: verification commands (e.g. 'npm test --bail')
  • deps: task IDs that must complete first (T*)
  • context_from: explore (E*) and task (T*) IDs whose findings are needed CONSTRAINTS: 3-10 tasks | Atomic | No circular deps | Concrete test/acceptance_criteria | Non-overlapping scopes per wave
EXPLORATION FINDINGS: ${synthesisContext} ${sharedFilesContext}
REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`, run_in_background: true }) // Wait for CLI completion → decomposedTasks[]

3. **Compute Waves & Write tasks.csv**

```javascript
const { waveAssignment, maxWave } = computeWaves(decomposedTasks)

const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
const rows = decomposedTasks.map(task => {
  const wave = waveAssignment.get(task.id)
  return [
    task.id,
    csvEscape(task.title),
    csvEscape(task.description),
    csvEscape(task.test),
    csvEscape(task.acceptance_criteria),
    csvEscape(task.scope),
    csvEscape(task.hints),
    csvEscape(task.execution_directives),
    task.deps.join(';'),
    task.context_from.join(';'),
    wave,
    'pending', '', '', '', '', ''
  ].map(cell => `"${String(cell).replace(/"/g, '""')}"`).join(',')
})

Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```

4. **User Validation** (skip if AUTO_YES)
    javascript
    if (!AUTO_YES) {
      console.log(`
目标: 读取探索结果,交叉参考,解决冲突,生成包含context_from关联E*行的tasks.csv。
步骤:
  1. 整合探索结果
    javascript
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const completed = exploreCSV.filter(r => r.status === 'completed')
    
    // 交叉参考:找到多个维度共享的文件
    const fileRefs = {}
    completed.forEach(r => {
      (r.key_files || '').split(';').filter(Boolean).forEach(f => {
        if (!fileRefs[f]) fileRefs[f] = []
        fileRefs[f].push({ angle: r.angle, id: r.id })
      })
    })
    const sharedFiles = Object.entries(fileRefs).filter(([_, refs]) => refs.length > 1)
    
    // 构建任务拆解的整合上下文
    const synthesisContext = completed.map(r =>
      `[${r.id}: ${r.angle}] ${r.findings}\n  关键文件: ${r.key_files || '无'}`
    ).join('\n\n')
    
    const sharedFilesContext = sharedFiles.length > 0
      ? `\n共享文件(被多个维度引用):\n${sharedFiles.map(([f, refs]) =>
          `  ${f}: ${refs.map(r => r.id).join(', ')}`
        ).join('\n')}`
      : ''
  2. 拆解为任务
    javascript
    Bash({
      command: `ccw cli -p "PURPOSE: 基于探索结果,将需求拆解为3-10个原子执行任务。每个任务必须包含测试用例、验收标准,并关联到相关的探索结果。
TASK: • 使用探索结果指导任务拆解 • 每个任务必须是自包含的,具有明确的实现指令 • 通过context_from(E1, E2等)关联到探索行 • 定义任务间的依赖关系(T1必须在T2之前完成等) • 为每个任务:定义测试用例、验收标准、范围、提示和执行指令 • 确保同一波浪的任务范围不重叠 MODE: analysis CONTEXT: @**/* EXPECTED: 包含tasks数组的JSON对象。每个任务: {id: string, title: string, description: string, test: string, acceptance_criteria: string, scope: string, hints: string, execution_directives: string, deps: string[], context_from: string[]}.
  • id: T1, T2等。
  • description: 需要实现的内容(足够具体,可供Agent执行)
  • test: 需要编写的测试(例如 '单元测试: X返回Y')
  • acceptance_criteria: 定义完成的标准(例如 'API返回200')
  • scope: 目标通配符(例如 'src/auth/**')— 同一波浪内的范围不重叠
  • hints: 提示 + 参考文件(格式: '提示 || 文件1;文件2')
  • execution_directives: 验证命令(例如 'npm test --bail')
  • deps: 必须先完成的任务ID(T*)
  • context_from: 需要的探索(E*)和任务(T*)ID CONSTRAINTS: 3-10个任务 | 原子化 | 无循环依赖 | 具体的测试/验收标准 | 同一波浪的范围不重叠
探索结果: ${synthesisContext} ${sharedFilesContext}
需求: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`, run_in_background: true }) // 等待CLI完成 → decomposedTasks[]

3. **计算波浪并写入tasks.csv**

```javascript
const { waveAssignment, maxWave } = computeWaves(decomposedTasks)

const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
const rows = decomposedTasks.map(task => {
  const wave = waveAssignment.get(task.id)
  return [
    task.id,
    csvEscape(task.title),
    csvEscape(task.description),
    csvEscape(task.test),
    csvEscape(task.acceptance_criteria),
    csvEscape(task.scope),
    csvEscape(task.hints),
    csvEscape(task.execution_directives),
    task.deps.join(';'),
    task.context_from.join(';'),
    wave,
    'pending', '', '', '', '', ''
  ].map(cell => `"${String(cell).replace(/"/g, '""')}"`).join(',')
})

Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```

4. **用户验证**(若使用AUTO_YES则跳过)
    javascript
    if (!AUTO_YES) {
      console.log(`

Execution Plan

执行计划

Explore: ${completed.length} angles completed
Shared files: ${sharedFiles.length}
Tasks: ${decomposedTasks.length} across ${maxWave} waves
${Array.from({length: maxWave}, (_, i) => i + 1).map(w => {
  const wt = decomposedTasks.filter(t => waveAssignment.get(t.id) === w)
  return `### Wave ${w} (${wt.length} tasks, concurrent)
${wt.map(t => `- [${t.id}] ${t.title} (scope: ${t.scope}, from: ${t.context_from.join(';')})`).join('\n')}`
}).join('\n')} `)
 const answer = AskUserQuestion({
   questions: [{
     question: `Proceed with ${decomposedTasks.length} tasks across ${maxWave} waves?`,
     header: "Confirm",
     multiSelect: false,
     options: [
       { label: "Execute", description: "Proceed with wave execution" },
       { label: "Modify", description: `Edit ${sessionFolder}/tasks.csv then --continue` },
       { label: "Cancel", description: "Abort" }
     ]
   }]
 })

 if (answer.Confirm === "Modify") {
   console.log(`Edit: ${sessionFolder}/tasks.csv\nResume: $wave-plan-pipeline --continue`)
   return
 } else if (answer.Confirm === "Cancel") {
   return
 }
}

**Success Criteria**:
- tasks.csv created with context_from linking to E* rows
- No circular dependencies
- User approved (or AUTO_YES)

---
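Between decomposition and user validation, a lightweight sanity check on the generated rows can catch dangling `deps` or `context_from` references before any agent is spawned. A sketch (the helper name `findDanglingRefs` and the sample rows are hypothetical):

```javascript
// Sketch: verify every deps / context_from reference points at a known
// exploration (E*) or task (T*) id, using the row shape from the prompt above.
function findDanglingRefs(tasks, exploreIds) {
  const known = new Set([...exploreIds, ...tasks.map(t => t.id)])
  const dangling = []
  for (const t of tasks) {
    for (const ref of [...t.deps, ...t.context_from]) {
      if (!known.has(ref)) dangling.push({ task: t.id, ref })
    }
  }
  return dangling
}

// Hypothetical rows: T2 cites an exploration id that was never produced.
const sampleTasks = [
  { id: 'T1', deps: [], context_from: ['E1'] },
  { id: 'T2', deps: ['T1'], context_from: ['E9', 'T1'] }
]
// findDanglingRefs(sampleTasks, ['E1', 'E2']) → [{ task: 'T2', ref: 'E9' }]
```

A non-empty result is a good reason to re-run decomposition or hand-edit tasks.csv before approving the plan.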
探索: ${completed.length}个维度已完成
共享文件: ${sharedFiles.length}
任务: ${decomposedTasks.length}个,分布在${maxWave}个波浪中
${Array.from({length: maxWave}, (_, i) => i + 1).map(w => {
  const wt = decomposedTasks.filter(t => waveAssignment.get(t.id) === w)
  return `### 波浪${w}(${wt.length}个任务,并发执行)
${wt.map(t => `- [${t.id}] ${t.title}(范围: ${t.scope}, 来源: ${t.context_from.join(';')})`).join('\n')}`
}).join('\n')} `)
 const answer = AskUserQuestion({
   questions: [{
     question: `是否继续执行${decomposedTasks.length}个任务,分布在${maxWave}个波浪中?`,
     header: "确认",
     multiSelect: false,
     options: [
       { label: "执行", description: "继续执行波浪式任务" },
       { label: "修改", description: `编辑${sessionFolder}/tasks.csv然后使用--continue恢复` },
       { label: "取消", description: "终止流程" }
     ]
   }]
 })

 if (answer.Confirm === "修改") {
   console.log(`编辑: ${sessionFolder}/tasks.csv\n恢复: $wave-plan-pipeline --continue`)
   return
 } else if (answer.Confirm === "取消") {
   return
 }
}

**成功标准**:
- 生成包含context_from关联E*行的tasks.csv
- 无循环依赖
- 用户已批准(或使用AUTO_YES)

---

Phase 4: Wave Execute (spawn_agents_on_csv)

阶段4: 波浪式执行(spawn_agents_on_csv)

Objective: Execute tasks wave-by-wave via `spawn_agents_on_csv`. Each wave's prev_context is built from both explore.csv and tasks.csv.
Steps:
  1. Wave Loop
    javascript
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const failedIds = new Set()
    const skippedIds = new Set()
    
    for (let wave = 1; wave <= maxWave; wave++) {
      console.log(`\n## Wave ${wave}/${maxWave}\n`)
    
      // Re-read master CSV
      const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))
      const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
    
      // Skip tasks whose deps failed
      const executableTasks = []
      for (const task of waveTasks) {
        const deps = (task.deps || '').split(';').filter(Boolean)
        if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
          skippedIds.add(task.id)
          updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, {
            status: 'skipped',
            error: 'Dependency failed or skipped'
          })
          console.log(`  [${task.id}] ${task.title} → SKIPPED (dependency failed)`)
          continue
        }
        executableTasks.push(task)
      }
    
      if (executableTasks.length === 0) {
        console.log(`  No executable tasks in wave ${wave}`)
        continue
      }
    
      // Build prev_context for each task (cross-phase: E* + T*)
      for (const task of executableTasks) {
        task.prev_context = buildPrevContext(task.context_from, exploreCSV, masterCsv)
      }
    
      // Write wave CSV
      const waveHeader = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context'
      const waveRows = executableTasks.map(t =>
        [t.id, t.title, t.description, t.test, t.acceptance_criteria, t.scope, t.hints, t.execution_directives, t.deps, t.context_from, t.wave, t.prev_context]
          .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
          .join(',')
      )
      Write(`${sessionFolder}/task-wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))
    
      // Execute wave
      console.log(`  Executing ${executableTasks.length} tasks (concurrency: ${maxConcurrency})...`)
    
      spawn_agents_on_csv({
        csv_path: `${sessionFolder}/task-wave-${wave}.csv`,
        id_column: "id",
        instruction: buildExecuteInstruction(sessionFolder, wave),
        max_concurrency: maxConcurrency,
        max_runtime_seconds: 600,
        output_csv_path: `${sessionFolder}/task-wave-${wave}-results.csv`,
        output_schema: {
          type: "object",
          properties: {
            id: { type: "string" },
            status: { type: "string", enum: ["completed", "failed"] },
            findings: { type: "string" },
            files_modified: { type: "array", items: { type: "string" } },
            tests_passed: { type: "boolean" },
            acceptance_met: { type: "string" },
            error: { type: "string" }
          },
          required: ["id", "status", "findings", "tests_passed"]
        }
      })
    
      // Merge results into master CSV
      const waveResults = parseCsv(Read(`${sessionFolder}/task-wave-${wave}-results.csv`))
      for (const result of waveResults) {
        updateMasterCsvRow(`${sessionFolder}/tasks.csv`, result.id, {
          status: result.status,
          findings: result.findings || '',
          files_modified: Array.isArray(result.files_modified) ? result.files_modified.join(';') : (result.files_modified || ''),
          tests_passed: String(result.tests_passed ?? ''),
          acceptance_met: result.acceptance_met || '',
          error: result.error || ''
        })
    
        if (result.status === 'failed') {
          failedIds.add(result.id)
          console.log(`  [${result.id}] → FAILED: ${result.error}`)
        } else {
          console.log(`  [${result.id}] → COMPLETED${result.tests_passed ? ' ✓tests' : ''}`)
        }
      }
    
      // Cleanup
      Bash(`rm -f "${sessionFolder}/task-wave-${wave}.csv" "${sessionFolder}/task-wave-${wave}-results.csv"`)
    
      console.log(`  Wave ${wave} done: ${waveResults.filter(r => r.status === 'completed').length} completed, ${waveResults.filter(r => r.status === 'failed').length} failed`)
    }
  2. prev_context Builder (Cross-Phase)
    The key function linking exploration context to execution:
    javascript
    function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
      if (!contextFrom) return 'No previous context available'
    
      const ids = contextFrom.split(';').filter(Boolean)
      const entries = []
    
      ids.forEach(id => {
        if (id.startsWith('E')) {
          // ← Look up in explore.csv (cross-phase link)
          const row = exploreCSV.find(r => r.id === id)
          if (row && row.status === 'completed' && row.findings) {
            entries.push(`[Explore ${row.angle}] ${row.findings}`)
            if (row.key_files) entries.push(`  Key files: ${row.key_files}`)
          }
        } else if (id.startsWith('T')) {
          // ← Look up in tasks.csv (same-phase link)
          const row = tasksCSV.find(r => r.id === id)
          if (row && row.status === 'completed' && row.findings) {
            entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
            if (row.files_modified) entries.push(`  Modified: ${row.files_modified}`)
          }
        }
      })
    
      return entries.length > 0 ? entries.join('\n') : 'No previous context available'
    }
  3. Execute Instruction Template
    javascript
    function buildExecuteInstruction(sessionFolder, wave) {
      return `
目标: 通过 `spawn_agents_on_csv` 按波浪执行任务。每个波浪的prev_context从explore.csv和tasks.csv构建。
步骤:
  1. 波浪循环
    javascript
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const failedIds = new Set()
    const skippedIds = new Set()
    
    for (let wave = 1; wave <= maxWave; wave++) {
      console.log(`\n## 波浪${wave}/${maxWave}\n`)
    
      // 重新读取主CSV
      const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))
      const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
    
      // 跳过依赖失败的任务
      const executableTasks = []
      for (const task of waveTasks) {
        const deps = (task.deps || '').split(';').filter(Boolean)
        if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
          skippedIds.add(task.id)
          updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, {
            status: 'skipped',
            error: '依赖失败或被跳过'
          })
          console.log(`  [${task.id}] ${task.title} → 已跳过(依赖失败)`)
          continue
        }
        executableTasks.push(task)
      }
    
      if (executableTasks.length === 0) {
        console.log(`  波浪${wave}中无可执行任务`)
        continue
      }
    
      // 为每个任务构建prev_context(跨阶段: E* + T*)
      for (const task of executableTasks) {
        task.prev_context = buildPrevContext(task.context_from, exploreCSV, masterCsv)
      }
    
      // 写入波浪CSV
      const waveHeader = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context'
      const waveRows = executableTasks.map(t =>
        [t.id, t.title, t.description, t.test, t.acceptance_criteria, t.scope, t.hints, t.execution_directives, t.deps, t.context_from, t.wave, t.prev_context]
          .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
          .join(',')
      )
      Write(`${sessionFolder}/task-wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))
    
      // 执行波浪
      console.log(`  正在执行${executableTasks.length}个任务(并发数: ${maxConcurrency})...`)
    
      spawn_agents_on_csv({
        csv_path: `${sessionFolder}/task-wave-${wave}.csv`,
        id_column: "id",
        instruction: buildExecuteInstruction(sessionFolder, wave),
        max_concurrency: maxConcurrency,
        max_runtime_seconds: 600,
        output_csv_path: `${sessionFolder}/task-wave-${wave}-results.csv`,
        output_schema: {
          type: "object",
          properties: {
            id: { type: "string" },
            status: { type: "string", enum: ["completed", "failed"] },
            findings: { type: "string" },
            files_modified: { type: "array", items: { type: "string" } },
            tests_passed: { type: "boolean" },
            acceptance_met: { type: "string" },
            error: { type: "string" }
          },
          required: ["id", "status", "findings", "tests_passed"]
        }
      })
    
      // 将结果合并到主CSV
      const waveResults = parseCsv(Read(`${sessionFolder}/task-wave-${wave}-results.csv`))
      for (const result of waveResults) {
        updateMasterCsvRow(`${sessionFolder}/tasks.csv`, result.id, {
          status: result.status,
          findings: result.findings || '',
          files_modified: Array.isArray(result.files_modified) ? result.files_modified.join(';') : (result.files_modified || ''),
          tests_passed: String(result.tests_passed ?? ''),
          acceptance_met: result.acceptance_met || '',
          error: result.error || ''
        })
    
        if (result.status === 'failed') {
          failedIds.add(result.id)
          console.log(`  [${result.id}] → 失败: ${result.error}`)
        } else {
          console.log(`  [${result.id}] → 完成${result.tests_passed ? ' ✓测试通过' : ''}`)
        }
      }
    
      // 清理临时文件
      Bash(`rm -f "${sessionFolder}/task-wave-${wave}.csv" "${sessionFolder}/task-wave-${wave}-results.csv"`)
    
      console.log(`  波浪${wave}完成: ${waveResults.filter(r => r.status === 'completed').length}个完成,${waveResults.filter(r => r.status === 'failed').length}个失败`)
    }
  2. prev_context构建器(跨阶段)
    连接探索上下文到执行的核心函数:
    javascript
    function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
      if (!contextFrom) return '无可用的先前上下文'
    
      const ids = contextFrom.split(';').filter(Boolean)
      const entries = []
    
      ids.forEach(id => {
        if (id.startsWith('E')) {
          // ← 从explore.csv查询(跨阶段关联)
          const row = exploreCSV.find(r => r.id === id)
          if (row && row.status === 'completed' && row.findings) {
            entries.push(`[探索${row.angle}] ${row.findings}`)
            if (row.key_files) entries.push(`  关键文件: ${row.key_files}`)
          }
        } else if (id.startsWith('T')) {
          // ← 从tasks.csv查询(同阶段关联)
          const row = tasksCSV.find(r => r.id === id)
          if (row && row.status === 'completed' && row.findings) {
            entries.push(`[任务${row.id}: ${row.title}] ${row.findings}`)
            if (row.files_modified) entries.push(`  修改文件: ${row.files_modified}`)
          }
        }
      })
    
      return entries.length > 0 ? entries.join('\n') : '无可用的先前上下文'
    }
  3. 执行指令模板
    javascript
    function buildExecuteInstruction(sessionFolder, wave) {
      return `

TASK ASSIGNMENT

任务分配

MANDATORY FIRST STEPS

强制前置步骤

  1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
  2. Read project context: .workflow/project-tech.json (if exists)

  1. 读取共享探索结果: ${sessionFolder}/discoveries.ndjson(若存在,不存在则跳过)
  2. 读取项目上下文: .workflow/project-tech.json(若存在)

Your Task

你的任务

Task ID: {id}
Title: {title}
Description: {description}
Scope: {scope}
任务ID: {id}
标题: {title}
描述: {description}
范围: {scope}

Implementation Hints & Reference Files

实现提示与参考文件

{hints}
Format: `tips text || file1;file2`. Read ALL reference files (after ||) before starting. Apply tips (before ||) as guidance.
{hints}
格式: `提示文本 || 文件1;文件2`。开始前请阅读||后的所有参考文件。遵循||前的提示指导。

Execution Directives

执行指令

{execution_directives}
Commands to run for verification, tool restrictions, or environment requirements.
{execution_directives}
用于验证的命令、工具限制或环境要求。

Test Cases

测试用例

{test}
{test}

Acceptance Criteria

验收标准

{acceptance_criteria}
{acceptance_criteria}

Previous Context (from exploration and predecessor tasks)

先前上下文(来自探索和前置任务)

{prev_context}

{prev_context}

Execution Protocol

执行流程

  1. Read references: Parse {hints} — read all files listed after `||` to understand existing patterns
  2. Read discoveries: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
  3. Use context: Apply previous tasks' findings from prev_context above
  4. Stay in scope: ONLY create/modify files within {scope} — do NOT touch files outside this boundary
  5. Apply hints: Follow implementation tips from {hints} (before `||`)
  6. Execute: Implement the task as described
  7. Write tests: Implement the test cases defined above
  8. Run directives: Execute commands from {execution_directives} to verify your work
  9. Verify acceptance: Ensure all acceptance criteria are met before reporting completion
  10. Share discoveries: Append exploration findings to shared board: ```bash echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson ```
  11. Report result: Return JSON via report_agent_job_result
  1. 读取参考文件: 解析{hints} — 阅读||后列出的所有文件以了解现有模式
  2. 读取探索结果: 加载${sessionFolder}/discoveries.ndjson获取共享探索发现
  3. 使用上下文: 应用上述prev_context中的前置任务结果
  4. 保持范围: 仅在{scope}内创建/修改文件 — 不要触碰此边界外的文件
  5. 应用提示: 遵循{hints}中的实现提示(||前的部分)
  6. 执行: 按描述实现任务
  7. 编写测试: 实现上述定义的测试用例
  8. 执行指令: 执行{execution_directives}中的命令以验证你的工作
  9. 验证验收标准: 报告完成前确保所有验收标准已满足
  10. 共享探索结果: 将探索发现追加到共享看板: ```bash echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson ```
  11. 报告结果: 通过report_agent_job_result返回JSON

Discovery Types to Share

需要共享的发现类型

  • `code_pattern`: {name, file, description} — reusable patterns found
  • `integration_point`: {file, description, exports[]} — module connection points
  • `convention`: {naming, imports, formatting} — code style conventions
  • `blocker`: {issue, severity, impact} — blocking issues encountered

  • `code_pattern`: {name, file, description} — 找到的可复用模式
  • `integration_point`: {file, description, exports[]} — 模块连接点
  • `convention`: {naming, imports, formatting} — 代码风格约定
  • `blocker`: {issue, severity, impact} — 遇到的阻塞问题

Output (report_agent_job_result)

输出(report_agent_job_result)

Return JSON: { "id": "{id}", "status": "completed" | "failed", "findings": "Key discoveries and implementation notes (max 500 chars)", "files_modified": ["path1", "path2"], "tests_passed": true | false, "acceptance_met": "Summary of which acceptance criteria were met/unmet", "error": "" }
IMPORTANT: Set status to "completed" ONLY if:
  • All test cases pass
  • All acceptance criteria are met
Otherwise set status to "failed" with details in the error field. ` }
  4. Master CSV Update Helper
    javascript
    function updateMasterCsvRow(csvPath, taskId, updates) {
      const content = Read(csvPath)
      const lines = content.split('\n')
      const header = lines[0].split(',')
    
      for (let i = 1; i < lines.length; i++) {
        const cells = parseCsvLine(lines[i])
        if (cells[0] === taskId || cells[0] === `"${taskId}"`) {
          for (const [col, val] of Object.entries(updates)) {
            const colIdx = header.indexOf(col)
            if (colIdx >= 0) {
              cells[colIdx] = `"${String(val).replace(/"/g, '""')}"`
            }
          }
          lines[i] = cells.join(',')
          break
        }
      }
    
      Write(csvPath, lines.join('\n'))
    }
Success Criteria:
  • All waves executed in order
  • Each wave's results merged into master CSV before next wave starts
  • Dependent tasks skipped when predecessor failed
  • discoveries.ndjson accumulated across all phases
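To make the cross-phase lookup concrete, here is `buildPrevContext` (copied from step 2 above so the snippet runs standalone) applied to two in-memory rows; the row contents are hypothetical:

```javascript
// buildPrevContext as defined in step 2 above, copied here for a standalone run.
function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
  if (!contextFrom) return 'No previous context available'
  const ids = contextFrom.split(';').filter(Boolean)
  const entries = []
  ids.forEach(id => {
    if (id.startsWith('E')) {
      const row = exploreCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Explore ${row.angle}] ${row.findings}`)
        if (row.key_files) entries.push(`  Key files: ${row.key_files}`)
      }
    } else if (id.startsWith('T')) {
      const row = tasksCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
        if (row.files_modified) entries.push(`  Modified: ${row.files_modified}`)
      }
    }
  })
  return entries.length > 0 ? entries.join('\n') : 'No previous context available'
}

// Hypothetical rows: one completed exploration, one completed predecessor task.
const exploreRows = [
  { id: 'E1', angle: 'auth-patterns', status: 'completed',
    findings: 'Passport strategies live in src/auth', key_files: 'src/auth/passport.ts' }
]
const taskRows = [
  { id: 'T1', title: 'OAuth config', status: 'completed',
    findings: 'Added provider registry', files_modified: 'src/auth/providers.ts' }
]

// context_from "E1;T1" pulls one entry from each phase.
const ctx = buildPrevContext('E1;T1', exploreRows, taskRows)
```

Note that failed or pending rows contribute nothing: a task whose predecessor failed is skipped earlier in the wave loop, so `buildPrevContext` only ever sees completed context.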

返回JSON: { "id": "{id}", "status": "completed" | "failed", "findings": "关键发现和实现说明(最多500字符)", "files_modified": ["路径1", "路径2"], "tests_passed": true | false, "acceptance_met": "哪些验收条件已满足/未满足的汇总", "error": "" }
重要提示: 仅当以下条件满足时才设置status为"completed":
  • 所有测试用例通过
  • 所有验收标准已满足
否则设置status为"failed"并在error字段中填写详细信息。 ` }
  4. 主CSV更新助手
    javascript
    function updateMasterCsvRow(csvPath, taskId, updates) {
      const content = Read(csvPath)
      const lines = content.split('\n')
      const header = lines[0].split(',')
    
      for (let i = 1; i < lines.length; i++) {
        const cells = parseCsvLine(lines[i])
        if (cells[0] === taskId || cells[0] === `"${taskId}"`) {
          for (const [col, val] of Object.entries(updates)) {
            const colIdx = header.indexOf(col)
            if (colIdx >= 0) {
              cells[colIdx] = `"${String(val).replace(/"/g, '""')}"`
            }
          }
          lines[i] = cells.join(',')
          break
        }
      }
    
      Write(csvPath, lines.join('\n'))
    }
成功标准:
  • 所有波浪已按顺序执行
  • 每个波浪的结果已合并到主CSV后再执行下一个波浪
  • 前置任务失败时,依赖任务被跳过
  • discoveries.ndjson已积累所有阶段的探索发现

Phase 5: Results Aggregation

阶段5: 结果汇总

Objective: Generate final results and human-readable report.
Steps:
  1. Export results.csv
    javascript
    const masterCsv = Read(`${sessionFolder}/tasks.csv`)
    Write(`${sessionFolder}/results.csv`, masterCsv)
  2. Generate context.md
    javascript
    const finalTasks = parseCsv(masterCsv)
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const completed = finalTasks.filter(t => t.status === 'completed')
    const failed = finalTasks.filter(t => t.status === 'failed')
    const skipped = finalTasks.filter(t => t.status === 'skipped')
    
    const contextContent = `# Wave Plan Execution Report
Session: ${sessionId}
Requirement: ${requirement}
Completed: ${getUtc8ISOString()}
Waves: ${maxWave} | Concurrency: ${maxConcurrency}

目标: 生成最终结果和人类可读的报告。
步骤:
  1. 导出results.csv
    javascript
    const masterCsv = Read(`${sessionFolder}/tasks.csv`)
    Write(`${sessionFolder}/results.csv`, masterCsv)
  2. 生成context.md
    javascript
    const finalTasks = parseCsv(masterCsv)
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const completed = finalTasks.filter(t => t.status === 'completed')
    const failed = finalTasks.filter(t => t.status === 'failed')
    const skipped = finalTasks.filter(t => t.status === 'skipped')
    
    const contextContent = `# 波浪式规划执行报告
会话: ${sessionId}
需求: ${requirement}
完成时间: ${getUtc8ISOString()}
波浪数: ${maxWave} | 并发数: ${maxConcurrency}

Summary

汇总

| Metric | Count |
| --- | --- |
| Explore Angles | ${exploreCSV.length} |
| Total Tasks | ${finalTasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Waves | ${maxWave} |

| 指标 | 数量 |
| --- | --- |
| 探索维度 | ${exploreCSV.length} |
| 总任务数 | ${finalTasks.length} |
| 已完成 | ${completed.length} |
| 失败 | ${failed.length} |
| 已跳过 | ${skipped.length} |
| 波浪数 | ${maxWave} |

Exploration Results

探索结果

${exploreCSV.map(e =>
  `### ${e.id}: ${e.angle} (${e.status})
${e.findings || 'N/A'}
Key files: ${e.key_files || 'none'}`
).join('\n\n')}

${exploreCSV.map(e =>
  `### ${e.id}: ${e.angle}(${e.status})
${e.findings || '无'}
关键文件: ${e.key_files || '无'}`
).join('\n\n')}

Task Results

任务结果

${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})
| Field | Value |
| --- | --- |
| Wave | ${t.wave} |
| Scope | ${t.scope} |
| Dependencies | ${t.deps} |
| Context From | ${t.context_from} |
| Tests Passed | ${t.tests_passed} |
| Acceptance Met | ${t.acceptance_met} |
| Error | ${t.error} |
Description: ${t.description}
Test Cases: ${t.test || 'N/A'}
Acceptance Criteria: ${t.acceptance_criteria || 'N/A'}
Hints: ${t.hints || 'N/A'}
Execution Directives: ${t.execution_directives || 'N/A'}
Findings: ${t.findings || 'N/A'}
Files Modified: ${t.files_modified || 'none'}`).join('\n\n---\n\n')}

${finalTasks.map(t => `### ${t.id}: ${t.title}(${t.status})
| 字段 | 值 |
| --- | --- |
| 波浪 | ${t.wave} |
| 范围 | ${t.scope} |
| 依赖 | ${t.deps} |
| 上下文来源 | ${t.context_from} |
| 测试通过 | ${t.tests_passed} |
| 验收满足 | ${t.acceptance_met} |
| 错误 | ${t.error} |
描述: ${t.description}
测试用例: ${t.test || '无'}
验收标准: ${t.acceptance_criteria || '无'}
提示: ${t.hints || '无'}
执行指令: ${t.execution_directives || '无'}
结果: ${t.findings || '无'}
修改文件: ${t.files_modified || '无'}`).join('\n\n---\n\n')}

All Modified Files

所有修改的文件

${[...new Set(finalTasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boolean))].map(f => '- ' + f).join('\n') || 'None'} `
Write(`${sessionFolder}/context.md`, contextContent)

3. **Display Summary**

```javascript
console.log(`
${[...new Set(finalTasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boolean))].map(f => '- ' + f).join('\n') || '无'} `
Write(`${sessionFolder}/context.md`, contextContent)

3. **显示汇总信息**

```javascript
console.log(`

Wave Plan Complete

波浪式规划完成

  • Session: ${sessionId}
  • Explore: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length} angles
  • Tasks: ${completed.length}/${finalTasks.length} completed, ${failed.length} failed, ${skipped.length} skipped
  • Waves: ${maxWave}
Results: ${sessionFolder}/results.csv
Report: ${sessionFolder}/context.md
Discoveries: ${sessionFolder}/discoveries.ndjson `)

4. **Offer Next Steps** (skip if AUTO_YES)

```javascript
if (!AUTO_YES && failed.length > 0) {
  const answer = AskUserQuestion({
    questions: [{
      question: `${failed.length} tasks failed. Next action?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Retry Failed", description: `Re-execute ${failed.length} failed tasks with updated context` },
        { label: "View Report", description: "Display context.md" },
        { label: "Done", description: "Complete session" }
      ]
    }]
  })

  if (answer['Next Step'] === "Retry Failed") {
    for (const task of failed) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    for (const task of skipped) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    // Re-execute Phase 4
  } else if (answer['Next Step'] === "View Report") {
    console.log(Read(`${sessionFolder}/context.md`))
  }
}
```
Success Criteria:
  • results.csv exported
  • context.md generated with full field coverage
  • Summary displayed to user

  • 会话: ${sessionId}
  • 探索: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length}个维度完成
  • 任务: ${completed.length}/${finalTasks.length}个完成,${failed.length}个失败,${skipped.length}个被跳过
  • 波浪数: ${maxWave}
结果文件: ${sessionFolder}/results.csv
报告: ${sessionFolder}/context.md
探索发现: ${sessionFolder}/discoveries.ndjson `)

4. **提供下一步操作**(若使用AUTO_YES则跳过)

```javascript
if (!AUTO_YES && failed.length > 0) {
  const answer = AskUserQuestion({
    questions: [{
      question: `${failed.length}个任务失败。下一步操作?`,
      header: "下一步",
      multiSelect: false,
      options: [
        { label: "重试失败任务", description: `使用更新的上下文重新执行${failed.length}个失败任务` },
        { label: "查看报告", description: "显示context.md" },
        { label: "完成", description: "结束会话" }
      ]
    }]
  })

  if (answer['下一步'] === "重试失败任务") {
    for (const task of failed) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    for (const task of skipped) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    // 重新执行阶段4
  } else if (answer['下一步'] === "查看报告") {
    console.log(Read(`${sessionFolder}/context.md`))
  }
}
成功标准:
  • 已导出results.csv
  • 已生成包含完整字段的context.md
  • 已向用户显示汇总信息

Wave Computation (Kahn's BFS)

javascript
function computeWaves(tasks) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))

  for (const task of tasks) {
    for (const dep of task.deps) {
      if (taskMap.has(dep)) {
        adjList.get(dep).push(task.id)
        inDegree.set(task.id, inDegree.get(task.id) + 1)
      }
    }
  }

  const queue = []
  const waveAssignment = new Map()

  for (const [id, deg] of inDegree) {
    if (deg === 0) {
      queue.push([id, 1])
      waveAssignment.set(id, 1)
    }
  }

  let maxWave = 1
  let idx = 0
  while (idx < queue.length) {
    const [current, depth] = queue[idx++]
    for (const next of adjList.get(current)) {
      const newDeg = inDegree.get(next) - 1
      inDegree.set(next, newDeg)
      const nextDepth = Math.max(waveAssignment.get(next) || 0, depth + 1)
      waveAssignment.set(next, nextDepth)
      if (newDeg === 0) {
        queue.push([next, nextDepth])
        maxWave = Math.max(maxWave, nextDepth)
      }
    }
  }

  for (const task of tasks) {
    if (!waveAssignment.has(task.id)) {
      throw new Error(`Circular dependency detected involving task ${task.id}`)
    }
  }

  return { waveAssignment, maxWave }
}
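For instance, the diamond dependency noted under Usage Recommendations (T1 → {T2, T3} → T4) yields three waves. A minimal self-contained harness around a copy of `computeWaves`:

```javascript
// Copy of computeWaves (as defined above) plus a diamond-dependency demo:
// T1 → {T2, T3} → T4 must produce waves 1, 2, 2, 3.
function computeWaves(tasks) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))

  for (const task of tasks) {
    for (const dep of task.deps) {
      if (taskMap.has(dep)) {
        adjList.get(dep).push(task.id)
        inDegree.set(task.id, inDegree.get(task.id) + 1)
      }
    }
  }

  const queue = []
  const waveAssignment = new Map()
  for (const [id, deg] of inDegree) {
    if (deg === 0) { queue.push([id, 1]); waveAssignment.set(id, 1) }
  }

  let maxWave = 1
  let idx = 0
  while (idx < queue.length) {
    const [current, depth] = queue[idx++]
    for (const next of adjList.get(current)) {
      const newDeg = inDegree.get(next) - 1
      inDegree.set(next, newDeg)
      const nextDepth = Math.max(waveAssignment.get(next) || 0, depth + 1)
      waveAssignment.set(next, nextDepth)
      if (newDeg === 0) {
        queue.push([next, nextDepth])
        maxWave = Math.max(maxWave, nextDepth)
      }
    }
  }

  for (const task of tasks) {
    if (!waveAssignment.has(task.id)) {
      throw new Error(`Circular dependency detected involving task ${task.id}`)
    }
  }
  return { waveAssignment, maxWave }
}

const tasks = [
  { id: 'T1', deps: [] },
  { id: 'T2', deps: ['T1'] },
  { id: 'T3', deps: ['T1'] },
  { id: 'T4', deps: ['T2', 'T3'] }
]
const { waveAssignment, maxWave } = computeWaves(tasks)
console.log(maxWave)                   // → 3
console.log(waveAssignment.get('T4'))  // → 3
```

Note that a task's wave is the maximum over its dependencies' waves plus one, so T4 lands in wave 3 even though one path to it is only two hops long.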


Shared Discovery Board Protocol

All agents across all phases share discoveries.ndjson. This eliminates redundant codebase exploration.
jsonl
{"ts":"2026-02-28T10:00:00+08:00","worker":"E1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
{"ts":"2026-02-28T10:01:00+08:00","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
Types: code_pattern, integration_point, convention, blocker, tech_stack, test_command
Rules: Read first → write immediately → deduplicate → append-only


Error Handling

| Error | Resolution |
| --- | --- |
| Explore agent failure | Mark as failed in explore.csv, exclude from planning |
| All explores failed | Fallback: plan directly from requirement without exploration |
| Circular dependency | Abort wave computation, report cycle |
| Execute agent timeout | Mark as failed in results, continue with wave |
| Execute agent failed | Mark as failed, skip dependent tasks in later waves |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Continue mode: no session | List available sessions, prompt user to select |
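The CSV-parse-error rule (validate before execution, report the line number) can be sketched as a deliberately naive check (`validateCsv` is a hypothetical helper; it assumes no quoted fields with embedded commas, which a real CSV parser must handle):

```javascript
// Pre-execution CSV validation sketch: every data row must have the
// same column count as the header; report the 1-based line number of
// the first bad row. Assumes no quoted fields containing commas.
function validateCsv(text) {
  const lines = text.split('\n').filter(l => l.trim())
  const width = lines[0].split(',').length
  for (let i = 1; i < lines.length; i++) {
    if (lines[i].split(',').length !== width) {
      return { ok: false, line: i + 1 }
    }
  }
  return { ok: true }
}

console.log(validateCsv('id,desc\nT1,a,extra\nT2,b'))  // → { ok: false, line: 2 }
```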


Core Rules

  1. Explore Before Execute: Phase 2 completes before Phase 4 starts
  2. Wave Order is Sacred: Never execute wave N before wave N-1 completes and results are merged
  3. CSV is Source of Truth: Master CSVs hold all state — always read before wave, always write after
  4. Cross-Phase Context: prev_context built from both explore.csv (E*) and tasks.csv (T*), not from memory
  5. E ↔ T Linking: tasks.csv context_from references explore.csv rows for cross-phase context
  6. Discovery Board is Append-Only: Never clear, modify, or recreate discoveries.ndjson
  7. Skip on Failure: If a dependency failed, skip the dependent task (cascade)
  8. Cleanup Temp Files: Remove wave CSVs after results are merged
  9. DO NOT STOP: Continuous execution until all waves complete or all remaining tasks are skipped
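Rule 7's cascade can be sketched as a fixed-point computation over the dependency graph (`cascadeSkips` is a hypothetical helper name, not part of the pipeline's API):

```javascript
// Sketch of rule 7: once a task fails, every task that depends on it,
// directly or transitively, is marked skipped. Iterate until no new
// skips are found (fixed point).
function cascadeSkips(tasks, failedIds) {
  const skipped = new Set()
  let changed = true
  while (changed) {
    changed = false
    for (const task of tasks) {
      if (failedIds.has(task.id) || skipped.has(task.id)) continue
      if (task.deps.some(d => failedIds.has(d) || skipped.has(d))) {
        skipped.add(task.id)
        changed = true
      }
    }
  }
  return skipped
}

// Example: T1 failed; T2 depends on T1, T3 on T2, T4 is independent.
const skipped = cascadeSkips(
  [{ id: 'T1', deps: [] }, { id: 'T2', deps: ['T1'] },
   { id: 'T3', deps: ['T2'] }, { id: 'T4', deps: [] }],
  new Set(['T1'])
)
console.log([...skipped])  // → [ 'T2', 'T3' ]
```

T3 is skipped even though its direct dependency T2 never ran, which is exactly the cascade the "Execute agent failed" error-handling row describes.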


Best Practices

  1. Exploration Angles: 1 for simple, 3-4 for complex; avoid redundant angles
  2. Context Linking: Link every task to at least one explore row (E*) — exploration was done for a reason
  3. Task Granularity: 3-10 tasks optimal; too many = overhead, too few = no parallelism
  4. Minimize Cross-Wave Deps: More tasks in wave 1 = more parallelism
  5. Specific Descriptions: Agent sees only its CSV row + prev_context — make description self-contained
  6. Non-Overlapping Scopes: Same-wave tasks must not write to the same files
  7. Concurrency Tuning: -c 1 for serial (max context sharing); -c 8 for I/O-bound tasks


Usage Recommendations

| Scenario | Recommended Approach |
| --- | --- |
| Complex feature (unclear architecture) | $wave-plan-pipeline — explore first, then plan |
| Simple known-pattern task | $csv-wave-pipeline — skip exploration, direct execution |
| Independent parallel tasks | $csv-wave-pipeline -c 8 — single wave, max parallelism |
| Diamond dependency (A→B,C→D) | $wave-plan-pipeline — 3 waves with context propagation |
| Unknown codebase | $wave-plan-pipeline — exploration phase is essential |