workflow-lite-plan


Auto Mode

When `--yes` or `-y` is passed: auto-confirm decomposition, skip interactive validation, and use defaults.

Workflow Lite Planex

Usage


```bash
$workflow-lite-plan "Implement user authentication with OAuth, JWT, and 2FA"
$workflow-lite-plan -c 4 "Refactor payment module with Stripe and PayPal"
$workflow-lite-plan -y "Build notification system with email and SMS"
$workflow-lite-plan --continue "auth-20260228"
```

Flags:
  • `-y, --yes`: Skip all confirmations (auto mode)
  • `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
  • `--continue`: Resume an existing session

Output directory: `.workflow/.lite-plan/{session-id}/`


Overview


Explore-first, wave-based pipeline built on `spawn_agents_on_csv`. Two-stage CSV execution: explore.csv (codebase discovery) → tasks.csv (implementation), with cross-phase context propagation via `context_from` linking (`E*` / `T*`).

Core workflow: Decompose → Wave Explore → Synthesize & Plan → Wave Execute → Aggregate
┌──────────────────────────────────────────────────────────────────────┐
│                    WORKFLOW LITE PLANEX                               │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Phase 1: Requirement → explore.csv                                  │
│     ├─ Analyze complexity → select exploration angles (1-4)          │
│     ├─ Generate explore.csv (1 row per angle)                        │
│     └─ User validates (skip if -y)                                   │
│                                                                      │
│  Phase 2: Wave Explore (spawn_agents_on_csv)                         │
│     ├─ For each explore wave:                                        │
│     │   ├─ Build wave CSV from explore.csv                           │
│     │   ├─ spawn_agents_on_csv(explore instruction template)         │
│     │   └─ Merge findings/key_files into explore.csv                 │
│     └─ discoveries.ndjson shared across agents                       │
│                                                                      │
│  Phase 3: Synthesize & Plan → tasks.csv                              │
│     ├─ Read all explore findings → cross-reference                   │
│     ├─ Resolve conflicts between angles                              │
│     ├─ Decompose into execution tasks with context_from: E*;T*       │
│     ├─ Compute dependency waves (topological sort)                   │
│     └─ User validates (skip if -y)                                   │
│                                                                      │
│  Phase 4: Wave Execute (spawn_agents_on_csv)                         │
│     ├─ For each task wave:                                           │
│     │   ├─ Build prev_context from explore.csv + tasks.csv           │
│     │   ├─ Build wave CSV with prev_context column                   │
│     │   ├─ spawn_agents_on_csv(execute instruction template)         │
│     │   └─ Merge results into tasks.csv                              │
│     └─ discoveries.ndjson carries across all waves                   │
│                                                                      │
│  Phase 5: Aggregate                                                  │
│     ├─ Export results.csv                                            │
│     ├─ Generate context.md with all findings                         │
│     └─ Display summary                                               │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
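Resumption follows the rule noted later in Session Initialization (tasks.csv present → re-enter at Phase 4, otherwise Phase 2). A minimal sketch of that phase-selection logic; the phase names and `session` shape here are illustrative, not the command's actual API:

```javascript
// Decide which phases to run for a fresh or resumed session.
// Hypothetical helper: the real command inlines this decision.
function resumePlan(session) {
  const all = ['decompose', 'waveExplore', 'synthesizePlan', 'waveExecute', 'aggregate']
  if (!session.resuming) return all
  // --continue rule: tasks.csv exists → Phase 4, else → Phase 2
  return session.hasTasksCsv ? all.slice(3) : all.slice(1)
}
```

Fresh runs walk all five phases; a resumed session never repeats decomposition.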


Context Flow


explore.csv             tasks.csv
┌──────────┐           ┌──────────┐
│ E1: arch │──────────→│ T1: setup│ context_from: E1;E2
│ findings │           │ prev_ctx │← E1+E2 findings
├──────────┤           ├──────────┤
│ E2: deps │──────────→│ T2: impl │ context_from: E1;T1
│ findings │           │ prev_ctx │← E1+T1 findings
├──────────┤           ├──────────┤
│ E3: test │──┐   ┌───→│ T3: test │ context_from: E3;T2
│ findings │  └───┘    │ prev_ctx │← E3+T2 findings
└──────────┘           └──────────┘

Two context channels:
1. Directed: context_from → prev_context (CSV findings lookup)
2. Broadcast: discoveries.ndjson (append-only shared board)

context_from prefix: E* → explore.csv lookup, T* → tasks.csv lookup
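A sketch of the directed channel, assuming CSV rows are already parsed into objects. The `[Explore <angle>]` / `[Task <id>]` labels follow the prev_context example shown later; `buildPrevContext` itself is a hypothetical helper name:

```javascript
// Resolve a context_from spec like "E1;T1" into a prev_context string.
// E* ids route to explore rows, T* ids to task rows; only completed
// rows with findings contribute.
function buildPrevContext(contextFrom, exploreRows, taskRows) {
  const ids = (contextFrom || '').split(';').filter(Boolean)
  const lines = []
  for (const id of ids) {
    const pool = id.startsWith('E') ? exploreRows : taskRows
    const row = pool.find(r => r.id === id)
    if (!row || row.status !== 'completed' || !row.findings) continue
    const label = id.startsWith('E') ? `Explore ${row.angle}` : `Task ${row.id}`
    lines.push(`[${label}] ${row.findings}`)
  }
  return lines.join('\n')
}
```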


CSV Schemas

explore.csv

```csv
id,angle,description,focus,deps,wave,status,findings,key_files,error
"E1","architecture","Explore codebase architecture for: auth system","architecture","","1","pending","","",""
"E2","dependencies","Explore dependency landscape for: auth system","dependencies","","1","pending","","",""
"E3","testing","Explore test infrastructure for: auth system","testing","","1","pending","","",""
```
Columns:

| Column | Phase | Description |
| --- | --- | --- |
| id | Input | Exploration ID: E1, E2, ... |
| angle | Input | Exploration angle name |
| description | Input | What to explore from this angle |
| focus | Input | Keywords and focus areas |
| deps | Input | Semicolon-separated dep IDs (usually empty; all explorations run in wave 1) |
| wave | Computed | Wave number (usually 1 for all explorations) |
| status | Output | pending / completed / failed |
| findings | Output | Discoveries (max 800 chars) |
| key_files | Output | Relevant files (semicolon-separated) |
| error | Output | Error message if failed |

tasks.csv

```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error
"T1","Setup types","Create type definitions","Verify types compile with tsc","All interfaces exported","src/types/**","Follow existing patterns || src/types/index.ts","tsc --noEmit","","E1;E2","1","pending","","","","",""
"T2","Implement core","Implement core auth logic","Unit test: login returns token","Login flow works end-to-end","src/auth/**","Reuse BaseService || src/services/Base.ts","npm test -- --grep auth","T1","E1;E2;T1","2","pending","","","","",""
```
Columns:

| Column | Phase | Description |
| --- | --- | --- |
| id | Input | Task ID: T1, T2, ... |
| title | Input | Short task title |
| description | Input | Self-contained task description: what to implement |
| test | Input | Test cases: what tests to write and how to verify (unit/integration/edge) |
| acceptance_criteria | Input | Measurable conditions that define "done" |
| scope | Input | Target file/directory glob; constrains the agent's write area and prevents cross-task file conflicts |
| hints | Input | Implementation tips + reference files. Format: `tips text \|\| file1;file2`. Either part is optional |
| execution_directives | Input | Execution constraints: commands to run for verification, tool restrictions |
| deps | Input | Dependency task IDs: T1;T2 (semicolon-separated) |
| context_from | Input | Context source IDs: E1;E2;T1. `E*` looks up explore.csv, `T*` looks up tasks.csv |
| wave | Computed | Wave number (computed by topological sort, 1-based) |
| status | Output | pending / completed / failed / skipped |
| findings | Output | Execution findings (max 500 chars) |
| files_modified | Output | Semicolon-separated file paths |
| tests_passed | Output | Whether all defined test cases passed (true/false) |
| acceptance_met | Output | Summary of which acceptance criteria were met/unmet |
| error | Output | Error message if failed (empty if success) |
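The `wave` value can be derived from `deps` with a longest-path topological layering; a minimal sketch (assumes the dependency graph is a DAG and throws otherwise):

```javascript
// Assign 1-based waves: a task's wave is 1 + the max wave of its deps,
// so independent tasks land in wave 1 and chains stack into later waves.
function computeWaves(tasks) {
  const waves = {}
  const resolve = (id, seen = new Set()) => {
    if (waves[id]) return waves[id]                    // memoized
    if (seen.has(id)) throw new Error(`Dependency cycle at ${id}`)
    seen.add(id)
    const deps = (tasks.find(t => t.id === id).deps || '').split(';').filter(Boolean)
    waves[id] = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(d => resolve(d, seen)))
    return waves[id]
  }
  tasks.forEach(t => resolve(t.id))
  return waves
}
```

For the sample rows above this yields T1 → wave 1 and T2 → wave 2, matching their `wave` columns.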

Per-Wave CSV (Temporary)


Each wave generates a temporary CSV with an extra `prev_context` column.

Explore wave: `explore-wave-{N}.csv`, same columns as explore.csv (no prev_context; explorations are independent).

Execute wave: `task-wave-{N}.csv`, all task columns plus `prev_context`:

```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context
"T2","Implement core","Implement core auth logic","Unit test: login returns token","Login flow works end-to-end","src/auth/**","Reuse BaseService || src/services/Base.ts","npm test -- --grep auth","T1","E1;E2;T1","2","[Explore architecture] Found BaseService pattern in src/services/\n[Task T1] Created types at src/types/auth.ts"
```

The `prev_context` column is built from `context_from` by looking up completed rows' `findings` in both explore.csv (`E*`) and tasks.csv (`T*`).
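Since `prev_context` can contain newlines and quotes, each wave-CSV field must be quoted with embedded quotes doubled; a sketch of the field encoder (helper names are illustrative):

```javascript
// Quote a single CSV field: wrap in double quotes and double embedded
// quotes. Newlines are legal inside a quoted field, which is how a
// multi-line prev_context survives the round trip.
const toCsvField = v => `"${String(v ?? '').replace(/"/g, '""')}"`

// Serialize one task row plus its resolved prev_context.
function buildWaveRow(task, prevContext) {
  return [
    task.id, task.title, task.description, task.test, task.acceptance_criteria,
    task.scope, task.hints, task.execution_directives, task.deps,
    task.context_from, task.wave, prevContext
  ].map(toCsvField).join(',')
}
```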


Output Artifacts


| File | Purpose | Lifecycle |
| --- | --- | --- |
| explore.csv | Exploration state: angles with findings/key_files | Updated after Phase 2 |
| tasks.csv | Execution state: tasks with results | Updated after each wave in Phase 4 |
| explore-wave-{N}.csv | Per-wave explore input (temporary) | Created before wave, deleted after |
| task-wave-{N}.csv | Per-wave execute input (temporary) | Created before wave, deleted after |
| results.csv | Final results export | Created in Phase 5 |
| discoveries.ndjson | Shared discovery board (all agents, all phases) | Append-only |
| context.md | Human-readable execution report | Created in Phase 5 |


Session Structure


.workflow/.lite-plan/{session-id}/
├── explore.csv              # Exploration state
├── tasks.csv                # Execution state
├── results.csv              # Final results export
├── discoveries.ndjson       # Shared discovery board
├── context.md               # Full context summary
├── explore-wave-{N}.csv     # Temporary per-wave explore input (cleaned up)
└── task-wave-{N}.csv        # Temporary per-wave execute input (cleaned up)


Implementation


Session Initialization


```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 4

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
// let, not const: continue mode below may rebind these to an existing session
let sessionId = `wpp-${slug}-${dateStr}`
let sessionFolder = `.workflow/.lite-plan/${sessionId}`

// Continue mode: resume the most recently modified session
if (continueMode) {
  const existing = Bash(`ls -t .workflow/.lite-plan/ 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing
    sessionFolder = `.workflow/.lite-plan/${sessionId}`
    // Check which phase to resume: if tasks.csv exists → Phase 4, else → Phase 2
  }
}

Bash(`mkdir -p ${sessionFolder}`)
```


Phase 1: Requirement → explore.csv


Objective: Analyze requirement complexity, select exploration angles, generate explore.csv.

Steps:

1. Analyze & Decompose

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Analyze requirement complexity and select 1-4 exploration angles for codebase discovery before implementation.
TASK: • Classify requirement type (feature/bugfix/refactor/security/performance) • Assess complexity (Low: 1 angle, Medium: 2-3, High: 3-4) • Select exploration angles from: architecture, dependencies, integration-points, testing, patterns, security, performance, state-management, error-handling, edge-cases • For each angle, define focus keywords and what to discover
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON object: {type: string, complexity: string, angles: [{id: string, angle: string, description: string, focus: string}]}. Each angle id = E1, E2, etc.
CONSTRAINTS: 1-4 angles | Angles must be distinct | Each angle must have clear focus
REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
  run_in_background: true
})
// Wait for CLI completion via hook callback
// Parse JSON from CLI output → { type, complexity, angles[] }
```

2. Generate explore.csv

```javascript
const header = 'id,angle,description,focus,deps,wave,status,findings,key_files,error'
const rows = angles.map(a =>
  [a.id, a.angle, a.description, a.focus, '', '1', 'pending', '', '', '']
    .map(v => `"${String(v).replace(/"/g, '""')}"`)
    .join(',')
)

Write(`${sessionFolder}/explore.csv`, [header, ...rows].join('\n'))
```

3. User Validation (skip if AUTO_YES)

```javascript
if (!AUTO_YES) {
  console.log(`\n## Exploration Plan (${angles.length} angles)\n`)
  angles.forEach(a => console.log(`  - [${a.id}] ${a.angle}: ${a.focus}`))

  const answer = AskUserQuestion({
    questions: [{
      question: "Approve exploration angles?",
      header: "Validation",
      multiSelect: false,
      options: [
        { label: "Approve", description: "Proceed with wave exploration" },
        { label: "Modify", description: `Edit ${sessionFolder}/explore.csv manually, then --continue` },
        { label: "Cancel", description: "Abort" }
      ]
    }]
  })

  if (answer.Validation === "Modify") {
    console.log(`Edit: ${sessionFolder}/explore.csv\nResume: $workflow-lite-plan --continue`)
    return
  } else if (answer.Validation === "Cancel") {
    return
  }
}
```
Success Criteria:
  • explore.csv created with 1-4 exploration angles
  • User approved (or AUTO_YES)


Phase 2: Wave Explore (spawn_agents_on_csv)


Objective: Execute exploration via `spawn_agents_on_csv`. Each angle produces findings and key_files.

Steps:

1. Explore Wave Loop

```javascript
const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
const maxExploreWave = Math.max(...exploreCSV.map(r => parseInt(r.wave, 10)))

for (let wave = 1; wave <= maxExploreWave; wave++) {
  const waveTasks = exploreCSV.filter(r =>
    parseInt(r.wave, 10) === wave && r.status === 'pending'
  )
  if (waveTasks.length === 0) continue

  // Skip rows with failed dependencies (and persist the skip to explore.csv)
  const executableTasks = []
  for (const task of waveTasks) {
    const deps = (task.deps || '').split(';').filter(Boolean)
    if (deps.some(d => {
      const dep = exploreCSV.find(r => r.id === d)
      return !dep || dep.status !== 'completed'
    })) {
      task.status = 'skipped'
      task.error = 'Dependency failed/skipped'
      updateMasterCsvRow(`${sessionFolder}/explore.csv`, task.id, { status: 'skipped', error: task.error })
      continue
    }
    executableTasks.push(task)
  }

  if (executableTasks.length === 0) continue

  // Write explore wave CSV
  const waveHeader = 'id,angle,description,focus,deps,wave'
  const waveRows = executableTasks.map(t =>
    [t.id, t.angle, t.description, t.focus, t.deps, t.wave]
      .map(v => `"${String(v).replace(/"/g, '""')}"`)
      .join(',')
  )
  Write(`${sessionFolder}/explore-wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

  // Execute explore wave
  console.log(`  Exploring ${executableTasks.length} angles (wave ${wave})...`)

  spawn_agents_on_csv({
    csv_path: `${sessionFolder}/explore-wave-${wave}.csv`,
    id_column: "id",
    instruction: buildExploreInstruction(sessionFolder),
    max_concurrency: maxConcurrency,
    max_runtime_seconds: 300,
    output_csv_path: `${sessionFolder}/explore-wave-${wave}-results.csv`,
    output_schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        status: { type: "string", enum: ["completed", "failed"] },
        findings: { type: "string" },
        key_files: { type: "array", items: { type: "string" } },
        error: { type: "string" }
      },
      required: ["id", "status", "findings"]
    }
  })

  // Merge results into explore.csv
  const waveResults = parseCsv(Read(`${sessionFolder}/explore-wave-${wave}-results.csv`))
  for (const result of waveResults) {
    updateMasterCsvRow(`${sessionFolder}/explore.csv`, result.id, {
      status: result.status,
      findings: result.findings || '',
      key_files: Array.isArray(result.key_files) ? result.key_files.join(';') : (result.key_files || ''),
      error: result.error || ''
    })
  }

  // Cleanup temporary wave CSVs
  Bash(`rm -f "${sessionFolder}/explore-wave-${wave}.csv" "${sessionFolder}/explore-wave-${wave}-results.csv"`)
}
```
  2. Explore Instruction Template
    javascript
    function buildExploreInstruction(sessionFolder) {
      return `
EXPLORATION ASSIGNMENT

EXPLORATION ASSIGNMENT

MANDATORY FIRST STEPS

MANDATORY FIRST STEPS

  1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
  2. Read project context: .workflow/project-tech.json (if exists)

  1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
  2. Read project context: .workflow/project-tech.json (if exists)

Your Exploration

Your Exploration

Exploration ID: {id} Angle: {angle} Description: {description} Focus: {focus}

Exploration ID: {id} Angle: {angle} Description: {description} Focus: {focus}

Exploration Protocol

Exploration Protocol

  1. Read discoveries: Load ${sessionFolder}/discoveries.ndjson for shared findings
  2. Explore: Search the codebase from the {angle} perspective
  3. Discover: Find relevant files, patterns, integration points, constraints
  4. Share discoveries: Append findings to shared board: ```bash echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson ```
  5. Report result: Return JSON via report_agent_job_result
  1. Read discoveries: Load ${sessionFolder}/discoveries.ndjson for shared findings
  2. Explore: Search the codebase from the {angle} perspective
  3. Discover: Find relevant files, patterns, integration points, constraints
  4. Share discoveries: Append findings to shared board: ```bash echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson ```
  5. Report result: Return JSON via report_agent_job_result

What to Look For

What to Look For

  • Existing patterns and conventions to follow
  • Integration points and module boundaries
  • Dependencies and constraints
  • Test infrastructure and coverage
  • Risks and potential blockers

Discovery Types to Share

  • `code_pattern`: {name, file, description} — reusable patterns found
  • `integration_point`: {file, description, exports[]} — module connection points
  • `convention`: {naming, imports, formatting} — code style conventions
  • `tech_stack`: {framework, version, config} — technology stack details

Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Concise summary of {angle} discoveries (max 800 chars)",
  "key_files": ["relevant/file1.ts", "relevant/file2.ts"],
  "error": ""
}
` }

**Success Criteria**:
- All explore angles executed
- explore.csv updated with findings and key_files
- discoveries.ndjson accumulated

---

Phase 3: Synthesize & Plan → tasks.csv

Objective: Read exploration findings, cross-reference, resolve conflicts, generate tasks.csv with context_from linking to E* rows.
Steps:
  1. Synthesize Exploration Findings
    javascript
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const completed = exploreCSV.filter(r => r.status === 'completed')
    
    // Cross-reference: find shared files across angles
    const fileRefs = {}
    completed.forEach(r => {
      (r.key_files || '').split(';').filter(Boolean).forEach(f => {
        if (!fileRefs[f]) fileRefs[f] = []
        fileRefs[f].push({ angle: r.angle, id: r.id })
      })
    })
    const sharedFiles = Object.entries(fileRefs).filter(([_, refs]) => refs.length > 1)
    
    // Build synthesis context for task decomposition
    const synthesisContext = completed.map(r =>
      `[${r.id}: ${r.angle}] ${r.findings}\n  Key files: ${r.key_files || 'none'}`
    ).join('\n\n')
    
    const sharedFilesContext = sharedFiles.length > 0
      ? `\nShared files (referenced by multiple angles):\n${sharedFiles.map(([f, refs]) =>
          `  ${f}${refs.map(r => r.id).join(', ')}`
        ).join('\n')}`
      : ''
  2. Decompose into Tasks
    javascript
    Bash({
      command: `ccw cli -p "PURPOSE: Based on exploration findings, decompose requirement into 3-10 atomic execution tasks. Each task must include test cases, acceptance criteria, and link to relevant exploration findings.
TASK: • Use exploration findings to inform task decomposition • Each task must be self-contained with specific implementation instructions • Link tasks to exploration rows via context_from (E1, E2, etc.) • Define dependencies between tasks (T1 must finish before T2, etc.) • For each task: define test cases, acceptance criteria, scope, hints, and execution directives • Ensure same-wave tasks have non-overlapping scopes MODE: analysis CONTEXT: @**/* EXPECTED: JSON object with tasks array. Each task: {id: string, title: string, description: string, test: string, acceptance_criteria: string, scope: string, hints: string, execution_directives: string, deps: string[], context_from: string[]}.
  • id: T1, T2, etc.
  • description: what to implement (specific enough for an agent)
  • test: what tests to write (e.g. 'Unit test: X returns Y')
  • acceptance_criteria: what defines done (e.g. 'API returns 200')
  • scope: target glob (e.g. 'src/auth/**') — non-overlapping within same wave
  • hints: tips + ref files (format: 'tips || file1;file2')
  • execution_directives: verification commands (e.g. 'npm test --bail')
  • deps: task IDs that must complete first (T*)
  • context_from: explore (E*) and task (T*) IDs whose findings are needed CONSTRAINTS: 3-10 tasks | Atomic | No circular deps | Concrete test/acceptance_criteria | Non-overlapping scopes per wave
EXPLORATION FINDINGS: ${synthesisContext} ${sharedFilesContext}
REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`, run_in_background: true }) // Wait for CLI completion → decomposedTasks[]

3. **Compute Waves & Write tasks.csv**

```javascript
const { waveAssignment, maxWave } = computeWaves(decomposedTasks)

const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
const rows = decomposedTasks.map(task => {
  const wave = waveAssignment.get(task.id)
  // Quote and escape each cell exactly once in the final map below;
  // pre-escaping cells with csvEscape as well would double the embedded quotes.
  return [
    task.id,
    task.title,
    task.description,
    task.test,
    task.acceptance_criteria,
    task.scope,
    task.hints,
    task.execution_directives,
    task.deps.join(';'),
    task.context_from.join(';'),
    wave,
    'pending', '', '', '', '', ''
  ].map(cell => `"${String(cell).replace(/"/g, '""')}"`).join(',')
})
})

Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```

4. **User Validation** (skip if AUTO_YES)
    javascript
    if (!AUTO_YES) {
      console.log(`

Execution Plan

Explore: ${completed.length} angles completed
Shared files: ${sharedFiles.length}
Tasks: ${decomposedTasks.length} across ${maxWave} waves

${Array.from({length: maxWave}, (_, i) => i + 1).map(w => {
  const wt = decomposedTasks.filter(t => waveAssignment.get(t.id) === w)
  return `### Wave ${w} (${wt.length} tasks, concurrent)
${wt.map(t => `- [${t.id}] ${t.title} (scope: ${t.scope}, from: ${t.context_from.join(';')})`).join('\n')}`
}).join('\n')}`)
 const answer = AskUserQuestion({
   questions: [{
     question: `Proceed with ${decomposedTasks.length} tasks across ${maxWave} waves?`,
     header: "Confirm",
     multiSelect: false,
     options: [
       { label: "Execute", description: "Proceed with wave execution" },
       { label: "Modify", description: `Edit ${sessionFolder}/tasks.csv then --continue` },
       { label: "Cancel", description: "Abort" }
     ]
   }]
 })

 if (answer.Confirm === "Modify") {
   console.log(`Edit: ${sessionFolder}/tasks.csv\nResume: $workflow-lite-plan --continue`)
   return
 } else if (answer.Confirm === "Cancel") {
   return
 }
}

**Success Criteria**:
- tasks.csv created with context_from linking to E* rows
- No circular dependencies
- User approved (or AUTO_YES)
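
Step 3 above calls `computeWaves`, which is not defined in this document. A minimal sketch, assuming a task's wave is its depth in the dependency DAG (one more than the deepest of its deps), so that tasks in the same wave never depend on each other:

```javascript
// Assign wave = 1 + max(wave of deps); dep-free tasks land in wave 1.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const waveAssignment = new Map()
  const visiting = new Set()

  function waveOf(id) {
    if (waveAssignment.has(id)) return waveAssignment.get(id)   // memoized
    if (visiting.has(id)) throw new Error(`Circular dependency at ${id}`)
    visiting.add(id)
    const deps = (byId.get(id).deps || []).filter(d => byId.has(d))
    const wave = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(waveOf))
    visiting.delete(id)
    waveAssignment.set(id, wave)
    return wave
  }

  tasks.forEach(t => waveOf(t.id))
  const maxWave = Math.max(...waveAssignment.values())
  return { waveAssignment, maxWave }
}
```

A cycle surfaces as a thrown error rather than an infinite loop, matching the "No circular deps" constraint in the decomposition prompt.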

---

Phase 4: Wave Execute (spawn_agents_on_csv)

Objective: Execute tasks wave-by-wave via `spawn_agents_on_csv`. Each wave's prev_context is built from both explore.csv and tasks.csv.
Steps:
  1. Wave Loop
    javascript
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const failedIds = new Set()
    const skippedIds = new Set()
    
    for (let wave = 1; wave <= maxWave; wave++) {
      console.log(`\n## Wave ${wave}/${maxWave}\n`)
    
      // Re-read master CSV
      const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))
      const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
    
      // Skip tasks whose deps failed
      const executableTasks = []
      for (const task of waveTasks) {
        const deps = (task.deps || '').split(';').filter(Boolean)
        if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
          skippedIds.add(task.id)
          updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, {
            status: 'skipped',
            error: 'Dependency failed or skipped'
          })
          console.log(`  [${task.id}] ${task.title} → SKIPPED (dependency failed)`)
          continue
        }
        executableTasks.push(task)
      }
    
      if (executableTasks.length === 0) {
        console.log(`  No executable tasks in wave ${wave}`)
        continue
      }
    
      // Build prev_context for each task (cross-phase: E* + T*)
      for (const task of executableTasks) {
        task.prev_context = buildPrevContext(task.context_from, exploreCSV, masterCsv)
      }
    
      // Write wave CSV
      const waveHeader = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context'
      const waveRows = executableTasks.map(t =>
        [t.id, t.title, t.description, t.test, t.acceptance_criteria, t.scope, t.hints, t.execution_directives, t.deps, t.context_from, t.wave, t.prev_context]
          .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
          .join(',')
      )
      Write(`${sessionFolder}/task-wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))
    
      // Execute wave
      console.log(`  Executing ${executableTasks.length} tasks (concurrency: ${maxConcurrency})...`)
    
      spawn_agents_on_csv({
        csv_path: `${sessionFolder}/task-wave-${wave}.csv`,
        id_column: "id",
        instruction: buildExecuteInstruction(sessionFolder, wave),
        max_concurrency: maxConcurrency,
        max_runtime_seconds: 600,
        output_csv_path: `${sessionFolder}/task-wave-${wave}-results.csv`,
        output_schema: {
          type: "object",
          properties: {
            id: { type: "string" },
            status: { type: "string", enum: ["completed", "failed"] },
            findings: { type: "string" },
            files_modified: { type: "array", items: { type: "string" } },
            tests_passed: { type: "boolean" },
            acceptance_met: { type: "string" },
            error: { type: "string" }
          },
          required: ["id", "status", "findings", "tests_passed"]
        }
      })
    
      // Merge results into master CSV
      const waveResults = parseCsv(Read(`${sessionFolder}/task-wave-${wave}-results.csv`))
      for (const result of waveResults) {
        updateMasterCsvRow(`${sessionFolder}/tasks.csv`, result.id, {
          status: result.status,
          findings: result.findings || '',
          files_modified: Array.isArray(result.files_modified) ? result.files_modified.join(';') : (result.files_modified || ''),
          tests_passed: String(result.tests_passed ?? ''),
          acceptance_met: result.acceptance_met || '',
          error: result.error || ''
        })
    
        if (result.status === 'failed') {
          failedIds.add(result.id)
          console.log(`  [${result.id}] → FAILED: ${result.error}`)
        } else {
          console.log(`  [${result.id}] → COMPLETED${result.tests_passed ? ' ✓tests' : ''}`)
        }
      }
    
      // Cleanup
      Bash(`rm -f "${sessionFolder}/task-wave-${wave}.csv" "${sessionFolder}/task-wave-${wave}-results.csv"`)
    
      console.log(`  Wave ${wave} done: ${waveResults.filter(r => r.status === 'completed').length} completed, ${waveResults.filter(r => r.status === 'failed').length} failed`)
    }
  2. prev_context Builder (Cross-Phase)
    The key function linking exploration context to execution:
    javascript
    function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
      if (!contextFrom) return 'No previous context available'
    
      const ids = contextFrom.split(';').filter(Boolean)
      const entries = []
    
      ids.forEach(id => {
        if (id.startsWith('E')) {
          // ← Look up in explore.csv (cross-phase link)
          const row = exploreCSV.find(r => r.id === id)
          if (row && row.status === 'completed' && row.findings) {
            entries.push(`[Explore ${row.angle}] ${row.findings}`)
            if (row.key_files) entries.push(`  Key files: ${row.key_files}`)
          }
        } else if (id.startsWith('T')) {
          // ← Look up in tasks.csv (same-phase link)
          const row = tasksCSV.find(r => r.id === id)
          if (row && row.status === 'completed' && row.findings) {
            entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
            if (row.files_modified) entries.push(`  Modified: ${row.files_modified}`)
          }
        }
      })
    
      return entries.length > 0 ? entries.join('\n') : 'No previous context available'
    }
  3. Execute Instruction Template
    javascript
    function buildExecuteInstruction(sessionFolder, wave) {
      return `

TASK ASSIGNMENT

MANDATORY FIRST STEPS

  1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
  2. Read project context: .workflow/project-tech.json (if exists)

Your Task

Task ID: {id}
Title: {title}
Description: {description}
Scope: {scope}

Implementation Hints & Reference Files

{hints}
Format: `tips text || file1;file2`. Read ALL reference files (after ||) before starting. Apply tips (before ||) as guidance.

Execution Directives

{execution_directives}
Commands to run for verification, tool restrictions, or environment requirements.

Test Cases

{test}

Acceptance Criteria

{acceptance_criteria}

Previous Context (from exploration and predecessor tasks)

{prev_context}

Execution Protocol

  1. Read references: Parse {hints} — read all files listed after `||` to understand existing patterns
  2. Read discoveries: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
  3. Use context: Apply previous tasks' findings from prev_context above
  4. Stay in scope: ONLY create/modify files within {scope} — do NOT touch files outside this boundary
  5. Apply hints: Follow implementation tips from {hints} (before `||`)
  6. Execute: Implement the task as described
  7. Write tests: Implement the test cases defined above
  8. Run directives: Execute commands from {execution_directives} to verify your work
  9. Verify acceptance: Ensure all acceptance criteria are met before reporting completion
  10. Share discoveries: Append exploration findings to shared board: ```bash echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson ```
  11. Report result: Return JSON via report_agent_job_result

Discovery Types to Share

  • `code_pattern`: {name, file, description} — reusable patterns found
  • `integration_point`: {file, description, exports[]} — module connection points
  • `convention`: {naming, imports, formatting} — code style conventions
  • `blocker`: {issue, severity, impact} — blocking issues encountered

Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "files_modified": ["path1", "path2"],
  "tests_passed": true | false,
  "acceptance_met": "Summary of which acceptance criteria were met/unmet",
  "error": ""
}
IMPORTANT: Set status to "completed" ONLY if:
  • All test cases pass
  • All acceptance criteria are met
Otherwise set status to "failed" with details in error field.
` }
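
The hints cell format (`tips text || file1;file2`) splits cleanly with a small helper. A sketch; `parseHints` is a hypothetical name, not part of the template above:

```javascript
// Split a hints cell into guidance text (before ||) and reference files (after ||).
function parseHints(hints) {
  const [tips = '', files = ''] = (hints || '').split('||').map(s => s.trim())
  return { tips, refFiles: files.split(';').map(f => f.trim()).filter(Boolean) }
}
```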
  4. Master CSV Update Helper
    javascript
    function updateMasterCsvRow(csvPath, taskId, updates) {
      const content = Read(csvPath)
      const lines = content.split('\n')
      const header = lines[0].split(',')
    
      for (let i = 1; i < lines.length; i++) {
        const cells = parseCsvLine(lines[i])
        if (cells[0] === taskId || cells[0] === `"${taskId}"`) {
          for (const [col, val] of Object.entries(updates)) {
            const colIdx = header.indexOf(col)
            if (colIdx >= 0) {
              cells[colIdx] = `"${String(val).replace(/"/g, '""')}"`
            }
          }
          lines[i] = cells.join(',')
          break
        }
      }
    
      Write(csvPath, lines.join('\n'))
    }
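
The helper relies on `parseCsvLine`, which is not shown in this document. A minimal sketch, assuming RFC 4180-style cells (comma-separated, with `"` doubled inside quoted cells):

```javascript
// Split one CSV line into raw cells, honoring quotes and doubled quotes.
function parseCsvLine(line) {
  const cells = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }  // escaped quote
      else if (ch === '"') inQuotes = false                       // closing quote
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { cells.push(cur); cur = '' }            // cell boundary
    else cur += ch
  }
  cells.push(cur)
  return cells
}
```

Note this operates on a single line; cells containing embedded newlines would need a full-file parser instead.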
Success Criteria:
  • All waves executed in order
  • Each wave's results merged into master CSV before next wave starts
  • Dependent tasks skipped when predecessor failed
  • discoveries.ndjson accumulated across all phases


Phase 5: Results Aggregation

Objective: Generate final results and human-readable report.
Steps:
  1. Export results.csv
    javascript
    const masterCsv = Read(`${sessionFolder}/tasks.csv`)
    Write(`${sessionFolder}/results.csv`, masterCsv)
  2. Generate context.md
    javascript
    const finalTasks = parseCsv(masterCsv)
    const exploreCSV = parseCsv(Read(`${sessionFolder}/explore.csv`))
    const completed = finalTasks.filter(t => t.status === 'completed')
    const failed = finalTasks.filter(t => t.status === 'failed')
    const skipped = finalTasks.filter(t => t.status === 'skipped')
    
    const contextContent = `# Lite Planex Execution Report
Session: ${sessionId}
Requirement: ${requirement}
Completed: ${getUtc8ISOString()}
Waves: ${maxWave} | Concurrency: ${maxConcurrency}

Summary

| Metric | Count |
|---|---|
| Explore Angles | ${exploreCSV.length} |
| Total Tasks | ${finalTasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Waves | ${maxWave} |

Exploration Results

Exploration Results

${exploreCSV.map(e =>
### ${e.id}: ${e.angle} (${e.status}) ${e.findings || 'N/A'} Key files: ${e.key_files || 'none'}
).join('\n\n')}

${exploreCSV.map(e => `### ${e.id}: ${e.angle} (${e.status})
${e.findings || 'N/A'}
Key files: ${e.key_files || 'none'}`).join('\n\n')}

Task Results

Task Results

${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})
| Field | Value |
|---|---|
| Wave | ${t.wave} |
| Scope | ${t.scope} |
| Dependencies | ${t.deps} |
| Context From | ${t.context_from} |
| Tests Passed | ${t.tests_passed} |
| Acceptance Met | ${t.acceptance_met} |
| Error | ${t.error} |

Description: ${t.description}
Test Cases: ${t.test || 'N/A'}
Acceptance Criteria: ${t.acceptance_criteria || 'N/A'}
Hints: ${t.hints || 'N/A'}
Execution Directives: ${t.execution_directives || 'N/A'}
Findings: ${t.findings || 'N/A'}
Files Modified: ${t.files_modified || 'none'}`).join('\n\n---\n\n')}

${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})
| Field | Value |
|---|---|
| Wave | ${t.wave} |
| Scope | ${t.scope} |
| Dependencies | ${t.deps} |
| Context From | ${t.context_from} |
| Tests Passed | ${t.tests_passed} |
| Acceptance Met | ${t.acceptance_met} |
| Error | ${t.error} |

Description: ${t.description}
Test Cases: ${t.test || 'N/A'}
Acceptance Criteria: ${t.acceptance_criteria || 'N/A'}
Hints: ${t.hints || 'N/A'}
Execution Directives: ${t.execution_directives || 'N/A'}
Findings: ${t.findings || 'N/A'}
Files Modified: ${t.files_modified || 'none'}`).join('\n\n---\n\n')}

All Modified Files

All Modified Files

${[...new Set(finalTasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boolean))].map(f => '- ' + f).join('\n') || 'None'}`

Write(`${sessionFolder}/context.md`, contextContent)

${[...new Set(finalTasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boolean))].map(f => '- ' + f).join('\n') || 'None'}`

Write(`${sessionFolder}/context.md`, contextContent)

3. **Display Summary**

```javascript
console.log(`
Lite Planex Complete

  • Session: ${sessionId}
  • Explore: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length} angles
  • Tasks: ${completed.length}/${finalTasks.length} completed, ${failed.length} failed, ${skipped.length} skipped
  • Waves: ${maxWave}

Results: ${sessionFolder}/results.csv
Report: ${sessionFolder}/context.md
Discoveries: ${sessionFolder}/discoveries.ndjson`)
```

3. **展示总结**

```javascript
console.log(`
Lite Planex Complete

  • Session: ${sessionId}
  • Explore: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length} angles
  • Tasks: ${completed.length}/${finalTasks.length} completed, ${failed.length} failed, ${skipped.length} skipped
  • Waves: ${maxWave}

Results: ${sessionFolder}/results.csv
Report: ${sessionFolder}/context.md
Discoveries: ${sessionFolder}/discoveries.ndjson`)
```

4. **Offer Next Steps** (skip if AUTO_YES)

```javascript
if (!AUTO_YES && failed.length > 0) {
  const answer = AskUserQuestion({
    questions: [{
      question: `${failed.length} tasks failed. Next action?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Retry Failed", description: `Re-execute ${failed.length} failed tasks with updated context` },
        { label: "View Report", description: "Display context.md" },
        { label: "Done", description: "Complete session" }
      ]
    }]
  })

  if (answer['Next Step'] === "Retry Failed") {
    for (const task of failed) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    for (const task of skipped) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    // Re-execute Phase 4
  } else if (answer['Next Step'] === "View Report") {
    console.log(Read(`${sessionFolder}/context.md`))
  }
}
```

Success Criteria:
  • results.csv exported
  • context.md generated with full field coverage
  • Summary displayed to user


4. **提供后续操作选项**(AUTO_YES模式下跳过)

```javascript
if (!AUTO_YES && failed.length > 0) {
  const answer = AskUserQuestion({
    questions: [{
      question: `${failed.length} tasks failed. Next action?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Retry Failed", description: `Re-execute ${failed.length} failed tasks with updated context` },
        { label: "View Report", description: "Display context.md" },
        { label: "Done", description: "Complete session" }
      ]
    }]
  })

  if (answer['Next Step'] === "Retry Failed") {
    for (const task of failed) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    for (const task of skipped) {
      updateMasterCsvRow(`${sessionFolder}/tasks.csv`, task.id, { status: 'pending', error: '' })
    }
    // Re-execute Phase 4
  } else if (answer['Next Step'] === "View Report") {
    console.log(Read(`${sessionFolder}/context.md`))
  }
}
```

成功条件:
  • results.csv导出完成
  • 包含所有字段的context.md生成完成
  • 总结信息已展示给用户

Wave Computation (Kahn's BFS)

轮次计算(Kahn's BFS算法)

```javascript
function computeWaves(tasks) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))

  // Build adjacency and in-degree from deps (deps outside the task set are ignored)
  for (const task of tasks) {
    for (const dep of task.deps) {
      if (taskMap.has(dep)) {
        adjList.get(dep).push(task.id)
        inDegree.set(task.id, inDegree.get(task.id) + 1)
      }
    }
  }

  const queue = []
  const depthMap = new Map()       // tentative depth: max over processed predecessors
  const waveAssignment = new Map() // final wave, set only when a task is enqueued

  for (const [id, deg] of inDegree) {
    if (deg === 0) {
      queue.push([id, 1])
      waveAssignment.set(id, 1)
    }
  }

  let maxWave = 1
  let idx = 0
  while (idx < queue.length) {
    const [current, depth] = queue[idx++]
    for (const next of adjList.get(current)) {
      const newDeg = inDegree.get(next) - 1
      inDegree.set(next, newDeg)
      const nextDepth = Math.max(depthMap.get(next) || 0, depth + 1)
      depthMap.set(next, nextDepth)
      if (newDeg === 0) {
        queue.push([next, nextDepth])
        waveAssignment.set(next, nextDepth)
        maxWave = Math.max(maxWave, nextDepth)
      }
    }
  }

  // A task never enqueued sits on a cycle: its in-degree never reached zero
  for (const task of tasks) {
    if (!waveAssignment.has(task.id)) {
      throw new Error(`Circular dependency detected involving task ${task.id}`)
    }
  }

  return { waveAssignment, maxWave }
}
```

```javascript
function computeWaves(tasks) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))

  // 根据deps构建邻接表与入度(任务集合之外的依赖忽略)
  for (const task of tasks) {
    for (const dep of task.deps) {
      if (taskMap.has(dep)) {
        adjList.get(dep).push(task.id)
        inDegree.set(task.id, inDegree.get(task.id) + 1)
      }
    }
  }

  const queue = []
  const depthMap = new Map()       // 暂定深度:取已处理前驱的最大值
  const waveAssignment = new Map() // 最终轮次:仅在任务入队时写入

  for (const [id, deg] of inDegree) {
    if (deg === 0) {
      queue.push([id, 1])
      waveAssignment.set(id, 1)
    }
  }

  let maxWave = 1
  let idx = 0
  while (idx < queue.length) {
    const [current, depth] = queue[idx++]
    for (const next of adjList.get(current)) {
      const newDeg = inDegree.get(next) - 1
      inDegree.set(next, newDeg)
      const nextDepth = Math.max(depthMap.get(next) || 0, depth + 1)
      depthMap.set(next, nextDepth)
      if (newDeg === 0) {
        queue.push([next, nextDepth])
        waveAssignment.set(next, nextDepth)
        maxWave = Math.max(maxWave, nextDepth)
      }
    }
  }

  // 从未入队的任务处于循环依赖中:其入度永远无法降为零
  for (const task of tasks) {
    if (!waveAssignment.has(task.id)) {
      throw new Error(`Circular dependency detected involving task ${task.id}`)
    }
  }

  return { waveAssignment, maxWave }
}
```
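To make the wave assignment concrete, here is a condensed, self-contained restatement of the algorithm, run on the diamond dependency from the usage table (T1 → {T2, T3} → T4); `computeWavesDemo` is a demonstration helper, not part of the workflow API:

```javascript
// Condensed Kahn-style wave computation: roots get wave 1, each task gets
// 1 + max(wave of its dependencies), cycles are detected by an enqueue count.
function computeWavesDemo(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const inDeg = new Map(tasks.map(t => [t.id, t.deps.filter(d => ids.has(d)).length]))
  const out = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) for (const d of t.deps) if (ids.has(d)) out.get(d).push(t.id)

  const wave = new Map()
  const queue = []
  for (const t of tasks) if (inDeg.get(t.id) === 0) { wave.set(t.id, 1); queue.push(t.id) }

  for (let i = 0; i < queue.length; i++) {
    for (const nxt of out.get(queue[i])) {
      wave.set(nxt, Math.max(wave.get(nxt) || 0, wave.get(queue[i]) + 1))
      inDeg.set(nxt, inDeg.get(nxt) - 1)
      if (inDeg.get(nxt) === 0) queue.push(nxt)
    }
  }
  if (queue.length !== tasks.length) throw new Error('Circular dependency')
  return { wave, maxWave: Math.max(...wave.values()) }
}

const { wave, maxWave } = computeWavesDemo([
  { id: 'T1', deps: [] },
  { id: 'T2', deps: ['T1'] },
  { id: 'T3', deps: ['T1'] },
  { id: 'T4', deps: ['T2', 'T3'] },
])
console.log(maxWave)  // 3: T1 in wave 1, T2/T3 in wave 2, T4 in wave 3
```

T2 and T3 share wave 2 and can run concurrently; T4 waits for both, which is exactly the "3 waves with context propagation" shape described below.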

Shared Discovery Board Protocol

共享发现看板协议

All agents across all phases share `discoveries.ndjson`. This eliminates redundant codebase exploration.

```jsonl
{"ts":"2026-02-28T10:00:00+08:00","worker":"E1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
{"ts":"2026-02-28T10:01:00+08:00","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
```

Types: `code_pattern`, `integration_point`, `convention`, `blocker`, `tech_stack`, `test_command`
Rules: Read first → write immediately → deduplicate → append-only

所有阶段的所有Agent共享 `discoveries.ndjson`,可避免重复的代码库探索。

```jsonl
{"ts":"2026-02-28T10:00:00+08:00","worker":"E1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
{"ts":"2026-02-28T10:01:00+08:00","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
```

支持类型: `code_pattern`, `integration_point`, `convention`, `blocker`, `tech_stack`, `test_command`
使用规则: 先读取 → 立即写入 → 自动去重 → 仅追加写入

Error Handling

错误处理

| Error | Resolution |
|---|---|
| Explore agent failure | Mark as failed in explore.csv, exclude from planning |
| All explores failed | Fallback: plan directly from requirement without exploration |
| Circular dependency | Abort wave computation, report cycle |
| Execute agent timeout | Mark as failed in results, continue with wave |
| Execute agent failed | Mark as failed, skip dependent tasks in later waves |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Continue mode: no session | List available sessions, prompt user to select |

| 错误场景 | 解决方式 |
|---|---|
| 探索Agent执行失败 | 在explore.csv中标记为失败,规划阶段排除该结果 |
| 所有探索任务均失败 | 降级方案:跳过探索,直接基于需求生成规划 |
| 循环依赖 | 终止轮次计算,报告循环链路 |
| 执行Agent超时 | 在结果中标记为失败,继续执行当前轮次其他任务 |
| 执行Agent失败 | 标记为失败,后续轮次跳过依赖该任务的所有任务 |
| CSV解析错误 | 执行前校验CSV格式,展示错误行号 |
| discoveries.ndjson损坏 | 忽略格式错误的行,继续使用有效条目 |
| 恢复模式下无匹配会话 | 列出可用会话,提示用户选择 |
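The "CSV parse error" row (validate format before execution, show line number) can be sketched as a column-count check; `validateCsv` is a hypothetical helper, and real CSVs with quoted commas would need a proper parser:

```javascript
// Sketch of pre-execution CSV validation: every data row must have the same
// number of columns as the header, and failures report a 1-based line number.
function validateCsv(text) {
  const lines = text.trim().split('\n')
  const width = lines[0].split(',').length
  for (let i = 1; i < lines.length; i++) {
    const got = lines[i].split(',').length
    if (got !== width) {
      throw new Error(`CSV parse error at line ${i + 1}: expected ${width} columns, got ${got}`)
    }
  }
  return true
}
```

Running this before spawning agents turns a silent mid-wave failure into an immediate, located error.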

Core Rules

核心规则

  1. Explore Before Execute: Phase 2 completes before Phase 4 starts
  2. Wave Order is Sacred: Never execute wave N before wave N-1 completes and results are merged
  3. CSV is Source of Truth: Master CSVs hold all state — always read before wave, always write after
  4. Cross-Phase Context: prev_context built from both explore.csv (E*) and tasks.csv (T*), not from memory
  5. E ↔ T Linking: tasks.csv `context_from` references explore.csv rows for cross-phase context
  6. Discovery Board is Append-Only: Never clear, modify, or recreate discoveries.ndjson
  7. Skip on Failure: If a dependency failed, skip the dependent task (cascade)
  8. Cleanup Temp Files: Remove wave CSVs after results are merged
  9. DO NOT STOP: Continuous execution until all waves complete or all remaining tasks are skipped

  1. 先探索后执行: 第2阶段全部完成后才会启动第4阶段
  2. 严格遵守轮次顺序: 前一轮执行完成且结果合并前,绝不执行后一轮任务
  3. CSV是唯一可信数据源: 主CSV存储所有状态,每轮执行前必须读取,执行后必须写入
  4. 跨阶段上下文: prev_context从explore.csv(E*)和tasks.csv(T*)构建,不从内存读取
  5. E ↔ T关联: tasks.csv的`context_from`字段引用explore.csv行实现跨阶段上下文传递
  6. 发现看板仅追加: 绝不清除、修改或重建discoveries.ndjson
  7. 失败自动级联跳过: 依赖任务失败时,自动跳过所有下游依赖任务
  8. 临时文件自动清理: 结果合并后自动删除轮次临时CSV文件
  9. 执行不中断: 持续执行直到所有轮次完成或剩余任务全部被跳过
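Rule 7 (skip on failure) can be sketched as a fixpoint pass over the task list; `cascadeSkips` is a hypothetical helper, not part of the workflow's defined API:

```javascript
// Sketch of cascading skips: any pending task with a failed or skipped
// dependency becomes skipped, repeated until no more changes (covers chains).
function cascadeSkips(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  let changed = true
  while (changed) {
    changed = false
    for (const t of tasks) {
      if (t.status !== 'pending') continue
      const blocked = t.deps.some(d => ['failed', 'skipped'].includes(byId.get(d)?.status))
      if (blocked) { t.status = 'skipped'; changed = true }
    }
  }
  return tasks
}

const demo = [
  { id: 'T1', deps: [], status: 'failed' },
  { id: 'T2', deps: ['T1'], status: 'pending' },
  { id: 'T3', deps: ['T2'], status: 'pending' },
  { id: 'T4', deps: [], status: 'pending' },
]
cascadeSkips(demo)
console.log(demo.map(t => `${t.id}:${t.status}`).join(' '))
// T1:failed T2:skipped T3:skipped T4:pending
```

Note that T3 is skipped transitively through T2, while the independent T4 keeps running — the cascade prunes only the failed subtree.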

Best Practices

最佳实践

  1. Exploration Angles: 1 for simple, 3-4 for complex; avoid redundant angles
  2. Context Linking: Link every task to at least one explore row (E*) — exploration was done for a reason
  3. Task Granularity: 3-10 tasks optimal; too many = overhead, too few = no parallelism
  4. Minimize Cross-Wave Deps: More tasks in wave 1 = more parallelism
  5. Specific Descriptions: Agent sees only its CSV row + prev_context — make description self-contained
  6. Non-Overlapping Scopes: Same-wave tasks must not write to the same files
  7. Concurrency Tuning: `-c 1` for serial (max context sharing); `-c 8` for I/O-bound tasks

  1. 探索角度设置: 简单需求1个,复杂需求3-4个,避免冗余角度
  2. 上下文关联: 每个任务至少关联一个探索行(E*),探索结果应当被充分利用
  3. 任务粒度: 3-10个任务为最优,任务过多会增加开销,过少则无法并行
  4. 最小化跨轮次依赖: 第1轮任务越多,并行度越高
  5. 描述尽可能具体: Agent只能看到自己的CSV行 + prev_context,要保证描述独立完整
  6. 范围不重叠: 同一轮次的任务不能写入相同文件
  7. 并发数调优: `-c 1` 为串行执行(上下文共享最多);`-c 8` 适合I/O密集型任务
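Practice 6 (non-overlapping scopes) lends itself to a pre-flight check; `findScopeConflicts` is hypothetical, and the semicolon-separated scope format mirrors the `files_modified` convention but is an assumption:

```javascript
// Sketch of a same-wave scope-conflict check: flag any file claimed by two
// tasks assigned to the same wave, since they would write concurrently.
function findScopeConflicts(tasks) {
  const conflicts = []
  const seenByWave = new Map()
  for (const t of tasks) {
    const seen = seenByWave.get(t.wave) || new Map()
    for (const f of (t.scope || '').split(';').filter(Boolean)) {
      if (seen.has(f)) conflicts.push({ wave: t.wave, file: f, tasks: [seen.get(f), t.id] })
      else seen.set(f, t.id)
    }
    seenByWave.set(t.wave, seen)
  }
  return conflicts
}

const conflicts = findScopeConflicts([
  { id: 'T1', wave: 1, scope: 'src/auth.ts;src/jwt.ts' },
  { id: 'T2', wave: 1, scope: 'src/auth.ts' },
  { id: 'T3', wave: 2, scope: 'src/auth.ts' },
])
console.log(conflicts)  // one conflict: T1 and T2 both claim src/auth.ts in wave 1
```

T3 touches the same file but sits in wave 2, so it is not a conflict — cross-wave reuse of a file is exactly what the dependency ordering is for.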

Usage Recommendations

使用场景推荐

| Scenario | Recommended Approach |
|---|---|
| Complex feature (unclear architecture) | `$workflow-lite-plan` — explore first, then plan |
| Simple known-pattern task | `$workflow-lite-plan` — skip exploration, direct execution |
| Independent parallel tasks | `$workflow-lite-plan -c 8` — single wave, max parallelism |
| Diamond dependency (A→B,C→D) | `$workflow-lite-plan` — 3 waves with context propagation |
| Unknown codebase | `$workflow-lite-plan` — exploration phase is essential |
| 场景 | 推荐用法 |
|---|---|
| 复杂功能(架构不清晰) | `$workflow-lite-plan` — 先探索再规划 |
| 简单已知模式的任务 | `$workflow-lite-plan` — 跳过探索直接执行 |
| 独立可并行的任务 | `$workflow-lite-plan -c 8` — 单轮执行,最大并行度 |
| 菱形依赖(A→B,C→D) | `$workflow-lite-plan` — 3轮执行带上下文传递 |
| 不熟悉的代码库 | `$workflow-lite-plan` — 探索阶段非常必要 |