project-documentation-workflow
Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
Project Documentation Workflow (Optimized)
Usage
bash
$project-documentation-workflow "Document the authentication module in src/auth/"
$project-documentation-workflow -c 4 "Generate full docs for the FEM solver project"
$project-documentation-workflow -y "Document entire codebase with architecture and API"
$project-documentation-workflow --continue "doc-auth-module-20260304"

Flags:
- -y, --yes: Skip all confirmations (auto mode)
- -c, --concurrency N: Max concurrent agents within each wave (default: 3)
- --continue: Resume existing session
Output Directory:
Core Output: .workflow/.csv-wave/{session-id}/ containing tasks.csv + results.csv + discoveries.ndjson + wave-summaries/ + docs/ (complete documentation set)
Overview
Optimized version: dynamic task decomposition + topological-sort wave computation + inter-wave synthesis steps.
┌─────────────────────────────────────────────────────────────────────────┐
│ PROJECT DOCUMENTATION WORKFLOW (Dynamic & Optimized) │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 0: Dynamic Decomposition │
│ ├─ Analyze project structure, complexity, domain │
│ ├─ Generate appropriate documentation tasks (动态数量) │
│ ├─ Compute task dependencies (deps) │
│ ├─ Compute execution waves (topological sort) │
│ └─ User validates task breakdown (skip if -y) │
│ │
│ Phase 1: Wave Execution (with Inter-Wave Synthesis) │
│ ├─ For each wave (1..N, dynamically computed): │
│ │ ├─ Load Wave Summary from previous wave │
│ │ ├─ Build wave CSV with prev_context injection │
│ │ ├─ spawn_agents_on_csv(wave CSV) │
│ │ ├─ Collect results, merge into master tasks.csv │
│ │ ├─ Generate Wave Summary (波次综合) │
│ │ └─ Check: any failed? → skip dependents │
│ └─ discoveries.ndjson shared across all waves │
│ │
│ Phase 2: Results Aggregation │
│ ├─ Export final results.csv │
│ ├─ Generate context.md with all findings │
│ ├─ Generate docs/index.md navigation │
│ └─ Display summary: completed/failed/skipped per wave │
│ │
└─────────────────────────────────────────────────────────────────────────┘
CSV Schema
tasks.csv (Master State)
csv
id,title,description,doc_type,target_scope,doc_sections,formula_support,priority,deps,context_from,wave,status,findings,doc_path,key_discoveries,error
"doc-001","Project overview","Write an overall overview of the project","overview","README.md,package.json","purpose,background,positioning,audience","false","high","","","1","pending","","","",""
Columns:
| Column | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Task ID (doc-NNN, auto-generated) |
| title | string | Yes | Document title |
| description | string | Yes | Detailed task description |
| doc_type | enum | Yes | `overview` \| `architecture` \| `implementation` \| `feature` \| `api` \| `usage` \| `synthesis` |
| target_scope | string | Yes | File scope (glob pattern) |
| doc_sections | string | Yes | Required sections (comma-separated) |
| formula_support | boolean | No | LaTeX formula support |
| priority | enum | No | `high` \| `medium` \| `low` |
| deps | string | No | Dependency task IDs (semicolon-separated) |
| context_from | string | No | Context source task IDs |
| wave | integer | Computed | Wave number (computed by topological sort) |
| status | enum | Output | `pending` → `completed` \| `failed` \| `skipped` |
| findings | string | Output | Key findings summary |
| doc_path | string | Output | Generated document path |
| key_discoveries | string | Output | Key discoveries (JSON) |
| error | string | Output | Error message |
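The workflow code below calls `parseCsv` and `toCsv` helpers without defining them. A minimal sketch compatible with the row format shown above (assumptions: header row unquoted, every data field double-quoted, embedded quotes doubled):

```javascript
// Serialize rows to CSV: unquoted header, all data fields quoted.
function toCsv(rows, columns = Object.keys(rows[0] || {})) {
  const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  const lines = [columns.join(',')]
  for (const row of rows) lines.push(columns.map(c => esc(row[c])).join(','))
  return lines.join('\n')
}

// Parse CSV produced by toCsv. Assumes every data field is quoted;
// malformed lines are not handled in this sketch.
function parseCsv(text) {
  const [header, ...lines] = text.trim().split('\n')
  const columns = header.split(',')
  return lines.map(line => {
    // Match each quoted field, allowing doubled quotes inside it
    const values = line.match(/"(?:[^"]|"")*"/g)
      .map(v => v.slice(1, -1).replace(/""/g, '"'))
    return Object.fromEntries(columns.map((c, i) => [c, values[i] ?? '']))
  })
}
```

Commas and quotes inside `findings` or `error` survive the roundtrip because each field stays inside its own quoted token.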
Implementation
Session Initialization
javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
const requirement = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
.trim()
const slug = requirement.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `doc-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`
Bash(`mkdir -p ${sessionFolder}/docs ${sessionFolder}/wave-summaries`)
// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, `# Discovery Board - ${sessionId}\n# Format: NDJSON\n`)
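As a sketch of how the session ID is derived, the slug computation applied to the first usage example (only the slug is deterministic; `dateStr` depends on the run date):

```javascript
// Slugify the requirement string exactly as in the session init above:
// lowercase, collapse non-alphanumeric runs to '-', truncate to 40 chars.
const requirement = 'Document the authentication module in src/auth/'
const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
console.log(slug) // 'document-the-authentication-module-in-sr'
```

Note the 40-character truncation can cut mid-word, so the full session ID becomes e.g. `doc-document-the-authentication-module-in-sr-YYYYMMDD`.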
Phase 0: Dynamic Task Decomposition
Objective: Analyze the project and dynamically generate appropriate documentation tasks.
Step 1: Project Analysis
javascript
Bash({
command: `ccw cli -p "PURPOSE: Analyze the project and determine appropriate documentation tasks.
TASK:
1. Scan project structure to identify:
- Project type (library/application/service/CLI/tool)
- Primary language(s) and frameworks
- Project scale (small/medium/large based on file count and complexity)
- Key modules and their purposes
- Existing documentation (README, docs/, etc.)
2. Determine documentation needs based on project characteristics:
- For ALL projects: overview, tech-stack, directory-structure
- For libraries: api-reference, usage-guide, best-practices
- For applications: system-architecture, feature-list, usage-guide
- For numerical/scientific projects: theoretical-foundations (with formula_support=true)
- For services: api-reference, module-interactions, deployment
- For complex projects (>50 files): add design-patterns, data-model
- For simple projects (<10 files): reduce to essential docs only
3. Generate task list with:
- Unique task IDs (doc-001, doc-002, ...)
- Appropriate doc_type for each task
- Target scope (glob patterns) based on actual project structure
- Required sections for each document type
- Dependencies (deps) between related tasks
- Context sources (context_from) for information flow
- Priority (high for essential docs, medium for useful, low for optional)
4. Task dependency rules:
- overview tasks: no deps (Wave 1)
- architecture tasks: depend on overview tasks
- implementation tasks: depend on architecture tasks
- feature/api tasks: depend on implementation
- synthesis tasks: depend on most other tasks
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with:
- project_info: {type, scale, languages, frameworks, modules[]}
- recommended_waves: number of waves suggested
- tasks: [{id, title, description, doc_type, target_scope, doc_sections, formula_support, priority, deps[], context_from[]}]
CONSTRAINTS:
- Small projects: 5-8 tasks max
- Medium projects: 10-15 tasks
- Large projects: 15-25 tasks
- Each doc_type should appear at most once unless justified
- deps must form a valid DAG (no cycles)
PROJECT TO ANALYZE: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
run_in_background: true
})
Step 2: Topological Sort (Wave Computation)
javascript
function computeWaves(tasks) {
// Build adjacency list
const graph = new Map()
const inDegree = new Map()
const taskMap = new Map()
for (const task of tasks) {
taskMap.set(task.id, task)
graph.set(task.id, [])
inDegree.set(task.id, 0)
}
// Fill edges based on deps
for (const task of tasks) {
const deps = task.deps.filter(d => taskMap.has(d))
for (const dep of deps) {
graph.get(dep).push(task.id)
inDegree.set(task.id, inDegree.get(task.id) + 1)
}
}
// Kahn's BFS algorithm
const waves = []
let currentWave = []
// Start with tasks that have no dependencies
for (const [id, degree] of inDegree) {
if (degree === 0) currentWave.push(id)
}
while (currentWave.length > 0) {
waves.push([...currentWave])
const nextWave = []
for (const id of currentWave) {
for (const neighbor of graph.get(id)) {
inDegree.set(neighbor, inDegree.get(neighbor) - 1)
if (inDegree.get(neighbor) === 0) {
nextWave.push(neighbor)
}
}
}
currentWave = nextWave
}
// Assign wave numbers
for (let w = 0; w < waves.length; w++) {
for (const id of waves[w]) {
taskMap.get(id).wave = w + 1
}
}
// Check for cycles
const assignedCount = tasks.filter(t => t.wave > 0).length
if (assignedCount < tasks.length) {
throw new Error(`Circular dependency detected! Only ${assignedCount}/${tasks.length} tasks assigned.`)
}
return {
tasks: tasks,
waveCount: waves.length,
waveDistribution: waves.map((w, i) => ({ wave: i + 1, tasks: w.length }))
}
}
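A self-contained toy run of the same Kahn's-algorithm wave computation (the task list is hypothetical), showing how deps translate into wave numbers:

```javascript
// Compact version of computeWaves above: peel off zero-in-degree tasks
// wave by wave, assigning wave numbers as we go.
function computeWaves(tasks) {
  const graph = new Map(), inDegree = new Map(), byId = new Map()
  for (const t of tasks) { byId.set(t.id, t); graph.set(t.id, []); inDegree.set(t.id, 0) }
  for (const t of tasks)
    for (const dep of t.deps.filter(d => byId.has(d))) {
      graph.get(dep).push(t.id)
      inDegree.set(t.id, inDegree.get(t.id) + 1)
    }
  let wave = 0
  let current = [...inDegree].filter(([, d]) => d === 0).map(([id]) => id)
  while (current.length > 0) {
    wave++
    const next = []
    for (const id of current) {
      byId.get(id).wave = wave
      for (const n of graph.get(id)) {
        inDegree.set(n, inDegree.get(n) - 1)
        if (inDegree.get(n) === 0) next.push(n)
      }
    }
    current = next
  }
  if (tasks.some(t => !t.wave)) throw new Error('Circular dependency detected')
  return tasks
}

const tasks = computeWaves([
  { id: 'doc-001', deps: [] },                      // overview
  { id: 'doc-002', deps: ['doc-001'] },             // architecture
  { id: 'doc-003', deps: ['doc-001'] },             // tech stack
  { id: 'doc-004', deps: ['doc-002', 'doc-003'] },  // synthesis
])
// Resulting waves: doc-001 → 1, doc-002/doc-003 → 2, doc-004 → 3
```

Tasks with no dependencies always land in Wave 1, and a task joins a wave only once all of its dependencies have been assigned to earlier waves.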
Step 3: User Validation
javascript
// Parse decomposition result
const analysisResult = JSON.parse(decompositionOutput)
const { tasks, project_info, waveCount } = analysisResult
// Compute waves
const { tasks: tasksWithWaves, waveCount: computedWaves, waveDistribution } = computeWaves(tasks)
// Display to user (skip if AUTO_YES)
if (!AUTO_YES) {
console.log(`
╔════════════════════════════════════════════════════════════════╗
║ PROJECT ANALYSIS RESULT ║
╠════════════════════════════════════════════════════════════════╣
║ Type: ${project_info.type.padEnd(20)} Scale: ${project_info.scale.padEnd(10)} ║
║ Languages: ${project_info.languages.join(', ').substring(0, 40).padEnd(40)} ║
║ Modules: ${project_info.modules.length} identified ║
╠════════════════════════════════════════════════════════════════╣
║ WAVE DISTRIBUTION (${computedWaves} waves, ${tasksWithWaves.length} tasks) ║
${waveDistribution.map(w => `║ Wave ${w.wave}: ${w.tasks} tasks${' '.repeat(50 - w.tasks.toString().length)}`).join('\n')}
╚════════════════════════════════════════════════════════════════╝
`)
// Show tasks by wave
for (let w = 1; w <= computedWaves; w++) {
const waveTasks = tasksWithWaves.filter(t => t.wave === w)
console.log(`\nWave ${w}:`)
for (const t of waveTasks) {
console.log(` ${t.id}: ${t.title} [${t.doc_type}]`)
}
}
const confirm = AskUserQuestion("Proceed with this task breakdown?")
if (!confirm) {
console.log("Aborted. Use --continue to resume with modified tasks.")
return
}
}
// Generate tasks.csv
Write(`${sessionFolder}/tasks.csv`, toCsv(tasksWithWaves))
Write(`${sessionFolder}/project-info.json`, JSON.stringify(project_info, null, 2))
Phase 1: Wave Execution (with Inter-Wave Synthesis)
Key Optimization: Add Wave Summary generation between waves for better context propagation.
javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => t.wave))
for (let wave = 1; wave <= maxWave; wave++) {
  console.log('\n' + '='.repeat(60))
console.log(`Wave ${wave}/${maxWave}`)
console.log('='.repeat(60))
// 1. Load Wave Summary from previous wave
const waveSummaryPath = `${sessionFolder}/wave-summaries/wave-${wave-1}-summary.md`
let prevWaveSummary = ''
if (wave > 1 && fileExists(waveSummaryPath)) {
prevWaveSummary = Read(waveSummaryPath)
console.log(`Loaded Wave ${wave-1} Summary (${prevWaveSummary.length} chars)`)
}
// 2. Filter tasks for this wave
const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
// 3. Check dependencies
for (const task of waveTasks) {
const depIds = (task.deps || '').split(';').filter(Boolean)
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
task.status = 'skipped'
task.error = `Dependency failed: ${depIds.filter((id, i) =>
['failed','skipped'].includes(depStatuses[i])).join(', ')}`
}
}
const pendingTasks = waveTasks.filter(t => t.status === 'pending')
if (pendingTasks.length === 0) {
console.log(`Wave ${wave}: No pending tasks, skipping...`)
continue
}
// 4. Build enhanced prev_context
for (const task of pendingTasks) {
// a. From context_from tasks
const contextIds = (task.context_from || '').split(';').filter(Boolean)
const prevFindings = contextIds.map(id => {
const src = tasks.find(t => t.id === id)
if (!src?.findings) return ''
return `## [${src.id}] ${src.title}\n${src.findings}`
}).filter(Boolean).join('\n\n')
// b. From previous wave summary (HIGH DENSITY CONTEXT)
const waveContext = prevWaveSummary ?
`\n\n## Wave ${wave-1} Summary\n${prevWaveSummary}` : ''
// c. From discoveries.ndjson (relevant entries)
const discoveries = Read(`${sessionFolder}/discoveries.ndjson`)
const relevantDiscoveries = discoveries
.split('\n')
.filter(line => line.startsWith('{'))
.map(line => JSON.parse(line))
.filter(d => isRelevantDiscovery(d, task))
.slice(0, 10) // Limit to 10 most relevant
.map(d => `- [${d.type}] ${JSON.stringify(d.data)}`)
.join('\n')
const discoveryContext = relevantDiscoveries ?
`\n\n## Relevant Discoveries\n${relevantDiscoveries}` : ''
task.prev_context = prevFindings + waveContext + discoveryContext
}
// 5. Write wave CSV
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingTasks))
// 6. Execute wave
spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: buildOptimizedInstruction(sessionFolder, wave),
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
doc_path: { type: "string" },
key_discoveries: { type: "string" },
error: { type: "string" }
}
}
})
// 7. Merge results
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const r of results) {
const t = tasks.find(t => t.id === r.id)
if (t) Object.assign(t, r)
}
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
// 8. Generate Wave Summary (NEW: Inter-Wave Synthesis)
const completedThisWave = results.filter(r => r.status === 'completed')
if (completedThisWave.length > 0) {
const waveSummary = generateWaveSummary(wave, completedThisWave, tasks)
Write(`${sessionFolder}/wave-summaries/wave-${wave}-summary.md`, waveSummary)
console.log(`Generated Wave ${wave} Summary`)
}
// 9. Cleanup temp files
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
// 10. Display wave summary
const completed = results.filter(r => r.status === 'completed').length
const failed = results.filter(r => r.status === 'failed').length
console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed`)
}
javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => t.wave))
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\n{'='*60}`)
console.log(`波次 ${wave}/${maxWave}`)
console.log('='.repeat(60))
// 1. 加载上一波次的波次摘要
const waveSummaryPath = `${sessionFolder}/wave-summaries/wave-${wave-1}-summary.md`
let prevWaveSummary = ''
if (wave > 1 && fileExists(waveSummaryPath)) {
prevWaveSummary = Read(waveSummaryPath)
console.log(`已加载波次 ${wave-1} 摘要(${prevWaveSummary.length} 字符)`)
}
// 2. 筛选当前波次的任务
const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
// 3. 检查依赖关系
for (const task of waveTasks) {
const depIds = (task.deps || '').split(';').filter(Boolean)
const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
task.status = 'skipped'
task.error = `依赖任务失败: ${depIds.filter((id, i) =>
['failed','skipped'].includes(depStatuses[i])).join(', ')}`
}
}
const pendingTasks = waveTasks.filter(t => t.status === 'pending')
if (pendingTasks.length === 0) {
console.log(`波次 ${wave}: 无待执行任务,跳过...`)
continue
}
// 4. 构建增强版prev_context
for (const task of pendingTasks) {
// a. 来自context_from指定的任务
const contextIds = (task.context_from || '').split(';').filter(Boolean)
const prevFindings = contextIds.map(id => {
const src = tasks.find(t => t.id === id)
if (!src?.findings) return ''
return `## [${src.id}] ${src.title}\n${src.findings}`
}).filter(Boolean).join('\n\n')
// b. 来自上一波次摘要(高密度上下文)
const waveContext = prevWaveSummary ?
`\n\n## 波次 ${wave-1} 摘要\n${prevWaveSummary}` : ''
// c. 来自discoveries.ndjson(相关条目)
const discoveries = Read(`${sessionFolder}/discoveries.ndjson`)
const relevantDiscoveries = discoveries
.split('\n')
.filter(line => line.startsWith('{'))
.map(line => JSON.parse(line))
.filter(d => isRelevantDiscovery(d, task))
.slice(0, 10) // 最多取10条最相关的
.map(d => `- [${d.type}] ${JSON.stringify(d.data)}`)
.join('\n')
const discoveryContext = relevantDiscoveries ?
`\n\n## 相关发现\n${relevantDiscoveries}` : ''
task.prev_context = prevFindings + waveContext + discoveryContext
}
// 5. 写入当前波次的CSV文件
Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingTasks))
// 6. 执行当前波次
spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: buildOptimizedInstruction(sessionFolder, wave),
max_concurrency: maxConcurrency,
max_runtime_seconds: 900,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
doc_path: { type: "string" },
key_discoveries: { type: "string" },
error: { type: "string" }
}
}
})
// 7. 合并执行结果
const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const r of results) {
const t = tasks.find(t => t.id === r.id)
if (t) Object.assign(t, r)
}
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
// 8. 生成波次摘要(新增:波次间综合)
const completedThisWave = results.filter(r => r.status === 'completed')
if (completedThisWave.length > 0) {
const waveSummary = generateWaveSummary(wave, completedThisWave, tasks)
Write(`${sessionFolder}/wave-summaries/wave-${wave}-summary.md`, waveSummary)
console.log(`已生成波次 ${wave} 摘要`)
}
// 9. 清理临时文件
Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)
// 10. 展示当前波次执行摘要
const completed = results.filter(r => r.status === 'completed').length
const failed = results.filter(r => r.status === 'failed').length
console.log(`波次 ${wave} 执行完成: ${completed} 个任务完成, ${failed} 个任务失败`)
}Wave Summary Generation (Inter-Wave Synthesis)
波次摘要生成(波次间综合)
javascript
function generateWaveSummary(waveNum, completedTasks, allTasks) {
let summary = `# Wave ${waveNum} Summary\n\n`
summary += `**Completed Tasks**: ${completedTasks.length}\n\n`
// Group by doc_type
const byType = {}
for (const task of completedTasks) {
const type = task.doc_type || 'unknown'
if (!byType[type]) byType[type] = []
byType[type].push(task)
}
for (const [type, tasks] of Object.entries(byType)) {
summary += `## ${type.toUpperCase()}\n\n`
for (const t of tasks) {
summary += `### ${t.title}\n`
if (t.findings) {
summary += `${t.findings.substring(0, 300)}${t.findings.length > 300 ? '...' : ''}\n\n`
}
if (t.key_discoveries) {
try {
const discoveries = JSON.parse(t.key_discoveries)
summary += `**Key Points**:\n`
for (const d of discoveries.slice(0, 3)) {
summary += `- ${d.name || d.type}: ${d.description || JSON.stringify(d).substring(0, 100)}\n`
}
summary += '\n'
} catch (e) {}
}
}
}
// Add cross-references for next wave
const nextWaveTasks = allTasks.filter(t => t.wave === waveNum + 1)
if (nextWaveTasks.length > 0) {
summary += `## Context for Wave ${waveNum + 1}\n\n`
summary += `Next wave will focus on: ${nextWaveTasks.map(t => t.title).join(', ')}\n`
}
return summary
}
function isRelevantDiscovery(discovery, task) {
// Check if discovery is relevant to the task
const taskScope = task.target_scope || ''
const taskType = task.doc_type || ''
// Always include architecture discoveries for architecture tasks
if (taskType === 'architecture' && discovery.type.includes('component')) return true
if (taskType === 'implementation' && discovery.type.includes('algorithm')) return true
if (taskType === 'api' && discovery.type.includes('api')) return true
// Check file relevance
if (discovery.data?.file) {
return taskScope.includes(discovery.data.file.split('/')[0])
}
return false
}
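To make the matching rules concrete, a self-contained run of `isRelevantDiscovery` on sample entries (the task and discovery data are hypothetical):

```javascript
// Same logic as isRelevantDiscovery above: type-based matches for
// architecture/implementation/api tasks, then a file-prefix fallback.
function isRelevantDiscovery(discovery, task) {
  const taskScope = task.target_scope || ''
  const taskType = task.doc_type || ''
  if (taskType === 'architecture' && discovery.type.includes('component')) return true
  if (taskType === 'implementation' && discovery.type.includes('algorithm')) return true
  if (taskType === 'api' && discovery.type.includes('api')) return true
  if (discovery.data?.file) {
    return taskScope.includes(discovery.data.file.split('/')[0])
  }
  return false
}

const archTask = { doc_type: 'architecture', target_scope: 'src/**' }
const apiTask = { doc_type: 'api', target_scope: 'routes/**' }

// component_found entries always reach architecture tasks
console.log(isRelevantDiscovery(
  { type: 'component_found', data: { file: 'lib/solver.js' } }, archTask)) // true
// file-prefix fallback: 'lib/...' is outside 'routes/**'
console.log(isRelevantDiscovery(
  { type: 'config_found', data: { file: 'lib/config.js' } }, apiTask)) // false
```

Note the fallback only compares the first path segment against `target_scope`, so it is a coarse filter rather than a real glob match.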
Optimized Instruction Template
javascript
function buildOptimizedInstruction(sessionFolder, wave) {
  return `## DOCUMENTATION TASK — Wave ${wave}

⚠️ MANDATORY FIRST STEPS (DO NOT SKIP)
- CHECK DISCOVERIES FIRST (avoid duplicate work):
  ```bash
  # Search for existing discoveries about your topic
  grep -i "{doc_type}" ${sessionFolder}/discoveries.ndjson
  grep -i "{target_keywords}" ${sessionFolder}/discoveries.ndjson
  ```
- Read Wave Summary (high-density context):
  - Read: ${sessionFolder}/wave-summaries/wave-${wave-1}-summary.md (if exists)
- Read prev_context (provided below)
Your Task
Task ID: {id}
Title: {title}
Document Type: {doc_type}
Target Scope: {target_scope}
Required Sections: {doc_sections}
LaTeX Support: {formula_support}
Priority: {priority}
Task Description
{description}
Previous Context (USE THIS!)
{prev_context}
Execution Protocol
Step 1: Discovery Check (MANDATORY)
Before reading any source files:
- Search discoveries.ndjson for existing findings
- Note any pre-discovered components, patterns, algorithms
- Avoid re-documenting what's already found
Step 2: Scope Analysis
- Read files matching `{target_scope}`
- Identify key structures, functions, classes
- Extract relevant code patterns
Step 3: Context Integration
- Build on findings from prev_context
- Reference Wave Summary insights
- Connect to discoveries from other agents
Step 4: Document Generation
Output Path: Determine based on doc_type:
- `overview` → `docs/01-overview/`
- `architecture` → `docs/02-architecture/`
- `implementation` → `docs/03-implementation/`
- `feature` → `docs/04-features/`
- `api` → `docs/04-features/`
- `usage` → `docs/04-features/`
- `synthesis` → `docs/05-synthesis/`
Document Structure:
```markdown
# {Title}

## Overview
[Brief introduction]

## {Required Section 1}
[Content with code examples]

## {Required Section 2}
[Content with diagrams if applicable]

...

## Code Examples
```{language}
// file:line references
```

## Cross-References
- Related: Doc
- Depends: Prereq

## Summary
[Key takeaways]
```
输出路径: 根据文档类型确定:
- `overview` → `docs/01-overview/`
- `architecture` → `docs/02-architecture/`
- `implementation` → `docs/03-implementation/`
- `feature` → `docs/04-features/`
- `api` → `docs/04-features/`
- `usage` → `docs/04-features/`
- `synthesis` → `docs/05-synthesis/`
文档结构:
```markdown
# {Title}

## 概述
[简短介绍]

## {必填章节1}
[包含代码示例的内容]

## {必填章节2}
[若适用则包含图表的内容]

...

## 代码示例
```{language}
// 引用格式:文件路径:行号
```

## 交叉引用
- 相关文档: 文档标题
- 依赖文档: 前置文档

## 摘要
[核心要点]
```
Step 5: Share Discoveries (MANDATORY)
步骤5:共享发现结果(强制)
Append to discovery board:
```bash
echo '{"ts":"${getUtc8ISOString()}","worker":"{id}","type":"<TYPE>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
```
Discovery Types:
- `component_found`: {name, type, file, purpose}
- `pattern_found`: {pattern_name, location, description}
- `algorithm_found`: {name, file, complexity, purpose}
- `formula_found`: {name, latex, file, context}
- `feature_found`: {name, entry_point, description}
- `api_found`: {endpoint, file, parameters, returns}
- `config_found`: {name, file, type, default_value}
将结果追加至发现板:
```bash
echo '{"ts":"${getUtc8ISOString()}","worker":"{id}","type":"<TYPE>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
```
发现类型:
- `component_found`: {name, type, file, purpose}
- `pattern_found`: {pattern_name, location, description}
- `algorithm_found`: {name, file, complexity, purpose}
- `formula_found`: {name, latex, file, context}
- `feature_found`: {name, entry_point, description}
- `api_found`: {endpoint, file, parameters, returns}
- `config_found`: {name, file, type, default_value}
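The `echo` command above writes the JSON record by hand, which is easy to get wrong under shell quoting; building the line in code sidesteps that. A sketch (the helper name and example values are illustrative):

```javascript
// Serialize one discovery-board entry as a single NDJSON line.
function discoveryLine(worker, type, data) {
  return JSON.stringify({ ts: new Date().toISOString(), worker, type, data }) + '\n'
}

// e.g. append the result for a component_found entry to discoveries.ndjson
// fs.appendFileSync(`${sessionFolder}/discoveries.ndjson`,
//   discoveryLine('T3', 'component_found', { name: 'AuthService', type: 'class', file: 'src/auth/service.ts', purpose: 'login handling' }))
```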
Step 6: Report
步骤6:提交报告
```json
{
"id": "{id}",
"status": "completed",
"findings": "Key discoveries (max 500 chars, structured for context propagation)",
"doc_path": "docs/XX-category/filename.md",
"key_discoveries": "[{\"name\":\"...\",\"type\":\"...\",\"description\":\"...\",\"file\":\"...\"}]",
"error": ""
}
```
```json
{
"id": "{id}",
"status": "completed",
"findings": "核心发现(最多500字符,结构化以便上下文传播)",
"doc_path": "docs/XX-category/filename.md",
"key_discoveries": "[{\"name\":\"...\",\"type\":\"...\",\"description\":\"...\",\"file\":\"...\"}]",
"error": ""
}
```
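Note that `key_discoveries` is itself a JSON array serialized into a string field, so its inner quotes must be escaped (as in the corrected template above); building the report with `JSON.stringify` handles this automatically. A sketch (`buildReport` is an illustrative name):

```javascript
// Build the worker report; key_discoveries is a JSON array
// encoded as a string field, so it is stringified twice.
function buildReport(id, findings, docPath, discoveries) {
  return JSON.stringify({
    id,
    status: 'completed',
    findings,
    doc_path: docPath,
    key_discoveries: JSON.stringify(discoveries),
    error: ''
  })
}
```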
Quality Requirements
质量要求
| Requirement | Criteria |
|---|---|
| Section Coverage | ALL sections in doc_sections present |
| Code References | Include file:line for code |
| Discovery Sharing | At least 2 discoveries shared |
| Context Usage | Reference prev_context findings |
| Cross-References | Link to related docs |
---
| 要求 | 判定标准 |
|---|---|
| 章节覆盖 | 包含doc_sections中指定的所有章节 |
| 代码引用 | 包含代码的文件路径:行号引用 |
| 发现共享 | 至少共享2条发现结果 |
| 上下文使用 | 参考prev_context中的发现 |
| 交叉引用 | 链接至相关文档 |
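The "Section Coverage" gate above can be checked mechanically; a sketch that scans a generated markdown document for the required headings (matching required sections against heading text, case-insensitively, is an assumed rule, not part of the workflow spec):

```javascript
// Return the required sections that do not appear as markdown headings.
function missingSections(markdown, requiredSections) {
  const headings = markdown.split('\n')
    .filter(line => line.startsWith('#'))
    .map(line => line.replace(/^#+\s*/, '').trim().toLowerCase())
  return requiredSections.filter(s => !headings.includes(s.toLowerCase()))
}
```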
---
Phase 2: Results Aggregation
阶段2:结果聚合
javascript
// 1. Generate docs/index.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
// Group by doc_type for navigation
const byType = {}
for (const t of completed) {
const type = t.doc_type || 'other'
if (!byType[type]) byType[type] = []
byType[type].push(t)
}
let index = `# Project Documentation Index\n\n`
index += `**Generated**: ${getUtc8ISOString().substring(0, 10)}\n`
index += `**Total Documents**: ${completed.length}\n\n`
const typeLabels = {
overview: '📋 概览',
architecture: '🏗️ 架构',
implementation: '⚙️ 实现',
theory: '📐 理论',
feature: '✨ 功能',
api: '🔌 API',
usage: '📖 使用',
synthesis: '💡 综合'
}
for (const [type, typeTasks] of Object.entries(byType)) {
const label = typeLabels[type] || type
index += `## ${label}\n\n`
for (const t of typeTasks) {
index += `- [${t.title}](${t.doc_path})\n`
}
index += `\n`
}
// Add wave summaries reference
index += `## 📊 Execution Reports\n\n`
index += `- [Wave Summaries](wave-summaries/)\n`
index += `- [Full Context](../context.md)\n`
Write(`${sessionFolder}/docs/index.md`, index)
// 2. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
// 3. Generate context.md
const projectInfo = JSON.parse(Read(`${sessionFolder}/project-info.json`))
let contextMd = `# Documentation Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## Project Info\n`
contextMd += `- **Type**: ${projectInfo.type}\n`
contextMd += `- **Scale**: ${projectInfo.scale}\n`
contextMd += `- **Languages**: ${projectInfo.languages?.join(', ') || 'N/A'}\n\n`
const statusCounts = {
completed: tasks.filter(t => t.status === 'completed').length,
failed: tasks.filter(t => t.status === 'failed').length,
skipped: tasks.filter(t => t.status === 'skipped').length
}
contextMd += `## Summary\n`
contextMd += `| Status | Count |\n`
contextMd += `|--------|-------|\n`
contextMd += `| ✅ Completed | ${statusCounts.completed} |\n`
contextMd += `| ❌ Failed | ${statusCounts.failed} |\n`
contextMd += `| ⏭️ Skipped | ${statusCounts.skipped} |\n\n`
// Per-wave summary
const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
const waveTasks = tasks.filter(t => Number(t.wave) === w)
contextMd += `### Wave ${w}\n\n`
for (const t of waveTasks) {
const icon = t.status === 'completed' ? '✅' : t.status === 'failed' ? '❌' : '⏭️'
contextMd += `${icon} **${t.title}** [${t.doc_type}]\n`
if (t.findings) {
contextMd += ` ${t.findings.substring(0, 200)}${t.findings.length > 200 ? '...' : ''}\n`
}
if (t.doc_path) {
contextMd += ` → [${t.doc_path}](${t.doc_path})\n`
}
contextMd += `\n`
}
}
Write(`${sessionFolder}/context.md`, contextMd)
// 4. Display final summary
console.log(`
╔════════════════════════════════════════════════════════════════╗
║ DOCUMENTATION COMPLETE ║
╠════════════════════════════════════════════════════════════════╣
║ ✅ Completed: ${statusCounts.completed.toString().padStart(2)} tasks ║
║ ❌ Failed: ${statusCounts.failed.toString().padStart(2)} tasks ║
║ ⏭️ Skipped: ${statusCounts.skipped.toString().padStart(2)} tasks ║
╠════════════════════════════════════════════════════════════════╣
║ Output: ${sessionFolder.padEnd(50)} ║
╚════════════════════════════════════════════════════════════════╝
`)
javascript
// 1. 生成docs/index.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
// 按doc_type分组用于导航
const byType = {}
for (const t of completed) {
const type = t.doc_type || 'other'
if (!byType[type]) byType[type] = []
byType[type].push(t)
}
let index = `# 项目文档索引\n\n`
index += `**生成时间**: ${getUtc8ISOString().substring(0, 10)}\n`
index += `**总文档数**: ${completed.length}\n\n`
const typeLabels = {
overview: '📋 概览',
architecture: '🏗️ 架构',
implementation: '⚙️ 实现',
theory: '📐 理论',
feature: '✨ 功能',
api: '🔌 API',
usage: '📖 使用',
synthesis: '💡 综合'
}
for (const [type, typeTasks] of Object.entries(byType)) {
const label = typeLabels[type] || type
index += `## ${label}\n\n`
for (const t of typeTasks) {
index += `- [${t.title}](${t.doc_path})\n`
}
index += `\n`
}
// 添加波次摘要引用
index += `## 📊 执行报告\n\n`
index += `- [波次摘要](wave-summaries/)\n`
index += `- [完整上下文](../context.md)\n`
Write(`${sessionFolder}/docs/index.md`, index)
// 2. 导出results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)
// 3. 生成context.md
const projectInfo = JSON.parse(Read(`${sessionFolder}/project-info.json`))
let contextMd = `# 文档生成报告\n\n`
contextMd += `**会话ID**: ${sessionId}\n`
contextMd += `**生成日期**: ${getUtc8ISOString().substring(0, 10)}\n\n`
contextMd += `## 项目信息\n`
contextMd += `- **类型**: ${projectInfo.type}\n`
contextMd += `- **规模**: ${projectInfo.scale}\n`
contextMd += `- **开发语言**: ${projectInfo.languages?.join(', ') || '无'}\n\n`
const statusCounts = {
completed: tasks.filter(t => t.status === 'completed').length,
failed: tasks.filter(t => t.status === 'failed').length,
skipped: tasks.filter(t => t.status === 'skipped').length
}
contextMd += `## 执行摘要\n`
contextMd += `| 状态 | 数量 |\n`
contextMd += `|--------|-------|\n`
contextMd += `| ✅ 已完成 | ${statusCounts.completed} |\n`
contextMd += `| ❌ 失败 | ${statusCounts.failed} |\n`
contextMd += `| ⏭️ 已跳过 | ${statusCounts.skipped} |\n\n`
// 按波次展示执行详情
const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
contextMd += `## 波次执行详情\n\n`
for (let w = 1; w <= maxWave; w++) {
const waveTasks = tasks.filter(t => Number(t.wave) === w)
contextMd += `### 波次 ${w}\n\n`
for (const t of waveTasks) {
const icon = t.status === 'completed' ? '✅' : t.status === 'failed' ? '❌' : '⏭️'
contextMd += `${icon} **${t.title}** [${t.doc_type}]\n`
if (t.findings) {
contextMd += ` ${t.findings.substring(0, 200)}${t.findings.length > 200 ? '...' : ''}\n`
}
if (t.doc_path) {
contextMd += ` → [${t.doc_path}](${t.doc_path})\n`
}
contextMd += `\n`
}
}
Write(`${sessionFolder}/context.md`, contextMd)
// 4. 展示最终执行摘要
console.log(`
╔════════════════════════════════════════════════════════════════╗
║ 文档生成完成 ║
╠════════════════════════════════════════════════════════════════╣
║ ✅ 已完成: ${statusCounts.completed.toString().padStart(2)} 个任务 ║
║ ❌ 失败: ${statusCounts.failed.toString().padStart(2)} 个任务 ║
║ ⏭️ 已跳过: ${statusCounts.skipped.toString().padStart(2)} 个任务 ║
╠════════════════════════════════════════════════════════════════╣
║ 输出目录: ${sessionFolder.padEnd(50)} ║
╚════════════════════════════════════════════════════════════════╝
`)
Optimized Output Structure
优化版输出结构
.workflow/.csv-wave/doc-{slug}-{date}/
├── project-info.json # 项目分析结果
├── tasks.csv # Master CSV (动态生成的任务)
├── results.csv # 最终结果
├── discoveries.ndjson # 发现板
├── context.md # 执行报告
│
├── wave-summaries/ # NEW: 波次摘要
│ ├── wave-1-summary.md
│ ├── wave-2-summary.md
│ └── ...
│
└── docs/
├── index.md # 文档导航
├── 01-overview/
├── 02-architecture/
├── 03-implementation/
├── 04-features/
└── 05-synthesis/
.workflow/.csv-wave/doc-{slug}-{date}/
├── project-info.json # 项目分析结果
├── tasks.csv # 主CSV文件(动态生成的任务)
├── results.csv # 最终执行结果
├── discoveries.ndjson # 发现板
├── context.md # 执行报告
│
├── wave-summaries/ # 新增:波次摘要目录
│ ├── wave-1-summary.md
│ ├── wave-2-summary.md
│ └── ...
│
└── docs/
├── index.md # 文档导航页
├── 01-overview/
├── 02-architecture/
├── 03-implementation/
├── 04-features/
└── 05-synthesis/
Optimization Summary
优化点对比
| 优化点 | 原版 | 优化版 |
|---|---|---|
| 任务数量 | 固定17任务 | 动态生成 (5-25基于项目规模) |
| 波次计算 | 硬编码5波 | 拓扑排序动态计算 |
| 上下文传播 | 仅 prev_context | prev_context + Wave Summary + Discoveries |
| 发现利用 | 依赖自觉 | 强制第一步检查 |
| 文档密度 | 原始 findings | 结构化 Wave Summary |
| 优化点 | 原版 | 优化版 |
|---|---|---|
| 任务数量 | 固定17个任务 | 动态生成(根据项目规模生成5-25个任务) |
| 波次计算 | 硬编码5个波次 | 通过拓扑排序动态计算 |
| 上下文传播 | 仅使用prev_context | prev_context + 波次摘要 + 发现板 |
| 发现结果利用 | 依赖Agent自觉 | 强制第一步检查发现板 |
| 文档密度 | 原始findings内容 | 结构化波次摘要 |
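The topological wave computation referenced above ("拓扑排序动态计算") can be sketched as a level assignment: a task's wave is one more than the deepest wave among its dependencies, and tasks with no dependencies go in wave 1. The `deps` field is assumed here to be an array of task ids:

```javascript
// Assign each task to a wave via repeated rounds of a Kahn-style sweep:
// a task is ready once all its deps have waves; wave = 1 + max(dep waves).
function computeWaves(tasks) {
  const wave = {}
  let pending = [...tasks]
  while (pending.length) {
    const ready = pending.filter(t => (t.deps || []).every(d => wave[d]))
    if (!ready.length) throw new Error('dependency cycle detected')
    for (const t of ready) {
      const depWaves = (t.deps || []).map(d => wave[d])
      wave[t.id] = depWaves.length ? Math.max(...depWaves) + 1 : 1
    }
    pending = pending.filter(t => !wave[t.id])
  }
  return wave
}
```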
Core Rules
核心规则
- Dynamic First: 任务列表动态生成,不预设
- Wave Order is Sacred: 波次由拓扑排序决定
- Discovery Check Mandatory: 必须先检查发现板
- Wave Summary: 每波次结束生成摘要
- Context Compound: 上下文累积传播
- Quality Gates: 每文档必须覆盖所有 doc_sections
- DO NOT STOP: 持续执行直到所有波次完成
- 动态优先:任务列表动态生成,不预设固定任务
- 波次顺序不可修改:波次由拓扑排序算法决定
- 必须检查发现板:执行任务前必须先检查发现板
- 波次摘要:每个波次完成后必须生成摘要
- 上下文累积:上下文在波次间持续累积传播
- 质量校验:每份文档必须覆盖所有指定的doc_sections章节
- 持续执行:必须执行完所有波次才终止
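The Phase 2 aggregation code above calls a `parseCsv` helper that this document does not define; a minimal sketch, assuming a simple header-row CSV with no quoted commas (a real tasks.csv with commas in fields would need a proper CSV parser):

```javascript
// Parse a header-row CSV into an array of row objects.
// Assumption: fields contain no quoted commas or embedded newlines.
function parseCsv(text) {
  const lines = text.trim().split('\n')
  const headers = lines[0].split(',')
  return lines.slice(1).map(line => {
    const cells = line.split(',')
    return Object.fromEntries(headers.map((h, i) => [h.trim(), (cells[i] || '').trim()]))
  })
}
```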