unified-execute-with-file


Unified-Execute-With-File Workflow


Quick Start


Universal execution engine that consumes a `.task/*.json` directory and executes tasks serially, with convergence verification and progress tracking.

Execute from lite-plan output


/codex:unified-execute-with-file PLAN=".workflow/.lite-plan/LPLAN-auth-2025-01-21/.task/"

Execute from workflow session output


/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/" --auto-commit

Execute a single task JSON file


/codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/IMPL-001.json" --dry-run

Auto-detect from .workflow/ directories


/codex:unified-execute-with-file

**Core workflow**: Scan .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress

**Key features**:
- **Directory-based**: Consumes `.task/` directory containing individual task JSON files
- **Convergence-driven**: Verifies each task's convergence criteria after execution
- **Serial execution**: Process tasks in topological order with dependency tracking
- **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream)
- **Auto-commit**: Optional conventional commits per task
- **Dry-run mode**: Simulate execution without changes
- **Flexible input**: Accepts `.task/` directory path or a single `.json` file path

**Input format**: Each task is a standalone JSON file in `.task/` directory (e.g., `IMPL-001.json`). Use `plan-converter` to convert other formats to `.task/*.json` first.
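For illustration, here is a hypothetical `.task/IMPL-001.json` showing the fields the Phase 1 validator expects (the title, paths, and verification command are made up):

```json
{
  "id": "IMPL-001",
  "title": "Add password hashing to the auth service",
  "description": "Replace plaintext password storage with bcrypt hashing.",
  "depends_on": [],
  "files": [
    { "path": "src/auth/password.ts", "action": "create" }
  ],
  "convergence": {
    "criteria": ["Passwords are hashed with bcrypt before persistence"],
    "verification": "npm test -- password",
    "definition_of_done": "All auth tests pass with hashed storage"
  }
}
```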

Overview


```
┌─────────────────────────────────────────────────────────────┐
│                  UNIFIED EXECUTE WORKFLOW                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Phase 1: Load & Validate                                   │
│     ├─ Scan .task/*.json (one task per file)                │
│     ├─ Validate schema (id, title, depends_on, convergence) │
│     ├─ Detect cycles, build topological order               │
│     └─ Initialize execution.md + execution-events.md        │
│                                                             │
│  Phase 2: Pre-Execution Analysis                            │
│     ├─ Check file conflicts (multiple tasks → same file)    │
│     ├─ Verify file existence                                │
│     ├─ Generate feasibility report                          │
│     └─ User confirmation (unless dry-run)                   │
│                                                             │
│  Phase 3: Serial Execution + Convergence Verification       │
│     For each task in topological order:                     │
│     ├─ Check dependencies satisfied                         │
│     ├─ Record START event                                   │
│     ├─ Execute directly (Read/Edit/Write/Grep/Glob/Bash)    │
│     ├─ Verify convergence.criteria[]                        │
│     ├─ Run convergence.verification command                 │
│     ├─ Record COMPLETE/FAIL event with verification results │
│     ├─ Update _execution state in task JSON file            │
│     └─ Auto-commit if enabled                               │
│                                                             │
│  Phase 4: Completion                                        │
│     ├─ Finalize execution.md with summary statistics        │
│     ├─ Finalize execution-events.md with session footer     │
│     ├─ Write back .task/*.json with _execution states       │
│     └─ Offer follow-up actions                              │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

Output Structure


```
${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/
├── execution.md              # Plan overview + task table + summary
└── execution-events.md       # ⭐ Unified event log (single source of truth)
```

Additionally, each source `.task/*.json` file is updated in-place with `_execution` states.


Implementation Details


Session Initialization


Step 0: Initialize Session
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()

// Parse arguments
const autoCommit = $ARGUMENTS.includes('--auto-commit')
const dryRun = $ARGUMENTS.includes('--dry-run')
const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/)
let planPath = planMatch ? planMatch[1] : null

// Auto-detect if no PLAN specified
if (!planPath) {
  // Search in order (most recent first):
  //   .workflow/active/*/.task/
  //   .workflow/.lite-plan/*/.task/
  //   .workflow/.req-plan/*/.task/
  //   .workflow/.planning/*/.task/
  // Use most recently modified directory containing *.json files
}

// Resolve path
planPath = path.isAbsolute(planPath) ? planPath : `${projectRoot}/${planPath}`

// Generate session ID
const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const random = Math.random().toString(36).substring(2, 9)
const sessionId = `EXEC-${slug}-${dateStr}-${random}`
const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}`

Bash(`mkdir -p ${sessionFolder}`)
```


Phase 1: Load & Validate


**Objective**: Scan the `.task/` directory, parse individual task JSON files, validate schema and dependencies, and build the execution order.

Step 1.1: Scan .task/ Directory and Parse Task Files


```javascript
// Determine if planPath is a directory or single file
const isDirectory = planPath.endsWith('/') || Bash(`test -d "${planPath}" && echo dir || echo file`).trim() === 'dir'

let taskFiles, tasks

if (isDirectory) {
  // Directory mode: scan for all *.json files
  taskFiles = Glob('*.json', planPath)
  if (taskFiles.length === 0) throw new Error(`No .json files found in ${planPath}`)

  tasks = taskFiles.map(filePath => {
    try {
      const content = Read(filePath)
      const task = JSON.parse(content)
      task._source_file = filePath  // Track source file for write-back
      return task
    } catch (e) {
      throw new Error(`${path.basename(filePath)}: Invalid JSON - ${e.message}`)
    }
  })
} else {
  // Single file mode: parse one task JSON
  try {
    const content = Read(planPath)
    const task = JSON.parse(content)
    task._source_file = planPath
    tasks = [task]
  } catch (e) {
    throw new Error(`${path.basename(planPath)}: Invalid JSON - ${e.message}`)
  }
}

if (tasks.length === 0) throw new Error('No tasks found')
```

Step 1.2: Validate Schema


Validate against the unified task schema: `~/.ccw/workflows/cli-templates/schemas/task-schema.json`

```javascript
const errors = []
tasks.forEach((task, i) => {
  const src = task._source_file ? path.basename(task._source_file) : `Task ${i + 1}`

  // Required fields (per task-schema.json)
  if (!task.id) errors.push(`${src}: missing 'id'`)
  if (!task.title) errors.push(`${src}: missing 'title'`)
  if (!task.description) errors.push(`${src}: missing 'description'`)
  if (!Array.isArray(task.depends_on)) errors.push(`${task.id || src}: missing 'depends_on' array`)

  // Context block (optional but validated if present)
  if (task.context) {
    if (task.context.requirements && !Array.isArray(task.context.requirements))
      errors.push(`${task.id}: context.requirements must be array`)
    if (task.context.acceptance && !Array.isArray(task.context.acceptance))
      errors.push(`${task.id}: context.acceptance must be array`)
    if (task.context.focus_paths && !Array.isArray(task.context.focus_paths))
      errors.push(`${task.id}: context.focus_paths must be array`)
  }

  // Convergence (required for execution verification)
  if (!task.convergence) {
    errors.push(`${task.id || src}: missing 'convergence'`)
  } else {
    if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`)
    if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`)
    if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`)
  }

  // Flow control (optional but validated if present)
  if (task.flow_control) {
    if (task.flow_control.target_files && !Array.isArray(task.flow_control.target_files))
      errors.push(`${task.id}: flow_control.target_files must be array`)
  }

  // New unified schema fields (backward compatible addition)
  if (task.focus_paths && !Array.isArray(task.focus_paths))
    errors.push(`${task.id}: focus_paths must be array`)
  if (task.implementation && !Array.isArray(task.implementation))
    errors.push(`${task.id}: implementation must be array`)
  if (task.files && !Array.isArray(task.files))
    errors.push(`${task.id}: files must be array`)
})

if (errors.length) {
  // Report errors, stop execution
}
```

Step 1.3: Build Execution Order


```javascript
// 1. Validate dependency references
const taskIds = new Set(tasks.map(t => t.id))
tasks.forEach(task => {
  task.depends_on.forEach(dep => {
    if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`)
  })
})

// 2. Detect cycles (DFS)
function detectCycles(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set(), inStack = new Set(), cycles = []
  function dfs(node, path) {
    if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
    if (visited.has(node)) return
    visited.add(node); inStack.add(node)
    ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
    inStack.delete(node)
  }
  tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
  return cycles
}
const cycles = detectCycles(tasks)
if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`)

// 3. Topological sort (Kahn's algorithm)
function topoSort(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  tasks.forEach(t => t.depends_on.forEach(dep => {
    inDegree.set(t.id, (inDegree.get(t.id) || 0) + 1)
  }))
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    tasks.forEach(t => {
      if (t.depends_on.includes(id)) {
        inDegree.set(t.id, inDegree.get(t.id) - 1)
        if (inDegree.get(t.id) === 0) queue.push(t.id)
      }
    })
  }
  return order
}
const executionOrder = topoSort(tasks)
```
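The two helpers can be exercised standalone on a tiny hypothetical plan (IMPL-003 depends on IMPL-002, which depends on IMPL-001), showing that the order comes out dependency-first regardless of file order:

```javascript
// Self-contained copies of the Step 1.3 helpers, run on a made-up plan
function detectCycles(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set(), inStack = new Set(), cycles = []
  function dfs(node, path) {
    if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
    if (visited.has(node)) return
    visited.add(node); inStack.add(node)
    ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
    inStack.delete(node)
  }
  tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
  return cycles
}

function topoSort(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, t.depends_on.length]))
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    tasks.forEach(t => {
      if (t.depends_on.includes(id)) {
        inDegree.set(t.id, inDegree.get(t.id) - 1)
        if (inDegree.get(t.id) === 0) queue.push(t.id)
      }
    })
  }
  return order
}

const tasks = [
  { id: 'IMPL-003', depends_on: ['IMPL-002'] },
  { id: 'IMPL-001', depends_on: [] },
  { id: 'IMPL-002', depends_on: ['IMPL-001'] }
]
console.log(detectCycles(tasks)) // []
console.log(topoSort(tasks))     // [ 'IMPL-001', 'IMPL-002', 'IMPL-003' ]
```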

Step 1.4: Initialize Execution Artifacts


```javascript
// execution.md
const executionMd = `# Execution Overview

## Session Info

- **Session ID**: ${sessionId}
- **Plan Source**: ${planPath}
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Mode**: ${dryRun ? 'Dry-run (no changes)' : 'Direct inline execution'}
- **Auto-Commit**: ${autoCommit ? 'Enabled' : 'Disabled'}

## Task Overview

| # | ID | Title | Type | Priority | Effort | Dependencies | Status |
|---|----|-------|------|----------|--------|--------------|--------|
${tasks.map((t, i) => `| ${i + 1} | ${t.id} | ${t.title} | ${t.type || '-'} | ${t.priority || '-'} | ${t.effort || '-'} | ${t.depends_on.join(', ') || '-'} | Pending |`).join('\n')}

## Pre-Execution Analysis

_Populated in Phase 2_

## Execution Timeline

_Updated as tasks complete_

## Execution Summary

_Updated after all tasks complete_
`
Write(`${sessionFolder}/execution.md`, executionMd)

// execution-events.md
Write(`${sessionFolder}/execution-events.md`, `# Execution Events

Session: ${sessionId}
Started: ${getUtc8ISOString()}
Source: ${planPath}

---
`)
```

Phase 2: Pre-Execution Analysis


**Objective**: Validate feasibility and identify issues before execution.

Step 2.1: Analyze File Conflicts


```javascript
const fileTaskMap = new Map()  // file → [taskIds]
tasks.forEach(task => {
  (task.files || []).forEach(f => {
    const key = f.path
    if (!fileTaskMap.has(key)) fileTaskMap.set(key, [])
    fileTaskMap.get(key).push(task.id)
  })
})

const conflicts = []
fileTaskMap.forEach((taskIds, file) => {
  if (taskIds.length > 1) {
    conflicts.push({ file, tasks: taskIds, resolution: 'Execute in dependency order' })
  }
})

// Check file existence (file_exists is a pseudo-helper in this sketch)
const missingFiles = []
tasks.forEach(task => {
  (task.files || []).forEach(f => {
    if (f.action !== 'create' && !file_exists(f.path)) {
      missingFiles.push({ file: f.path, task: task.id })
    }
  })
})
```
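A self-contained sketch of the conflict scan, using two hypothetical tasks that both touch the same file (task IDs and paths are made up):

```javascript
// Two made-up tasks that both edit src/auth/session.ts
const tasks = [
  { id: 'IMPL-001', files: [{ path: 'src/auth/session.ts', action: 'edit' }] },
  { id: 'IMPL-002', files: [{ path: 'src/auth/session.ts', action: 'edit' },
                            { path: 'src/auth/token.ts', action: 'create' }] }
]

// Build file → [taskIds] map, as in Step 2.1
const fileTaskMap = new Map()
tasks.forEach(task => {
  (task.files || []).forEach(f => {
    if (!fileTaskMap.has(f.path)) fileTaskMap.set(f.path, [])
    fileTaskMap.get(f.path).push(task.id)
  })
})

// Any file claimed by more than one task is a conflict
const conflicts = []
fileTaskMap.forEach((taskIds, file) => {
  if (taskIds.length > 1) conflicts.push({ file, tasks: taskIds })
})
console.log(conflicts)
// → [ { file: 'src/auth/session.ts', tasks: [ 'IMPL-001', 'IMPL-002' ] } ]
```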

Step 2.2: Append to execution.md


```javascript
// Replace "Pre-Execution Analysis" section with:
// - File Conflicts (list or "No conflicts")
// - Missing Files (list or "All files exist")
// - Dependency Validation (errors or "No issues")
// - Execution Order (numbered list)
```

Step 2.3: User Confirmation


```javascript
if (!dryRun) {
  AskUserQuestion({
    questions: [{
      question: `Execute ${tasks.length} tasks?\n\n${conflicts.length ? `${conflicts.length} file conflicts\n` : ''}Execution order:\n${executionOrder.map((id, i) => `  ${i+1}. ${id}: ${tasks.find(t => t.id === id).title}`).join('\n')}`,
      header: "Confirm",
      multiSelect: false,
      options: [
        { label: "Execute", description: "Start serial execution" },
        { label: "Dry Run", description: "Simulate without changes" },
        { label: "Cancel", description: "Abort execution" }
      ]
    }]
  })
}
```


Phase 3: Serial Execution + Convergence Verification


**Objective**: Execute tasks sequentially, verify convergence after each task, and track all state.

**Execution Model**: Direct inline execution; the main process reads, edits, and writes files directly, with no CLI delegation.

Step 3.1: Execution Loop


```javascript
const completedTasks = new Set()
const failedTasks = new Set()
const skippedTasks = new Set()

for (const taskId of executionOrder) {
  const task = tasks.find(t => t.id === taskId)
  const startTime = getUtc8ISOString()

  // 1. Check dependencies
  const unmetDeps = task.depends_on.filter(dep => !completedTasks.has(dep))
  if (unmetDeps.length) {
    appendToEvents(task, 'BLOCKED', `Unmet dependencies: ${unmetDeps.join(', ')}`)
    skippedTasks.add(task.id)
    task._execution = { status: 'skipped', executed_at: startTime,
      result: { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` } }
    continue
  }

  // 2. Record START event
  appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: ${task.title}

**Type**: ${task.type || '-'} | **Priority**: ${task.priority || '-'} | **Effort**: ${task.effort || '-'}
**Status**: ⏳ IN PROGRESS
**Files**: ${(task.files || []).map(f => f.path).join(', ') || 'To be determined'}
**Description**: ${task.description}
**Convergence Criteria**:
${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}

### Execution Log

`)

  if (dryRun) {
    // Simulate: mark as completed without changes
    appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
    task._execution = { status: 'completed', executed_at: startTime,
      result: { success: true, summary: 'Dry run — no changes made' } }
    completedTasks.add(task.id)
    continue
  }

  // 3. Execute task directly
  //    - Read each file in task.files (if specified)
  //    - Analyze what changes satisfy task.description + task.convergence.criteria
  //    - If task.files has detailed changes, use them as guidance
  //    - Apply changes using Edit (preferred) or Write (for new files)
  //    - Use Grep/Glob/mcp__ace-tool for discovery if needed
  //    - Use Bash for build/test commands

  // Dual-path field access (supports both unified and legacy 6-field schema)
  // const targetFiles = task.files?.map(f => f.path) || task.flow_control?.target_files || []
  // const acceptanceCriteria = task.convergence?.criteria || task.context?.acceptance || []
  // const requirements = task.implementation || task.context?.requirements || []
  // const focusPaths = task.focus_paths || task.context?.focus_paths || []

  // 4. Verify convergence
  const convergenceResults = verifyConvergence(task)
  const endTime = getUtc8ISOString()
  const filesModified = getModifiedFiles()

  if (convergenceResults.allPassed) {
    // 5a. Record SUCCESS (changeSummary is produced during step 3)
    appendToEvents(`
**Status**: ✅ COMPLETED
**Duration**: ${calculateDuration(startTime, endTime)}
**Files Modified**: ${filesModified.join(', ')}

### Changes Summary

${changeSummary}

### Convergence Verification

${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}
- **Definition of Done**: ${task.convergence.definition_of_done}

---
`)
    task._execution = { status: 'completed', executed_at: endTime,
      result: { success: true, files_modified: filesModified, summary: changeSummary,
        convergence_verified: convergenceResults.verified } }
    completedTasks.add(task.id)
  } else {
    // 5b. Record FAILURE
    handleTaskFailure(task, convergenceResults, startTime, endTime)
  }

  // 6. Auto-commit if enabled
  if (autoCommit && task._execution.status === 'completed') {
    autoCommitTask(task, filesModified)
  }
}
```

Step 3.2: Convergence Verification


```javascript
function verifyConvergence(task) {
  const results = {
    verified: [],           // boolean[] per criterion
    verificationOutput: '', // output of verification command
    allPassed: true
  }

  // 1. Check each criterion
  //    For each criterion in task.convergence.criteria:
  //      - If it references a testable condition, check it
  //      - If it's manual, mark as verified based on changes made
  //      - Record true/false per criterion
  task.convergence.criteria.forEach(criterion => {
    const passed = evaluateCriterion(criterion, task)
    results.verified.push(passed)
    if (!passed) results.allPassed = false
  })

  // 2. Run verification command (if executable)
  const verification = task.convergence.verification
  if (isExecutableCommand(verification)) {
    try {
      const output = Bash(verification, { timeout: 120000 })
      results.verificationOutput = `${verification} → PASS`
    } catch (e) {
      results.verificationOutput = `${verification} → FAIL: ${e.message}`
      results.allPassed = false
    }
  } else {
    results.verificationOutput = `Manual: ${verification}`
  }

  return results
}

function isExecutableCommand(verification) {
  // Detect executable patterns: npm, npx, jest, tsc, curl, pytest, go test, etc.
  return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim())
}
```
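The command-detection heuristic can be checked standalone; the sample verification strings below are made up:

```javascript
// Verification strings that start with a known tool name are run via Bash;
// anything else is treated as a manual check (Step 3.2 heuristic)
function isExecutableCommand(verification) {
  return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim())
}

console.log(isExecutableCommand('npm test -- auth'))                // true
console.log(isExecutableCommand('go test ./...'))                   // true
console.log(isExecutableCommand('Manually confirm the login page')) // false
```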

Step 3.3: Failure Handling


```javascript
function handleTaskFailure(task, convergenceResults, startTime, endTime) {
  appendToEvents(`
**Status**: ❌ FAILED
**Duration**: ${calculateDuration(startTime, endTime)}
**Error**: Convergence verification failed

### Failed Criteria

${task.convergence.criteria.map((c, i) =>
  `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`
).join('\n')}
- Verification: ${convergenceResults.verificationOutput}
`)

  task._execution = {
    status: 'failed',
    executed_at: endTime,
    result: {
      success: false,
      error: 'Convergence verification failed',
      convergence_verified: convergenceResults.verified
    }
  }
  failedTasks.add(task.id)

  // Ask user how to proceed
  AskUserQuestion({
    questions: [{
      question: `Task ${task.id} failed convergence verification. How to proceed?`,
      header: "Failure",
      multiSelect: false,
      options: [
        { label: "Skip & Continue", description: "Skip this task, continue with next" },
        { label: "Retry", description: "Retry this task" },
        { label: "Accept", description: "Mark as completed despite failure" },
        { label: "Abort", description: "Stop execution, keep progress" }
      ]
    }]
  })
}
```
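The failed-criteria checklist in the failure event can be isolated as a small pure helper (`renderCriteriaChecklist` is a hypothetical name for illustration; `criteria` and `verified` stand in for `task.convergence.criteria` and `convergenceResults.verified`):

```javascript
// Render one markdown checkbox line per criterion, checked when it passed.
function renderCriteriaChecklist(criteria, verified) {
  return criteria
    .map((c, i) => `- [${verified[i] ? 'x' : ' '}] ${c}`)
    .join('\n')
}

renderCriteriaChecklist(
  ['Unit tests pass', 'No new ESLint errors'],
  [true, false]
)
// → '- [x] Unit tests pass\n- [ ] No new ESLint errors'
```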

Step 3.4: Auto-Commit


```javascript
function autoCommitTask(task, filesModified) {
  Bash(`git add ${filesModified.join(' ')}`)

  const commitType = {
    fix: 'fix', refactor: 'refactor', feature: 'feat',
    enhancement: 'feat', testing: 'test', infrastructure: 'chore'
  }[task.type] || 'chore'

  const scope = inferScope(filesModified)

  Bash(`git commit -m "$(cat <<'EOF'
${commitType}(${scope}): ${task.title}

Task: ${task.id}
Source: ${path.basename(planPath)}
EOF
)"`)

  appendToEvents(`**Commit**: \`${commitType}(${scope}): ${task.title}\`\n`)
}
```
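The task-type → conventional-commit mapping above can be exercised standalone (`commitTypeFor` and `commitSubject` are hypothetical helper names; `inferScope` is left out and a scope is passed in directly):

```javascript
// Map a task type to a conventional-commit type; unknown types fall back to 'chore'.
function commitTypeFor(taskType) {
  return {
    fix: 'fix', refactor: 'refactor', feature: 'feat',
    enhancement: 'feat', testing: 'test', infrastructure: 'chore'
  }[taskType] || 'chore'
}

// Build the commit subject line used in the heredoc above.
function commitSubject(task, scope) {
  return `${commitTypeFor(task.type)}(${scope}): ${task.title}`
}

commitSubject({ type: 'feature', title: 'Add login endpoint' }, 'auth')
// → 'feat(auth): Add login endpoint'
commitTypeFor('docs')
// → 'chore' (not in the map, falls back)
```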


Phase 4: Completion


Objective: Finalize all artifacts, write back execution state, offer follow-up actions.

Step 4.1: Finalize execution.md


Append summary statistics to execution.md:

```javascript
const summary = `
## Execution Summary

- Completed: ${getUtc8ISOString()}
- Total Tasks: ${tasks.length}
- Succeeded: ${completedTasks.size}
- Failed: ${failedTasks.size}
- Skipped: ${skippedTasks.size}
- Success Rate: ${Math.round(completedTasks.size / tasks.length * 100)}%

## Task Results

| ID | Title | Status | Convergence | Files Modified |
|----|-------|--------|-------------|----------------|
${tasks.map(t => {
  const ex = t._execution || {}
  const convergenceStatus = ex.result?.convergence_verified
    ? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}`
    : '-'
  return `| ${t.id} | ${t.title} | ${ex.status || 'pending'} | ${convergenceStatus} | ${(ex.result?.files_modified || []).join(', ') || '-'} |`
}).join('\n')}
${failedTasks.size > 0 ? `### Failed Tasks
${[...failedTasks].map(id => {
  const t = tasks.find(t => t.id === id)
  return `- **${t.id}**: ${t.title} — ${t._execution?.result?.error || 'Unknown'}`
}).join('\n')}` : ''}

## Artifacts

- Plan Source: ${planPath}
- Execution Overview: ${sessionFolder}/execution.md
- Execution Events: ${sessionFolder}/execution-events.md
`
// Append to execution.md
```
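The per-task row generation inside the summary template can be checked in isolation (`taskResultRows` is a hypothetical extraction of the map expression; sample tasks are illustrative):

```javascript
// Render the per-task results rows of the summary table from task objects
// carrying the _execution state written during Phase 3.
function taskResultRows(tasks) {
  return tasks.map(t => {
    const ex = t._execution || {}
    const convergence = ex.result?.convergence_verified
      ? `${ex.result.convergence_verified.filter(Boolean).length}/${ex.result.convergence_verified.length}`
      : '-'
    const files = (ex.result?.files_modified || []).join(', ') || '-'
    return `| ${t.id} | ${t.title} | ${ex.status || 'pending'} | ${convergence} | ${files} |`
  }).join('\n')
}

taskResultRows([
  { id: 'IMPL-001', title: 'Add auth', _execution: { status: 'completed',
    result: { convergence_verified: [true, true], files_modified: ['src/auth.ts'] } } },
  { id: 'IMPL-002', title: 'Add tests' }  // not yet executed → all placeholders
])
// → '| IMPL-001 | Add auth | completed | 2/2 | src/auth.ts |\n| IMPL-002 | Add tests | pending | - | - |'
```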

Step 4.2: Finalize execution-events.md


```javascript
appendToEvents(`
---

## Session Summary

- Session: ${sessionId}
- Completed: ${getUtc8ISOString()}
- Tasks: ${completedTasks.size} completed, ${failedTasks.size} failed, ${skippedTasks.size} skipped
- Total Events: ${completedTasks.size + failedTasks.size + skippedTasks.size}
`)
```

Step 4.3: Write Back .task/*.json with _execution


Update each source task JSON file with execution states:
```javascript
tasks.forEach(task => {
  const filePath = task._source_file
  if (!filePath) return

  // Read current file to preserve formatting and non-execution fields
  const current = JSON.parse(Read(filePath))

  // Update _execution status and result
  current._execution = {
    status: task._execution?.status || 'pending',
    executed_at: task._execution?.executed_at || null,
    result: task._execution?.result || null
  }

  // Write back individual task file
  Write(filePath, JSON.stringify(current, null, 2))
})
// Each task JSON file now has _execution: { status, executed_at, result }
```
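The write-back reduces to a pure merge step; here is a sketch with `Read`/`Write` replaced by plain JSON strings (`mergeExecutionState` is a hypothetical helper for illustration):

```javascript
// Merge the in-memory _execution state into the on-disk JSON content,
// preserving every other field of the task file.
function mergeExecutionState(fileContent, execution) {
  const current = JSON.parse(fileContent)
  current._execution = {
    status: execution?.status || 'pending',
    executed_at: execution?.executed_at || null,
    result: execution?.result || null
  }
  return JSON.stringify(current, null, 2)
}

const updated = mergeExecutionState(
  '{"id": "IMPL-001", "title": "Add auth"}',
  { status: 'completed', executed_at: '2025-01-21T10:00:00+08:00', result: { success: true } }
)
// JSON.parse(updated)._execution.status === 'completed'
// JSON.parse(updated).title === 'Add auth'   (untouched fields preserved)
```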

Step 4.4: Post-Completion Options


```javascript
AskUserQuestion({
  questions: [{
    question: `Execution complete: ${completedTasks.size}/${tasks.length} succeeded (${Math.round(completedTasks.size / tasks.length * 100)}%).\nNext step:`,
    header: "Post-Execute",
    multiSelect: false,
    options: [
      { label: "Retry Failed", description: `Re-execute ${failedTasks.size} failed tasks` },
      { label: "View Events", description: "Display execution-events.md" },
      { label: "Create Issue", description: "Create issue from failed tasks" },
      { label: "Done", description: "End workflow" }
    ]
  }]
})
```
| Selection | Action |
|-----------|--------|
| Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events |
| View Events | Display execution-events.md content |
| Create Issue | `Skill(skill="issue:new", args="...")` from failed task details |
| Done | Display artifact paths, sync session state, end workflow |

Step 4.5: Sync Session State


After completion (regardless of user selection), unless `--dry-run` is set:

```bash
$session-sync -y "Execution complete: {completed}/{total} tasks succeeded"
```

Updates `specs/*.md` with execution learnings and `project-tech.json` with a development index entry.


Configuration


| Flag | Default | Description |
|------|---------|-------------|
| `PLAN="..."` | auto-detect | Path to `.task/` directory or single task `.json` file |
| `--auto-commit` | false | Commit changes after each successful task |
| `--dry-run` | false | Simulate execution without making changes |
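A minimal sketch of parsing these flags from a raw argument string (the command's actual parser is not specified here; `parseFlags` is a hypothetical helper):

```javascript
// Parse PLAN="...", --auto-commit and --dry-run from a raw argument string.
function parseFlags(argString) {
  const planMatch = argString.match(/PLAN="([^"]+)"/)
  return {
    plan: planMatch ? planMatch[1] : null,  // null → auto-detect
    autoCommit: argString.includes('--auto-commit'),
    dryRun: argString.includes('--dry-run')
  }
}

parseFlags('PLAN=".workflow/active/WFS-xxx/.task/" --dry-run')
// → { plan: '.workflow/active/WFS-xxx/.task/', autoCommit: false, dryRun: true }
parseFlags('')
// → { plan: null, autoCommit: false, dryRun: false }
```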

Plan Auto-Detection Order


When no `PLAN` is specified, search for `.task/` directories in this order (most recent first):

1. `.workflow/active/*/.task/`
2. `.workflow/.lite-plan/*/.task/`
3. `.workflow/.req-plan/*/.task/`
4. `.workflow/.planning/*/.task/`

If the source is not `.task/*.json`: run `plan-converter` first to generate a `.task/` directory.
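The "most recent first" selection can be sketched as a pure function over candidate directories with known modification times (`pickPlanDir` is a hypothetical helper; actual directory scanning is omitted):

```javascript
// Given candidates grouped by search tier (highest priority first), return the
// most recently modified directory from the first non-empty tier.
function pickPlanDir(tiers) {
  for (const tier of tiers) {
    if (tier.length > 0) {
      return tier.reduce((a, b) => (b.mtime > a.mtime ? b : a)).path
    }
  }
  return null  // nothing found → caller reports ".task/ directory not found"
}

pickPlanDir([
  [],  // .workflow/active/*/.task/ — empty
  [    // .workflow/.lite-plan/*/.task/
    { path: '.workflow/.lite-plan/LPLAN-old/.task/', mtime: 100 },
    { path: '.workflow/.lite-plan/LPLAN-auth/.task/', mtime: 200 }
  ]
])
// → '.workflow/.lite-plan/LPLAN-auth/.task/'
```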


Error Handling & Recovery


| Situation | Action | Recovery |
|-----------|--------|----------|
| `.task/` directory not found | Report error with path | Check path, run plan-converter |
| Invalid JSON in task file | Report filename and error | Fix task JSON file manually |
| Missing convergence | Report validation error | Run plan-converter to add convergence |
| Circular dependency | Stop, report cycle path | Fix dependencies in task JSON |
| Task execution fails | Record in events, ask user | Retry, skip, accept, or abort |
| Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept |
| Verification command timeout | Mark as unverified | Manual verification needed |
| File conflict during execution | Document in events | Resolve in dependency order |
| All tasks fail | Report, suggest plan review | Re-analyze or manual intervention |
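The circular-dependency check, and the topological order used for serial execution, can be sketched with Kahn's algorithm (a task shape of `{ id, depends_on }` is assumed; `topologicalOrder` is a hypothetical helper):

```javascript
// Return task IDs in dependency order, or throw when a cycle prevents ordering.
function topologicalOrder(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, (t.depends_on || []).length]))
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length > 0) {
    const id = queue.shift()
    order.push(id)
    for (const t of tasks) {
      if ((t.depends_on || []).includes(id)) {
        indegree.set(t.id, indegree.get(t.id) - 1)
        if (indegree.get(t.id) === 0) queue.push(t.id)
      }
    }
  }
  if (order.length !== tasks.length) {
    // Tasks never reaching indegree 0 are part of (or blocked by) a cycle.
    const cycle = tasks.filter(t => !order.includes(t.id)).map(t => t.id)
    throw new Error(`Circular dependency among: ${cycle.join(', ')}`)
  }
  return order
}

topologicalOrder([
  { id: 'IMPL-002', depends_on: ['IMPL-001'] },
  { id: 'IMPL-001', depends_on: [] }
])
// → ['IMPL-001', 'IMPL-002']
```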


Best Practices


Before Execution


1. Validate Plan: Use `--dry-run` first to check plan feasibility
2. Check Convergence: Ensure all tasks have meaningful convergence criteria
3. Review Dependencies: Verify execution order makes sense
4. Backup: Commit pending changes before starting
5. Convert First: Use `plan-converter` for non-`.task/` sources

During Execution


  1. Monitor Events: Check execution-events.md for real-time progress
  2. Handle Failures: Review convergence failures carefully before deciding
  3. Check Commits: Verify auto-commits are correct if enabled

After Execution


1. Review Summary: Check execution.md statistics and failed tasks
2. Verify Changes: Inspect that modified files match expectations
3. Check Task Files: Review `_execution` states in `.task/*.json` files
4. Next Steps: Use completion options for follow-up

Now execute unified-execute-with-file for: $PLAN