team-frontend-debug


Auto Mode

When `--yes` or `-y` is passed: auto-confirm task decomposition, skip interactive validation, use defaults.

Frontend Debug Team


Usage


```bash
$team-frontend-debug "Test features: login, dashboard, user profile at localhost:3000"
$team-frontend-debug "Bug: clicking save button on /settings causes white screen"
$team-frontend-debug -y "Test: 1. User registration 2. Email verification 3. Password reset"
$team-frontend-debug --continue "tfd-login-bug-20260308"
```

Flags:
  • `-y, --yes`: Skip all confirmations (auto mode)
  • `-c, --concurrency N`: Max concurrent agents within each wave (default: 2)
  • `--continue`: Resume existing session

Output Directory: `.workflow/.csv-wave/{session-id}/`

Core Output: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)


Overview


Dual-mode frontend debugging: feature-list testing or bug-report debugging, powered by Chrome DevTools MCP. Roles: tester (test-pipeline), reproducer (debug-pipeline), analyzer, fixer, verifier. Supports conditional skip (all tests pass -> no downstream tasks), iteration loops (analyzer requesting more evidence, verifier triggering re-fix), and Chrome DevTools-based browser interaction.
Execution Model: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)
+-------------------------------------------------------------------+
|              FRONTEND DEBUG WORKFLOW                                |
+-------------------------------------------------------------------+
|                                                                     |
|  Phase 0: Pre-Wave Interactive (Input Analysis)                     |
|     +- Parse user input (feature list or bug report)                |
|     +- Detect mode: test-pipeline or debug-pipeline                 |
|     +- Extract: base URL, features/steps, evidence plan             |
|     +- Output: refined requirements for decomposition               |
|                                                                     |
|  Phase 1: Requirement -> CSV + Classification                       |
|     +- Select pipeline (test or debug)                              |
|     +- Build dependency graph from pipeline definition              |
|     +- Classify tasks: csv-wave | interactive (exec_mode)           |
|     +- Compute dependency waves (topological sort)                  |
|     +- Generate tasks.csv with wave + exec_mode columns             |
|     +- User validates task breakdown (skip if -y)                   |
|                                                                     |
|  Phase 2: Wave Execution Engine (Extended)                          |
|     +- For each wave (1..N):                                        |
|     |   +- Execute pre-wave interactive tasks (if any)              |
|     |   +- Build wave CSV (filter csv-wave tasks for this wave)     |
|     |   +- Inject previous findings into prev_context column        |
|     |   +- spawn_agents_on_csv(wave CSV)                            |
|     |   +- Execute post-wave interactive tasks (if any)             |
|     |   +- Merge all results into master tasks.csv                  |
|     |   +- Conditional skip: TEST-001 with 0 issues -> done         |
|     |   +- Iteration: ANALYZE needs more evidence -> REPRODUCE-002  |
|     |   +- Re-fix: VERIFY fails -> FIX-002 -> VERIFY-002            |
|     +- discoveries.ndjson shared across all modes (append-only)     |
|                                                                     |
|  Phase 3: Post-Wave Interactive (Completion Action)                 |
|     +- Pipeline completion report with debug summary                |
|     +- Interactive completion choice (Archive/Keep/Export)           |
|     +- Final aggregation / report                                   |
|                                                                     |
|  Phase 4: Results Aggregation                                       |
|     +- Export final results.csv                                     |
|     +- Generate context.md with all findings                        |
|     +- Display summary: completed/failed/skipped per wave           |
|     +- Offer: view results | retry failed | done                    |
|                                                                     |
+-------------------------------------------------------------------+


Pipeline Modes


| Input Pattern | Pipeline | Flow |
|---|---|---|
| Feature list / function checklist / test items | test-pipeline | TEST -> ANALYZE -> FIX -> VERIFY |
| Bug report / error description / crash report | debug-pipeline | REPRODUCE -> ANALYZE -> FIX -> VERIFY |

Pipeline Selection Keywords


| Keywords | Pipeline |
|---|---|
| feature, test, list, check, verify functions, validate | test-pipeline |
| bug, error, crash, broken, white screen, not working | debug-pipeline |
| performance, slow, latency, memory leak | debug-pipeline (perf dimension) |
| Ambiguous / unclear | AskUserQuestion to clarify |

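For illustration, the keyword table above could be applied with a small classifier like the following. The function and list names are hypothetical, and one policy choice is ours: debug keywords win when both families match, and no match falls through to AskUserQuestion.

```javascript
// Illustrative sketch of keyword-based pipeline selection (names are ours).
const TEST_KEYWORDS = ['feature', 'test', 'list', 'check', 'verify functions', 'validate']
const DEBUG_KEYWORDS = ['bug', 'error', 'crash', 'broken', 'white screen', 'not working',
                        'performance', 'slow', 'latency', 'memory leak']

function detectPipeline(input) {
  const text = input.toLowerCase()
  const isTest = TEST_KEYWORDS.some(k => text.includes(k))
  const isDebug = DEBUG_KEYWORDS.some(k => text.includes(k))
  if (isTest && !isDebug) return 'test-pipeline'
  if (isDebug) return 'debug-pipeline'   // debug keywords take precedence
  return null                            // ambiguous -> AskUserQuestion
}
```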

Task Classification Rules


Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|---|---|---|
| csv-wave | spawn_agents_on_csv | One-shot, structured I/O, no multi-round interaction |
| interactive | spawn_agent / wait / send_input / close_agent | Multi-round, progress updates, inner loop |

Classification Decision:

| Task Property | Classification |
|---|---|
| Feature testing with inner loop (tester iterates over features) | csv-wave |
| Bug reproduction (single pass) | csv-wave |
| Root cause analysis (single pass) | csv-wave |
| Code fix implementation | csv-wave |
| Fix verification (single pass) | csv-wave |
| Conditional skip gate (evaluating TEST results) | interactive |
| Pipeline completion action | interactive |


CSV Schema


tasks.csv (Master State)


```csv
id,title,description,role,pipeline_mode,base_url,evidence_dimensions,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,issues_count,verdict,error
"TEST-001","Feature testing","PURPOSE: Test all features from list | Success: All features tested with evidence","tester","test-pipeline","http://localhost:3000","screenshot;console;network","","","csv-wave","1","pending","","","","",""
"ANALYZE-001","Root cause analysis","PURPOSE: Analyze discovered issues | Success: RCA for each issue","analyzer","test-pipeline","","console;network","TEST-001","TEST-001","csv-wave","2","pending","","","","",""
```

Columns:

| Column | Phase | Description |
|---|---|---|
| `id` | Input | Unique task identifier (PREFIX-NNN: TEST, REPRODUCE, ANALYZE, FIX, VERIFY) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS |
| `role` | Input | Role name: `tester`, `reproducer`, `analyzer`, `fixer`, `verifier` |
| `pipeline_mode` | Input | Pipeline: `test-pipeline` or `debug-pipeline` |
| `base_url` | Input | Target URL for browser-based tasks (empty for non-browser tasks) |
| `evidence_dimensions` | Input | Semicolon-separated evidence types: `screenshot`, `console`, `network`, `snapshot`, `performance` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `issues_count` | Output | Number of issues found (tester/analyzer), empty for others |
| `verdict` | Output | Verification verdict: `pass`, `pass_with_warnings`, `fail` (verifier only) |
| `error` | Output | Error message if failed (empty if success) |
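The implementation code later in this document calls a `parseCsv` helper that is never shown. A minimal sketch, assuming the quoted single-line row format used in the sample above (no embedded newlines); the helper names are ours:

```javascript
// Parse one CSV line, honoring quoted fields with embedded commas
// and doubled quotes ("") as an escaped quote.
function parseCsvLine(line) {
  const fields = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ }
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { fields.push(cur); cur = '' }
    else cur += ch
  }
  fields.push(cur)
  return fields
}

// Parse a whole CSV into row objects keyed by the header columns.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split('\n')
  const cols = parseCsvLine(header)
  return rows.map(r => Object.fromEntries(parseCsvLine(r).map((v, i) => [cols[i], v])))
}
```

Note that every field, including `wave`, comes back as a string; callers comparing against numbers must coerce.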
csv
id,title,description,role,pipeline_mode,base_url,evidence_dimensions,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,issues_count,verdict,error
"TEST-001","Feature testing","PURPOSE: Test all features from list | Success: All features tested with evidence","tester","test-pipeline","http://localhost:3000","screenshot;console;network","","","csv-wave","1","pending","","","","",""
"ANALYZE-001","Root cause analysis","PURPOSE: Analyze discovered issues | Success: RCA for each issue","analyzer","test-pipeline","","console;network","TEST-001","TEST-001","csv-wave","2","pending","","","","",""
列说明:
列名阶段描述
id
输入唯一任务标识符(前缀-编号:TEST, REPRODUCE, ANALYZE, FIX, VERIFY)
title
输入简短任务标题
description
输入包含PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS的详细任务描述
role
输入角色名称:
tester
,
reproducer
,
analyzer
,
fixer
,
verifier
pipeline_mode
输入流水线类型:
test-pipeline
debug-pipeline
base_url
输入基于浏览器的任务目标URL(非浏览器任务为空)
evidence_dimensions
输入分号分隔的证据类型:
screenshot
,
console
,
network
,
snapshot
,
performance
deps
输入分号分隔的依赖任务ID
context_from
输入分号分隔的、当前任务需要其发现结果的任务ID
exec_mode
输入
csv-wave
interactive
wave
计算生成Wave编号(拓扑排序生成,从1开始)
status
输出
pending
->
completed
/
failed
/
skipped
findings
输出关键发现或实现说明(最多500字符)
artifacts_produced
输出分号分隔的生成产物路径
issues_count
输出发现的问题数量(测试员/分析师使用,其他角色为空)
verdict
输出验证结果:
pass
,
pass_with_warnings
,
fail
(仅验证员使用)
error
输出失败时的错误信息(成功时为空)

Per-Wave CSV (Temporary)


Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

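The `prev_context` column is filled via a `buildPrevContext` helper that Phase 2 references only by name. A plausible sketch, assuming it concatenates the findings of the tasks listed in `context_from`:

```javascript
// Hypothetical sketch of buildPrevContext: gather upstream findings
// for the tasks this one declares in its context_from column.
function buildPrevContext(task, allTasks) {
  const ids = (task.context_from || '').split(';').filter(Boolean)
  return ids
    .map(id => allTasks.find(t => t.id === id))
    .filter(t => t && t.findings)
    .map(t => `[${t.id}] ${t.findings}`)
    .join('\n')
}
```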

Agent Registry (Interactive Agents)


| Agent | Role File | Pattern | Responsibility | Position |
|---|---|---|---|---|
| Conditional Skip Gate | `agents/conditional-skip-gate.md` | 2.3 (send_input cycle) | Evaluate TEST results and skip downstream if no issues | post-wave |
| Iteration Handler | `agents/iteration-handler.md` | 2.3 (send_input cycle) | Handle analyzer's need_more_evidence request | post-wave |
| Completion Handler | `agents/completion-handler.md` | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |

COMPACT PROTECTION: Agent files are execution documents. When context compression occurs, you MUST immediately Read the corresponding agent.md to reload.


Chrome DevTools MCP Tools


All browser inspection operations use Chrome DevTools MCP. Tester, reproducer, and verifier are primary consumers. These tools are available to CSV wave agents.
| Tool | Purpose |
|---|---|
| `mcp__chrome-devtools__navigate_page` | Navigate to target URL |
| `mcp__chrome-devtools__take_screenshot` | Capture visual state |
| `mcp__chrome-devtools__take_snapshot` | Capture DOM/a11y tree |
| `mcp__chrome-devtools__list_console_messages` | Read console logs |
| `mcp__chrome-devtools__get_console_message` | Get specific console message |
| `mcp__chrome-devtools__list_network_requests` | Monitor network activity |
| `mcp__chrome-devtools__get_network_request` | Inspect request/response detail |
| `mcp__chrome-devtools__performance_start_trace` | Start performance recording |
| `mcp__chrome-devtools__performance_stop_trace` | Stop and analyze trace |
| `mcp__chrome-devtools__click` | Simulate user click |
| `mcp__chrome-devtools__fill` | Fill form inputs |
| `mcp__chrome-devtools__hover` | Hover over elements |
| `mcp__chrome-devtools__evaluate_script` | Execute JavaScript in page |
| `mcp__chrome-devtools__wait_for` | Wait for element/text |
| `mcp__chrome-devtools__list_pages` | List open browser tabs |
| `mcp__chrome-devtools__select_page` | Switch active tab |
| `mcp__chrome-devtools__press_key` | Press keyboard keys |


Output Artifacts


| File | Purpose | Lifecycle |
|---|---|---|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 0/1 output: mode, features/steps, dimensions | Created in Phase 1 |
| `role-instructions/` | Per-role instruction templates for CSV agents | Created in Phase 1 |
| `artifacts/` | All deliverables: test reports, RCA reports, fix changes, verification reports | Created by agents |
| `evidence/` | Screenshots, snapshots, network logs, performance traces | Created by tester/reproducer/verifier |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
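The export step relies on a `toCsv` helper that is only referenced later in the implementation. A minimal sketch, assuming the quoted row format of the schema sample; the column-order default is illustrative:

```javascript
// Hypothetical counterpart to parseCsv: serialize row objects back to CSV.
// Every field is quoted; embedded quotes are doubled per CSV convention.
function toCsv(rows, cols = Object.keys(rows[0])) {
  const esc = v => `"${String(v ?? '').replace(/"/g, '""')}"`
  return [cols.join(','), ...rows.map(r => cols.map(c => esc(r[c])).join(','))].join('\n')
}
```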


Session Structure


.workflow/.csv-wave/{session-id}/
+-- tasks.csv                  # Master state (all tasks, both modes)
+-- results.csv                # Final results export
+-- discoveries.ndjson         # Shared discovery board (all agents)
+-- context.md                 # Human-readable report
+-- task-analysis.json         # Phase 1 analysis output
+-- wave-{N}.csv               # Temporary per-wave input (csv-wave only)
+-- role-instructions/         # Per-role instruction templates
|   +-- tester.md              # (test-pipeline)
|   +-- reproducer.md          # (debug-pipeline)
|   +-- analyzer.md
|   +-- fixer.md
|   +-- verifier.md
+-- artifacts/                 # All deliverables
|   +-- TEST-001-report.md
|   +-- TEST-001-issues.json
|   +-- ANALYZE-001-rca.md
|   +-- FIX-001-changes.md
|   +-- VERIFY-001-report.md
+-- evidence/                  # Browser evidence
|   +-- F-001-login-before.png
|   +-- F-001-login-after.png
|   +-- before-screenshot.png
|   +-- after-screenshot.png
|   +-- before-snapshot.txt
|   +-- after-snapshot.txt
|   +-- evidence-summary.json
+-- interactive/               # Interactive task artifacts
|   +-- {id}-result.json
+-- wisdom/                    # Cross-task knowledge
    +-- learnings.md
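Since `discoveries.ndjson` is an append-only board shared by all agents, each entry is one JSON object per line. A hypothetical record shape (the field names are illustrative, not prescribed by the workflow):

```javascript
// Build one NDJSON line for the shared discovery board.
// Field names (task_id, role, finding, ts) are our assumption.
function discoveryLine(taskId, role, finding) {
  return JSON.stringify({
    task_id: taskId,
    role,
    finding,
    ts: new Date().toISOString(),
  }) + '\n'
}
```

Appending (rather than rewriting) keeps concurrent agents from clobbering each other's entries.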


Implementation


Session Initialization


```javascript
// UTC+8 timestamp helper (session IDs are dated in UTC+8)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 2

// Strip flags to recover the bare requirement text
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

// Slug keeps ASCII alphanumerics and CJK characters, truncated to 40 chars
const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `tfd-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/evidence ${sessionFolder}/role-instructions ${sessionFolder}/interactive ${sessionFolder}/wisdom`)

Write(`${sessionFolder}/discoveries.ndjson`, '')
Write(`${sessionFolder}/wisdom/learnings.md`, '# Debug Learnings\n')
```
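To make the session-id derivation concrete, the slug logic above can be isolated as a pure function (the function name is ours; `dateStr` is passed in so the result is deterministic):

```javascript
// Isolated sketch of the session-id derivation used above.
function makeSessionId(requirement, dateStr) {
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')  // non-alphanumeric runs -> '-'
    .substring(0, 40)
  return `tfd-${slug}-${dateStr}`
}
```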


Phase 0: Pre-Wave Interactive (Input Analysis)


Objective: Parse user input, detect mode (test vs debug), extract parameters.
Workflow:
  1. Parse user input from $ARGUMENTS
  2. Check for existing sessions (continue mode):
    • Scan `.workflow/.csv-wave/tfd-*/tasks.csv` for sessions with pending tasks
    • If `--continue`: resume the specified or most recent session, skip to Phase 2
  3. Detect mode:
    | Input Pattern | Mode |
    |---|---|
    | Contains: feature, test, list, check, verify | test-pipeline |
    | Contains: bug, error, crash, broken, not working | debug-pipeline |
    | Ambiguous | AskUserQuestion to clarify |
  4. Extract parameters by mode:
    Test Mode:
    • `base_url`: URL in text or AskUserQuestion
    • `features`: Parse feature list (bullet points, numbered list, free text)
    • Generate structured feature items with id, name, url
    Debug Mode:
    • `bug_description`: Bug description text
    • `target_url`: URL in text or AskUserQuestion
    • `reproduction_steps`: Steps in text or AskUserQuestion
    • `evidence_plan`: Detect dimensions from keywords (UI, network, console, performance)
  5. Dimension Detection (debug mode):
    | Keywords | Dimension |
    |---|---|
    | render, style, display, layout, CSS | screenshot, snapshot |
    | request, API, network, timeout | network |
    | error, crash, exception | console |
    | slow, performance, lag, memory | performance |
    | interaction, click, input, form | screenshot, console |
Success Criteria:
  • Mode determined (test-pipeline or debug-pipeline)
  • Base URL and features/steps extracted
  • Evidence dimensions identified
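The dimension-detection step above can be sketched as a rule list; the keyword-to-dimension pairs come from the table, while the function and variable names are illustrative:

```javascript
// Map bug-description keywords to evidence dimensions (rules from the table above).
const DIMENSION_RULES = [
  { keywords: ['render', 'style', 'display', 'layout', 'css'], dims: ['screenshot', 'snapshot'] },
  { keywords: ['request', 'api', 'network', 'timeout'], dims: ['network'] },
  { keywords: ['error', 'crash', 'exception'], dims: ['console'] },
  { keywords: ['slow', 'performance', 'lag', 'memory'], dims: ['performance'] },
  { keywords: ['interaction', 'click', 'input', 'form'], dims: ['screenshot', 'console'] },
]

function detectDimensions(bugDescription) {
  const text = bugDescription.toLowerCase()
  const dims = new Set()
  for (const rule of DIMENSION_RULES) {
    if (rule.keywords.some(k => text.includes(k))) rule.dims.forEach(d => dims.add(d))
  }
  return [...dims].sort()
}
```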


Phase 1: Requirement -> CSV + Classification


Objective: Build task dependency graph, generate tasks.csv and per-role instruction templates.
Decomposition Rules:
  1. Pipeline Definition:
    Test Pipeline (4 tasks, conditional):
    TEST-001 -> [issues?] -> ANALYZE-001 -> FIX-001 -> VERIFY-001
                   |
                   +-- no issues -> Pipeline Complete (skip downstream)
    Debug Pipeline (4 tasks, linear with iteration):
    REPRODUCE-001 -> ANALYZE-001 -> FIX-001 -> VERIFY-001
                          ^                        |
                          |    (if fail)           |
                          +--- REPRODUCE-002 <-----+
  2. Task Description Template: Every task uses PURPOSE/TASK/CONTEXT/EXPECTED/CONSTRAINTS format with session path, base URL, and upstream artifact references
  3. Role Instruction Generation: Write per-role instruction templates to `role-instructions/{role}.md` using the base instruction template customized for each role
Classification Rules:

| Task Property | exec_mode |
|---|---|
| Feature testing (tester with inner loop) | csv-wave |
| Bug reproduction (single pass) | csv-wave |
| Root cause analysis (single pass) | csv-wave |
| Code fix (may need multiple passes) | csv-wave |
| Fix verification (single pass) | csv-wave |
| All standard pipeline tasks | csv-wave |

Wave Computation: Kahn's BFS topological sort with depth tracking.

```javascript
// Generate per-role instruction templates
const roles = pipelineMode === 'test-pipeline'
  ? ['tester', 'analyzer', 'fixer', 'verifier']
  : ['reproducer', 'analyzer', 'fixer', 'verifier']

for (const role of roles) {
  const instruction = generateRoleInstruction(role, sessionFolder, pipelineMode)
  Write(`${sessionFolder}/role-instructions/${role}.md`, instruction)
}

const tasks = buildTasksCsv(pipelineMode, requirement, sessionFolder, baseUrl, evidencePlan)
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
Write(`${sessionFolder}/task-analysis.json`, JSON.stringify(analysisResult, null, 2))
```

User Validation: Display task breakdown (skip if AUTO_YES).
Success Criteria:
  • tasks.csv created with valid schema and wave assignments
  • Role instruction templates generated
  • task-analysis.json written
  • No circular dependencies
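The wave computation named above (Kahn's BFS topological sort with depth tracking) can be sketched over the `deps` column as follows. This is a minimal sketch, assuming every dependency referenced in `deps` exists in the task list; processing the frontier level by level makes a task's wave equal its longest dependency chain, and leftover tasks signal a cycle.

```javascript
// Kahn's BFS topological sort with depth tracking: wave(t) = BFS level of t.
function computeWaves(tasks) {
  const wave = new Map()
  const indeg = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of (t.deps || '').split(';').filter(Boolean)) {
      indeg.set(t.id, indeg.get(t.id) + 1)
      dependents.get(dep).push(t.id)   // assumes dep exists in tasks
    }
  }
  let frontier = tasks.filter(t => indeg.get(t.id) === 0).map(t => t.id)
  let depth = 1
  while (frontier.length) {
    const next = []
    for (const id of frontier) {
      wave.set(id, depth)
      for (const d of dependents.get(id)) {
        indeg.set(d, indeg.get(d) - 1)
        if (indeg.get(d) === 0) next.push(d)
      }
    }
    frontier = next
    depth++
  }
  if (wave.size !== tasks.length) throw new Error('Circular dependency detected')
  return wave
}
```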


Phase 2: Wave Execution Engine (Extended)


Objective: Execute tasks wave-by-wave with conditional skip, iteration loops, and re-fix cycles.
javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
// CSV fields are strings -- coerce wave to a number for comparisons below
let maxWave = Math.max(...tasks.map(t => Number(t.wave)))
let fixRound = 0
const MAX_FIX_ROUNDS = 3
const MAX_REPRODUCE_ROUNDS = 2

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // Execute pre-wave interactive tasks (if any)
  for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
    // Determine agent file based on task type
    const agentFile = task.id.includes('skip') ? 'agents/conditional-skip-gate.md'
      : task.id.includes('iter') ? 'agents/iteration-handler.md'
      : 'agents/completion-handler.md'

    Read(agentFile)
    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    const waveInstruction = buildWaveInstruction(pendingCsvTasks, sessionFolder, wave)

    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: waveInstruction,
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 1200,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          artifacts_produced: { type: "string" },
          issues_count: { type: "string" },
          verdict: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }

    // Conditional Skip: TEST-001 with 0 issues
    const testResult = results.find(r => r.id === 'TEST-001')
    if (testResult && parseInt(testResult.issues_count || '0') === 0) {
      // Skip all downstream tasks
      tasks.filter(t => t.wave > wave && t.status === 'pending').forEach(t => {
        t.status = 'skipped'
        t.error = 'No issues found in testing -- skipped'
      })
      console.log('All features passed. No issues found. Pipeline complete.')
    }

    // Iteration: Analyzer needs more evidence
    const analyzerResult = results.find(r => r.id.startsWith('ANALYZE') && r.findings?.includes('need_more_evidence'))
    if (analyzerResult) {
      const reproduceRound = tasks.filter(t => t.id.startsWith('REPRODUCE')).length
      if (reproduceRound < MAX_REPRODUCE_ROUNDS) {
        const newRepId = `REPRODUCE-${String(reproduceRound + 1).padStart(3, '0')}`
        const newAnalyzeId = `ANALYZE-${String(tasks.filter(t => t.id.startsWith('ANALYZE')).length + 1).padStart(3, '0')}`
        tasks.push({
          id: newRepId, title: 'Supplemental evidence collection',
          description: `PURPOSE: Collect additional evidence per Analyzer request | Success: Targeted evidence collected`,
          role: 'reproducer', pipeline_mode: tasks[0].pipeline_mode,
          base_url: tasks[0].base_url, evidence_dimensions: tasks[0].evidence_dimensions,
          deps: '', context_from: analyzerResult.id,
          exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
          findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
        })
        tasks.push({
          id: newAnalyzeId, title: 'Re-analysis with supplemental evidence',
          description: `PURPOSE: Re-analyze with additional evidence | Success: Higher-confidence RCA`,
          role: 'analyzer', pipeline_mode: tasks[0].pipeline_mode,
          base_url: '', evidence_dimensions: '',
          deps: newRepId, context_from: `${analyzerResult.id};${newRepId}`,
          exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
          findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
        })
        // Update FIX task deps
        const fixTask = tasks.find(t => t.id === 'FIX-001' && t.status === 'pending')
        if (fixTask) fixTask.deps = newAnalyzeId
      }
    }

    // Re-fix: Verifier verdict = fail
    const verifyResult = results.find(r => r.id.startsWith('VERIFY') && r.verdict === 'fail')
    if (verifyResult && fixRound < MAX_FIX_ROUNDS) {
      fixRound++
      const newFixId = `FIX-${String(fixRound + 1).padStart(3, '0')}`
      const newVerifyId = `VERIFY-${String(fixRound + 1).padStart(3, '0')}`
      tasks.push({
        id: newFixId, title: `Re-fix (round ${fixRound + 1})`,
        description: `PURPOSE: Re-fix based on verification failure | Success: Issue resolved`,
        role: 'fixer', pipeline_mode: tasks[0].pipeline_mode,
        base_url: '', evidence_dimensions: '',
        deps: verifyResult.id, context_from: verifyResult.id,
        exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
        findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
      })
      tasks.push({
        id: newVerifyId, title: `Re-verify (round ${fixRound + 1})`,
        description: `PURPOSE: Re-verify after fix | Success: Bug resolved`,
        role: 'verifier', pipeline_mode: tasks[0].pipeline_mode,
        base_url: tasks[0].base_url, evidence_dimensions: tasks[0].evidence_dimensions,
        deps: newFixId, context_from: newFixId,
        exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
        findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
      })
    }
  }

  // Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // Recalculate maxWave (may have grown from iteration/re-fix)
  maxWave = Math.max(maxWave, ...tasks.map(t => t.wave))

  // Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
Success Criteria:
  • All waves executed in order
  • Conditional skip handled (TEST with 0 issues)
  • Iteration loops handled (analyzer need_more_evidence)
  • Re-fix cycles handled (verifier fail verdict)
  • discoveries.ndjson accumulated across all waves
  • Max iteration/fix bounds respected

目标: 按Wave执行任务,处理条件跳过、迭代循环和重新修复周期。
javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
let maxWave = Math.max(...tasks.map(t => t.wave))
let fixRound = 0
const MAX_FIX_ROUNDS = 3
const MAX_REPRODUCE_ROUNDS = 2

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 检查依赖关系——跳过依赖失败的任务
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // 执行Wave前的交互式任务(若有)
  for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
    // 根据任务类型确定Agent文件
    const agentFile = task.id.includes('skip') ? 'agents/conditional-skip-gate.md'
      : task.id.includes('iter') ? 'agents/iteration-handler.md'
      : 'agents/completion-handler.md'

    Read(agentFile)
    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    let result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      result = wait({ ids: [agent], timeout_ms: 120000 })  // 捕获催促后的最终输出
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // 为csv-wave任务构建prev_context
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    const waveInstruction = buildWaveInstruction(pendingCsvTasks, sessionFolder, wave)

    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: waveInstruction,
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 1200,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          artifacts_produced: { type: "string" },
          issues_count: { type: "string" },
          verdict: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // 将结果合并到主CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }

    // 条件跳过:TEST-001无问题
    const testResult = results.find(r => r.id === 'TEST-001')
    if (testResult && parseInt(testResult.issues_count || '0') === 0) {
      // 跳过所有下游任务
      tasks.filter(t => t.wave > wave && t.status === 'pending').forEach(t => {
        t.status = 'skipped'
        t.error = 'No issues found in testing -- skipped'
      })
      console.log('All features passed. No issues found. Pipeline complete.')
    }

    // 迭代:分析师需要更多证据
    const analyzerResult = results.find(r => r.id.startsWith('ANALYZE') && r.findings?.includes('need_more_evidence'))
    if (analyzerResult) {
      const reproduceRound = tasks.filter(t => t.id.startsWith('REPRODUCE')).length
      if (reproduceRound < MAX_REPRODUCE_ROUNDS) {
        const newRepId = `REPRODUCE-${String(reproduceRound + 1).padStart(3, '0')}`
        const newAnalyzeId = `ANALYZE-${String(tasks.filter(t => t.id.startsWith('ANALYZE')).length + 1).padStart(3, '0')}`
        tasks.push({
          id: newRepId, title: 'Supplemental evidence collection',
          description: `PURPOSE: Collect additional evidence per Analyzer request | Success: Targeted evidence collected`,
          role: 'reproducer', pipeline_mode: tasks[0].pipeline_mode,
          base_url: tasks[0].base_url, evidence_dimensions: tasks[0].evidence_dimensions,
          deps: '', context_from: analyzerResult.id,
          exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
          findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
        })
        tasks.push({
          id: newAnalyzeId, title: 'Re-analysis with supplemental evidence',
          description: `PURPOSE: Re-analyze with additional evidence | Success: Higher-confidence RCA`,
          role: 'analyzer', pipeline_mode: tasks[0].pipeline_mode,
          base_url: '', evidence_dimensions: '',
          deps: newRepId, context_from: `${analyzerResult.id};${newRepId}`,
          exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
          findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
        })
        // 更新FIX任务的依赖
        const fixTask = tasks.find(t => t.id === 'FIX-001' && t.status === 'pending')
        if (fixTask) fixTask.deps = newAnalyzeId
      }
    }

    // 重新修复:验证结果为失败
    const verifyResult = results.find(r => r.id.startsWith('VERIFY') && r.verdict === 'fail')
    if (verifyResult && fixRound < MAX_FIX_ROUNDS) {
      fixRound++
      const newFixId = `FIX-${String(fixRound + 1).padStart(3, '0')}`
      const newVerifyId = `VERIFY-${String(fixRound + 1).padStart(3, '0')}`
      tasks.push({
        id: newFixId, title: `Re-fix (round ${fixRound + 1})`,
        description: `PURPOSE: Re-fix based on verification failure | Success: Issue resolved`,
        role: 'fixer', pipeline_mode: tasks[0].pipeline_mode,
        base_url: '', evidence_dimensions: '',
        deps: verifyResult.id, context_from: verifyResult.id,
        exec_mode: 'csv-wave', wave: wave + 1, status: 'pending',
        findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
      })
      tasks.push({
        id: newVerifyId, title: `Re-verify (round ${fixRound + 1})`,
        description: `PURPOSE: Re-verify after fix | Success: Bug resolved`,
        role: 'verifier', pipeline_mode: tasks[0].pipeline_mode,
        base_url: tasks[0].base_url, evidence_dimensions: tasks[0].evidence_dimensions,
        deps: newFixId, context_from: newFixId,
        exec_mode: 'csv-wave', wave: wave + 2, status: 'pending',
        findings: '', artifacts_produced: '', issues_count: '', verdict: '', error: ''
      })
    }
  }

  // 更新主CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 清理临时文件
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 重新计算maxWave(可能因迭代/重新修复而增加)
  maxWave = Math.max(maxWave, ...tasks.map(t => t.wave))

  // 显示Wave摘要
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
成功标准:
  • 按顺序执行所有Wave
  • 处理条件跳过(TEST无问题)
  • 处理迭代循环(分析师need_more_evidence)
  • 处理重新修复周期(验证失败)
  • 跨Wave累积discoveries.ndjson
  • 遵守最大迭代/修复次数限制
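The wave loop above assumes `parseCsv`/`toCsv` helpers that round-trip the master task table. A minimal sketch (hypothetical; it does not handle quoted fields or embedded commas, which a production session CSV would need):

```javascript
// Minimal CSV helpers assumed by the wave loop (names taken from the
// pseudocode; no support for quoted fields or embedded commas/newlines).
function parseCsv(text) {
  const [headerLine, ...rows] = text.trim().split('\n')
  const headers = headerLine.split(',')
  return rows.map(row => {
    const cells = row.split(',')
    const task = {}
    headers.forEach((h, i) => {
      // Coerce wave to a number so strict comparisons like t.wave === wave hold.
      task[h] = h === 'wave' ? Number(cells[i]) : (cells[i] ?? '')
    })
    return task
  })
}

function toCsv(tasks) {
  const headers = Object.keys(tasks[0])
  const lines = tasks.map(t => headers.map(h => String(t[h] ?? '')).join(','))
  return [headers.join(','), ...lines].join('\n')
}
```

Coercing `wave` to a number matters: the loop filters with `t.wave === wave` and strict equality never matches if waves stay strings after parsing.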

Phase 3: Post-Wave Interactive (Completion Action)

Phase 3: Wave后交互式环节(完成操作)

Objective: Present the pipeline completion report with a debug summary, then take a completion action.
javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const pipelineMode = tasks[0]?.pipeline_mode

console.log(`
============================================
FRONTEND DEBUG COMPLETE

Pipeline: ${pipelineMode} | ${completed.length}/${tasks.length} tasks
Fix Rounds: ${fixRound}/${MAX_FIX_ROUNDS}
Session: ${sessionFolder}

Results:
${completed.map(t => `  [DONE] ${t.id} (${t.role}): ${t.findings?.substring(0, 80) || 'completed'}`).join('\n')}
============================================
`)

if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Debug pipeline complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up debugging" },
        { label: "Export Results", description: "Export debug report and patches" }
      ]
    }]
  })
}
Success Criteria:
  • User informed of debug pipeline results
  • Completion action taken

目标: 生成包含调试摘要的流水线完成报告。
javascript
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const pipelineMode = tasks[0]?.pipeline_mode

console.log(`
============================================
FRONTEND DEBUG COMPLETE

Pipeline: ${pipelineMode} | ${completed.length}/${tasks.length} tasks
Fix Rounds: ${fixRound}/${MAX_FIX_ROUNDS}
Session: ${sessionFolder}

Results:
${completed.map(t => `  [DONE] ${t.id} (${t.role}): ${t.findings?.substring(0, 80) || 'completed'}`).join('\n')}
============================================
`)

if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Debug pipeline complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up debugging" },
        { label: "Export Results", description: "Export debug report and patches" }
      ]
    }]
  })
}
成功标准:
  • 向用户告知调试流水线结果
  • 执行完成操作

Phase 4: Results Aggregation

Phase 4: 结果汇总

Objective: Generate final results and human-readable report.
javascript
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Frontend Debug Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Pipeline**: ${tasks[0]?.pipeline_mode}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => Number(t.wave) === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
    contextMd += `${icon} **${t.title}** [${t.role}]`
    if (t.verdict) contextMd += ` Verdict: ${t.verdict}`
    if (t.issues_count) contextMd += ` Issues: ${t.issues_count}`
    contextMd += ` ${t.findings || ''}\n\n`
  }
}

// Debug-specific sections
const verifyTasks = tasks.filter(t => t.role === 'verifier' && t.verdict)
if (verifyTasks.length > 0) {
  contextMd += `## Verification Results\n\n`
  for (const v of verifyTasks) {
    contextMd += `- **${v.id}**: ${v.verdict}\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)
console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
Success Criteria:
  • results.csv exported
  • context.md generated with debug summary
  • Summary displayed to user

目标: 生成最终结果和人类可读报告。
javascript
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Frontend Debug Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Pipeline**: ${tasks[0]?.pipeline_mode}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

const maxWave = Math.max(...tasks.map(t => Number(t.wave)))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => Number(t.wave) === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
    contextMd += `${icon} **${t.title}** [${t.role}]`
    if (t.verdict) contextMd += ` Verdict: ${t.verdict}`
    if (t.issues_count) contextMd += ` Issues: ${t.issues_count}`
    contextMd += ` ${t.findings || ''}\n\n`
  }
}

// 调试专用章节
const verifyTasks = tasks.filter(t => t.role === 'verifier' && t.verdict)
if (verifyTasks.length > 0) {
  contextMd += `## Verification Results\n\n`
  for (const v of verifyTasks) {
    contextMd += `- **${v.id}**: ${v.verdict}\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)
console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
成功标准:
  • 导出results.csv
  • 生成包含调试摘要的context.md
  • 向用户显示摘要

Shared Discovery Board Protocol

共享探索看板协议

All agents share a single discoveries.ndjson file.
Format: one JSON object per line (NDJSON):
jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TEST-001","type":"feature_tested","data":{"feature":"F-001","name":"Login","result":"fail","issues":2}}
{"ts":"2026-03-08T10:05:00Z","worker":"REPRODUCE-001","type":"bug_reproduced","data":{"url":"/settings","steps":3,"console_errors":2,"network_failures":1}}
{"ts":"2026-03-08T10:10:00Z","worker":"ANALYZE-001","type":"root_cause_found","data":{"category":"TypeError","file":"src/components/Settings.tsx","line":142,"confidence":"high"}}
{"ts":"2026-03-08T10:15:00Z","worker":"FIX-001","type":"file_modified","data":{"file":"src/components/Settings.tsx","change":"Added null check","lines_added":3}}
{"ts":"2026-03-08T10:20:00Z","worker":"VERIFY-001","type":"verification_result","data":{"verdict":"pass","original_error_resolved":true,"new_errors":0}}
Discovery Types:
| Type | Data Schema | Description |
|------|-------------|-------------|
| feature_tested | {feature, name, result, issues} | Feature test result |
| bug_reproduced | {url, steps, console_errors, network_failures} | Bug reproduction result |
| evidence_collected | {dimension, file, description} | Evidence artifact saved |
| root_cause_found | {category, file, line, confidence} | Root cause identified |
| file_modified | {file, change, lines_added} | Code fix applied |
| verification_result | {verdict, original_error_resolved, new_errors} | Fix verification result |
| issue_found | {file, line, severity, description} | Issue discovered |
Protocol:
  1. Agents MUST read discoveries.ndjson at start of execution
  2. Agents MUST append relevant discoveries during execution
  3. Agents MUST NOT modify or delete existing entries
  4. Deduplicate by the {type, data.file} key

所有Agent共享一个 discoveries.ndjson 文件。
格式: 每行一个JSON对象(NDJSON):
jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"TEST-001","type":"feature_tested","data":{"feature":"F-001","name":"Login","result":"fail","issues":2}}
{"ts":"2026-03-08T10:05:00Z","worker":"REPRODUCE-001","type":"bug_reproduced","data":{"url":"/settings","steps":3,"console_errors":2,"network_failures":1}}
{"ts":"2026-03-08T10:10:00Z","worker":"ANALYZE-001","type":"root_cause_found","data":{"category":"TypeError","file":"src/components/Settings.tsx","line":142,"confidence":"high"}}
{"ts":"2026-03-08T10:15:00Z","worker":"FIX-001","type":"file_modified","data":{"file":"src/components/Settings.tsx","change":"Added null check","lines_added":3}}
{"ts":"2026-03-08T10:20:00Z","worker":"VERIFY-001","type":"verification_result","data":{"verdict":"pass","original_error_resolved":true,"new_errors":0}}
发现类型:
| 类型 | 数据Schema | 描述 |
|------|-----------|------|
| feature_tested | {feature, name, result, issues} | 功能测试结果 |
| bug_reproduced | {url, steps, console_errors, network_failures} | Bug复现结果 |
| evidence_collected | {dimension, file, description} | 保存的证据产物 |
| root_cause_found | {category, file, line, confidence} | 识别的根因 |
| file_modified | {file, change, lines_added} | 应用的代码修复 |
| verification_result | {verdict, original_error_resolved, new_errors} | 修复验证结果 |
| issue_found | {file, line, severity, description} | 发现的问题 |
协议:
  1. Agent必须在执行开始时读取discoveries.ndjson
  2. Agent必须在执行期间追加相关发现
  3. Agent不得修改或删除现有条目
  4. 按 {type, data.file} 键去重
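The deduplication rule in step 4 can be sketched as a small helper that filters repeated discoveries by their `{type, data.file}` key (a hypothetical sketch; the real agents may implement this differently):

```javascript
// Deduplicate discovery entries by the {type, data.file} key (protocol step 4).
// Entries without a data.file field are kept untouched.
function dedupeDiscoveries(entries) {
  const seen = new Set()
  return entries.filter(e => {
    if (!e.data || e.data.file === undefined) return true
    const key = `${e.type}::${e.data.file}`
    if (seen.has(key)) return false
    seen.add(key)
    return true
  })
}
```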

Conditional Skip Logic

条件跳过逻辑

After TEST-001 completes, evaluate issues:
| Condition | Action |
|-----------|--------|
| issues_count === 0 | Skip ANALYZE/FIX/VERIFY. Pipeline complete with all-pass. |
| Only low-severity warnings | AskUserQuestion: fix warnings or complete |
| High/medium severity issues | Proceed with ANALYZE -> FIX -> VERIFY |

TEST-001完成后,评估问题数量:
| 条件 | 操作 |
|------|------|
| issues_count === 0 | 跳过ANALYZE/FIX/VERIFY。流水线以全通过结束。 |
| 仅存在低严重性警告 | 询问用户:修复警告或结束流程 |
| 存在高/中严重性问题 | 继续执行ANALYZE -> FIX -> VERIFY |
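The decision table above can be expressed as a small gate function (a sketch; issue objects with a `severity` field are an assumption about the tester's output shape):

```javascript
// Map the post-TEST issue list to an action. The `severity` field per
// issue is assumed; the real tester output may use a different shape.
function conditionalSkipAction(issues) {
  if (issues.length === 0) return 'skip_downstream'               // all-pass
  if (issues.every(i => i.severity === 'low')) return 'ask_user'  // warnings only
  return 'proceed'                                                // high/medium issues
}
```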

Iteration Rules

迭代规则

| Trigger | Condition | Action | Max |
|---------|-----------|--------|-----|
| Analyzer -> Reproducer | Confidence < 50% | Create REPRODUCE-002 -> ANALYZE-002 | 2 reproduction rounds |
| Verifier -> Fixer | Verdict = fail | Create FIX-002 -> VERIFY-002 | 3 fix rounds |
| Max iterations reached | Round >= max | Report to user for manual intervention | -- |

| 触发条件 | 判定标准 | 操作 | 最大次数 |
|---------|---------|------|---------|
| 分析师 -> 复现员 | 置信度 < 50% | 创建REPRODUCE-002 -> ANALYZE-002 | 2次复现轮次 |
| 验证员 -> 修复员 | 验证结果 = fail | 创建FIX-002 -> VERIFY-002 | 3次修复轮次 |
| 达到最大迭代次数 | 轮次 >= 最大次数 | 告知用户进行手动干预 | -- |
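The re-dispatch IDs in the table follow the zero-padded numbering used when Phase 2 pushes new tasks. A hypothetical helper that derives the next round ID and enforces the bound in one place:

```javascript
// Derive the next zero-padded round ID for a role prefix, or null once
// the bound is reached (hypothetical helper mirroring the padStart logic
// used when new REPRODUCE/FIX tasks are pushed in Phase 2).
function nextRoundId(prefix, tasks, maxRounds) {
  const existing = tasks.filter(t => t.id.startsWith(prefix)).length
  if (existing >= maxRounds) return null  // escalate to the user instead
  return `${prefix}-${String(existing + 1).padStart(3, '0')}`
}
```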

Error Handling

错误处理

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| All features pass test | Skip downstream tasks, report success |
| Bug not reproducible | Report failure, ask user for more details |
| Browser not available | Report error, suggest manual reproduction steps |
| Analysis inconclusive | Request more evidence via iteration loop |
| Fix introduces regression | Verifier reports fail, dispatch re-fix |
| Max iterations reached | Escalate to user for manual intervention |
| Continue mode: no session found | List available sessions, prompt user to select |

| 错误 | 解决方式 |
|------|---------|
| 循环依赖 | 在Wave计算时检测,抛出错误信息并终止 |
| CSV Agent超时 | 在结果中标记为失败,继续执行Wave |
| CSV Agent执行失败 | 标记为失败,后续Wave中跳过依赖该任务的任务 |
| 交互式Agent超时 | 通过send_input催促收敛,若仍超时则关闭 |
| Wave中所有Agent失败 | 记录错误,提供重试或终止选项 |
| CSV解析错误 | 执行前验证CSV格式,显示错误行号 |
| discoveries.ndjson损坏 | 忽略格式错误的行,继续使用有效条目 |
| 所有功能测试通过 | 跳过下游任务,报告成功 |
| Bug无法复现 | 报告失败,询问用户获取更多细节 |
| 浏览器不可用 | 报告错误,建议手动复现步骤 |
| 分析无结论 | 通过迭代循环请求更多证据 |
| 修复引入回归 | 验证员报告失败,触发重新修复 |
| 达到最大迭代次数 | 升级给用户进行手动干预 |
| 恢复模式:未找到会话 | 列出可用会话,提示用户选择 |
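The circular-dependency check in the first row can be implemented with a standard DFS over the `deps` column before wave computation (a sketch under the task shape used elsewhere in this skill):

```javascript
// Detect a dependency cycle among tasks before wave computation.
// `deps` is the ';'-separated id list from the deps column.
function hasCircularDependency(tasks) {
  const deps = new Map(tasks.map(t => [t.id, (t.deps || '').split(';').filter(Boolean)]))
  const state = new Map()  // unset = unvisited, 1 = on stack, 2 = done
  const visit = id => {
    if (state.get(id) === 1) return true   // back edge: cycle found
    if (state.get(id) === 2) return false
    state.set(id, 1)
    for (const d of deps.get(id) || []) if (visit(d)) return true
    state.set(id, 2)
    return false
  }
  return tasks.some(t => visit(t.id))
}
```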

Core Rules

核心规则

  1. Start Immediately: First action is session initialization, then Phase 0/1
  2. Wave Order is Sacred: Never execute wave N before wave N-1 completes and results are merged
  3. CSV is Source of Truth: Master tasks.csv holds all state (both csv-wave and interactive)
  4. CSV First: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
  5. Context Propagation: prev_context built from master CSV, not from memory
  6. Discovery Board is Append-Only: Never clear, modify, or recreate discoveries.ndjson
  7. Skip on Failure: If a dependency failed, skip the dependent task
  8. Conditional Skip: If TEST finds 0 issues, skip all downstream tasks
  9. Iteration Bounds: Max 2 reproduction rounds, max 3 fix rounds
  10. Cleanup Temp Files: Remove wave-{N}.csv after results are merged
  11. DO NOT STOP: Continuous execution until all waves complete or all remaining tasks are skipped

  1. 立即启动: 第一个操作是会话初始化,然后执行Phase 0/1
  2. Wave顺序不可打破: 绝不在Wave N-1完成并合并结果前执行Wave N
  3. CSV是唯一可信源: 主tasks.csv保存所有状态(csv-wave和interactive任务)
  4. 优先使用CSV: 任务默认使用csv-wave模式;仅当交互模式需要时才使用interactive
  5. 上下文传播: prev_context从主CSV构建,而非内存
  6. 探索看板仅可追加: 绝不清除、修改或重建discoveries.ndjson
  7. 失败则跳过: 若依赖任务失败,跳过当前任务
  8. 条件跳过: 若TEST未发现问题,跳过所有下游任务
  9. 迭代次数限制: 最多2次复现轮次,最多3次修复轮次
  10. 清理临时文件: 合并结果后删除wave-{N}.csv
  11. 持续执行: 持续执行直到所有Wave完成或剩余任务均被跳过

Coordinator Role Constraints (Main Agent)

协调器角色约束(主Agent)

CRITICAL: The coordinator (main agent executing this skill) is responsible for orchestration only, NOT implementation.
  1. Coordinator Does NOT Execute Code: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    • Spawns agents with task assignments
    • Waits for agent callbacks
    • Merges results and coordinates workflow
    • Manages workflow transitions between phases
  2. Patient Waiting is Mandatory: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    • Wait patiently for wait() calls to complete
    • NOT skip workflow steps due to perceived delays
    • NOT assume agents have failed just because they're taking time
    • Trust the timeout mechanisms defined in the skill
  3. Use send_input for Clarification: When agents need guidance or appear stuck, the coordinator MUST:
    • Use send_input() to ask questions or provide clarification
    • NOT skip the agent or move to next phase prematurely
    • Give agents opportunity to respond before escalating
    • Example: send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })
  4. No Workflow Shortcuts: The coordinator MUST NOT:
    • Skip phases or stages defined in the workflow
    • Bypass required approval or review steps
    • Execute dependent tasks before prerequisites complete
    • Assume task completion without explicit agent callback
    • Make up or fabricate agent results
  5. Respect Long-Running Processes: This is a complex multi-agent workflow that requires patience:
    • Total execution time may range from 30-90 minutes or longer
    • Each phase may take 10-30 minutes depending on complexity
    • The coordinator must remain active and attentive throughout the entire process
    • Do not terminate or skip steps due to time concerns
关键: 协调器(执行此技能的主Agent)仅负责编排,不负责具体实现。
  1. 协调器不得执行代码: 主Agent不得直接编写、修改或实现任何代码。所有实现工作均委托给生成的团队Agent。协调器仅需:
    • 生成带任务分配的Agent
    • 等待Agent回调
    • 合并结果并协调工作流
    • 管理各阶段间的工作流转换
  2. 必须耐心等待: Agent执行需要大量时间(通常每个阶段10-30分钟,有时更长)。协调器必须:
    • 耐心等待 wait() 调用完成
    • 不得因感知到延迟而跳过工作流步骤
    • 不得仅因执行时间长就假设Agent失败
    • 信任技能中定义的超时机制
  3. 使用send_input进行澄清: 当Agent需要指导或似乎卡住时,协调器必须:
    • 使用 send_input() 询问问题或提供澄清
    • 不得提前跳过Agent或进入下一阶段
    • 在升级前给Agent响应的机会
    • 示例:send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })
  4. 不得走工作流捷径: 协调器不得:
    • 跳过工作流中定义的阶段或步骤
    • 绕过必要的审批或审查步骤
    • 在前置条件完成前执行依赖任务
    • 无Agent明确回调就假设任务完成
    • 编造或伪造Agent结果
  5. 尊重长时间运行的流程: 这是一个复杂的多Agent工作流,需要耐心:
    • 总执行时间可能在30-90分钟或更长
    • 每个阶段的耗时可能因复杂度不同而在10-30分钟之间
    • 协调器必须在整个过程中保持活跃和专注
    • 不得因时间问题终止或跳过步骤