ln-1000-pipeline-orchestrator
Paths: File paths (`shared/`, `references/`) are relative to the skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for the repo root (`../ln-*`).
Pipeline Orchestrator
Meta-orchestrator that reads the kanban board, shows available Stories, lets the user pick one to process, and drives it through the full pipeline (task planning -> validation -> execution -> quality gate) using Claude Code Agent Teams.
Purpose & Scope
- Parse kanban board and show available Stories for user selection
- Ask business questions in ONE batch before execution; make technical decisions autonomously
- Spawn worker via TeamCreate for selected Story (single worker)
- Drive selected Story through 4 stages: ln-300 -> ln-310 -> ln-400 -> ln-500
- Sync with develop + generate report after quality gate PASS; merge only on user confirmation
- Handle failures, retries, and escalation to user
Hierarchy
L0: ln-1000-pipeline-orchestrator (TeamCreate lead, delegate mode, single story)
+-- Worker (fresh per stage, shutdown after completion, one at a time)
| All stages: Opus 4.6 | Effort: Stage 0 = low | Stage 1,2 = medium | Stage 3 = medium
+-- L1: ln-300 / ln-310 / ln-400 / ln-500 (invoked via Skill tool, as-is)
+-- L2/L3: existing hierarchy unchanged

Key principle: ln-1000 does NOT modify existing skills. Workers invoke ln-300/ln-310/ln-400/ln-500 through the Skill tool exactly as a human operator would.
MCP Tool Preferences
When `mcp__hashline-edit__*` tools are available, workers MUST prefer them over standard file tools:

| Standard Tool | Hashline-Edit Replacement | Why |
|---|---|---|
| | | Hash-prefixed lines enable precise edits |
| | | Atomic validation prevents corruption |
| | | Same behavior, consistent interface |
| | | Results include hashline refs for follow-up edits |

Fallback: If the hashline-edit MCP is unavailable (tools not in ToolSearch), use standard tools. No error.
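The prefer-with-silent-fallback rule can be sketched as a small helper. This is a sketch only: real discovery goes through ToolSearch, and the `mcp__hashline-edit__*` tool names below are illustrative assumptions, not confirmed API names.

```python
def pick_tool(action: str, available: set) -> str:
    """Prefer an mcp__hashline-edit__* tool when discovered; otherwise
    fall back to the standard tool silently (no error, per the spec)."""
    # Hypothetical replacement names -- the real ones come from ToolSearch.
    preferred = {
        "read": "mcp__hashline-edit__read",
        "edit": "mcp__hashline-edit__edit",
        "write": "mcp__hashline-edit__write",
    }
    standard = {"read": "Read", "edit": "Edit", "write": "Write"}
    candidate = preferred.get(action)
    if candidate and candidate in available:
        return candidate
    return standard[action]
```
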
Task Storage Mode
MANDATORY READ: Load `shared/references/storage_mode_detection.md` for Linear vs File mode detection and operations.

When to Use
- One Story ready for processing — user picks which one
- Need end-to-end automation: task planning -> validation -> execution -> quality gate -> merge confirmation
- Want controlled Story processing with user confirmation before merge
Pipeline: 4-Stage State Machine
MANDATORY READ: Load `references/pipeline_states.md` for transition rules and guards.

Backlog --> Stage 0 (ln-300) --> Backlog --> Stage 1 (ln-310) --> Todo
(no tasks) create tasks (tasks exist) validate |
| NO-GO |
v v
[retry/ask] Stage 2 (ln-400)
|
v
To Review
|
v
Stage 3 (ln-500)
| |
PASS FAIL
| v
PENDING_MERGE To Rework -> Stage 2
(sync+report) (max 2 cycles)
|
[user confirms?]
yes | no
v v
Done Done
(merged) (branch kept)

| Stage | Skill | Input Status | Output Status |
|---|---|---|---|
| 0 | ln-300-task-coordinator | Backlog (no tasks) | Backlog (tasks created) |
| 1 | ln-310-story-validator | Backlog (tasks exist) | Todo |
| 2 | ln-400-story-executor | Todo / To Rework | To Review |
| 3 | ln-500-story-quality-gate | To Review | Done / To Rework |
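The mapping table above translates directly into a routing function. A minimal sketch of the routing logic, based solely on kanban status and task presence (the full guards live in `references/pipeline_states.md`):

```python
def determine_stage(status: str, has_tasks: bool) -> int:
    """Route a story to its pipeline stage from kanban status + task presence."""
    if status == "Backlog":
        return 1 if has_tasks else 0   # tasks exist -> validate, else plan
    if status in ("Todo", "To Rework"):
        return 2                        # execution (including rework cycles)
    if status == "To Review":
        return 3                        # quality gate
    raise ValueError(f"Story in '{status}' is not routable to a stage")
```
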
Team Lead Responsibilities
This skill runs as a team lead in delegate mode. The agent executing ln-1000 MUST NOT write code or invoke skills directly.
| Responsibility | Description |
|---|---|
| Coordinate | Assign stages to worker, process completion reports, advance pipeline |
| Verify board | Re-read kanban/Linear after each stage. Workers update via skills; lead ASSERTs expected state transitions |
| Escalate | Route failures to user when retry limits exceeded |
| Sync & confirm | Sync with develop after quality gate PASS, ask user for merge confirmation |
| Shutdown | Graceful worker shutdown, team cleanup |
NEVER do as lead: Invoke ln-300/ln-310/ln-400/ln-500 directly. Edit source code. Skip quality gate. Force-kill workers.
Workflow
Phase 0: Recovery Check
IF .pipeline/state.json exists AND complete == false:
# Previous run interrupted — resume from saved state
1. Read .pipeline/state.json → restore: selected_story_id, story_state, worker_map,
quality_cycles, validation_retries, crash_count,
story_results, infra_issues, worktree_map,
stage_timestamps, git_stats, pipeline_start_time, readiness_scores, merge_status
2. Read .pipeline/checkpoint-{selected_story_id}.json → validate story_state consistency
(checkpoint.stage should match story_state[id])
3. Re-read kanban board → verify selected story still exists
4. Read team config → verify worker_map members still exist
5. Set suspicious_idle = false (ephemeral, reset on recovery)
6. IF story_state[id] == "PENDING_MERGE":
# Re-ask user for merge confirmation (Phase 4 post-loop)
Jump to Phase 4 POST-LOOP
7. IF story_state[id] IN ("STAGE_0".."STAGE_3"):
IF checkpoint.agentId exists → Task(resume: checkpoint.agentId)
ELSE → respawn worker with checkpoint context (see checkpoint_format.md)
8. Jump to Phase 4 event loop
IF .pipeline/state.json NOT exists OR complete == true:
# Fresh start — proceed to Phase 1
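The Phase 0 decision above can be sketched in code. Field names follow the state.json schema from section 3.2; the return values name the jump targets described above.

```python
import json
from pathlib import Path

def recovery_action(pipeline_dir: Path) -> str:
    """Decide where to resume: fresh start, merge confirmation, or event loop."""
    state_file = pipeline_dir / "state.json"
    if not state_file.exists():
        return "phase_1_fresh_start"
    state = json.loads(state_file.read_text())
    if state.get("complete"):
        return "phase_1_fresh_start"
    story_id = state["selected_story_id"]
    story_state = state["story_state"].get(story_id, "")
    if story_state == "PENDING_MERGE":
        return "phase_4_post_loop"      # re-ask merge confirmation
    return "phase_4_event_loop"         # resume or respawn worker
```
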
Phase 1: Discovery, Kanban Parsing & Story Selection
MANDATORY READ: Load `references/kanban_parser.md` for parsing patterns.

- Auto-discover `docs/tasks/kanban_board.md` (or Linear API via storage mode detection)
- Extract project brief from target project's CLAUDE.md (NOT skills repo):
  project_brief = { name: <from H1 or first line>, tech: <from Development Commands / tech references>, type: <inferred: "CLI", "API", "web app", "library">, key_rules: <2-3 critical rules> }
  IF not found: project_brief = { name: basename(project_root), tech: "unknown" }
- Parse all status sections: Backlog, Todo, In Progress, To Review, To Rework
- Extract Story list with: ID, title, status, Epic name, task presence
- Filter: skip Stories in Done, Postponed, Canceled
- Detect task presence per Story:
  - Has `_(tasks not created yet)_` → no tasks → Stage 0
  - Has task lines (4-space indent) → tasks exist → Stage 1+
- Determine target stage per Story (see Stage-to-Status Mapping in `references/pipeline_states.md`)
- Show available Stories and ask user to pick ONE:
  Project: {project_brief.name} ({project_brief.tech})
  Available Stories:
  | # | Story | Status | Stage | Skill | Epic |
  |---|-------|--------|-------|-------|------|
  | 1 | PROJ-42: Auth endpoint | To Review | 3 | ln-500 | Epic: Auth |
  | 2 | PROJ-55: CRUD users | Backlog (no tasks) | 0 | ln-300 | Epic: Users |
  | 3 | PROJ-60: Dashboard | Todo | 2 | ln-400 | Epic: UI |
  AskUserQuestion: "Which story to process? Enter # or Story ID."
- Store selected story. Extract story brief for selected story only:
  description = get_issue(selected_story.id).description
  story_briefs[id] = parse <!-- ORCHESTRATOR_BRIEF_START/END --> markers
  IF no markers: story_briefs[id] = { tech: project_brief.tech, keyFiles: "unknown" }
Phase 2: Pre-flight Questions (ONE batch)
- Load selected Story description (metadata only)
- Scan for business ambiguities — questions where:
- Answer cannot be found in codebase, docs, or standards
- Answer requires business/product decision (payment provider, auth flow, UI preference)
- Collect ALL business questions into single AskUserQuestion:
"Before starting Story {selected_story.id}: Which payment provider? (Stripe/PayPal/both) Auth flow — JWT or session-based?"
- Technical questions — resolve using project_brief:
  - Library versions: MCP Ref / Context7 (for the `project_brief.tech` ecosystem)
  - Architecture patterns: `project_brief.key_rules`
  - Standards compliance: ln-310 Phase 2 handles this
- Store answers in shared context (pass to worker via spawn prompt)
Skip Phase 2 if no business questions found. Proceed directly to Phase 3.
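The one-batch rule amounts to: classify each ambiguity, ask all business questions once, and resolve everything else autonomously. A sketch, assuming each detected ambiguity has already been tagged with a `kind` (real detection scans codebase, docs, and standards):

```python
def split_questions(ambiguities: list) -> tuple:
    """Partition ambiguities into ONE user batch (business) and
    autonomously resolved items (technical)."""
    business, technical = [], []
    for item in ambiguities:
        if item["kind"] == "business":      # payment provider, auth flow, UI...
            business.append(item["question"])
        else:                               # versions, patterns, standards
            technical.append(item["question"])
    return business, technical
```

If `business` is empty, Phase 2 is skipped entirely, matching the rule above.
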
Phase 3: Team Setup
MANDATORY READ: Load `references/settings_template.json` for required permissions and hooks.

3.0 Linear Status Cache (Linear mode only)
IF storage_mode == "linear":
statuses = list_issue_statuses(teamId=team_id)
status_cache = {status.name: status.id FOR status IN statuses}
REQUIRED = ["Backlog", "Todo", "In Progress", "To Review", "To Rework", "Done"]
missing = [s for s in REQUIRED if s not in status_cache]
IF missing: ABORT "Missing Linear statuses: {missing}. Configure workflow."
    # Persist in state.json (added in 3.2) and pass to workers via prompt CONTEXT
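The same cache-and-validate step in plain code. `list_issue_statuses` above is the Linear MCP call; here its result is taken as plain data so the validation logic stands alone:

```python
REQUIRED = ["Backlog", "Todo", "In Progress", "To Review", "To Rework", "Done"]

def build_status_cache(statuses: list) -> dict:
    """Map status name -> id; abort early if the Linear workflow lacks states."""
    cache = {s["name"]: s["id"] for s in statuses}
    missing = [name for name in REQUIRED if name not in cache]
    if missing:
        raise RuntimeError(f"Missing Linear statuses: {missing}. Configure workflow.")
    return cache
```
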
3.1 Pre-flight: Settings Verification
Verify in target project's `.claude/settings.local.json`:
- `defaultMode` = `"bypassPermissions"` (required for workers)
- `hooks.Stop` → `pipeline-keepalive.sh` registered
- `hooks.TeammateIdle` → `worker-keepalive.sh` registered

If missing or incomplete → copy from `references/settings_template.json` and install hook scripts via Bash `cp` (NOT the Write tool — Write produces CRLF on Windows, breaking the `#!/bin/bash` shebang):
Preflight: verify dependencies
which jq || ABORT "jq is required for pipeline hooks. Install: https://jqlang.github.io/jq/download/"
mkdir -p .claude/hooks
Bash: cp {skill_repo}/ln-1000-pipeline-orchestrator/references/hooks/pipeline-keepalive.sh .claude/hooks/pipeline-keepalive.sh
Bash: cp {skill_repo}/ln-1000-pipeline-orchestrator/references/hooks/worker-keepalive.sh .claude/hooks/worker-keepalive.sh
**Hook troubleshooting:** If hooks fail with "No such file or directory":
1. Verify hook commands use `bash .claude/hooks/script.sh` (relative path, no env vars — `$CLAUDE_PROJECT_DIR` is NOT available in hook shell context)
2. Verify `.claude/hooks/*.sh` files exist and have `#!/bin/bash` shebang
3. On Windows: ensure LF line endings in .sh files (see hook installation above — use Bash `cp`, not the Write tool)
3.2 Initialize Pipeline State
Write .pipeline/state.json (full schema — see checkpoint_format.md):
{ "complete": false, "selected_story_id": "<selected story ID>",
"stories_remaining": 1, "last_check": <now>,
"story_state": {}, "worker_map": {}, "quality_cycles": {}, "validation_retries": {},
"crash_count": {},
"worktree_map": {}, "story_results": {}, "infra_issues": [],
"status_cache": {<status_name: status_uuid>}, # Empty object if file mode
"stage_timestamps": {}, "git_stats": {}, "pipeline_start_time": <now>, "readiness_scores": {},
"skill_repo_path": <absolute path to skills repository root>,
"team_name": "pipeline-{YYYY-MM-DD}",
"business_answers": {<question: answer pairs from Phase 2, or {} if skipped>},
"merge_status": "pending",
"storage_mode": "file"|"linear",
"project_brief": {<name, tech, type, key_rules from Phase 1 step 2>},
"story_briefs": {<storyId: {tech, keyFiles, approach, complexity} from Phase 1 step 9>} } # Recovery-critical
Write .pipeline/lead-session.id with current session_id # Stop hook uses this to only keep lead alive
3.2a Sleep Prevention (Windows only)
IF platform == "win32":
Bash: cp {skill_repo}/ln-1000-pipeline-orchestrator/references/hooks/prevent-sleep.ps1 .claude/hooks/prevent-sleep.ps1
Bash: powershell -ExecutionPolicy Bypass -WindowStyle Hidden -File .claude/hooks/prevent-sleep.ps1 &
sleep_prevention_pid = $!
# Script polls .pipeline/state.json — self-terminates when complete=true
    # Fallback: Windows auto-releases execution state on process exit
3.3 Create Team & Prepare Branch
Worktree: Worker gets its own worktree with a named feature branch (`feature/{id}-{slug}`). Created in Phase 4 before spawning.

Model routing: All stages use `model: "opus"`. Effort routing via prompt: `effort_for_stage(0) = "low"`, `effort_for_stage(1) = "medium"`, `effort_for_stage(2) = "medium"`, `effort_for_stage(3) = "medium"`. Crash recovery = same as target stage. Thinking mode: always enabled (adaptive).

- Ensure `develop` branch exists:
  IF `develop` branch not found locally or on origin:
    git branch develop $(git symbolic-ref --short HEAD)  # Create from current default branch
    git push -u origin develop
  git checkout develop  # Start pipeline from develop
- Create team: TeamCreate(team_name: "pipeline-{YYYY-MM-DD}-{HHmm}")

Worker is spawned in Phase 4 after worktree creation.
Phase 4: Execution Loop
MANDATORY READ: Load `references/message_protocol.md` for exact message formats and parsing regex.
MANDATORY READ: Load `references/worker_health_contract.md` for crash detection and respawn rules.

Lead operates in delegate mode — coordination only, no code writing.

MANDATORY READ: Load `references/checkpoint_format.md` for checkpoint schema and resume protocol.

--- INITIALIZATION (single story) ---
selected_story = <from Phase 1 selection>
quality_cycles[selected_story.id] = 0 # FAIL→retry counter, limit 2
validation_retries[selected_story.id] = 0 # NO-GO retry counter, limit 1
crash_count[selected_story.id] = 0 # crash respawn counter, limit 1
suspicious_idle = false # crash detection flag
story_state[selected_story.id] = "QUEUED"
worker_map = {} # {storyId: worker_name}
worktree_map = {} # {storyId: worktree_dir | null}
story_results = {} # {storyId: {stage0: "...", ...}} — for pipeline report
infra_issues = [] # [{phase, type, message}] — infrastructure problems
heartbeat_count = 0 # Heartbeat cycle counter (ephemeral, resets on recovery)
stage_timestamps = {} # {storyId: {stage_N_start: ISO, stage_N_end: ISO}}
git_stats = {} # {storyId: {lines_added, lines_deleted, files_changed}}
pipeline_start_time = now() # ISO 8601 — wall-clock start for duration metrics
readiness_scores = {} # {storyId: readiness_score} — from Stage 1 GO
Helper functions — see phase4_heartbeat.md Helper Functions for full definitions
skill_name_from_stage(stage), predict_next_step(stage), stage_duration(id, N)
--- SPAWN SINGLE WORKER ---
id = selected_story.id
target_stage = determine_stage(selected_story) # See pipeline_states.md guards
worker_name = "story-{id}-s{target_stage}"
worktree_dir = ".worktrees/story-{id}"
git worktree add -b feature/{id}-{slug} {worktree_dir} develop
worktree_map[id] = worktree_dir
project_root = Bash("pwd") # Absolute path for PIPELINE_DIR in worktree mode
Task(name: worker_name, team_name: "pipeline-{date}",
model: "opus", mode: "bypassPermissions",
subagent_type: "general-purpose",
prompt: worker_prompt(selected_story, target_stage, business_answers, worktree_dir, project_root))
worker_map[id] = worker_name
story_state[id] = "STAGE_{target_stage}"
stage_timestamps[id] = {}
stage_timestamps[id]["stage_{target_stage}_start"] = now()
Write .pipeline/worker-{worker_name}-active.flag # For TeammateIdle hook
Update .pipeline/state.json
SendMessage(recipient: worker_name,
content: "Execute Stage {target_stage} for {id}",
summary: "Stage {target_stage} assignment")
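The naming conventions in the spawn block can be pinned down in one helper. This is a sketch assuming a simple kebab-case slug; the actual `{slug}` derivation is not specified above, so treat it as hypothetical:

```python
import re

def spawn_names(story_id: str, title: str, target_stage: int) -> dict:
    """Derive worker name, worktree dir, and branch for a selected story."""
    # Assumed slug format: lowercase kebab-case from the story title.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return {
        "worker_name": f"story-{story_id}-s{target_stage}",
        "worktree_dir": f".worktrees/story-{story_id}",
        "branch": f"feature/{story_id}-{slug}",
    }
```
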
--- EVENT LOOP (driven by Stop hook heartbeat, single story) ---
HOW THIS WORKS:
1. Lead's turn ends → Stop event fires
2. pipeline-keepalive.sh reads .pipeline/state.json → complete=false → exit 2
3. stderr "HEARTBEAT: ..." → new agentic loop iteration
4. Any queued worker messages (SendMessage) delivered in this cycle
5. Lead processes messages via ON handlers (reactive) + verifies done-flags (proactive)
6. Lead's turn ends → Go to step 1

The Stop hook IS the event loop driver. Each heartbeat = one iteration.
Lead MUST NOT say "waiting for messages" and stop — the heartbeat keeps it alive.
If no worker messages arrived: output brief status, let turn end → next heartbeat.
--- CONTEXT RECOVERY PROTOCOL ---
Claude Code may compress conversation history during long pipelines.
When this happens, you lose SKILL.md instructions and state variables.
The Stop hook includes "---PIPELINE RECOVERY CONTEXT---" in EVERY heartbeat stderr.
IF you see this block and don't recall the pipeline protocol:
Follow CONTEXT RECOVERY PROTOCOL in references/phases/phase4_heartbeat.md (7 steps).
Quick summary: state.json → SKILL.md(FULL) → handlers → heartbeat → known_issues → ToolSearch → resume
FRESH WORKER PER STAGE: Each stage transition = shutdown old worker + spawn new one.

BIDIRECTIONAL HEALTH MONITORING:
- Reactive: ON handlers process worker completion messages
- Proactive: Verify done-flags without messages (lost message recovery)
- Defense-in-depth: Handles network issues, context overflow, worker crashes
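The proactive arm can be sketched as a per-heartbeat done-flag check. The done-flag file name below is an assumption (symmetrical to the documented `.pipeline/worker-{name}-active.flag`); the real contract lives in `references/phases/phase4_heartbeat.md`:

```python
from pathlib import Path

def detect_lost_completion(pipeline_dir: Path, worker: str,
                           state_advanced: bool) -> bool:
    """Lost-message case: the worker wrote its done-flag, but the lead never
    received/processed a COMPLETE message, so story_state was not advanced.
    Flag file name is hypothetical -- see worker_health_contract.md."""
    done_flag = pipeline_dir / f"worker-{worker}-done.flag"
    return done_flag.exists() and not state_advanced
```

When this returns true, the lead runs synthetic recovery from the checkpoint plus kanban verification instead of waiting for a message that will never arrive.
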
WHILE story_state[id] NOT IN ("DONE", "PAUSED", "PENDING_MERGE"):
1. Process worker messages (reactive message handling)
   MANDATORY READ: Load `references/phases/phase4_handlers.md` for all ON message handlers:
   - Stage 0 COMPLETE / ERROR (task planning outcomes)
- Stage 1 COMPLETE (GO / NO-GO validation outcomes with retry logic)
- Stage 2 COMPLETE / ERROR (execution outcomes)
- Stage 3 COMPLETE (PASS/CONCERNS/WAIVED/FAIL quality gate outcomes with rework cycles)
- Worker crash detection (3-step protocol: flag → probe → respawn)
Handlers include sender validation and state guards to prevent duplicate processing.
2. Active done-flag verification (proactive health monitoring)
   MANDATORY READ: Load `references/phases/phase4_heartbeat.md` for bidirectional health monitoring:
   - Lost message detection (done-flag exists but state not advanced)
- Synthetic recovery from checkpoint + kanban verification (all 4 stages)
- Fallback to probe protocol when checkpoint missing
- Structured heartbeat output (single story status line)
- Helper functions (skill_name_from_stage, predict_next_step)
3. Heartbeat state persistence
ON HEARTBEAT (Stop hook stderr: "HEARTBEAT: ..."):
Write .pipeline/state.json with ALL state variables.
# See phase4_heartbeat.md for persistence details
--- POST-LOOP: Handle PENDING_MERGE ---
IF story_state[id] == "PENDING_MERGE":
Phase 4a Section B: Ask user for merge confirmation
(Section A: sync+report already executed by Stage 3 PASS handler)
AskUserQuestion:
"Story {id} completed. Quality Score: {score}/100. Verdict: {verdict}.
Branch: feature/{id}-{slug}
Files changed: {git_stats[id].files_changed}, +{git_stats[id].lines_added}/-{git_stats[id].lines_deleted}
Report: docs/tasks/reports/pipeline-{date}.md
Merge feature/{id}-{slug} to develop?"

IF user confirms:
Execute phase4a_git_merge.md Section C: merge_to_develop(id)
# Sets story_state = "DONE", merge_status = "merged"
ELSE:
Execute phase4a_git_merge.md Section D: decline_merge(id)
# Sets story_state = "DONE", merge_status = "declined"
**`determine_stage(story)` routing:** See `references/pipeline_states.md` Stage-to-Status Mapping table.

Phase 4a: Git Sync, Report & Merge Confirmation
MANDATORY READ: Load `references/phases/phase4a_git_merge.md` for the full procedure:
- Section A: Sync with develop (rebase → fallback to merge), collect metrics, append story report, verify kanban/Linear — executed automatically after Stage 3 PASS
- Section B: Ask user for merge confirmation (AskUserQuestion) — post-loop
- Section C: Squash merge into develop, worktree cleanup, context refresh — only if user confirms
- Section D: Preserve branch, output manual merge instructions — if user declines
Triggered after Stage 3 PASS/CONCERNS/WAIVED verdict from ln-500-story-quality-gate.
Phase 5: Cleanup & Self-Verification
0. Signal pipeline complete (allows Stop hook to pass)
Write .pipeline/state.json: { "complete": true, ... }
1. Self-verify against Definition of Done
verification = {
story_selected: selected_story_id is set # Phase 1 ✓
questions_asked: business_answers stored OR none # Phase 2 ✓
team_created: team exists # Phase 3 ✓
story_processed: story_state[id] IN ("DONE", "PAUSED") # Phase 4 ✓
sync_completed: feature branch synced with develop # Phase 4a Section A ✓
merge_status: "merged" | "declined" | "paused" # Phase 4a Section C/D ✓
}
IF ANY verification == false: WARN user with details
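The self-check above can be expressed directly in code. The function names its inputs after the pipeline's tracked state from the pseudocode; nothing beyond those checks is assumed.

```python
def self_verify(selected_story_id, business_answers, team_exists,
                story_state, synced, merge_status):
    """Evaluate the Definition-of-Done checks; return (checks, failures)."""
    sid = selected_story_id
    checks = {
        "story_selected": sid is not None,
        "questions_asked": business_answers is not None,  # stored OR explicitly none
        "team_created": bool(team_exists),
        "story_processed": story_state.get(sid) in ("DONE", "PAUSED"),
        "sync_completed": bool(synced),
        "merge_status": merge_status in ("merged", "declined", "paused"),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return checks, failed  # WARN the user with `failed` if non-empty
```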
2. Finalize pipeline report
Prepend summary header to docs/tasks/reports/pipeline-{date}.md:
Pipeline Report — {date}
| Metric | Value |
|---|---|
| Story | {selected_story_id}: {title} |
| Final State | {story_state[id]} |
| Merge Status | {merge_status} |
| Quality rework cycles | {quality_cycles[id]} |
| Validation retries | {validation_retries[id]} |
| Crash recoveries | {crash_count[id]} |
| Infrastructure issues | {len(infra_issues)} |
2b. Stage Duration Breakdown
Append Stage Duration section:

durations = {N: stage_timestamps[id]["stage_{N}_end"] - stage_timestamps[id]["stage_{N}_start"]
             FOR N IN 0..3 IF both timestamps exist}
total = sum(durations.values())
bottleneck = key with max(durations)

Stage Duration Breakdown
| Stage 0 | Stage 1 | Stage 2 | Stage 3 | Total | Bottleneck |
|---|---|---|---|---|---|
| {durations[0] or "—"} | {durations[1] or "—"} | {durations[2] or "—"} | {durations[3] or "—"} | {total} | Stage {bottleneck} |
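The duration computation above, assuming timestamps are stored under `stage_{N}_start` / `stage_{N}_end` keys as numeric epoch seconds:

```python
def stage_durations(timestamps):
    """Compute per-stage durations, total, and bottleneck stage (sketch)."""
    durations = {}
    for n in range(4):  # Stages 0..3
        start = timestamps.get(f"stage_{n}_start")
        end = timestamps.get(f"stage_{n}_end")
        if start is not None and end is not None:
            durations[n] = end - start
    total = sum(durations.values())
    bottleneck = max(durations, key=durations.get) if durations else None
    return durations, total, bottleneck
```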
2c. Code Output Metrics
Append Code Output section (if git_stats available):
Code Output Metrics
| Files Changed | Lines Added | Lines Deleted | Net Lines |
|---|---|---|---|
| {git_stats[id].files_changed} | +{git_stats[id].lines_added} | -{git_stats[id].lines_deleted} | {net} |
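One way to populate `git_stats` is to parse the output of `git diff --shortstat` against develop. The command choice is an assumption; the skill only specifies the four fields.

```python
import re

def parse_shortstat(line):
    """Parse `git diff --shortstat` output into the git_stats fields."""
    def grab(pattern):
        m = re.search(pattern, line)
        return int(m.group(1)) if m else 0
    added = grab(r"(\d+) insertions?\(\+\)")
    deleted = grab(r"(\d+) deletions?\(-\)")
    return {
        "files_changed": grab(r"(\d+) files? changed"),
        "lines_added": added,
        "lines_deleted": deleted,
        "net": added - deleted,
    }
```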
2d. Cost Estimate
Append Cost Estimate section:
Cost Estimate
| Metric | Value |
|---|---|
| Wall-clock time | {now() - pipeline_start_time} |
| Total worker spawns | {count of Task() calls in session} |
| Hashline-edit usage | {count mcp__hashline-edit__* calls in Stage 2 workers} / {total file edits} |
2e. Collect infrastructure issues
Analyze pipeline session for non-fatal problems:
hook/settings failures, git conflicts, worktree errors, merge issues,
Linear sync mismatches, worker crashes, permission errors.
Populate infra_issues = [{phase, type, message}] from session context.
Append Infrastructure Issues section:
Infrastructure Issues
IF infra_issues NOT EMPTY:
| # | Phase | Type | Details |
|---|-------|------|---------|
FOR EACH issue IN infra_issues:
| {N} | {issue.phase} | {issue.type} | {issue.message} |
ELSE:
No infrastructure issues.
Append Operational Recommendations section (auto-generated from counters):
Operational Recommendations
- IF quality_cycles[id] > 0: "Needed {N} quality cycles. Improve task specs or acceptance criteria."
- IF validation_retries[id] > 0: "Failed validation. Review Story/Task structure."
- IF crash_count[id] > 0: "Worker crashed {N} times. Check for context-heavy operations."
- IF story_state[id] == "PAUSED": "Story requires manual intervention."
- IF infra_issues with type "hook": "Hook configuration errors. Verify settings.local.json and .claude/hooks/."
- IF infra_issues with type "git": "Git conflicts encountered. Rebase feature branches more frequently."
- IF all DONE with 0 retries AND no infra_issues: "Clean run — no issues detected."
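The counter-driven rules above map directly to code; this sketch returns the recommendation strings in the same order as the list:

```python
def recommendations(story_id, quality_cycles, validation_retries,
                    crash_count, story_state, infra_issues):
    """Generate Operational Recommendations from pipeline counters."""
    recs = []
    if quality_cycles.get(story_id, 0) > 0:
        recs.append(f"Needed {quality_cycles[story_id]} quality cycles. "
                    "Improve task specs or acceptance criteria.")
    if validation_retries.get(story_id, 0) > 0:
        recs.append("Failed validation. Review Story/Task structure.")
    if crash_count.get(story_id, 0) > 0:
        recs.append(f"Worker crashed {crash_count[story_id]} times. "
                    "Check for context-heavy operations.")
    if story_state.get(story_id) == "PAUSED":
        recs.append("Story requires manual intervention.")
    issue_types = {issue["type"] for issue in infra_issues}
    if "hook" in issue_types:
        recs.append("Hook configuration errors. "
                    "Verify settings.local.json and .claude/hooks/.")
    if "git" in issue_types:
        recs.append("Git conflicts encountered. "
                    "Rebase feature branches more frequently.")
    return recs or ["Clean run — no issues detected."]
```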
3. Show pipeline summary to user
Pipeline Complete:
| Story | Stage 0 | Stage 1 | Stage 2 | Stage 3 | Merged | Final State |
|-------|---------|---------|---------|---------|--------|------------|
| {id} | {stage0} | {stage1} | {stage2} | {stage3} | {merge_status} | {story_state[id]} |
Report saved: docs/tasks/reports/pipeline-{date}.md

4. Shutdown worker (if still active)
IF worker_map[id]:
SendMessage(type: "shutdown_request", recipient: worker_map[id])
5. Cleanup team
TeamDelete
6. Worktree cleanup
IF merge_status == "merged":
Worktree already removed in Phase 4a Section C
pass
ELSE IF merge_status == "declined":
Preserve worktree — user needs it for manual merge
Output: "Worktree preserved at .worktrees/story-{id}/"
ELSE IF story_state[id] == "PAUSED":
IF merge_status == "pending" AND worktree_map[id]:
# Merge conflict — preserve worktree for manual resolution
Output: "Worktree preserved at {worktree_map[id]}/ for merge conflict resolution"
ELSE IF worktree_map[id]:
git worktree remove {worktree_map[id]} --force
rm -rf .worktrees/
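The cleanup branching above can be sketched as a pure decision function that returns the action to run instead of executing git directly, which makes the table easy to verify:

```python
def worktree_action(merge_status, story_state, worktree):
    """Decide the step-6 cleanup action for one story (sketch)."""
    if merge_status == "merged":
        return "noop"                       # already removed in Phase 4a Section C
    if merge_status == "declined":
        return f"preserve {worktree}"       # user needs it for manual merge
    if story_state == "PAUSED":
        if merge_status == "pending" and worktree:
            return f"preserve {worktree}"   # merge conflict awaiting resolution
        if worktree:
            return f"remove {worktree}"     # git worktree remove --force
    return "noop"
```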
7. Switch to develop (only if merged)
IF merge_status == "merged":
git checkout develop
8. Remove pipeline state files
8a. Stop sleep prevention (Windows safety net — script should have self-terminated)
IF sleep_prevention_pid:
kill $sleep_prevention_pid 2>/dev/null || true
Delete .pipeline/ directory
9. Report results and report location to user
Kanban as Single Source of Truth
- Lead = single writer to kanban_board.md. Workers report results via SendMessage; lead updates the board
- Re-read board after each stage completion for fresh state
- Update algorithm: Follow shared/references/kanban_update_algorithm.md for Epic grouping and indentation
Error Handling
| Situation | Detection | Action |
|---|---|---|
| ln-300 task creation fails | Worker reports error | Escalate to user: "Cannot create tasks for Story {id}" |
| ln-310 NO-GO (Score <5) | Worker reports NO-GO | Retry once (ln-310 auto-fixes). If still NO-GO -> ask user |
| Task in To Rework 3+ times | Worker reports rework loop | Escalate: "Task X reworked 3 times, need input" |
| ln-500 FAIL | Worker reports FAIL verdict | Fix tasks auto-created by ln-500. Stage 2 re-entry. Max 2 quality cycles |
| Worker crash | TeammateIdle without completion msg | Re-spawn worker, resume from last stage |
| Business question mid-execution | Worker encounters ambiguity | Worker -> lead -> user -> lead -> worker (message chain) |
| Merge conflict (sync) | git rebase/merge fails | Escalate to user, Story PAUSED, worktree preserved for resolution |
Critical Rules
- Single Story processing. One worker at a time. User selects which Story to process
- Delegate mode. Lead coordinates only — never invoke ln-300/ln-310/ln-400/ln-500 directly. Workers do all execution
- Skills as-is. Never modify or bypass existing skill logic. Workers call exactly as documented: Skill("ln-310-story-validator", args)
- Kanban verification. Workers update Linear/kanban via skills. Lead re-reads and ASSERTs expected state after each stage. In file mode, lead resolves merge conflicts
- Quality cycle limit. Max 2 quality FAILs per Story (original + 1 rework cycle). After 2nd FAIL, escalate to user
- Merge only on confirmation. After quality gate PASS, sync with develop and ask user. Merge only if confirmed. Feature branch preserved if declined
- Re-read kanban. After every stage completion, re-read board for fresh state. Never cache
- Graceful shutdown. Always shutdown workers via shutdown_request. Never force-kill
Known Issues
MANDATORY READ: Load references/known_issues.md for production-discovered problems and self-recovery patterns.
Anti-Patterns
- Running ln-300/ln-310/ln-400/ln-500 directly from lead instead of delegating to workers
- Processing multiple stories without user selection
- Auto-merging to develop without user confirmation
- Lead skipping kanban verification after worker updates (workers write via skills, lead MUST re-read + ASSERT)
- Skipping quality gate after execution
- Merging to develop before quality gate PASS
- Caching kanban state instead of re-reading
- Reading ~/.claude/teams/*/inboxes/*.json directly (messages arrive automatically)
- Using sleep + filesystem polling for message checking
- Parsing internal Claude Code JSON formats (permission_request, idle_notification)
- Reusing same worker across stages (context exhaustion — spawn fresh worker per stage)
- Processing messages without verifying sender matches worker_map (stale message confusion from old/dead workers)
Plan Mode Support
When invoked in Plan Mode, show available Stories and ask user which one to plan for:
- Parse kanban board (Phase 1 steps 1-7)
- Show available Stories table
- AskUserQuestion: "Which story to plan for? Enter # or Story ID."
- Show execution plan for selected Story
- Write plan to plan file, call ExitPlanMode
Plan Output Format:
Pipeline Plan for {date}
Story: {ID}: {Title}
Current Status: {status}
Target Stage: {N} ({skill_name})
Execution Sequence
- TeamCreate("pipeline-{date}")
- Create worktree + feature branch: feature/{id}-{slug}
- Spawn worker -> Stage {N} ({skill_name})
- Drive through remaining stages until quality gate
- Sync with develop, generate report
- Ask for merge confirmation
- Cleanup
Definition of Done (self-verified in Phase 5)
| # | Criterion | Verified By |
|---|---|---|
| 1 | User selected Story from kanban board | selected_story_id is set |
| 2 | Business questions asked in single batch (or none found) | business_answers stored (or none) |
| 3 | Team created, single worker spawned | Worker spawned for selected Story |
| 4 | Selected Story processed: state = DONE or PAUSED | story_state[id] IN ("DONE", "PAUSED") |
| 5 | Feature branch synced with develop. Merged if user confirmed | sync_completed + merge_status |
| 6 | Pipeline summary shown to user | Phase 5 table output |
| 7 | Team cleaned up (worker shutdown, TeamDelete) | TeamDelete called |
Reference Files
Phase 4 Procedures (Progressive Disclosure)
- Message handlers: references/phases/phase4_handlers.md (Stage 0-3 ON handlers, crash detection)
- Heartbeat & verification: references/phases/phase4_heartbeat.md (active done-flag checking, structured heartbeat output)
- Git flow: references/phases/phase4a_git_merge.md (sync, report, merge confirmation, worktree cleanup)
Core Infrastructure
- Known issues: references/known_issues.md (production-discovered problems and self-recovery)
- Message protocol: references/message_protocol.md
- Worker health: references/worker_health_contract.md
- Checkpoint format: references/checkpoint_format.md
- Settings template: references/settings_template.json
- Hooks: references/hooks/pipeline-keepalive.sh, references/hooks/worker-keepalive.sh
- Kanban parsing: references/kanban_parser.md
- Pipeline states: references/pipeline_states.md
- Worker prompts: references/worker_prompts.md
- Kanban update algorithm: shared/references/kanban_update_algorithm.md
- Storage mode detection: shared/references/storage_mode_detection.md
- Auto-discovery patterns: shared/references/auto_discovery_pattern.md
Delegated Skills
- ../ln-300-task-coordinator/SKILL.md
- ../ln-310-story-validator/SKILL.md
- ../ln-400-story-executor/SKILL.md
- ../ln-500-story-quality-gate/SKILL.md
Version: 2.0.0
Last Updated: 2026-02-25