# Dev Orchestrator Skill


## Purpose


This is the default orchestrator for all non-trivial development and investigation tasks in amplihack. It replaces the `ultrathink-orchestrator` skill.

When a user asks you to build, implement, fix, investigate, or create anything non-trivial, this skill ensures:

1. **Task is classified** — Q&A / Operations / Investigation / Development
2. **Goal is formulated** — clear success criteria identified
3. **Workstreams detected** — parallel tasks split automatically
4. **Recipe runner used** — code-enforced workflow execution
5. **Outcome verified** — reflection confirms goal achievement

## How It Works


```
User request
[Classify] ──→ Q&A ─────────────────→ analyzer agent (technical/code questions)
     ├───────→ Ops ─────────────────→ builder agent
     └──→ Development / Investigation
         [Recursion guard] (AMPLIHACK_SESSION_DEPTH vs AMPLIHACK_MAX_DEPTH=3)
             │         │
          ALLOWED   BLOCKED ──→ [announce-depth-limited banner]
             │                  [execute-single-fallback-blocked]
             │                  [Execute round 1 (single-session)]
         [Decompose]
             │         │
           1 ws      N ws ──→ [multitask parallel] + tree context in env
         [Execute round 1]
         [Reflect] ──→ ACHIEVED ──→ [Summarize]
             │ PARTIAL/NOT_ACHIEVED
         [Execute round 2]
         [Reflect] ──→ ACHIEVED ──→ [Summarize]
             │ PARTIAL/NOT_ACHIEVED
         [Execute round 3 (final)]
         [Final reflect + Summarize]
```
**Session tree enforcement** (prevents infinite recursion):

- Each subprocess inherits `AMPLIHACK_TREE_ID`, `AMPLIHACK_SESSION_DEPTH`, and `AMPLIHACK_MAX_DEPTH`
- Depth >= 3: the recursion guard blocks sub-workstream spawning and adapts to single-session execution
- Max 10 concurrent sessions per tree; extras queue in FIFO order
- State is tracked in `/tmp/amplihack-session-trees/{tree_id}.json`
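The depth check behind the ALLOWED/BLOCKED branch is simple enough to sketch. The function below is a minimal illustration, assuming the guard just compares the two inherited variables; the function name and exact logic are hypothetical, not the actual amplihack implementation:

```python
import os

def recursion_guard_allows_spawn(env=os.environ):
    """Return True if a sub-workstream may be spawned at the current depth.

    Hypothetical sketch: compares the inherited session depth against the
    configured maximum (default 3), mirroring the ALLOWED/BLOCKED branch.
    """
    depth = int(env.get("AMPLIHACK_SESSION_DEPTH", "0"))
    max_depth = int(env.get("AMPLIHACK_MAX_DEPTH", "3"))
    return depth < max_depth
```

At depth 3 with the default maximum of 3, spawning is blocked and the orchestrator falls back to single-session execution.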

## Activation


This skill auto-activates for development and investigation keywords. It is also the default behavior per CLAUDE.md — invoke it for any non-trivial task.

The `UserPromptSubmit` hook reinforces this by injecting a classification prompt that instructs Claude to invoke `Skill(skill="dev-orchestrator")` for dev/investigation tasks.

## Execution Instructions


YOUR NEXT ACTION after reading this skill MUST include a Bash tool call that executes `run_recipe_by_name("smart-orchestrator")`.

If your runtime requires a `report_intent` call (e.g. Copilot CLI), emit it in parallel with the Bash launch — both tool calls in the same response. The constraint is that no other substantive action (reading files, calling TodoWrite, invoking Agent, or typing a response) may precede or replace the Bash launch.

If you find yourself doing anything else instead of launching the recipe runner, you are bypassing the workflow. Stop. Go back. Run the recipe runner.

Development and Investigation tasks have exactly ONE execution path: the recipe runner. There is no "Option B." There is no "I'll follow the steps manually." There is no "Let me just quickly..." The recipe runner IS the workflow. Skipping it means skipping the 23-step process, the recursion guards, the goal verification, and the quality gates that exist for a reason.

A PostToolUse hook monitors every tool call after this skill activates. If it detects 3 tool calls without recipe runner evidence, it fires a hard WARNING. Do not wait for the warning — run the recipe runner immediately.

When this skill is activated:

### REQUIRED: Execute via Recipe Runner — IMMEDIATELY


Your next tool call(s) must include the recipe runner launch (alongside `report_intent` if your runtime requires it). CRITICAL: Claude Code's `run_in_background` kills processes after ~10 minutes. Recipe workstreams can take hours. You MUST use a tmux session for execution:

```bash
LOG_FILE=$(mktemp /tmp/recipe-runner-output.XXXXXX.log)
chmod 600 "$LOG_FILE"
tmux new-session -d -s recipe-runner "env -u CLAUDECODE PYTHONPATH=src python3 -c \"
import os
os.environ.pop('CLAUDECODE', None)

from amplihack.recipes import run_recipe_by_name

result = run_recipe_by_name(
    'smart-orchestrator',
    user_context={
        'task_description': '''TASK_DESCRIPTION_HERE''',
        'repo_path': '.',
    },
    progress=True,
)
print(f'Recipe result: {result}')
\" 2>&1 | tee \"$LOG_FILE\""
echo "Recipe runner log: $LOG_FILE"
```
Key points:

- `env -u CLAUDECODE` — unset so nested Claude Code sessions can launch
- `PYTHONPATH=src python3` — uses the interpreter on PATH while forcing imports from the checked-out repo source tree (do NOT hardcode `.venv/bin/python`)
- `run_recipe_by_name` — delegates to the Rust binary; the adapter parameter is no longer needed
- `progress=True` — streams recipe-runner stderr live so tmux logs show nested step activity
- `chmod 600 "$LOG_FILE"` — keeps the tmux log private to the current user
- `tmux new-session -d` — detached session, no timeout, survives disconnects
- Monitor with `tail -f "$LOG_FILE"` or `tmux attach -t recipe-runner`
**Restarting a stale tmux session:** Some runtimes (e.g. Copilot CLI) block `tmux kill-session` because it does not target a numeric PID. Use one of these shell-policy-safe alternatives instead:

**Option A (preferred):** use a unique session name per run to avoid collisions:

```bash
tmux new-session -d -s "recipe-$(date +%s)" "..."
```

**Option B:** locate the tmux server PID and terminate with a numeric kill:

```bash
tmux list-sessions -F '#{pid}' 2>/dev/null | xargs -I{} kill {}
```

**Option C:** let tmux itself handle it — send exit to all panes:

```bash
tmux send-keys -t recipe-runner "exit" Enter 2>/dev/null; sleep 1
```

If using Option A, update the `tail -f` / `tmux attach` commands to use the
same session name.

**DO NOT use `run_in_background`** for recipe execution — it will be killed
after ~10 minutes (Issue #2909).

**The recipe runner is the required execution path for Development and
Investigation tasks.** Always try `smart-orchestrator` first.

**Common rationalizations that are NOT acceptable:**

- "Let me first understand the codebase" — the recipe does that in Step 0
- "I'll follow the workflow steps manually" — NO, the recipe enforces them
- "The recipe runner might not work" — try it first, report errors if it fails
- "This is a simple task" — simple or complex, the recipe runner handles both

**Q&A and Operations only** may bypass the recipe runner:

- Q&A: Respond directly (analyzer agent)
- Operations: Builder agent (direct execution, no workflow steps)

## Error Recovery: Adaptive Strategy (NOT Degradation)


When `smart-orchestrator` fails, failures must be visible and surfaced — never swallowed or silently degraded. The recipe handles error recovery automatically via its built-in adaptive strategy steps, but if you observe a failure outside the recipe, follow this protocol:

**1. Surface the error with full context:**

Report the exact error, the step that failed, and the log output. Never say "something went wrong" — always include the specific failure details.

**2. File a bug with reproduction details:**

For infrastructure failures (import errors, missing env vars, binary not found, decomposition producing invalid output), file a GitHub issue:

```bash
gh issue create \
  --title "smart-orchestrator infrastructure failure: <one-line summary>" \
  --body "<full error context, reproduction command, env details>" \
  --label "bug"
```

**3. Evaluate alternative strategies:**

If `smart-orchestrator` fails at the infrastructure level (not because the task is wrong), you MAY invoke the specific workflow recipe directly. This is an adaptive strategy — it must be announced explicitly, not done silently:

| Classification | Direct Recipe | When Permitted |
| --- | --- | --- |
| Investigation | `investigation-workflow` | smart-orchestrator failed at parse/decompose/launch |
| Development | `default-workflow` | smart-orchestrator failed at parse/decompose/launch |

Example:

```python
# ANNOUNCE the strategy change first — never do this silently
print("[ADAPTIVE] smart-orchestrator failed at parse-decomposition: <error>")
print("[ADAPTIVE] Switching to direct investigation-workflow invocation")
run_recipe_by_name("investigation-workflow", user_context={...}, progress=True)
```

**This is NOT a license to bypass smart-orchestrator.** Always try it first.
Direct invocation is only permitted when smart-orchestrator fails at the
infrastructure level. "The task seems simple" is NOT an infrastructure failure.

**4. Detect hollow success:**

A recipe can complete structurally (all steps exit 0) but produce empty or
meaningless results — agents reporting "no codebase found" or reflection
marking ACHIEVED when no work was done. After execution, check that:

- Round results contain actual findings or code changes (not "I could not access...")
- PR URLs or concrete outputs are present for Development tasks
- At least one success criterion was verifiably evaluated

If results are hollow, report this to the user with the specific empty outputs.
Do not declare success when agents produced no meaningful work.
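The hollow-success checks above can be sketched as a small predicate. This is an illustrative helper, not part of the recipe runner; the failure markers and signature are assumptions:

```python
def looks_hollow(round_results, concrete_outputs, criteria_evaluated):
    """Hypothetical post-execution check: True when a structurally
    'successful' run actually produced nothing meaningful."""
    failure_markers = ("no codebase found", "i could not access")
    all_empty = all(
        not text.strip() or any(m in text.lower() for m in failure_markers)
        for text in round_results
    )
    # Hollow if every round is empty/failed, there are no concrete outputs
    # (e.g. PR URLs for Development tasks), or no criterion was evaluated.
    return all_empty or not concrete_outputs or criteria_evaluated == 0
```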

## Required Environment Variables


The recipe runner requires these environment variables to function:

| Variable | Purpose | Default |
| --- | --- | --- |
| `AMPLIHACK_HOME` | Root of the amplihack installation (for asset lookup) | Auto-detected |
| `AMPLIHACK_AGENT_BINARY` | Which agent binary to use (claude, copilot, etc.) | `claude` |
| `AMPLIHACK_MAX_DEPTH` | Max recursion depth for nested sessions | `3` |
| `AMPLIHACK_NONINTERACTIVE` | Set to `1` to skip interactive prompts | Unset |

If `AMPLIHACK_HOME` is not set and auto-detection fails, `parse-decomposition` and `activate-workflow` will fail with "orch_helper.py not found". Set it to the directory containing `amplifier-bundle/`.
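Resolution of these variables with their documented defaults can be sketched as follows. This is an illustrative helper; the real lookup, including `AMPLIHACK_HOME` auto-detection, lives inside the recipe runner:

```python
import os

def resolve_orchestrator_env(env=None):
    """Sketch of how the variables above resolve, using the documented defaults.

    AMPLIHACK_HOME has no static default here (real code auto-detects it),
    so it stays None when unset.
    """
    env = dict(os.environ if env is None else env)
    return {
        "AMPLIHACK_HOME": env.get("AMPLIHACK_HOME"),  # auto-detected when None
        "AMPLIHACK_AGENT_BINARY": env.get("AMPLIHACK_AGENT_BINARY", "claude"),
        "AMPLIHACK_MAX_DEPTH": int(env.get("AMPLIHACK_MAX_DEPTH", "3")),
        "AMPLIHACK_NONINTERACTIVE": env.get("AMPLIHACK_NONINTERACTIVE") == "1",
    }
```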

## After Execution: Reflect and verify


After execution completes, verify the goal was achieved. If not:

- For missing information: ask the user
- For fixable gaps: re-invoke with the remaining work description
- For infrastructure failures: file a bug and try an adaptive strategy

## Enforcement: PostToolUse Workflow Guard


A PostToolUse hook (`workflow_enforcement_hook.py`) actively monitors every tool call after this skill is invoked. It tracks:

- Whether `/dev` or `dev-orchestrator` was called (sets a flag)
- Whether the recipe runner was actually executed (clears the flag)
- How many tool calls have passed without workflow evidence

If 3+ tool calls pass without evidence of recipe runner execution, the hook emits a hard WARNING. This is not a suggestion — it means you are violating the mandatory workflow. State is stored in `/tmp/amplihack-workflow-state/`.
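The guard's counter behavior can be sketched like this. The state shape and function below are hypothetical, purely to make the 3-call threshold concrete; the real logic lives in `workflow_enforcement_hook.py`:

```python
def guard_step(state, tool_name, is_recipe_runner_launch):
    """Hypothetical sketch of the PostToolUse guard's counter logic.

    Once the skill flag is set, each tool call without recipe-runner
    evidence increments a counter; the third such call triggers the
    hard WARNING. A recipe-runner launch clears the flag.
    """
    if not state.get("skill_active"):
        return None
    if is_recipe_runner_launch:
        state["skill_active"] = False
        state["calls_without_evidence"] = 0
        return None
    state["calls_without_evidence"] = state.get("calls_without_evidence", 0) + 1
    if state["calls_without_evidence"] >= 3:
        return f"WARNING: 3+ tool calls (latest: {tool_name}) without recipe runner evidence"
    return None
```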

## Task Type Classification


| Type | Keywords | Action |
| --- | --- | --- |
| Q&A | "what is", "explain", "how does", "how do I", "quick question" | Respond directly |
| Operations | "clean up", "delete", "git status", "run command" | builder agent (direct execution, no workflow steps) |
| Investigation | "investigate", "analyze", "understand", "explore" | investigation-workflow |
| Development | "implement", "build", "create", "add", "fix", "refactor" | smart-orchestrator |
| Hybrid* | Both investigation + development keywords | Decomposed into investigation + dev workstreams |

\* Hybrid is not a distinct task_type — the orchestrator classifies it as Development and decomposes it into multiple workstreams (one investigation, one development).
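As a rough illustration, the keyword table could drive a naive substring classifier like the one below. The real classification step is richer than substring matching; this sketch only mirrors the table, including the hybrid rule:

```python
def classify_task(request):
    """Naive keyword classifier sketching the table above (illustrative only)."""
    text = request.lower()
    buckets = {
        "qa": ("what is", "explain", "how does", "how do i", "quick question"),
        "operations": ("clean up", "delete", "git status", "run command"),
        "investigation": ("investigate", "analyze", "understand", "explore"),
        "development": ("implement", "build", "create", "add", "fix", "refactor"),
    }
    hits = {name for name, kws in buckets.items() if any(k in text for k in kws)}
    if {"investigation", "development"} <= hits:
        return "development"  # hybrid: classified as Development, then decomposed
    for name in ("qa", "operations", "investigation", "development"):
        if name in hits:
            return name
    return "development"  # non-trivial default
```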

## Workstream Decomposition Examples


| Request | Workstreams |
| --- | --- |
| "implement JWT auth" | 1: auth (default-workflow) |
| "build a webui and an api" | 2: api + webui (parallel) |
| "add logging and add metrics" | 2: logging + metrics (parallel) |
| "investigate auth system then add OAuth" | 2: investigate + implement (sequential) |
| "fix bug in payment flow" | 1: bugfix (default-workflow) |
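These decomposition outcomes can be pictured as one small record per workstream. The `Workstream` shape below is hypothetical, purely to illustrate parallel vs sequential splits; the actual representation is internal to the recipe:

```python
from dataclasses import dataclass

@dataclass
class Workstream:
    """Hypothetical shape of one decomposed workstream."""
    name: str
    recipe: str      # e.g. "default-workflow" or "investigation-workflow"
    parallel: bool   # False when the workstream depends on an earlier one

# "build a webui and an api" -> two parallel development workstreams
parallel_split = [
    Workstream("api", "default-workflow", parallel=True),
    Workstream("webui", "default-workflow", parallel=True),
]

# "investigate auth system then add OAuth" -> sequential investigation + dev
sequential_split = [
    Workstream("investigate-auth", "investigation-workflow", parallel=False),
    Workstream("add-oauth", "default-workflow", parallel=False),
]
```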

## Override Options


**Single workstream override:** Pass `force_single_workstream: "true"` in the recipe user_context to prevent automatic parallel decomposition regardless of task structure. This is a programmatic option (not directly settable from `/dev`):

```python
run_recipe_by_name(
    "smart-orchestrator",
    user_context={
        "task_description": task,
        "repo_path": ".",
        "force_single_workstream": "true",  # disables parallel decomposition
    }
)
```

**To force single-workstream execution without modifying recipe context:** Set `AMPLIHACK_MAX_DEPTH=0` before running `/dev`. This causes the recursion guard to block parallel spawning and fall back to single-session mode for all tasks:

```bash
export AMPLIHACK_MAX_DEPTH=0  # set in your shell first
/dev build a webui and an api  # then type in Claude Code
```

Note: The env var must be set in your shell before starting Claude Code — it cannot be prefixed inline on the `/dev` command. This affects all depth checks, not just parallel workstream spawning.

## Canonical Sources


- Recipe: `amplifier-bundle/recipes/smart-orchestrator.yaml`
- Parallel execution: `.claude/skills/multitask/orchestrator.py`
- Development workflow: `amplifier-bundle/recipes/default-workflow.yaml`
- Investigation workflow: `amplifier-bundle/recipes/investigation-workflow.yaml`
- CLAUDE.md: defines this as the default orchestrator

## Relationship to Other Skills


| Skill | Relationship |
| --- | --- |
| `ultrathink-orchestrator` | Deprecated — redirects here |
| `default-workflow` | Called by this orchestrator for single dev tasks |
| `investigation-workflow` | Called by this orchestrator for research tasks |
| `multitask` | Called by this orchestrator for parallel workstreams |
| `work-delegator` | Orthogonal — for backlog-driven delegation |

## Entry Points


- Primary: `/dev <task description>`
- Auto-activation: via CLAUDE.md default behavior + hook injection
- Legacy: `/ultrathink <task>` (deprecated alias → redirects to `/dev`)

## Status Signal Reference


The orchestrator uses two status signal formats:

### Execution status (from builder agents)


Appears at the end of round execution steps:

- `STATUS: COMPLETE` — the round's work is fully done
- `STATUS: CONTINUE` — more work remains after this round
- `STATUS: PARTIAL` — the final round (round 3) reached partial completion
- `STATUS: DEPTH_LIMITED` — legacy, no longer emitted; use the BLOCKED path instead

### Goal status (from reviewer agents)


Appears at the end of reflection steps:

- `GOAL_STATUS: ACHIEVED` — all success criteria met, task is done
- `GOAL_STATUS: PARTIAL -- [description]` — some criteria met, more work needed
- `GOAL_STATUS: NOT_ACHIEVED -- [reason]` — goal not met, another round needed

The goal-seeking loop uses GOAL_STATUS signals to decide whether to run round 2 or 3.

**BLOCKED path (recursion guard):** When multi-workstream spawning is blocked by the depth limit, the orchestrator adapts to single-session execution:

1. `announce-depth-limited` — prints a warning banner with remediation info
2. `execute-single-fallback-blocked` — executes the full task as a single builder agent session (announced, not silent — the banner makes the strategy change visible)
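A consumer of these signals could parse them roughly as follows. This is an illustrative sketch; the actual loop runs inside the smart-orchestrator recipe, and the regex here is an assumption based on the signal format shown above:

```python
import re

def parse_goal_status(reflection_output):
    """Hypothetical parser for the GOAL_STATUS line a reflection step emits.

    Returns (status, detail); detail is None when no "--" suffix is present.
    A missing signal is treated conservatively as NOT_ACHIEVED.
    """
    m = re.search(
        r"GOAL_STATUS:\s*(ACHIEVED|PARTIAL|NOT_ACHIEVED)(?:\s*--\s*(.*))?",
        reflection_output,
    )
    if not m:
        return ("NOT_ACHIEVED", "no GOAL_STATUS signal found")
    return (m.group(1), m.group(2) or None)

def needs_another_round(status, round_number, max_rounds=3):
    """Goal-seeking loop: run the next round unless ACHIEVED or out of rounds."""
    return status != "ACHIEVED" and round_number < max_rounds
```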