deepen-plan


Deepen Plan - Power Enhancement Mode


Introduction


**Note: The current year is 2026.** Use this when searching for recent documentation and best practices.

This command takes an existing plan (from `/ce:plan`) and enhances each section with parallel research agents. Each major element gets its own dedicated research sub-agent to find:
  • Best practices and industry patterns
  • Performance optimizations
  • UI/UX improvements (if applicable)
  • Quality enhancements and edge cases
  • Real-world implementation examples
The result is a deeply grounded, production-ready plan with concrete implementation details.

Plan File


<plan_path> #$ARGUMENTS </plan_path>
If the plan path above is empty:
  1. Check for recent plans: `ls -la docs/plans/`
  2. Ask the user: "Which plan would you like to deepen? Please provide the path (e.g., `docs/plans/2026-01-15-feat-my-feature-plan.md`)."
Do not proceed until you have a valid plan file path.

Main Tasks


1. Parse and Analyze Plan Structure


<thinking> First, read and parse the plan to identify each major section that can be enhanced with research. </thinking>
Read the plan file and extract:
  • Overview/Problem Statement
  • Proposed Solution sections
  • Technical Approach/Architecture
  • Implementation phases/steps
  • Code examples and file references
  • Acceptance criteria
  • Any UI/UX components mentioned
  • Technologies/frameworks mentioned (Rails, React, Python, TypeScript, etc.)
  • Domain areas (data models, APIs, UI, security, performance, etc.)
Create a section manifest:
Section 1: [Title] - [Brief description of what to research]
Section 2: [Title] - [Brief description of what to research]
...
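The manifest above can be generated mechanically from the plan's headings. A minimal sketch, assuming `##`-level section headings and using the first non-empty body line as the research hint (both are assumptions for illustration, not requirements of this command):

```python
import re

def build_manifest(plan_markdown: str) -> list[str]:
    """Turn each '## ' heading into a manifest entry, using the first
    non-empty line beneath it as the hint of what to research."""
    entries = []
    sections = re.split(r"^## ", plan_markdown, flags=re.MULTILINE)[1:]
    for i, section in enumerate(sections, start=1):
        lines = section.strip().splitlines()
        title = lines[0].strip()
        hint = next((l.strip() for l in lines[1:] if l.strip()), "")
        entries.append(f"Section {i}: {title} - {hint}")
    return entries

plan = """# My Plan

## Overview
Add caching to the API.

## Technical Approach
Use Redis with a 5 minute TTL.
"""
for entry in build_manifest(plan):
    print(entry)
```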
<thinking> 首先,读取并解析计划,识别可通过研究增强的每个主要部分。 </thinking>
读取计划文件并提取:
  • 概述/问题陈述
  • 拟议解决方案部分
  • 技术方案/架构
  • 实施阶段/步骤
  • 代码示例与文件引用
  • 验收标准
  • 提及的任何UI/UX组件
  • 提及的技术/框架(Rails、React、Python、TypeScript等)
  • 领域范围(数据模型、API、UI、安全、性能等)
创建部分清单:
Section 1: [标题] - [研究内容简要说明]
Section 2: [标题] - [研究内容简要说明]
...

2. Discover and Apply Available Skills


<thinking> Dynamically discover all available skills and match them to plan sections. Don't assume what skills exist - discover them at runtime. </thinking>
**Step 1: Discover ALL available skills from ALL sources**

```bash
# 1. Project-local skills (highest priority - project-specific)
ls .claude/skills/

# 2. User's global skills (~/.claude/)
ls ~/.claude/skills/

# 3. compound-engineering plugin skills
ls ~/.claude/plugins/cache/*/compound-engineering/*/skills/

# 4. ALL other installed plugins - check every plugin for skills
find ~/.claude/plugins/cache -type d -name "skills" 2>/dev/null

# 5. Also check installed_plugins.json for all plugin locations
cat ~/.claude/plugins/installed_plugins.json
```

**Important:** Check EVERY source. Don't assume compound-engineering is the only plugin. Use skills from ANY installed plugin that's relevant.

**Step 2: For each discovered skill, read its SKILL.md to understand what it does**

```bash
# For each skill directory found, read its documentation
cat [skill-path]/SKILL.md
```

**Step 3: Match skills to plan content**

For each skill discovered:
- Read its SKILL.md description
- Check if any plan sections match the skill's domain
- If there's a match, spawn a sub-agent to apply that skill's knowledge

**Step 4: Spawn a sub-agent for EVERY matched skill**

**CRITICAL: For EACH skill that matches, spawn a separate sub-agent and instruct it to USE that skill.**

For each matched skill:
Task general-purpose: "You have the [skill-name] skill available at [skill-path].
YOUR JOB: Use this skill on the plan.
  1. Read the skill: cat [skill-path]/SKILL.md
  2. Follow the skill's instructions exactly
  3. Apply the skill to this content:
[relevant plan section or full plan]
  4. Return the skill's full output
The skill tells you what to do - follow it. Execute the skill completely."

**Spawn ALL skill sub-agents in PARALLEL:**
- 1 sub-agent per matched skill
- Each sub-agent reads and uses its assigned skill
- All run simultaneously
- 10, 20, 30 skill sub-agents is fine

**Each sub-agent:**
1. Reads its skill's SKILL.md
2. Follows the skill's workflow/instructions
3. Applies the skill to the plan
4. Returns whatever the skill produces (code, recommendations, patterns, reviews, etc.)

**Example spawns:**
Task general-purpose: "Use the dhh-rails-style skill at ~/.claude/plugins/.../dhh-rails-style. Read SKILL.md and apply it to: [Rails sections of plan]"
Task general-purpose: "Use the frontend-design skill at ~/.claude/plugins/.../frontend-design. Read SKILL.md and apply it to: [UI sections of plan]"
Task general-purpose: "Use the agent-native-architecture skill at ~/.claude/plugins/.../agent-native-architecture. Read SKILL.md and apply it to: [agent/tool sections of plan]"
Task general-purpose: "Use the security-patterns skill at ~/.claude/skills/security-patterns. Read SKILL.md and apply it to: [full plan]"

**No limit on skill sub-agents. Spawn one for every skill that could possibly be relevant.**
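The matching step is a judgment call made by reading each SKILL.md; as a rough sketch, it can be reduced to keyword overlap between a skill's description and the plan text (skill names and descriptions below are hypothetical):

```python
def match_skills(skills: dict[str, str], plan_text: str) -> list[str]:
    """Return names of skills whose description shares at least one
    word with the plan. A deliberate simplification of the matching
    judgment described above."""
    plan_words = set(plan_text.lower().split())
    matched = []
    for name, description in skills.items():
        if plan_words & set(description.lower().split()):
            matched.append(name)
    return matched

skills = {
    "dhh-rails-style": "rails ruby activerecord conventions",
    "frontend-design": "ui css react components",
    "security-patterns": "security auth injection secrets",
}
print(match_skills(skills, "Add a Rails API with React components"))
```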

3. Discover and Apply Learnings/Solutions


<thinking> Check for documented learnings from /ce:compound. These are solved problems stored as markdown files. Spawn a sub-agent for each learning to check if it's relevant. </thinking>
LEARNINGS LOCATION - Check these exact folders:
docs/solutions/           <-- PRIMARY: Project-level learnings (created by /ce:compound)
├── performance-issues/
│   └── *.md
├── debugging-patterns/
│   └── *.md
├── configuration-fixes/
│   └── *.md
├── integration-issues/
│   └── *.md
├── deployment-issues/
│   └── *.md
└── [other-categories]/
    └── *.md
**Step 1: Find ALL learning markdown files**

Run these commands to get every learning file:

```bash
# PRIMARY LOCATION - Project learnings
find docs/solutions -name "*.md" -type f 2>/dev/null

# If docs/solutions doesn't exist, check alternate locations:
find .claude/docs -name "*.md" -type f 2>/dev/null
find ~/.claude/docs -name "*.md" -type f 2>/dev/null
```

**Step 2: Read frontmatter of each learning to filter**

Each learning file has YAML frontmatter with metadata. Read the first ~20 lines of each file to get:

```yaml
---
title: "N+1 Query Fix for Briefs"
category: performance-issues
tags: [activerecord, n-plus-one, includes, eager-loading]
module: Briefs
symptom: "Slow page load, multiple queries in logs"
root_cause: "Missing includes on association"
---
```

For each .md file, quickly scan its frontmatter:

```bash
# Read first 20 lines of each learning (frontmatter + summary)
head -20 docs/solutions/**/*.md
```

**Step 3: Filter - only spawn sub-agents for LIKELY relevant learnings**

Compare each learning's frontmatter against the plan:
- `tags:` - Do any tags match technologies/patterns in the plan?
- `category:` - Is this category relevant? (e.g., skip deployment-issues if plan is UI-only)
- `module:` - Does the plan touch this module?
- `symptom:` / `root_cause:` - Could this problem occur with the plan?

**SKIP learnings that are clearly not applicable:**
- Plan is frontend-only → skip `database-migrations/` learnings
- Plan is Python → skip `rails-specific/` learnings
- Plan has no auth → skip `authentication-issues/` learnings

**SPAWN sub-agents for learnings that MIGHT apply:**
- Any tag overlap with plan technologies
- Same category as plan domain
- Similar patterns or concerns
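The filter can be sketched as a cheap pre-check on the parsed frontmatter. Field names follow the example frontmatter shown earlier; the any-overlap threshold is an assumption:

```python
def learning_is_candidate(frontmatter: dict, plan_tech: set[str],
                          plan_categories: set[str]) -> bool:
    """Spawn a sub-agent only when tags or category overlap the plan.
    Deeper relevance analysis is left to the sub-agent itself."""
    tags = set(frontmatter.get("tags", []))
    category = frontmatter.get("category", "")
    return bool(tags & plan_tech) or category in plan_categories

plan_tech = {"caching", "redis", "activerecord"}
plan_categories = {"performance-issues", "configuration-fixes"}

stampede = {"category": "performance-issues", "tags": ["caching", "redis"]}
jwt = {"category": "authentication-issues", "tags": ["jwt", "devise"]}

print(learning_is_candidate(stampede, plan_tech, plan_categories))  # True
print(learning_is_candidate(jwt, plan_tech, plan_categories))       # False
```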

**Step 4: Spawn sub-agents for filtered learnings**

For each learning that passes the filter:
Task general-purpose: " LEARNING FILE: [full path to .md file]
  1. Read this learning file completely
  2. This learning documents a previously solved problem
Check if this learning applies to this plan:

[full plan content]

If relevant:
  • Explain specifically how it applies
  • Quote the key insight or solution
  • Suggest where/how to incorporate it
If NOT relevant after deeper analysis:
  • Say 'Not applicable: [reason]' "

**Example filtering:**

Found 15 learning files, plan is about "Rails API caching":

**SPAWN (likely relevant):**
  • docs/solutions/performance-issues/n-plus-one-queries.md # tags: [activerecord] ✓
  • docs/solutions/performance-issues/redis-cache-stampede.md # tags: [caching, redis] ✓
  • docs/solutions/configuration-fixes/redis-connection-pool.md # tags: [redis] ✓

**SKIP (clearly not applicable):**
  • docs/solutions/deployment-issues/heroku-memory-quota.md # not about caching
  • docs/solutions/frontend-issues/stimulus-race-condition.md # plan is API, not frontend
  • docs/solutions/authentication-issues/jwt-expiry.md # plan has no auth

**Spawn sub-agents in PARALLEL for all filtered learnings.**

**These learnings are institutional knowledge - applying them prevents repeating past mistakes.**

4. Launch Per-Section Research Agents


<thinking> For each major section in the plan, spawn dedicated sub-agents to research improvements. Use the Explore agent type for open-ended research. </thinking>
For each identified section, launch parallel research:
Task Explore: "Research best practices, patterns, and real-world examples for: [section topic].
Find:
- Industry standards and conventions
- Performance considerations
- Common pitfalls and how to avoid them
- Documentation and tutorials
Return concrete, actionable recommendations."
**Also use Context7 MCP for framework documentation.** For any technologies/frameworks mentioned in the plan, query Context7:
  • `mcp__plugin_compound-engineering_context7__resolve-library-id`: Find the library ID for [framework]
  • `mcp__plugin_compound-engineering_context7__query-docs`: Query documentation for specific patterns

**Use WebSearch for current best practices.** Search for recent (2024-2026) articles, blog posts, and documentation on topics in the plan.

5. Discover and Run ALL Review Agents


<thinking> Dynamically discover every available agent and run them ALL against the plan. Don't filter, don't skip, don't assume relevance. 40+ parallel agents is fine. Use everything available. </thinking>
**Step 1: Discover ALL available agents from ALL sources**

```bash
# 1. Project-local agents (highest priority - project-specific)
find .claude/agents -name "*.md" 2>/dev/null

# 2. User's global agents (~/.claude/)
find ~/.claude/agents -name "*.md" 2>/dev/null

# 3. compound-engineering plugin agents (all subdirectories)
find ~/.claude/plugins/cache/*/compound-engineering/*/agents -name "*.md" 2>/dev/null

# 4. ALL other installed plugins - check every plugin for agents
find ~/.claude/plugins/cache -path "*/agents/*.md" 2>/dev/null

# 5. Check installed_plugins.json to find all plugin locations
cat ~/.claude/plugins/installed_plugins.json

# 6. For local plugins (isLocal: true), check their source directories
# Parse installed_plugins.json and find local plugin paths
```


**Important:** Check EVERY source. Include agents from:
- Project `.claude/agents/`
- User's `~/.claude/agents/`
- compound-engineering plugin (but SKIP workflow/ agents - only use review/, research/, design/, docs/)
- ALL other installed plugins (agent-sdk-dev, frontend-design, etc.)
- Any local plugins

**For compound-engineering plugin specifically:**
- USE: `agents/review/*` (all reviewers)
- USE: `agents/research/*` (all researchers)
- USE: `agents/design/*` (design agents)
- USE: `agents/docs/*` (documentation agents)
- SKIP: `agents/workflow/*` (these are workflow orchestrators, not reviewers)
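The USE/SKIP rule for compound-engineering comes down to a path filter. A small sketch, where the directory layout mirrors the `find` commands above and the file names are illustrative:

```python
def usable_agents(paths: list[str]) -> list[str]:
    """Keep every discovered agent except compound-engineering's
    workflow orchestrators, which are not reviewers."""
    return [p for p in paths
            if not ("compound-engineering" in p and "/agents/workflow/" in p)]

found = [
    ".claude/agents/project-reviewer.md",
    "plugins/cache/x/compound-engineering/1.0/agents/review/security.md",
    "plugins/cache/x/compound-engineering/1.0/agents/workflow/orchestrator.md",
]
print(usable_agents(found))
```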

**Step 2: For each discovered agent, read its description**

Read the first few lines of each agent file to understand what it reviews/analyzes.

**Step 3: Launch ALL agents in parallel**

For EVERY agent discovered, launch a Task in parallel:
Task [agent-name]: "Review this plan using your expertise. Apply all your checks and patterns. Plan content: [full plan content]"

**CRITICAL RULES:**
- Do NOT filter agents by "relevance" - run them ALL
- Do NOT skip agents because they "might not apply" - let them decide
- Launch ALL agents in a SINGLE message with multiple Task tool calls
- 20, 30, 40 parallel agents is fine - use everything
- Each agent may catch something others miss
- The goal is MAXIMUM coverage, not efficiency

**Step 4: Also discover and run research agents**

Research agents (like `best-practices-researcher`, `framework-docs-researcher`, `git-history-analyzer`, `repo-research-analyst`) should also be run for relevant plan sections.


6. Wait for ALL Agents and Synthesize Everything


<thinking> Wait for ALL parallel agents to complete - skills, research agents, review agents, everything. Then synthesize all findings into a comprehensive enhancement. </thinking>
Collect outputs from ALL sources:
  1. Skill-based sub-agents - Each skill's full output (code examples, patterns, recommendations)
  2. Learnings/Solutions sub-agents - Relevant documented learnings from /ce:compound
  3. Research agents - Best practices, documentation, real-world examples
  4. Review agents - All feedback from every reviewer (architecture, security, performance, simplicity, etc.)
  5. Context7 queries - Framework documentation and patterns
  6. Web searches - Current best practices and articles
For each agent's findings, extract:
  • Concrete recommendations (actionable items)
  • Code patterns and examples (copy-paste ready)
  • Anti-patterns to avoid (warnings)
  • Performance considerations (metrics, benchmarks)
  • Security considerations (vulnerabilities, mitigations)
  • Edge cases discovered (handling strategies)
  • Documentation links (references)
  • Skill-specific patterns (from matched skills)
  • Relevant learnings (past solutions that apply - prevent repeating mistakes)
Deduplicate and prioritize:
  • Merge similar recommendations from multiple agents
  • Prioritize by impact (high-value improvements first)
  • Flag conflicting advice for human review
  • Group by plan section

7. Enhance Plan Sections


<thinking> Merge research findings back into the plan, adding depth without changing the original structure. </thinking>
Enhancement format for each section:

```markdown
## [Original Section Title]

[Original content preserved]

### Research Insights

**Best Practices:**
- [Concrete recommendation 1]
- [Concrete recommendation 2]

**Performance Considerations:**
- [Optimization opportunity]
- [Benchmark or metric to target]

**Implementation Details:**
// Concrete code example from research

**Edge Cases:**
- [Edge case 1 and how to handle]
- [Edge case 2 and how to handle]

**References:**
- [Documentation URL 1]
- [Documentation URL 2]
```

8. Add Enhancement Summary


At the top of the plan, add a summary section:
```markdown
## Enhancement Summary

**Deepened on:** [Date]
**Sections enhanced:** [Count]
**Research agents used:** [List]

### Key Improvements

1. [Major improvement 1]
2. [Major improvement 2]
3. [Major improvement 3]

### New Considerations Discovered

- [Important finding 1]
- [Important finding 2]
```

9. Update Plan File


Write the enhanced plan:
  • Preserve the original filename
  • Add a `-deepened` suffix if the user prefers a new file
  • Update any timestamps or metadata

Output Format


Update the plan file in place (or if the user requests a separate file, append `-deepened` after `-plan`, e.g., `2026-01-15-feat-auth-plan-deepened.md`).
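The naming rule can be sketched as a small helper. The fallback for paths without a `-plan` suffix is an assumption, not specified by this command:

```python
def deepened_path(plan_path: str) -> str:
    """Insert '-deepened' after '-plan' per the naming rule above;
    otherwise append it before the extension (assumed fallback)."""
    if "-plan.md" in plan_path:
        return plan_path.replace("-plan.md", "-plan-deepened.md")
    root, ext = plan_path.rsplit(".", 1)
    return f"{root}-deepened.{ext}"

print(deepened_path("docs/plans/2026-01-15-feat-auth-plan.md"))
# docs/plans/2026-01-15-feat-auth-plan-deepened.md
```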

Quality Checks


Before finalizing:
  • All original content preserved
  • Research insights clearly marked and attributed
  • Code examples are syntactically correct
  • Links are valid and relevant
  • No contradictions between sections
  • Enhancement summary accurately reflects changes

Post-Enhancement Options


After writing the enhanced plan, use the AskUserQuestion tool to present these options:
Question: "Plan deepened at `[plan_path]`. What would you like to do next?"

Options:
  1. View diff - Show what was added/changed
  2. Run `/technical_review` - Get feedback from reviewers on the enhanced plan
  3. Start `/ce:work` - Begin implementing this enhanced plan
  4. Deepen further - Run another round of research on specific sections
  5. Revert - Restore the original plan (if a backup exists)

Based on selection:
  • View diff → Run `git diff [plan_path]` or show before/after
  • `/technical_review` → Call the /technical_review command with the plan file path
  • `/ce:work` → Call the /ce:work command with the plan file path
  • Deepen further → Ask which sections need more research, then re-run those agents
  • Revert → Restore from git or backup

Example Enhancement


**Before (from /workflows:plan):**

```markdown
## Technical Approach

Use React Query for data fetching with optimistic updates.
```

**After (from /workflows:deepen-plan):**

````markdown
## Technical Approach

Use React Query for data fetching with optimistic updates.

### Research Insights

**Best Practices:**
- Configure `staleTime` and `cacheTime` based on data freshness requirements
- Use `queryKey` factories for consistent cache invalidation
- Implement error boundaries around query-dependent components

**Performance Considerations:**
- Enable `refetchOnWindowFocus: false` for stable data to reduce unnecessary requests
- Use the `select` option to transform and memoize data at the query level
- Consider `placeholderData` for instant perceived loading

**Implementation Details:**

```typescript
// Recommended query configuration
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 5 * 60 * 1000, // 5 minutes
      retry: 2,
      refetchOnWindowFocus: false,
    },
  },
});
```

**Edge Cases:**
- Handle race conditions with `cancelQueries` on component unmount
- Implement retry logic for transient network failures
- Consider offline support with `persistQueryClient`

**References:**
````
NEVER CODE! Just research and enhance the plan.