create-meta-prompts
<objective>
Create prompts optimized for Claude-to-Claude communication in multi-stage workflows. Outputs are structured with XML and metadata for efficient parsing by subsequent prompts.
Every execution produces a SUMMARY.md for quick human scanning without reading full outputs.
Each prompt gets its own folder in .prompts/ with its output artifacts, enabling clear provenance and chain detection.
</objective>
<quick_start>
<workflow>
- Intake: Determine purpose (Do/Plan/Research/Refine), gather requirements
- Chain detection: Check for existing research/plan files to reference
- Generate: Create prompt using purpose-specific patterns
- Save: Create folder in .prompts/{number}-{topic}-{purpose}/
- Present: Show decision tree for running
- Execute: Run prompt(s) with dependency-aware execution engine
- Summarize: Create SUMMARY.md for human scanning </workflow>
<folder_structure>
.prompts/
├── 001-auth-research/
│ ├── completed/
│ │ └── 001-auth-research.md # Prompt (archived after run)
│ ├── auth-research.md # Full output (XML for Claude)
│ └── SUMMARY.md # Executive summary (markdown for human)
├── 002-auth-plan/
│ ├── completed/
│ │ └── 002-auth-plan.md
│ ├── auth-plan.md
│ └── SUMMARY.md
├── 003-auth-implement/
│ ├── completed/
│ │ └── 003-auth-implement.md
│ └── SUMMARY.md # Do prompts create code elsewhere
├── 004-auth-research-refine/
│ ├── completed/
│ │ └── 004-auth-research-refine.md
│ ├── archive/
│ │ └── auth-research-v1.md # Previous version
│ └── SUMMARY.md
</folder_structure>
</quick_start>
<context>
Prompts directory: !`[ -d ./.prompts ] && echo "exists" || echo "missing"`
Existing research/plans: !`find ./.prompts -name "*-research.md" -o -name "*-plan.md" 2>/dev/null | head -10`
Next prompt number: !`ls -d ./.prompts/*/ 2>/dev/null | wc -l | xargs -I {} expr {} + 1`
</context>
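The dynamic-context commands above (folder count → next zero-padded number, folder naming) can be mirrored in a small helper. A minimal sketch; the function names are illustrative, not part of the skill:

```python
from pathlib import Path

def next_prompt_number(prompts_dir: str = ".prompts") -> str:
    """Count existing prompt folders and return the next zero-padded number."""
    root = Path(prompts_dir)
    count = sum(1 for p in root.iterdir() if p.is_dir()) if root.is_dir() else 0
    return f"{count + 1:03d}"  # 0 folders -> "001", 12 folders -> "013"

def prompt_folder(number: str, topic: str, purpose: str) -> str:
    """Build the .prompts/{number}-{topic}-{purpose}/ folder name."""
    return f".prompts/{number}-{topic}-{purpose}"
```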
<automated_workflow>
<step_0_intake_gate>
<title>Adaptive Requirements Gathering</title>
<critical_first_action>
BEFORE analyzing anything, check if context was provided.
IF no context provided (skill invoked without description):
→ IMMEDIATELY use AskUserQuestion with:
- header: "Purpose"
- question: "What is the purpose of this prompt?"
- options:
- "Do" - Execute a task, produce an artifact
- "Plan" - Create an approach, roadmap, or strategy
- "Research" - Gather information or understand something
- "Refine" - Improve an existing research or plan output
After selection, ask: "Describe what you want to accomplish" (they select "Other" to provide free text).
IF context was provided:
→ Check if purpose is inferable from keywords:
- implement, build, create, fix, add, refactor → Do
- plan, roadmap, approach, strategy, decide, phases → Plan
- research, understand, learn, gather, analyze, explore → Research
- refine, improve, deepen, expand, iterate, update → Refine
→ If unclear, ask the Purpose question above as first contextual question
→ If clear, proceed to adaptive_analysis with inferred purpose
</critical_first_action>
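The keyword routing above could be sketched as a simple lookup. The keyword lists come from the mapping above; the substring matching is deliberately naive (real intake should tokenize and confirm ambiguous hits via the Purpose question):

```python
PURPOSE_KEYWORDS = {
    "Do": ["implement", "build", "create", "fix", "add", "refactor"],
    "Plan": ["plan", "roadmap", "approach", "strategy", "decide", "phases"],
    "Research": ["research", "understand", "learn", "gather", "analyze", "explore"],
    "Refine": ["refine", "improve", "deepen", "expand", "iterate", "update"],
}

def infer_purpose(description: str):
    """Return the first purpose whose keyword appears in the description, else None."""
    words = description.lower()
    for purpose, keywords in PURPOSE_KEYWORDS.items():
        if any(kw in words for kw in keywords):
            return purpose
    return None  # unclear -> fall back to asking the Purpose question
```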
<adaptive_analysis>
Extract and infer:
- Purpose: Do, Plan, Research, or Refine
- Topic identifier: Kebab-case identifier for file naming (e.g., auth, stripe-payments)
- Complexity: Simple vs complex (affects prompt depth)
- Prompt structure: Single vs multiple prompts
- Target (Refine only): Which existing output to improve
If topic identifier not obvious, ask:
- header: "Topic"
- question: "What topic/feature is this for? (used for file naming)"
- Let user provide via "Other" option
- Enforce kebab-case (convert spaces/underscores to hyphens)
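The kebab-case enforcement above could look like this minimal sketch (the helper name is illustrative):

```python
import re

def to_kebab_case(raw: str) -> str:
    """Normalize a topic identifier: lowercase, spaces/underscores to hyphens."""
    s = raw.strip().lower()
    s = re.sub(r"[ _]+", "-", s)       # spaces/underscores -> hyphens
    s = re.sub(r"[^a-z0-9-]", "", s)   # drop anything else
    return re.sub(r"-{2,}", "-", s).strip("-")
```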
For Refine purpose, also identify target output from .prompts/*/ to improve.
</adaptive_analysis>
<chain_detection>
Scan .prompts/*/ for existing *-research.md and *-plan.md files.
If found:
- List them: "Found existing files: auth-research.md (in 001-auth-research/), stripe-plan.md (in 005-stripe-plan/)"
- Use AskUserQuestion:
- header: "Reference"
- question: "Should this prompt reference any existing research or plans?"
- options: List found files + "None"
- multiSelect: true
Match by topic keyword when possible (e.g., "auth plan" → suggest auth-research.md).
</chain_detection>
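Chain detection as described above might be sketched like this. The helper name and the topic-first ranking are assumptions; a fuller version would also exclude the prompt file itself from the matches:

```python
from pathlib import Path

def find_chain_candidates(topic: str, prompts_dir: str = ".prompts"):
    """Find existing *-research.md / *-plan.md outputs, same-topic files first."""
    root = Path(prompts_dir)
    if not root.is_dir():
        return []
    found = [p for folder in sorted(root.iterdir()) if folder.is_dir()
             for p in folder.glob("*.md")
             if p.name.endswith(("-research.md", "-plan.md"))]
    # Rank same-topic files first, e.g. topic "auth" -> auth-research.md
    return sorted(found, key=lambda p: (topic not in p.name, p.name))
```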
<contextual_questioning>
Generate 2-4 questions using AskUserQuestion based on purpose and gaps.
Load questions from: references/question-bank.md
Route by purpose:
- Do → artifact type, scope, approach
- Plan → plan purpose, format, constraints
- Research → depth, sources, output format
- Refine → target selection, feedback, preservation </contextual_questioning>
<decision_gate>
After receiving answers, present decision gate using AskUserQuestion:
- header: "Ready"
- question: "Ready to create the prompt?"
- options:
- "Proceed" - Create the prompt with current context
- "Ask more questions" - I have more details to clarify
- "Let me add context" - I want to provide additional information
Loop until "Proceed" selected.
</decision_gate>
<finalization>
After "Proceed" selected, state confirmation:
"Creating a {purpose} prompt for: {topic}
Folder: .prompts/{number}-{topic}-{purpose}/
References: {list any chained files}"
Then proceed to generation.
</finalization>
</step_0_intake_gate>
<step_1_generate>
<title>Generate Prompt</title>
Load purpose-specific patterns:
- Do: references/do-patterns.md
- Plan: references/plan-patterns.md
- Research: references/research-patterns.md
- Refine: references/refine-patterns.md
Load intelligence rules: references/intelligence-rules.md
<prompt_structure>
All generated prompts include:
- Objective: What to accomplish, why it matters
- Context: Referenced files (@), dynamic context (!)
- Requirements: Specific instructions for the task
- Output specification: Where to save, what structure
- Metadata requirements: For research/plan outputs, specify XML metadata structure
- SUMMARY.md requirement: All prompts must create a SUMMARY.md file
- Success criteria: How to know it worked
For Research and Plan prompts, output must include:
- <confidence> - How confident in findings
- <dependencies> - What's needed to proceed
- <open_questions> - What remains uncertain
- <assumptions> - What was assumed
All prompts must create SUMMARY.md with:
- One-liner - Substantive description of outcome
- Version - v1 or iteration info
- Key Findings - Actionable takeaways
- Files Created - (Do prompts only)
- Decisions Needed - What requires user input
- Blockers - External impediments
- Next Step - Concrete forward action </prompt_structure>
<file_creation>
- Create folder: .prompts/{number}-{topic}-{purpose}/
- Create subfolder: completed/
- Write prompt to: .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md
- Prompt instructs output to: .prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md
</file_creation> </step_1_generate>
<step_2_present>
<title>Present Decision Tree</title>
After saving prompt(s), present inline (not AskUserQuestion):
<single_prompt_presentation>
Prompt created: .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md
What's next?
1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other
Choose (1-4): _</single_prompt_presentation>
<multi_prompt_presentation>
Prompts created:
- .prompts/001-auth-research/001-auth-research.md
- .prompts/002-auth-plan/002-auth-plan.md
- .prompts/003-auth-implement/003-auth-implement.md
Detected execution order: Sequential (002 references 001 output, 003 references 002 output)
What's next?
1. Run all prompts (sequential)
2. Review/edit prompts first
3. Save for later
4. Other
Choose (1-4): _</multi_prompt_presentation>
</step_2_present>
<step_3_execute>
<title>Execution Engine</title>
<execution_modes>
<single_prompt>
Straightforward execution of one prompt.
- Read prompt file contents
- Spawn Task agent with subagent_type="general-purpose"
- Include in task prompt:
- The complete prompt contents
- Output location: .prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md
- Wait for completion
- Validate output (see validation section)
- Archive prompt to completed/ subfolder
- Report results with next-step options </single_prompt>
<sequential_execution>
For chained prompts where each depends on previous output.
- Build execution queue from dependency order
- For each prompt in queue:
  a. Read prompt file
  b. Spawn Task agent
  c. Wait for completion
  d. Validate output
  e. If validation fails → stop, report failure, offer recovery options
  f. If success → archive prompt, continue to next
- Report consolidated results
<progress_reporting>
Show progress during execution:
Executing 1/3: 001-auth-research... ✓
Executing 2/3: 002-auth-plan... ✓
Executing 3/3: 003-auth-implement... (running)</progress_reporting>
</sequential_execution>
<parallel_execution>
For independent prompts with no dependencies.
- Read all prompt files
- CRITICAL: Spawn ALL Task agents in a SINGLE message
- This is required for true parallel execution
- Each task includes its output location
- Wait for all to complete
- Validate all outputs
- Archive all prompts
- Report consolidated results (successes and failures)
<failure_handling>
Unlike sequential, parallel continues even if some fail:
- Collect all results
- Archive successful prompts
- Report failures with details
- Offer to retry failed prompts </failure_handling> </parallel_execution>
<mixed_dependencies>
For complex DAGs (e.g., two parallel research → one plan).
- Analyze dependency graph from @ references
- Group into execution layers:
- Layer 1: No dependencies (run parallel)
- Layer 2: Depends only on layer 1 (run after layer 1 completes)
- Layer 3: Depends on layer 2, etc.
- Execute each layer:
- Parallel within layer
- Sequential between layers
- Stop if any dependency fails (downstream prompts can't run)
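The layering described above is a topological sort grouped by depth: each pass pulls every prompt whose dependencies are already satisfied into one parallel layer. A minimal sketch (names illustrative); an empty "ready" set with prompts remaining means a cycle:

```python
def execution_layers(deps):
    """Group prompts into layers; layer N depends only on layers < N.

    deps maps prompt -> set of prompts it depends on. Raises on cycles.
    """
    remaining = {p: set(d) for p, d in deps.items()}
    layers = []
    while remaining:
        ready = sorted(p for p, d in remaining.items() if not d)
        if not ready:
            raise ValueError("Dependency cycle detected")
        layers.append(ready)           # run this layer in parallel
        for p in ready:
            del remaining[p]
        for d in remaining.values():   # mark the layer's outputs as satisfied
            d.difference_update(ready)
    return layers
```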
<dependency_detection>
<automatic_detection>
Scan prompt contents for @ references to determine dependencies:
- Parse each prompt for @.prompts/{number}-{topic}/ patterns
- Build dependency graph
- Detect cycles (error if found)
- Determine execution order
<inference_rules>
If no explicit @ references found, infer from purpose:
- Research prompts: No dependencies (can parallel)
- Plan prompts: Depend on same-topic research
- Do prompts: Depend on same-topic plan
Override with explicit references when present.
</inference_rules>
</automatic_detection>
<missing_dependencies>
If a prompt references output that doesn't exist:
- Check if it's another prompt in this session (will be created)
- Check if it exists in .prompts/*/ (already completed)
- If truly missing:
- Warn user: "002-auth-plan references auth-research.md which doesn't exist"
- Offer: Create the missing research prompt first? / Continue anyway? / Cancel? </missing_dependencies> </dependency_detection>
</mixed_dependencies> </execution_modes>
<validation>
<output_validation>
- File exists: Check output file was created
- Not empty: File has content (> 100 chars)
- Metadata present (for research/plan): Check for required XML tags <confidence>, <dependencies>, <open_questions>, <assumptions>
- SUMMARY.md exists: Check SUMMARY.md was created
- SUMMARY.md complete: Has required sections (Key Findings, Decisions Needed, Blockers, Next Step)
- One-liner is substantive: Not generic like "Research completed"
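The checks above can be sketched as a validator that returns a problem list (empty means pass). Names and message formats are illustrative; the generic-one-liner check is omitted since it needs judgment rather than string matching:

```python
from pathlib import Path

REQUIRED_TAGS = ["<confidence>", "<dependencies>", "<open_questions>", "<assumptions>"]
REQUIRED_SECTIONS = ["Key Findings", "Decisions Needed", "Blockers", "Next Step"]

def validate_output(output_file, summary_file, purpose):
    """Run the post-execution checks; return a list of problems."""
    problems = []
    out = Path(output_file)
    if not out.is_file():
        return [f"{output_file}: output file not created"]
    text = out.read_text()
    if len(text) <= 100:
        problems.append(f"{output_file}: file too short (<= 100 chars)")
    if purpose.lower() in ("research", "plan"):
        problems += [f"{output_file}: missing {t}" for t in REQUIRED_TAGS if t not in text]
    summ = Path(summary_file)
    if not summ.is_file():
        problems.append(f"{summary_file}: SUMMARY.md not created")
    else:
        s = summ.read_text()
        problems += [f"{summary_file}: missing section '{x}'"
                     for x in REQUIRED_SECTIONS if x not in s]
    return problems
```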
<validation_failure>
If validation fails:
- Report what's missing
- Offer options:
- Retry the prompt
- Continue anyway (for non-critical issues)
- Stop and investigate </validation_failure> </output_validation> </validation>
<failure_handling>
<sequential_failure>
Stop the chain immediately:
✗ Failed at 2/3: 002-auth-plan
Completed:
- 001-auth-research ✓ (archived)
Failed:
- 002-auth-plan: Output file not created
Not started:
- 003-auth-implement
What's next?
1. Retry 002-auth-plan
2. View error details
3. Stop here (keep completed work)
4. Other</sequential_failure>
<parallel_failure>
Continue others, report all results:
Parallel execution completed with errors:
✓ 001-api-research (archived)
✗ 002-db-research: Validation failed - missing <confidence> tag
✓ 003-ui-research (archived)
What's next?
1. Retry failed prompt (002)
2. View error details
3. Continue without 002
4. Other</parallel_failure>
</failure_handling>
<archiving>
<archive_timing>
- **Sequential**: Archive each prompt immediately after successful completion
- Provides clear state if execution stops mid-chain
- **Parallel**: Archive all at end after collecting results
- Keeps prompts available for potential retry
</archive_timing>
<archive_operation>
Move prompt file to completed subfolder:
```bash
mv .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md \
   .prompts/{number}-{topic}-{purpose}/completed/
```
Output file stays in place (not moved).
</archive_operation>
</archiving>
<result_presentation>
<single_result>
✓ Executed: 001-auth-research
✓ Created: .prompts/001-auth-research/SUMMARY.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Auth Research Summary
JWT with jose library and httpOnly cookies recommended
Key Findings
• jose outperforms jsonwebtoken with better TypeScript support
• httpOnly cookies required (localStorage is XSS vulnerable)
• Refresh rotation is OWASP standard
Decisions Needed
None - ready for planning
Blockers
None
Next Step
Create auth-plan.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
What's next?
- Create planning prompt (auth-plan)
- View full research output
- Done
- Other
Display the actual SUMMARY.md content inline so user sees findings without opening files.
</single_result>
<chain_result>✓ Chain completed: auth workflow
Results:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
001-auth-research
JWT with jose library and httpOnly cookies recommended
Decisions: None • Blockers: None
002-auth-plan
4-phase implementation: types → JWT core → refresh → tests
Decisions: Approve 15-min token expiry • Blockers: None
003-auth-implement
JWT middleware complete with 6 files created
Decisions: Review before Phase 2 • Blockers: None
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
All prompts archived. Full summaries in .prompts/*/SUMMARY.md
What's next?
- Review implementation
- Run tests
- Create new prompt chain
- Other
For chains, show condensed one-liner from each SUMMARY.md with decisions/blockers flagged.
</chain_result>
</result_presentation>
<special_cases>
<re_running_completed>
If user wants to re-run an already-completed prompt:
1. Check if prompt is in `completed/` subfolder
2. Move it back to parent folder
3. Optionally backup existing output: `{output}.bak`
4. Execute normally
</re_running_completed>
<output_conflicts>
If output file already exists:
1. For re-runs: Backup existing → `{output}.bak`
2. For new runs: Should not happen (unique numbering)
3. If conflict detected: Ask user - Overwrite? / Rename? / Cancel?
</output_conflicts>
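The re-run backup step above could be sketched as follows (helper name illustrative; `Path.replace` overwrites any stale `.bak` from an earlier re-run):

```python
from pathlib import Path

def backup_existing_output(output_file):
    """Before a re-run, move an existing output aside as {output}.bak."""
    src = Path(output_file)
    if not src.exists():
        return None  # nothing to back up
    bak = src.with_name(src.name + ".bak")
    src.replace(bak)
    return str(bak)
```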
<commit_handling>
After successful execution:
1. Do NOT auto-commit (user controls git workflow)
2. Mention what files were created/modified
3. User can commit when ready
Exception: If user explicitly requests commit, stage and commit:
- Output files created
- Prompts archived
- Any implementation changes (for Do prompts)
</commit_handling>
<recursive_prompts>
If a prompt's output includes instructions to create more prompts:
1. This is advanced usage - don't auto-detect
2. Present the output to user
3. User can invoke skill again to create follow-up prompts
4. Maintains user control over prompt creation
</recursive_prompts>
</special_cases>
</step_3_execute>
</automated_workflow>
<reference_guides>
**Prompt patterns by purpose:**
- [references/do-patterns.md](references/do-patterns.md) - Execution prompts + output structure
- [references/plan-patterns.md](references/plan-patterns.md) - Planning prompts + plan.md structure
- [references/research-patterns.md](references/research-patterns.md) - Research prompts + research.md structure
- [references/refine-patterns.md](references/refine-patterns.md) - Iteration prompts + versioning
**Shared templates:**
- [references/summary-template.md](references/summary-template.md) - SUMMARY.md structure and field requirements
- [references/metadata-guidelines.md](references/metadata-guidelines.md) - Confidence, dependencies, open questions, assumptions
**Supporting references:**
- [references/question-bank.md](references/question-bank.md) - Intake questions by purpose
- [references/intelligence-rules.md](references/intelligence-rules.md) - Extended thinking, parallel tools, depth decisions
</reference_guides>
<success_criteria>
**Prompt Creation:**
- Intake gate completed with purpose and topic identified
- Chain detection performed, relevant files referenced
- Prompt generated with correct structure for purpose
- Folder created in `.prompts/` with correct naming
- Output file location specified in prompt
- SUMMARY.md requirement included in prompt
- Metadata requirements included for Research/Plan outputs
- Quality controls included for Research outputs (verification checklist, QA, pre-submission)
- Streaming write instructions included for Research outputs
- Decision tree presented
**Execution (if user chooses to run):**
- Dependencies correctly detected and ordered
- Prompts executed in correct order (sequential/parallel/mixed)
- Output validated after each completion
- SUMMARY.md created with all required sections
- One-liner is substantive (not generic)
- Failed prompts handled gracefully with recovery options
- Successful prompts archived to `completed/` subfolder
- SUMMARY.md displayed inline in results
- Results presented with decisions/blockers flagged
**Research Quality (for Research prompts):**
- Verification checklist completed
- Quality report distinguishes verified from assumed claims
- Sources consulted listed with URLs
- Confidence levels assigned to findings
- Critical claims verified with official documentation
</success_criteria>