automatic-stateful-prompt-improver

Automatic Stateful Prompt Improver

MANDATORY AUTOMATIC BEHAVIOR

When this skill is active, I MUST follow these rules:

Auto-Optimization Triggers

I AUTOMATICALLY call `mcp__prompt-learning__optimize_prompt` BEFORE responding when:
  1. Complex task (multi-step, requires reasoning)
  2. Technical output (code, analysis, structured data)
  3. Reusable content (system prompts, templates, instructions)
  4. Explicit request ("improve", "better", "optimize")
  5. Ambiguous requirements (underspecified, multiple interpretations)
  6. Precision-critical (code, legal, medical, financial)

Auto-Optimization Process

1. INTERCEPT the user's request
2. CALL: mcp__prompt-learning__optimize_prompt
   - prompt: [user's original request]
   - domain: [inferred domain]
   - max_iterations: [3-20 based on complexity]
3. RECEIVE: optimized prompt + improvement details
4. INFORM user briefly: "I've refined your request for [reason]"
5. PROCEED with the OPTIMIZED version
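The five steps above can be sketched as a single function, assuming a generic MCP client that exposes `call_tool(name, arguments)`. The client interface and the response keys are assumptions; only the tool name and its parameters come from this skill:

```python
# Sketch of the intercept-optimize-proceed flow. `client.call_tool` and
# the response keys ("optimized_prompt", "reason") are assumed, not
# defined by this skill; the tool name and arguments are.

def auto_optimize(client, user_request: str, domain: str, max_iterations: int) -> str:
    result = client.call_tool(
        "mcp__prompt-learning__optimize_prompt",
        {
            "prompt": user_request,            # user's original request
            "domain": domain,                  # inferred domain
            "max_iterations": max_iterations,  # 3-20 based on complexity
        },
    )
    # Step 4: inform the user briefly, then proceed with the optimized version.
    print(f"I've refined your request for {result['reason']}")
    return result["optimized_prompt"]
```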

Do NOT Optimize

  • Simple questions ("what is X?")
  • Direct commands ("run npm install")
  • Conversational responses ("hello", "thanks")
  • File operations without reasoning
  • Already-optimized prompts

Learning Loop (Post-Response)

After completing ANY significant task:
1. ASSESS: Did the response achieve the goal?
2. CALL: mcp__prompt-learning__record_feedback
   - prompt_id: [from optimization response]
   - success: [true/false]
   - quality_score: [0.0-1.0]
3. This enables future retrievals to learn from outcomes
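The feedback call can be sketched the same way, assuming a generic `call_tool(name, arguments)` client; only the tool name and parameter names come from this skill:

```python
# Sketch of the post-response learning loop. The client shape is an
# assumption; the tool name, parameters, and score range are from the doc.

def record_outcome(client, prompt_id: str, success: bool, quality_score: float) -> None:
    if not 0.0 <= quality_score <= 1.0:
        raise ValueError("quality_score must be between 0.0 and 1.0")
    client.call_tool(
        "mcp__prompt-learning__record_feedback",
        {
            "prompt_id": prompt_id,          # from the optimization response
            "success": success,              # did the response achieve the goal?
            "quality_score": quality_score,  # 0.0-1.0
        },
    )
```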

Quick Reference

Iteration Decision

| Factor | Low (3-5) | Medium (5-10) | High (10-20) |
| --- | --- | --- | --- |
| Complexity | Simple | Multi-step | Agent/pipeline |
| Ambiguity | Clear | Some | Underspecified |
| Domain | Known | Moderate | Novel |
| Stakes | Low | Moderate | Critical |
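One way to turn this matrix into a concrete budget is to average the factor levels and map the result into a band. The scoring scheme and band midpoints below are illustrative assumptions, not part of the skill:

```python
# Maps the four decision factors to an iteration budget. The level
# scores and band midpoints (4, 8, 15) are assumptions chosen to sit
# inside the Low/Medium/High ranges above.

LEVELS = {"low": 0, "medium": 1, "high": 2}
BUDGETS = [4, 8, 15]  # midpoints of the 3-5, 5-10, 10-20 bands

def max_iterations(complexity: str, ambiguity: str, domain: str, stakes: str) -> int:
    scores = [LEVELS[factor] for factor in (complexity, ambiguity, domain, stakes)]
    band = round(sum(scores) / len(scores))  # average level, rounded to a band
    return BUDGETS[band]

print(max_iterations("low", "low", "low", "low"))        # 4
print(max_iterations("high", "high", "medium", "high"))  # 15
```

In practice a single critical factor (e.g. high stakes) might justify overriding the average toward the high band.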

Convergence (When to Stop)

  • Improvement < 1% for 3 iterations
  • User satisfied
  • Token budget exhausted
  • 20 iterations reached
  • Validation score > 0.95
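The measurable stopping criteria can be sketched as a predicate over the score history (user satisfaction is judged separately). Treating "improvement < 1%" as an absolute delta between validation scores is an assumption:

```python
# Sketch of the convergence check. `scores` holds one validation score
# per completed iteration; interpreting the 1% threshold as an absolute
# score delta is an assumption.

def should_stop(scores: list[float], tokens_left: int, max_iter: int = 20) -> bool:
    if len(scores) >= max_iter or tokens_left <= 0:
        return True
    if scores and scores[-1] > 0.95:  # validation score threshold
        return True
    if len(scores) >= 4:
        recent = scores[-4:]  # covers the last 3 improvement steps
        if all(b - a < 0.01 for a, b in zip(recent, recent[1:])):
            return True
    return False

print(should_stop([0.50, 0.505, 0.508, 0.509], tokens_left=1000))  # True (stalled)
print(should_stop([0.5, 0.6], tokens_left=1000))                   # False
```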

Performance Expectations

| Scenario | Improvement | Iterations |
| --- | --- | --- |
| Simple task | 10-20% | 3-5 |
| Complex reasoning | 20-40% | 10-15 |
| Agent/pipeline | 30-50% | 15-20 |
| With history | +10-15% bonus | Varies |

Anti-Patterns

Over-Optimization

| What it looks like | Why it's wrong |
| --- | --- |
| Prompt becomes overly complex with many constraints | Causes brittleness, model confusion, token waste |

Instead: Apply Occam's Razor - simplest sufficient prompt wins

Template Obsession

| What it looks like | Why it's wrong |
| --- | --- |
| Focusing on templates rather than task understanding | Templates don't generalize; understanding does |

Instead: Focus on WHAT the task requires, not HOW to format it

Iteration Without Measurement

| What it looks like | Why it's wrong |
| --- | --- |
| Multiple rewrites without tracking improvements | Can't know if changes help without metrics |

Instead: Always define success criteria before optimizing

Ignoring Model Capabilities

| What it looks like | Why it's wrong |
| --- | --- |
| Assuming the model can't do things it can | Over-scaffolding wastes tokens |

Instead: Test capabilities before heavy prompting

Reference Files

Load for detailed implementations:
| File | Contents |
| --- | --- |
| references/optimization-techniques.md | APE, OPRO, CoT, instruction rewriting, constraint engineering |
| references/learning-architecture.md | Warm start, embedding retrieval, MCP setup, drift detection |
| references/iteration-strategy.md | Decision matrices, complexity scoring, convergence algorithms |

Goal: Simplest prompt that achieves the outcome reliably. Optimize for clarity, specificity, and measurable improvement.