# enhance-prompts
Analyze prompts for clarity, structure, examples, and output reliability.

## Parse Arguments

```javascript
const args = '$ARGUMENTS'.split(' ').filter(Boolean);
const targetPath = args.find(a => !a.startsWith('--')) || '.';
const fix = args.includes('--fix');
```

## Differentiation from enhance-agent-prompts

| Skill | Focus | Use When |
|-------|-------|----------|
| enhance-prompts | Prompt quality (clarity, structure, examples) | General prompts, system prompts, templates |
| enhance-agent-prompts | Agent config (frontmatter, tools, model) | Agent files with YAML frontmatter |

## Workflow

1. **Run Analyzer** - Execute the JavaScript analyzer to get findings:

   ```bash
   node -e "const a = require('./lib/enhance/prompt-analyzer.js'); console.log(JSON.stringify(a.analyzeAllPrompts('.'), null, 2));"
   ```

   For a specific path: `a.analyzeAllPrompts('./plugins/enhance')`
   For a single file: `a.analyzePrompt('./path/to/file.md')`
2. **Parse Results** - The analyzer returns JSON with `summary` and `findings`
3. **Filter** - Apply certainty filtering based on the `--verbose` flag
4. **Report** - Format findings as markdown output
5. **Fix** - If the `--fix` flag is set, apply auto-fixes from findings

The JavaScript analyzer (`lib/enhance/prompt-analyzer.js`) implements all detection patterns, including AST-based code validation. The patterns below are reference documentation.
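The filter and report steps (3-4) can be sketched in JavaScript. The field names on each finding object (`certainty`, `message`, `file`, `line`, `fix`) are assumptions for illustration; match them to what `prompt-analyzer.js` actually returns:

```javascript
// Sketch of workflow steps 3-4: filter findings by certainty, format as markdown.
// Assumes each finding looks like { certainty, message, file, line, fix }.
function filterFindings(findings, verbose) {
  // Without --verbose, keep only HIGH and MEDIUM certainty findings.
  if (verbose) return findings;
  return findings.filter(f => f.certainty === 'HIGH' || f.certainty === 'MEDIUM');
}

function formatReport(findings) {
  const rows = findings.map(
    f => `| ${f.message} | ${f.file}:${f.line} | ${f.fix || '-'} | ${f.certainty} |`
  );
  return ['| Issue | Location | Fix | Certainty |', '|---|---|---|---|', ...rows].join('\n');
}
```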

## Prompt Engineering Knowledge Reference

### System Prompt Structure

Effective system prompts include: Role/Identity, Capabilities & Constraints, Instruction Priority, Output Format, Behavioral Directives, Examples, Error Handling.

Minimal template:

```xml
<system>
You are [ROLE]. [PURPOSE].
Key constraints: [CONSTRAINTS]
Output format: [FORMAT]
When uncertain: [HANDLING]
</system>
```

### XML Tags (Claude-Specific)

Claude is fine-tuned for XML tags. Use: `<role>`, `<constraints>`, `<output_format>`, `<examples>`, `<instructions>`, `<context>`

```xml
<constraints>
- Maximum response length: 500 words
- Use only Python 3.10+ syntax
</constraints>
```

### Few-Shot Examples

- 2-5 examples is optimal (research-backed)
- Include edge cases and ensure format consistency
- Start zero-shot, add examples only if needed
- Show both good AND bad examples when relevant

### Chain-of-Thought (CoT)

| Use CoT | Don't Use CoT |
|---------|---------------|
| Complex multi-step reasoning | Simple factual questions |
| Math and logic problems | Classification tasks |
| Code debugging | When the model has built-in reasoning |

**Key**: Modern models (Claude 4.x, o1/o3) perform CoT internally. "Think step by step" is redundant.

Role Prompting

角色提示法

Helps: Creative tasks, tone/style, roleplay Doesn't help: Accuracy tasks, factual retrieval, complex reasoning
Better: "Approach systematically, showing work" vs "You are an expert"
适用场景: 创意任务、语气/风格调整、角色扮演 不适用场景: 准确性任务、事实检索、复杂推理
更好的表述:“系统地处理问题,展示推导过程” 而非 “你是专家”

### Instruction Hierarchy

Priority: System > Developer > User > Retrieved Content

Include explicit priority in prompts with multiple constraint sources.

### Negative Prompting

Positive alternatives are more effective than negatives:

| Less Effective | More Effective |
|----------------|----------------|
| "Don't use markdown" | "Use prose paragraphs" |
| "Don't be vague" | "Use specific language" |

### Structured Output

- Prompt-based: ~35.9% reliability
- Schema enforcement: 100% reliability
- Always provide a schema example and validate output
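The reliability gap above is why the analyzer flags JSON requests that lack a schema. A minimal validation sketch (the required-key list here is illustrative, not the analyzer's actual schema):

```javascript
// Parse a model's JSON output and check it against a minimal expected shape.
// `requiredKeys` is an illustrative contract, not a full JSON Schema check.
function validateOutput(raw, requiredKeys) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: 'invalid JSON' };
  }
  const missing = requiredKeys.filter(k => !(k in parsed));
  return missing.length
    ? { ok: false, error: `missing keys: ${missing.join(', ')}` }
    : { ok: true, value: parsed };
}
```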

### Context Window Optimization

**Lost-in-the-Middle**: Models weigh the beginning and end more heavily.
Place critical constraints at the start, examples in the middle, and error handling at the end.

### Extended Thinking

High-level instructions ("Think deeply") outperform step-by-step guidance. "Think step-by-step" is redundant with modern models.

### Anti-Patterns Quick Reference

| Anti-Pattern | Problem | Fix |
|--------------|---------|-----|
| Vague references | "The above code" loses context | Quote specifically |
| Negative-only | "Don't do X" without alternative | State what TO do |
| Aggressive emphasis | "CRITICAL: MUST" | Use normal language |
| Redundant CoT | Wastes tokens | Let the model manage it |
| Critical info buried | Lost-in-the-middle | Place at start/end |

## Detection Patterns

### 1. Clarity Issues (HIGH Certainty)

**Vague Instructions**: "usually", "sometimes", "try to", "if possible", "might", "could"
**Negative-Only Constraints**: "don't", "never", "avoid" without stating what TO do
**Aggressive Emphasis**: Excessive CAPS (CRITICAL, IMPORTANT), multiple `!!`
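As a reference, the vague-instruction pattern can be sketched as a line scan (the word list mirrors the one above; the real implementation lives in `lib/enhance/prompt-analyzer.js`):

```javascript
// Flag vague qualifiers that reduce determinism in instructions.
const VAGUE = /\b(usually|sometimes|try to|if possible|might|could)\b/gi;

function findVagueInstructions(text) {
  const findings = [];
  text.split('\n').forEach((line, i) => {
    for (const m of line.matchAll(VAGUE)) {
      findings.push({ line: i + 1, term: m[0], certainty: 'HIGH' });
    }
  });
  return findings;
}
```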

### 2. Structure Issues (HIGH/MEDIUM Certainty)

**Missing XML Structure**: Complex prompts (>800 tokens) without XML tags
**Inconsistent Sections**: Mixed heading styles, skipped levels (H1→H3)
**Critical Info Buried**: Important instructions in the middle 40%, constraints after examples
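A sketch of the missing-XML-structure check. The chars/4 token estimate is a common heuristic and an assumption here, not necessarily the analyzer's exact method:

```javascript
// Rough token estimate (~4 chars per token) plus a paired-XML-tag presence check.
function missingXmlStructure(text) {
  const approxTokens = Math.ceil(text.length / 4);
  const hasXmlTags = /<[a-z_]+>[\s\S]*<\/[a-z_]+>/i.test(text);
  return approxTokens > 800 && !hasXmlTags;
}
```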

### 3. Example Issues (HIGH/MEDIUM Certainty)

**Missing Examples**: Complex tasks without few-shot, format requests without an example
**Suboptimal Count**: Only 1 example (optimal: 2-5), or more than 7 (bloat)
**Missing Contrast**: No good/bad labeling, no edge cases

### 4. Context Issues (MEDIUM Certainty)

**Missing WHY**: Rules without explanation
**Missing Priority**: Multiple constraint sections without conflict resolution

### 5. Output Format Issues (HIGH/MEDIUM Certainty)

**Missing Format**: Substantial prompts without a format specification
**JSON Without Schema**: Requests JSON but provides no example structure

### 6. Anti-Patterns (HIGH/MEDIUM/LOW Certainty)

**Redundant CoT** (HIGH): "Think step by step" with modern models
**Overly Prescriptive** (MEDIUM): 10+ numbered steps, micro-managing reasoning
**Prompt Bloat** (LOW): Over 2500 tokens, redundant instructions
**Vague References** (HIGH): "The above code", "as mentioned"
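The redundant-CoT pattern reduces to a phrase match. A sketch (the phrase variants covered are an assumption; the analyzer may match more):

```javascript
// Flag "think step by step" phrasing, which is redundant with models
// that perform chain-of-thought reasoning internally.
const REDUNDANT_COT = /think (through this )?step[- ]by[- ]step/i;

function hasRedundantCoT(text) {
  return REDUNDANT_COT.test(text);
}
```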

## Auto-Fix Implementations

### 1. Aggressive Emphasis

Replace CRITICAL→critical, !!→!, remove excessive caps.
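A minimal sketch of this fix (the keyword list is an assumption; the actual fix lives in `lib/enhance/prompt-analyzer.js`):

```javascript
// Soften aggressive emphasis: lowercase shouty keywords, collapse repeated "!".
function fixAggressiveEmphasis(text) {
  return text
    .replace(/\b(CRITICAL|IMPORTANT|MUST|NEVER|ALWAYS)\b/g, m => m.toLowerCase())
    .replace(/!{2,}/g, '!');
}
```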

### 2. Negative-Only to Positive

Suggest positive alternatives for "don't" statements.
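One way to sketch the suggestion step is a lookup over known negatives, seeded from the table in the Negative Prompting section above. The mapping here is illustrative; real suggestions would come from the analyzer's findings:

```javascript
// Map common negative-only constraints to positive alternatives.
// Unknown constraints return null (no automatic suggestion).
const POSITIVE_ALTERNATIVES = {
  "don't use markdown": 'Use prose paragraphs',
  "don't be vague": 'Use specific language',
};

function suggestPositive(constraint) {
  return POSITIVE_ALTERNATIVES[constraint.toLowerCase()] ?? null;
}
```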

## Output Format

```markdown
# Prompt Analysis: {prompt-name}

**File**: {path}
**Type**: {system|agent|skill|template}
**Token Count**: ~{tokens}

## Summary
- HIGH: {count} issues
- MEDIUM: {count} issues

## Clarity Issues ({n})
| Issue | Location | Fix | Certainty |

## Structure Issues ({n})
| Issue | Location | Fix | Certainty |

## Example Issues ({n})
| Issue | Location | Fix | Certainty |
```

---

## Pattern Statistics

| Category | Patterns | Auto-Fixable |
|----------|----------|--------------|
| Clarity | 4 | 1 |
| Structure | 4 | 0 |
| Examples | 4 | 0 |
| Context | 2 | 0 |
| Output Format | 3 | 0 |
| Anti-Pattern | 4 | 0 |
| **Total** | **21** | **1** |

<examples>

### Example: Vague Instructions

<bad_example>
```markdown
You should usually follow best practices when possible.
```
**Why it's bad**: Vague qualifiers reduce determinism.
</bad_example>

<good_example>
```markdown
Follow these practices:
1. Validate input before processing
2. Handle null/undefined explicitly
```
**Why it's good**: Specific, actionable instructions.
</good_example>

### Example: Negative-Only Constraints

<bad_example>
```markdown
- Don't use vague language
- Never skip validation
```
**Why it's bad**: Only states what NOT to do.
</bad_example>

<good_example>
```markdown
- Use specific, deterministic language
- Always validate input; return structured errors
```
**Why it's good**: Each constraint includes a positive action.
</good_example>

### Example: Redundant Chain-of-Thought

<bad_example>
```markdown
Think through this step by step:
1. First, analyze the input
2. Then, identify the key elements
```
**Why it's bad**: Modern models do this internally. Wastes tokens.
</bad_example>

<good_example>
```markdown
Analyze the input carefully before responding.
```
**Why it's good**: High-level guidance without micro-managing.
</good_example>

### Example: Missing Output Format

<bad_example>
```markdown
Respond with a JSON object containing the analysis results.
```
**Why it's bad**: No schema or example.
</bad_example>

<good_example>
```markdown
## Output Format
{"status": "success|error", "findings": [{"severity": "HIGH"}]}
```
**Why it's good**: A concrete schema shows the exact structure.
</good_example>

### Example: Critical Info Buried

<bad_example>
```markdown
# Task
[task]

# Background
[500 words...]

# Important Constraints  <- buried at end
```
**Why it's bad**: Lost-in-the-middle effect.
</bad_example>

<good_example>
```markdown
# Task

# Critical Constraints  <- at start
[constraints]

# Background
```
**Why it's good**: Critical info at the start, where attention is highest.
</good_example>
</examples>

---

## Constraints

- Only apply auto-fixes for HIGH certainty issues
- Preserve original structure and formatting
- Validate against the embedded knowledge reference above