prompt-improver

Prompt Improver Skill

Purpose

Transform vague, ambiguous prompts into actionable, well-defined requests through systematic research and targeted clarification. This skill is invoked when the hook has already determined a prompt needs enrichment.
When This Skill is Invoked

Automatic invocation:
  • UserPromptSubmit hook evaluates prompt
  • Hook determines prompt is vague (missing specifics, context, or clear target)
  • Hook invokes this skill to guide research and questioning
Manual invocation:
  • To enrich a vague prompt with research-based questions
  • When building or testing prompt evaluation systems
  • When a prompt lacks sufficient context even with conversation history
Assumptions:
  • Prompt has already been identified as vague
  • Evaluation phase is complete (done by hook)
  • Proceed directly to research and clarification

Core Workflow

This skill follows a 4-phase approach to prompt enrichment:

Phase 1: Research

Create a dynamic research plan using TodoWrite before asking questions.
Research Plan Template:
  1. Check conversation history first - Avoid redundant exploration if context already exists
  2. Review codebase if needed:
    • Task/Explore for architecture and project structure
    • Grep/Glob for specific patterns, related files
    • Check git log for recent changes
    • Search for errors, failing tests, TODO/FIXME comments
  3. Gather additional context as needed:
    • Read local documentation files
    • WebFetch for online documentation
    • WebSearch for best practices, common approaches, current information
  4. Document findings to ground questions in actual project context
Critical Rules:
  • NEVER skip research
  • Check conversation history before exploring codebase
  • Questions must be grounded in actual findings, not assumptions or base knowledge
For detailed research strategies, patterns, and examples, see references/research-strategies.md.
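The research plan above can be sketched as a simple todo structure. Field names here are illustrative only, not the actual TodoWrite schema:

```python
# Sketch of the four-step research plan as todo items.
# The "content"/"status" field names are assumptions for illustration.
research_plan = [
    {"content": "Check conversation history for existing context", "status": "pending"},
    {"content": "Explore codebase: architecture, patterns, git log, TODO/FIXME", "status": "pending"},
    {"content": "Gather extra context: local docs, WebFetch, WebSearch", "status": "pending"},
    {"content": "Document findings to ground questions", "status": "pending"},
]

def next_step(plan):
    """Return the first pending step, or None when research is complete."""
    return next((t for t in plan if t["status"] == "pending"), None)
```

Working through `next_step` until it returns `None` enforces the "never skip research" rule before any questions are asked.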

Phase 2: Generate Targeted Questions

Based on research findings, formulate 1-6 questions that will clarify the ambiguity.
Question Guidelines:
  • Grounded: Every option comes from research (codebase findings, documentation, common patterns)
  • Specific: Avoid vague options like "Other approach"
  • Multiple choice: Provide 2-4 concrete options per question
  • Focused: Each question addresses one decision point
  • Contextual: Include brief explanations of trade-offs
Number of Questions:
  • 1-2 questions: Simple ambiguity (which file? which approach?)
  • 3-4 questions: Moderate complexity (scope + approach + validation)
  • 5-6 questions: Complex scenarios (major feature with multiple decision points)
For question templates, effective patterns, and examples, see references/question-patterns.md.

Phase 3: Get Clarification

Use the AskUserQuestion tool to present your research-grounded questions.
AskUserQuestion Format:
- question: Clear, specific question ending with ?
- header: Short label (max 12 chars) for UI display
- multiSelect: false by default (set to true only when choices aren't mutually exclusive)
- options: Array of 2-4 specific choices from research
  - label: Concise choice text (1-5 words)
  - description: Context about this option (trade-offs, implications)
Important: Always include multiSelect field (true/false). User can always select "Other" for custom input.
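A question following this format might look like the sketch below, shown as a plain Python dict. It mirrors only the fields documented above; the tool's exact schema may differ:

```python
# Illustrative AskUserQuestion payload built from the fields listed above.
question = {
    "question": "Which bug are you referring to?",
    "header": "Bug target",   # short label, max 12 chars for UI display
    "multiSelect": False,     # choices here are mutually exclusive
    "options": [
        {
            "label": "Login auth failure",
            "description": "auth.py:145 swallows errors in a broad try/except",
        },
        {
            "label": "Session timeout",
            "description": "session.py:89 session handling issues",
        },
    ],
}

# Sanity checks matching the guidelines above.
assert question["question"].endswith("?")
assert len(question["header"]) <= 12
assert 2 <= len(question["options"]) <= 4
```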

Phase 4: Execute with Context

Proceed with the original user request using:
  • Original prompt intent
  • Clarification answers from user
  • Research findings and context
  • Conversation history
Execute the request as if it had been clear from the start.

Examples

Example 1: Skill Invocation → Research → Questions → Execution

Hook evaluation: Determined prompt is vague
Original prompt: "fix the bug"
Skill invoked: Yes (prompt lacks target and context)
Research plan:
  1. Check conversation history for recent errors
  2. Explore codebase for failing tests
  3. Grep for TODO/FIXME comments
  4. Check git log for recent problem areas
Research findings:
  • Recent conversation mentions login failures
  • auth.py:145 has try/catch swallowing errors
  • Tests failing in test_auth.py
Questions generated:
  1. Which bug are you referring to?
    • Login authentication failure (auth.py:145)
    • Session timeout issues (session.py:89)
    • Other
User answer: Login authentication failure
Execution: Fix the error handling in auth.py:145 that's causing login failures
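The kind of fix implied in this example can be sketched in Python. The file and line number come from the example above; the code itself is hypothetical:

```python
import logging

logger = logging.getLogger("auth")

# Before (hypothetical auth.py:145): a broad try/except swallows the real
# failure, so logins fail silently with no diagnostic trail.
def authenticate_before(check_credentials, username, password):
    try:
        return check_credentials(username, password)
    except Exception:
        return False  # error swallowed; callers only see a failed login

# After: log expected failures, and re-raise unexpected errors so they surface.
def authenticate_after(check_credentials, username, password):
    try:
        return check_credentials(username, password)
    except ValueError as exc:  # expected: bad credentials
        logger.warning("login rejected: %s", exc)
        return False
    except Exception:
        logger.exception("unexpected error during login")
        raise
```

The point is the pattern, not the specifics: narrow the exception handling so genuine bugs propagate instead of being masked as ordinary login failures.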

Example 2: Clear Prompt (Skill Not Invoked)

Original prompt: "Refactor the getUserById function in src/api/users.ts to use async/await instead of promises"
Hook evaluation: Passes all checks
  • Specific target: getUserById in src/api/users.ts
  • Clear action: refactor to async/await
  • Success criteria: use async/await instead of promises
Skill invoked: No (prompt is clear; execution proceeds immediately)
For comprehensive examples showing various prompt types and transformations, see references/examples.md.

Key Principles

  1. Assume Vagueness: Skill is only invoked for vague prompts (evaluation done by hook)
  2. Research First: Always gather context before formulating questions
  3. Ground Questions: Use research findings, not assumptions or base knowledge
  4. Be Specific: Provide concrete options from actual codebase/context
  5. Stay Focused: Ask 1-6 questions, each addressing one decision point
  6. Systematic Approach: Follow 4-phase workflow (Research → Questions → Clarify → Execute)

Progressive Disclosure

This SKILL.md contains the core workflow and essentials. For deeper guidance:
  • Research strategies: references/research-strategies.md
  • Question patterns: references/question-patterns.md
  • Comprehensive examples: references/examples.md
Load these references only when detailed guidance is needed on specific aspects of prompt improvement.