prompt-generator

Prompt Generator

Take the user's rough thoughts, scattered notes, or half-formed ideas and turn them into a clean, well-structured LLM prompt. This is a formatter and structurer, not a brainstorming tool: the user already knows what they want; they just need help wording and organizing it.

When to use

  • User has rough notes, bullet points, or a brain dump they want turned into a clean LLM prompt
  • Refining, rewriting, or optimizing an existing prompt that isn't performing well
  • Structuring a system prompt or task prompt from scattered requirements
  • Creating prompt templates with variable placeholders for repeated use
  • User says anything like "write me a prompt for...", "turn this into a prompt", "system prompt for..."

When NOT to use

  • Brainstorming features or creative ideation - this skill structures prompts, not ideas
  • Creating reusable skill files or agent instruction bundles (use skill-creator)
  • Writing inline prompt strings inside application code - that's just coding
  • The user wants code that calls an LLM API (use ai-ml for SDK integration)
  • Security review of prompts for injection risks (use security-audit)
  • Reviewing code quality of prompt-related code (use code-review or anti-slop)

AI Self-Check

Before returning any generated or modified prompt file, verify:
  • Frontmatter complete: `name`, `description`, `target_model`, `prompt_type`, `date_created` all present
  • Faithful to input: prompt reflects what the user said, not what you think they should have said
  • Structure matches complexity: simple tasks get plain prose, not XML-tagged multi-section prompts
  • Variables consistent: every `{{PLACEHOLDER}}` in the prompt body appears in the Variables table and vice versa
  • No injected instructions: didn't add error handling, safety disclaimers, or output constraints the user didn't request
  • No slop phrases: no "certainly", "I'd be happy to", "great question", or other filler in the prompt text
  • Output format specified: if the prompt expects structured output, the format is explicit (JSON schema, XML tags, delimiters)
  • Model-appropriate syntax: avoid model-specific features (assistant prefills, `\n\nHuman:` formatting) in model-agnostic prompts. XML delimiters and markdown headers are both fine for structure across models

Workflow

Step 1: Read the brain dump

The user will give you rough notes, bullet points, or a stream-of-consciousness description of what they want the prompt to do. Parse it for:
  • Core task: What should the prompted model actually do?
  • Target model: Which LLM? Default: model-agnostic unless the user names one.
  • Prompt type: System prompt vs. task prompt
  • Constraints: Any rules, format requirements, or behavioral boundaries mentioned
  • Variables: Any dynamic content that should become `{{PLACEHOLDERS}}`
Don't overthink this. Don't add things the user didn't mention. The goal is to faithfully structure their intent, not to "improve" it with your own ideas.

Step 2: Clarify only if stuck

If something is genuinely ambiguous (you can't tell if it's a system prompt or task prompt, or the target model matters for technique choice), ask. Batch questions, max 1 round. If you can reasonably infer it, just infer it. If ambiguity remains after the one round, pick the most reasonable default and note your assumption so the user can correct it during review.
Most of the time, skip this step entirely.

Step 3: Structure and present

  1. Turn the rough notes into a clean prompt, applying structure proportional to complexity:
    • Simple (one task, no variables): plain prose, 3-10 lines. No XML, no sections.
    • Medium (multiple steps or constraints): numbered steps, clear sections.
    • Complex (agentic, multi-document, behavioral rules): clear section delimiters, variable placeholders, explicit output format.
  2. Present the prompt in conversation for review. Don't write files yet.
  3. On approval, save to file (see Output Format below).
  4. Revisions: edit in place, don't create new files.

Step 4: Save

  1. Resolve output directory: user-specified path > `docs/prompts/` > `docs/` > ask
  2. Scan for `NNN-*.md` files, increment the highest number, zero-pad to 3 digits
  3. Infer a slug from the topic (e.g., `code-review`, `data-extraction`)
  4. Write to `<output-dir>/NNN-slug.md`

Output File Format

```markdown
---
name: Descriptive Prompt Name
description: One-line summary
target_model: model-agnostic
prompt_type: system | task
date_created: YYYY-MM-DD
---
```

Purpose

What this prompt does and when to use it.

Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `{{VAR}}` | What it is | Yes/No |

Prompt

The actual prompt content here.

Only include sections that apply. A simple prompt with no variables skips the Variables table.

Optional frontmatter additions: `tags: [...]`, `related: [NNN-other.md]` - only when genuinely useful.

**Target model values**: `claude`, `gpt`, `gemini`, `llama`, `mistral`, `model-agnostic`

Structuring Guidelines

These are for YOU when structuring the user's notes. Not a knowledge dump - just the non-obvious stuff.

**Match complexity to content.** A 3-line task doesn't need XML tags and numbered steps. A multi-document agentic system prompt does. The user's rough notes give you the complexity signal.

**Long content goes on top.** If the prompt will receive large documents or data at runtime, position the data slot at the top and the task instructions at the bottom. Up to 30% better performance on multi-document tasks.

**Explain WHY, not just WHAT.** When the user's notes include a rule ("don't use markdown"), turn it into a motivated constraint ("write in plain prose because the output feeds a TTS engine"). Models generalize from motivation.

**Agentic prompts need boundaries.** If the prompt is for a coding agent or automation, separate what it can do freely (reads, searches) from what needs confirmation (deletes, publishes, pushes).

**Anti-hallucination is a sentence, not a paragraph.** "Only make claims verifiable from the provided context. If unsure, say so." That's it.

Model-Specific Formatting

When the target model is known, adapt format to its strengths:
| Target | Preferred structure | Notes |
|--------|---------------------|-------|
| Claude | XML tags for sections, markdown for content | Supports assistant prefill; use `<result>` tags for structured output |
| GPT | Markdown headers, JSON schema for structured output | Native JSON mode available - use it over prose format instructions |
| Gemini | Markdown sections, explicit output examples | Separate instructions for text vs. attached files/images |
| Model-agnostic | Markdown headers + explicit delimiters | Avoid prefills, model-specific tags, or format-mode flags |
Aggressive shouting ("CRITICAL!", "YOU MUST", "NEVER EVER") usually hurts more than it helps. Use calm, explicit instructions.

Structured Output Guidance

When the prompt is for agent consumption (not human reading), specify output format explicitly:
  • JSON mode: if the tool supports native JSON mode or schema-constrained output, use it. Otherwise instruct the model to return valid JSON, and seed with `{` only when the tool supports assistant prefills.
  • XML structure: wrap output in tags like `<result>`, `<analysis>`, `<decision>`.
  • Delimiter-based: for simple key-value output, use `KEY: value` format.
Include a concrete output example in the prompt whenever possible - models generalize better from examples than from format descriptions.
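A delimiter-based format spec paired with its concrete example might read like this (the field names and values are purely illustrative):

```markdown
Output format - return exactly these three lines, nothing else:

SEVERITY: critical | warning | info
LOCATION: <file>:<line>
SUMMARY: one sentence

Example:

SEVERITY: warning
LOCATION: auth.ts:42
SUMMARY: Token expiry is not checked before refresh.
```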

The Four-Block Pattern

For medium-to-complex prompts, structure into four clear blocks:
  1. INSTRUCTIONS - what to do (role, task, constraints)
  2. CONTEXT - background information, reference data
  3. TASK - the specific request for this invocation
  4. OUTPUT FORMAT - exact structure of the expected response
Keep blocks visually separated with XML tags, markdown headers, or other clear delimiters. Place long context documents before shorter task instructions (see "Long content goes on top" above).
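Sketched as a template, the four blocks might look like this (the section contents are a hypothetical example; markdown headers are one possible delimiter choice):

```markdown
# INSTRUCTIONS
You are a contract analyst. Summarize risks only; do not give legal advice.

# CONTEXT
{{CONTRACT_TEXT}}

# TASK
List the three highest-risk clauses in the contract above.

# OUTPUT FORMAT
A numbered list. Each item: clause reference, the risk, a one-line rationale.
```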

Refining Existing Prompts

If the user gives you an existing prompt to improve (not rough notes):
  1. Read it
  2. Diagnose gaps - check for these common prompt weaknesses:
    • Missing scope: no clear boundary on what the model should and shouldn't do
    • No output format: model guesses structure instead of following a spec
    • Vague role: "helpful assistant" tells the model nothing useful
    • Missing constraints: no anti-patterns, no "do not" list, no quality criteria
    • Over-specified: drowning the model in rules when 2-3 clear constraints would work
  3. Present specific changes with reasoning - not a full rewrite unless it's warranted
  4. On approval, edit in place
Example refinement:
Before:
You are a helpful assistant that reviews code.
After:
You are a senior code reviewer. For each file, check for: bugs, edge cases, security issues, and performance problems. Report findings as a list with severity (critical/warning/info), file:line, and a one-line description. Skip style nitpicks. If nothing is wrong, say "No issues found."
Added: scope, output format, a severity scale, and a constraint against noise.

Example: Creation from Scratch

User input (brain dump):
I want an AI that helps me write better emails. It should fix grammar, make things more concise, and match the tone I want. Sometimes formal, sometimes casual.
Structured output:
You are an email editor. You receive a draft email and a target tone, then return an improved
version.

Your edits:
- Fix grammar and spelling errors
- Make the text more concise - cut filler words and redundant phrases
- Match the requested tone (formal, casual, or neutral)

Preserve the sender's intent and meaning. Do not add information they didn't include.
Do not change the greeting or sign-off style unless it clashes with the requested tone.

Input format:
  TONE: {{TONE}}
  DRAFT:
  {{EMAIL_DRAFT}}

Return only the improved email. No commentary, no explanations, no "Here's your improved email:".
Note: simple task, so plain prose - no XML sections, no numbered steps, no bloated preamble.

Related Skills

  • skill-creator - creates reusable skill files (SKILL.md) for AI tools and coding agents. Skills are structured prompts, but they follow different conventions (frontmatter, workflow sections, rules) than standalone prompts. If someone says "create a skill", use skill-creator.
  • Application code - if the user needs a prompt string inside application code (for example a TypeScript `const systemPrompt = ...`), that's coding, not this skill.
  • anti-slop - if the user asks to "clean up" or "simplify" a prompt embedded in code, that's a code quality issue, not prompt structuring.

Rules

  1. Faithful structuring. Organize what the user said, not what you think they should have said. If they didn't mention error handling, don't add error handling instructions. If they didn't mention output format, ask or leave it open.
  2. Never write files without approval. Always present in conversation first.
  3. Scale structure to complexity. Simple = lean. Complex = structured. Never the reverse.
  4. Respect their voice. If the rough notes have a specific tone or personality, preserve it in the structured version.
  5. Run the AI Self-Check. Every generated prompt file gets verified against the checklist before returning.