
Prompt Engineering Patterns


Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.

Core Capabilities


1. Few-Shot Learning


Teach the model by showing examples instead of explaining rules. Include 2-5 input-output pairs that demonstrate the desired behavior. Use when you need consistent formatting, specific reasoning patterns, or handling of edge cases. More examples improve accuracy but consume tokens—balance based on task complexity.
Example:

```markdown
Extract key information from support tickets:

Input: "My login doesn't work and I keep getting error 403"
Output: {"issue": "authentication", "error_code": "403", "priority": "high"}

Input: "Feature request: add dark mode to settings"
Output: {"issue": "feature_request", "error_code": null, "priority": "low"}

Now process: "Can't upload files larger than 10MB, getting timeout"
```
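A prompt like the one above can be assembled programmatically from example pairs. A minimal sketch (the `few_shot_prompt` helper and its field names are illustrative, not part of any library):

```python
import json

# Input-output pairs demonstrating the desired extraction behavior.
examples = [
    ("My login doesn't work and I keep getting error 403",
     {"issue": "authentication", "error_code": "403", "priority": "high"}),
    ("Feature request: add dark mode to settings",
     {"issue": "feature_request", "error_code": None, "priority": "low"}),
]

def few_shot_prompt(task, pairs, query):
    lines = [task, ""]
    for text, label in pairs:
        lines += [f'Input: "{text}"', f"Output: {json.dumps(label)}", ""]
    lines.append(f'Now process: "{query}"')
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Extract key information from support tickets:",
    examples,
    "Can't upload files larger than 10MB, getting timeout",
)
```

Keeping the examples as structured data makes it easy to add or drop pairs when balancing accuracy against token cost.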

2. Chain-of-Thought Prompting


Request step-by-step reasoning before the final answer. Add "Let's think step by step" (zero-shot) or include example reasoning traces (few-shot). Use for complex problems requiring multi-step logic, mathematical reasoning, or when you need to verify the model's thought process. Improves accuracy on analytical tasks by 30-50%.
Example:

```markdown
Analyze this bug report and determine root cause.

Think step by step:
1. What is the expected behavior?
2. What is the actual behavior?
3. What changed recently that could cause this?
4. What components are involved?
5. What is the most likely root cause?

Bug: "Users can't save drafts after the cache update deployed yesterday"
```
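The step-by-step scaffold above is easy to generate for any task. A minimal sketch (the `chain_of_thought` helper is hypothetical):

```python
def chain_of_thought(task, steps):
    # Prefix the task with a numbered reasoning scaffold.
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"{task}\n\nThink step by step:\n{numbered}"

prompt = chain_of_thought(
    "Analyze this bug report and determine root cause.",
    [
        "What is the expected behavior?",
        "What is the actual behavior?",
        "What changed recently that could cause this?",
        "What components are involved?",
        "What is the most likely root cause?",
    ],
)
```

For zero-shot use, the `steps` list can be empty and the bare "Think step by step" instruction kept.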

3. Prompt Optimization


Systematically improve prompts through testing and refinement. Start simple, measure performance (accuracy, consistency, token usage), then iterate. Test on diverse inputs including edge cases. Use A/B testing to compare variations. Critical for production prompts where consistency and cost matter.
Example:

```markdown
Version 1 (Simple): "Summarize this article"
→ Result: Inconsistent length, misses key points

Version 2 (Add constraints): "Summarize in 3 bullet points"
→ Result: Better structure, but still misses nuance

Version 3 (Add reasoning): "Identify the 3 main findings, then summarize each"
→ Result: Consistent, accurate, captures key information
```
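The iterate-and-measure loop can be sketched as a tiny A/B harness. Everything here is illustrative: `call_llm` stands in for whatever client you use, and `fake_llm` is a stub so the example runs offline:

```python
def evaluate(variants, test_cases, call_llm, score):
    # Average a scoring function over a labeled test set, per prompt variant.
    results = {}
    for name, template in variants.items():
        total = sum(score(call_llm(template.format(input=inp)), expected)
                    for inp, expected in test_cases)
        results[name] = total / len(test_cases)
    return results

variants = {
    "v1": "Summarize this article: {input}",
    "v3": "Identify the 3 main findings, then summarize each: {input}",
}

def fake_llm(prompt):
    # Stand-in for a real model call; rewards the more specific prompt.
    return "findings" if "findings" in prompt else "summary"

scores = evaluate(
    variants,
    [("article text", "findings")],
    fake_llm,
    lambda out, expected: float(out == expected),
)
# scores == {"v1": 0.0, "v3": 1.0}
```

In practice the scoring function would check accuracy, consistency, and token usage against real outputs rather than exact string matches.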

4. Template Systems


Build reusable prompt structures with variables, conditional sections, and modular components. Use for multi-turn conversations, role-based interactions, or when the same pattern applies to different inputs. Reduces duplication and ensures consistency across similar tasks.
Example:

```python
# Reusable code review template
template = """Review this {language} code for {focus_area}.
Code: {code_block}
Provide feedback on: {checklist}"""

# Usage
prompt = template.format(
    language="Python",
    focus_area="security vulnerabilities",
    code_block=user_code,
    checklist="1. SQL injection\n2. XSS risks\n3. Authentication",
)
```
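The section also mentions conditional sections; one way to sketch them is to render only the blocks whose content is present (the `render` helper and section names are hypothetical):

```python
def render(sections):
    # Join only the sections whose body is present; None drops the block.
    return "\n\n".join(f"{title}:\n{body}" for title, body in sections if body)

prompt = render([
    ("Task", "Review this Python code for security vulnerabilities."),
    ("Code", "def login(user): ..."),
    ("Style guide", None),  # conditional section, omitted when not supplied
    ("Checklist", "1. SQL injection\n2. XSS risks"),
])
```

This keeps one template serving many tasks while avoiding empty headings in the final prompt.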

5. System Prompt Design


Set global behavior and constraints that persist across the conversation. Define the model's role, expertise level, output format, and safety guidelines. Use system prompts for stable instructions that shouldn't change turn-to-turn, freeing up user message tokens for variable content.
Example:

```markdown
System: You are a senior backend engineer specializing in API design.

Rules:
- Always consider scalability and performance
- Suggest RESTful patterns by default
- Flag security concerns immediately
- Provide code examples in Python
- Use early return pattern

Format responses as:
1. Analysis
2. Recommendation
3. Code example
4. Trade-offs
```

Key Patterns


Progressive Disclosure


Start with simple prompts, add complexity only when needed:
  1. Level 1: Direct instruction
    • "Summarize this article"
  2. Level 2: Add constraints
    • "Summarize this article in 3 bullet points, focusing on key findings"
  3. Level 3: Add reasoning
    • "Read this article, identify the main findings, then summarize in 3 bullet points"
  4. Level 4: Add examples
    • Include 2-3 example summaries with input-output pairs

Instruction Hierarchy


[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]
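The hierarchy above can be enforced by assembling prompts in that fixed order. A minimal sketch (the `build_prompt` helper and its sample values are illustrative):

```python
def build_prompt(system, task, examples, data, fmt):
    # Order matters: system context -> task instruction -> examples
    # -> input data -> output format. Empty parts are skipped.
    return "\n\n".join(part for part in (system, task, examples, data, fmt) if part)

prompt = build_prompt(
    system="You are a support-ticket triage assistant.",
    task="Classify the ticket below.",
    examples='Input: "error 403" -> Output: authentication',
    data='Ticket: "Cannot upload files larger than 10MB"',
    fmt="Respond with a single JSON object.",
)
```

Fixing the assembly order in code keeps every prompt in a project consistent with the hierarchy.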

Error Recovery


Build prompts that gracefully handle failures:
  • Include fallback instructions
  • Request confidence scores
  • Ask for alternative interpretations when uncertain
  • Specify how to indicate missing information
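These bullets can be folded into a reusable suffix appended to any task prompt. A minimal sketch (the `with_recovery` helper and its wording are illustrative):

```python
# Error-recovery scaffold: fallback behavior, a missing-information
# marker, and a requested confidence score.
RECOVERY_SUFFIX = """\
If you are uncertain, list the plausible interpretations before answering.
If required information is missing, respond with MISSING: <what is needed>.
End with a confidence score from 0.0 to 1.0 on its own line."""

def with_recovery(task_prompt):
    return f"{task_prompt}\n\n{RECOVERY_SUFFIX}"

prompt = with_recovery("Extract the invoice total from the text below.")
```

A fixed `MISSING:` marker also gives downstream code something unambiguous to parse when the model cannot answer.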

Best Practices


  1. Be Specific: Vague prompts produce inconsistent results
  2. Show, Don't Tell: Examples are more effective than descriptions
  3. Test Extensively: Evaluate on diverse, representative inputs
  4. Iterate Rapidly: Small changes can have large impacts
  5. Monitor Performance: Track metrics in production
  6. Version Control: Treat prompts as code with proper versioning
  7. Document Intent: Explain why prompts are structured as they are

Common Pitfalls


  • Over-engineering: Starting with complex prompts before trying simple ones
  • Example pollution: Using examples that don't match the target task
  • Context overflow: Exceeding token limits with excessive examples
  • Ambiguous instructions: Leaving room for multiple interpretations
  • Ignoring edge cases: Not testing on unusual or boundary inputs

Integration Patterns


With RAG Systems


```python
# Combine retrieved context with prompt engineering
prompt = f"""Given the following context: {retrieved_context}

{few_shot_examples}

Question: {user_question}

Provide a detailed answer based solely on the context above. If the context
doesn't contain enough information, explicitly state what's missing."""
```

With Validation


```python
# Add self-verification step
prompt = f"""{main_task_prompt}

After generating your response, verify it meets these criteria:
1. Answers the question directly
2. Uses only information from provided context
3. Cites specific sources
4. Acknowledges any uncertainty

If verification fails, revise your response."""
```
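Beyond embedding the checklist in a single prompt, the verification can run as a second model call. A sketch of that two-pass pattern (`call_llm` is a placeholder for your client; `_fake_llm` is a stub so the example runs offline):

```python
CRITERIA = [
    "Answers the question directly",
    "Uses only information from provided context",
    "Cites specific sources",
    "Acknowledges any uncertainty",
]

def generate_with_verification(task_prompt, call_llm):
    # Pass 1: generate a draft. Pass 2: ask the model to check it
    # against explicit criteria, and revise only on failure.
    draft = call_llm(task_prompt)
    checklist = "\n".join(f"{i}. {c}" for i, c in enumerate(CRITERIA, 1))
    verdict = call_llm(
        f"Response:\n{draft}\n\nDoes this meet every criterion?\n{checklist}\n"
        "Answer PASS or FAIL with a reason."
    )
    if verdict.strip().startswith("FAIL"):
        return call_llm(f"Revise the response to satisfy all criteria:\n{draft}")
    return draft

def _fake_llm(prompt):
    # Stand-in model: passes verification on the first draft.
    return "PASS" if "PASS or FAIL" in prompt else "A draft answer."

result = generate_with_verification("Answer the question.", _fake_llm)
```

The second call costs extra tokens, so this pattern is best reserved for outputs where correctness matters more than latency.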

Performance Optimization


Token Efficiency


  • Remove redundant words and phrases
  • Use abbreviations consistently after first definition
  • Consolidate similar instructions
  • Move stable content to system prompts

Latency Reduction


  • Minimize prompt length without sacrificing quality
  • Use streaming for long-form outputs
  • Cache common prompt prefixes
  • Batch similar requests when possible


Agent Prompting Best Practices


Based on Anthropic's official best practices for agent prompting.

Core principles


Context Window


The “context window” is the total amount of text a language model can look back on and reference when generating new text, plus the new text it generates. This is different from the large corpus of data the model was trained on; it is instead the model's “working memory”. A larger context window allows the model to understand and respond to more complex and lengthy prompts, while a smaller one may limit its ability to handle longer prompts or maintain coherence over extended conversations.
  • Progressive token accumulation: As the conversation advances through turns, each user message and assistant response accumulates within the context window. Previous turns are preserved completely.
  • Linear growth pattern: The context usage grows linearly with each turn, with previous turns preserved completely.
  • 200K token capacity: The total available context window (200,000 tokens) represents the maximum capacity for storing conversation history and generating new output from Claude.
  • Input-output flow: Each turn consists of:
    • Input phase: Contains all previous conversation history plus the current user message
    • Output phase: Generates a text response that becomes part of a future input
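The linear growth pattern above can be sketched with illustrative token counts (`simulate` is a hypothetical helper, not an API):

```python
CONTEXT_LIMIT = 200_000  # total context window capacity described above

def simulate(turns):
    """turns = [(user_tokens, assistant_tokens), ...]; returns input size per turn."""
    history = 0
    input_sizes = []
    for user_tokens, assistant_tokens in turns:
        # Input phase: all previous conversation history plus the new user message.
        input_size = history + user_tokens
        assert input_size <= CONTEXT_LIMIT, "context overflow"
        input_sizes.append(input_size)
        # Output phase: the response becomes part of future inputs.
        history = input_size + assistant_tokens  # previous turns preserved completely
    return input_sizes

sizes = simulate([(500, 800), (300, 600), (400, 700)])
# sizes == [500, 1600, 2600]: each turn's input carries the whole history forward.
```

The accumulation is why long-running agents eventually need summarization or truncation strategies.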

Concise is key


The context window is a public good. Your prompts, commands, and skills share the context window with everything else Claude needs to know, including:
  • The system prompt
  • Conversation history
  • Other commands, skills, hooks, metadata
  • Your actual request
Default assumption: Claude is already very smart
Only add context Claude doesn't already have. Challenge each piece of information:
  • "Does Claude really need this explanation?"
  • "Can I assume Claude knows this?"
  • "Does this paragraph justify its token cost?"
**Good example: Concise** (approximately 50 tokens):

````markdown
# Extract PDF text

Use pdfplumber for text extraction:

```python
import pdfplumber

with pdfplumber.open("file.pdf") as pdf:
    text = pdf.pages[0].extract_text()
```
````

**Bad example: Too verbose** (approximately 150 tokens):

````markdown
# Extract PDF text

PDF (Portable Document Format) files are a common file format that contains text, images, and other content. To extract text from a PDF, you'll need to use a library. There are many libraries available for PDF processing, but we recommend pdfplumber because it's easy to use and handles most cases well. First, you'll need to install it using pip. Then you can use the code below...
````

The concise version assumes Claude knows what PDFs are and how libraries work.

Set appropriate degrees of freedom

Match the level of specificity to the task's fragility and variability.

**High freedom** (text-based instructions):

Use when:

- Multiple approaches are valid
- Decisions depend on context
- Heuristics guide the approach

Example:

````markdown
# Code review process

1. Analyze the code structure and organization
2. Check for potential bugs or edge cases
3. Suggest improvements for readability and maintainability
4. Verify adherence to project conventions
````

**Medium freedom** (pseudocode or scripts with parameters):

Use when:

- A preferred pattern exists
- Some variation is acceptable
- Configuration affects behavior

Example:

````markdown
# Generate report

Use this template and customize as needed:

```python
def generate_report(data, format="markdown", include_charts=True):
    # Process data
    # Generate output in specified format
    # Optionally include visualizations
```
````

**Low freedom** (specific scripts, few or no parameters):

Use when:

- Operations are fragile and error-prone
- Consistency is critical
- A specific sequence must be followed

Example:

````markdown
# Database migration

Run exactly this script:

```bash
python scripts/migrate.py --verify --backup
```

Do not modify the command or add additional flags.
````

**Analogy**: Think of Claude as a robot exploring a path:

- **Narrow bridge with cliffs on both sides**: There's only one safe way forward. Provide specific guardrails and exact instructions (low freedom). Example: database migrations that must run in exact sequence.
- **Open field with no hazards**: Many paths lead to success. Give general direction and trust Claude to find the best route (high freedom). Example: code reviews where context determines the best approach.

Persuasion Principles for Agent Communication


Useful for writing prompts of all kinds: commands, hooks, and skills for Claude Code, prompts for sub-agents, or any other LLM interaction.

Overview


LLMs respond to the same persuasion principles as humans. Understanding this psychology helps you design more effective skills - not to manipulate, but to ensure critical practices are followed even under pressure.
Research foundation: Meincke et al. (2025) tested 7 persuasion principles with N=28,000 AI conversations. Persuasion techniques more than doubled compliance rates (33% → 72%, p < .001).

The Seven Principles


1. Authority


What it is: Deference to expertise, credentials, or official sources.
How it works in prompts:
  • Imperative language: "YOU MUST", "Never", "Always"
  • Non-negotiable framing: "No exceptions"
  • Eliminates decision fatigue and rationalization
When to use:
  • Discipline-enforcing skills (TDD, verification requirements)
  • Safety-critical practices
  • Established best practices
Example:

```markdown
✅ Write code before test? Delete it. Start over. No exceptions.
❌ Consider writing tests first when feasible.
```

2. Commitment


What it is: Consistency with prior actions, statements, or public declarations.
How it works in prompts:
  • Require announcements: "Announce skill usage"
  • Force explicit choices: "Choose A, B, or C"
  • Use tracking: TodoWrite for checklists
When to use:
  • Ensuring skills are actually followed
  • Multi-step processes
  • Accountability mechanisms
Example:

```markdown
✅ When you find a skill, you MUST announce: "I'm using [Skill Name]"
❌ Consider letting your partner know which skill you're using.
```

3. Scarcity


What it is: Urgency from time limits or limited availability.
How it works in prompts:
  • Time-bound requirements: "Before proceeding"
  • Sequential dependencies: "Immediately after X"
  • Prevents procrastination
When to use:
  • Immediate verification requirements
  • Time-sensitive workflows
  • Preventing "I'll do it later"
Example:

```markdown
✅ After completing a task, IMMEDIATELY request code review before proceeding.
❌ You can review code when convenient.
```

4. Social Proof


What it is: Conformity to what others do or what's considered normal.
How it works in prompts:
  • Universal patterns: "Every time", "Always"
  • Failure modes: "X without Y = failure"
  • Establishes norms
When to use:
  • Documenting universal practices
  • Warning about common failures
  • Reinforcing standards
Example:

```markdown
✅ Checklists without TodoWrite tracking = steps get skipped. Every time.
❌ Some people find TodoWrite helpful for checklists.
```

5. Unity


What it is: Shared identity, "we-ness", in-group belonging.
How it works in prompts:
  • Collaborative language: "our codebase", "we're colleagues"
  • Shared goals: "we both want quality"
When to use:
  • Collaborative workflows
  • Establishing team culture
  • Non-hierarchical practices
Example:

```markdown
✅ We're colleagues working together. I need your honest technical judgment.
❌ You should probably tell me if I'm wrong.
```

6. Reciprocity


What it is: Obligation to return benefits received.
How it works:
  • Use sparingly - can feel manipulative
  • Rarely needed in prompts
When to avoid:
  • Almost always (other principles more effective)

7. Liking


What it is: Preference for cooperating with those we like.
How it works:
  • DON'T USE for compliance
  • Conflicts with honest feedback culture
  • Creates sycophancy
When to avoid:
  • Always for discipline enforcement

Principle Combinations by Prompt Type


| Prompt Type | Use | Avoid |
| --- | --- | --- |
| Discipline-enforcing | Authority + Commitment + Social Proof | Liking, Reciprocity |
| Guidance/technique | Moderate Authority + Unity | Heavy authority |
| Collaborative | Unity + Commitment | Authority, Liking |
| Reference | Clarity only | All persuasion |

Why This Works: The Psychology


Bright-line rules reduce rationalization:
  • "YOU MUST" removes decision fatigue
  • Absolute language eliminates "is this an exception?" questions
  • Explicit anti-rationalization counters close specific loopholes
Implementation intentions create automatic behavior:
  • Clear triggers + required actions = automatic execution
  • "When X, do Y" more effective than "generally do Y"
  • Reduces cognitive load on compliance
LLMs are parahuman:
  • Trained on human text containing these patterns
  • Authority language precedes compliance in training data
  • Commitment sequences (statement → action) frequently modeled
  • Social proof patterns (everyone does X) establish norms

Ethical Use


Legitimate:
  • Ensuring critical practices are followed
  • Creating effective documentation
  • Preventing predictable failures
Illegitimate:
  • Manipulating for personal gain
  • Creating false urgency
  • Guilt-based compliance
The test: Would this technique serve the user's genuine interests if they fully understood it?

Quick Reference


When designing a prompt, ask:
  1. What type is it? (Discipline vs. guidance vs. reference)
  2. What behavior am I trying to change?
  3. Which principle(s) apply? (Usually authority + commitment for discipline)
  4. Am I combining too many? (Don't use all seven)
  5. Is this ethical? (Serves user's genuine interests?)