meta-prompting

Compare original and translation side by side

🇺🇸

Original

English
🇨🇳

Translation

Chinese

Meta-Prompting

元提示(Meta-Prompting)

Enhanced reasoning via
/commands
or natural language. Commands combine left-to-right:
/verify /adversarial
. Auto-trigger when context warrants — note which pattern applied.
通过
/命令
或自然语言增强推理能力。命令可以从左到右组合使用:
/verify /adversarial
。当上下文需要时会自动触发——请注意应用的是哪种模式。

Patterns

模式说明

/think
|
/show
— Show reasoning step-by-step: decision points, alternatives considered, why each accepted/rejected. With
/think doubt
: after each step, flag what could be wrong and why before proceeding.
/adversarial
|
/argue
— After answering, argue against it. 3 strongest counterarguments ranked by severity. Identify blind spots and unstated assumptions.
/constrain
|
/strict
— Tight constraints: 3 sentences max, cite sources, no hedging. Override inline:
/constrain 5 sentences
.
/json
|
/format
— Respond in valid JSON code block, no surrounding prose unless asked. Default schema:
json
{"analysis": "", "confidence_score": 0-100, "methodology": "", "limitations": []}
Custom keys:
/json {keys: summary, risks, recommendation}
/budget
|
/deep
— Extended thinking space (~500 words) showing dead ends and reasoning pivots, then clearly separated final answer.
/compare
|
/vs
— Compare options as table. Default dimensions: speed, accuracy, cost, complexity, maintenance. Custom:
/compare [dim1, dim2]
.
/confidence
|
/conf
— Rate each claim 0-100. Flag below 70 as SPECULATIVE. Group by tier: HIGH (85+), MEDIUM (70-84), LOW (<70). Include assumptions made and rate each 1-10 on confidence.
/edge
|
/break
— 5+ inputs/scenarios that break the approach. Code: null/empty, concurrency, overflow, encoding, auth bypass. Strategies: market conditions, timing, dependencies. Auto-triggers on: security, validation, parsing contexts.
/verify
|
/check
— Three phases: (1) Answer direct response, (2) Challenge 3 ways it could be wrong, (3) Verify investigate each, update if needed. Mark final as
VERIFIED ANSWER:
or
REVISED ANSWER:
. Auto-triggers on: architecture decisions, critical choices, "Am I right?"
/flip
|
/alt
— Solve without the obvious approach. What's the second-best solution and when would it actually be better? Override:
/flip 3
for top 3 alternatives. Auto-triggers on: architecture decisions where the "easy" answer may break at scale.
/assumptions
|
/presume
— Before answering, list every implicit assumption in the question/task. Then answer with assumptions explicit. The assumption list is often more valuable than the answer. Auto-triggers on: architecture reviews, ambiguous requirements.
/tensions
|
/perspectives
— Answer from two named opposing perspectives (e.g., security engineer vs. shipping PM). Focus output on where they disagree — that's where the real insight lives. Override roles:
/tensions [devops, security]
.
/think
|
/show
— 逐步展示推理过程:包括决策节点、考虑过的替代方案,以及每种方案被采纳或拒绝的原因。搭配
/think doubt
使用时:在每一步之后,先标记可能存在的问题及原因,再继续推进。
/adversarial
|
/argue
— 在给出答案后,反驳该答案。列出3个按严重程度排序的最有力反论点,识别思维盲区和未阐明的假设前提。
/constrain
|
/strict
— 严格限制输出:最多3句话,需引用来源,不得含糊其辞。可通过内联方式覆盖限制:
/constrain 5 sentences
(即最多5句话)。
/json
|
/format
— 以有效的JSON代码块形式响应,除非另有要求,否则不要添加周围的描述性文字。默认JSON schema:
json
{"analysis": "", "confidence_score": 0-100, "methodology": "", "limitations": []}
自定义键:
/json {keys: summary, risks, recommendation}
/budget
|
/deep
— 扩展思考空间(约500词),展示思考过程中的死胡同和推理转向,然后清晰区分出最终答案。
/compare
|
/vs
— 以表格形式对比选项。默认对比维度:速度、准确性、成本、复杂度、维护难度。可自定义维度:
/compare [dim1, dim2]
/confidence
|
/conf
— 对每个主张评分(0-100分)。将低于70分的标记为“推测性(SPECULATIVE)”。按等级分组:高置信度(85分及以上)、中等置信度(70-84分)、低置信度(低于70分)。同时列出所做的假设前提,并对每个假设的置信度评分(1-10分)。
/edge
|
/break
— 列出5种以上会导致当前方案失效的输入/场景。代码相关场景:空值/空输入、并发问题、溢出、编码错误、绕过认证。策略相关场景:市场环境变化、时机问题、依赖项故障。 当涉及安全、验证、解析上下文时会自动触发。
/verify
|
/check
— 分为三个阶段:(1) 回答直接给出响应,(2) 质疑列出3种可能导致答案错误的情况,(3) 验证逐一调查这些情况,必要时更新答案。最终标记为
VERIFIED ANSWER:
(已验证答案)或
REVISED ANSWER:
(修订后答案)。 当涉及架构决策、关键选择、“我的判断正确吗?”这类场景时会自动触发。
/flip
|
/alt
— 不使用常规的明显方案来解决问题。给出次优解决方案,并说明在什么情况下该方案实际上更优。可通过
/flip 3
覆盖默认设置,返回排名前三的替代方案。 当涉及架构决策,且“简单”方案可能在规模化场景下失效时会自动触发。
/assumptions
|
/presume
— 在给出答案之前,列出问题/任务中所有隐含的假设前提。然后在明确这些假设的基础上给出答案。通常假设前提列表比答案本身更有价值。 当涉及架构审查、模糊需求时会自动触发。
/tensions
|
/perspectives
— 从两个指定的对立视角给出答案(例如,安全工程师 vs 交付项目经理)。重点输出双方存在分歧的地方——这才是真正有价值的洞察。可自定义角色:
/tensions [devops, security]

Combos

组合命令

/analyze
=
/think
+
/edge
+
/verify
— Code reviews, architecture, security-sensitive work. Auto-triggers on: code review requests.
/trade
=
/confidence
+
/adversarial
+
/edge
— Trade ideas, position analysis, market thesis. Auto-triggers on: trade/position discussions.
/analyze
=
/think
+
/edge
+
/verify
— 适用于代码审查、架构设计、安全敏感工作。 当收到代码审查请求时会自动触发。
/trade
=
/confidence
+
/adversarial
+
/edge
— 适用于交易思路、头寸分析、市场论点。 当涉及交易/头寸讨论时会自动触发。

Conventions

约定规则

  • Separate combined pattern outputs with
    ---
  • Keep core answer prominent — patterns enhance, not bury the response
  • New patterns can be defined mid-conversation ("Add
    /eli5
    for explain like I'm 5") — applied for the session
  • 组合模式的输出用
    ---
    分隔
  • 核心答案要突出显示——模式是为了增强响应,而非掩盖核心答案
  • 可以在对话过程中定义新的模式(例如“添加
    /eli5
    用于像给5岁孩子解释一样简化内容”)——该模式将在本次会话中生效