cognitive-bias-detection


Cognitive Bias Detection


Core principle: Human (and AI) reasoning is systematically distorted by cognitive biases — predictable errors in judgment that operate below conscious awareness. The most dangerous analyses are the ones that feel most certain. This skill audits the reasoning process itself, not just the conclusions.


The Most Impactful Biases to Check


Evaluation & Decision Biases


Confirmation Bias Seeking, interpreting, and remembering information that confirms existing beliefs. Disconfirming evidence is dismissed or reframed.
  • Signal: "The data confirms what we suspected." / Evidence against the conclusion gets less attention than evidence for it.
  • Fix: Actively seek the strongest case against the conclusion. Assign someone to argue the opposite.
Anchoring Overweighting the first number, estimate, or framing encountered.
  • Signal: Estimates cluster around an initial figure. Comparisons are made relative to a reference point that was never validated.
  • Fix: Generate estimates independently before seeing others. Ask "what would this look like if the anchor didn't exist?"
Availability Heuristic Overweighting recent, memorable, or vivid events when estimating likelihood.
  • Signal: "We just had an incident like this" leads to overestimating its probability. Quiet failures are underweighted.
  • Fix: Use base rates. Ask "how often does this actually happen over a long period?"
Sunk Cost Fallacy Continuing a course of action because of past investment, not future value.
  • Signal: "We've already put 6 months into this." / Reluctance to abandon despite evidence it's not working.
  • Fix: Ask "if we hadn't invested anything yet, would we start this today?"
Planning Fallacy Systematic underestimation of time, cost, and risk — even when we know past projects ran over.
  • Signal: Estimates feel optimistic. No buffer for unknowns. Past projects are treated as exceptions.
  • Fix: Use reference class forecasting: how long did similar projects actually take?
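The reference class forecasting fix above is ultimately a small calculation: replace the inside-view guess with the distribution of actual outcomes from similar past work. A minimal sketch, with hypothetical project durations as placeholder data:

```python
# Reference class forecasting: forecast from how long similar past projects
# actually took, not from the optimistic inside-view estimate.
# (All numbers below are hypothetical illustrations.)

past_durations_weeks = sorted([10, 14, 9, 22, 13, 18, 30, 12])

inside_view_estimate = 8  # the optimistic "this project is different" guess

# Outside-view point estimate: the median of the reference class.
median = past_durations_weeks[len(past_durations_weeks) // 2]

# Rough upper-percentile buffer for unknowns.
p80 = past_durations_weeks[int(0.8 * (len(past_durations_weeks) - 1))]

print(f"inside view: {inside_view_estimate}w, "
      f"outside view: {median}w, with buffer: {p80}w")
```

The gap between the inside-view estimate and the reference-class median is a direct measure of the planning fallacy at work.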

Social & Group Biases


Groupthink Desire for group harmony overrides realistic appraisal. Dissent is suppressed.
  • Signal: Everyone agrees quickly. No one plays devil's advocate. Contrarian views are dismissed socially.
  • Fix: Assign a formal devil's advocate. Ask people to write independent opinions before group discussion.
Authority Bias Overweighting the opinion of someone perceived as an authority, independent of their actual expertise.
  • Signal: "The CTO/senior person thinks X, so it must be right." Analysis stops when authority speaks.
  • Fix: Evaluate the argument on its merits, not its source. Ask "what's the evidence, separate from who said it?"
In-group Bias Favoring people, ideas, and solutions associated with one's own group.
  • Signal: Solutions from the team are evaluated more generously than identical solutions from outside.
  • Fix: Blind evaluation where possible. Ask "would we accept this if a competitor proposed it?"
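The blind-evaluation fix above can be sketched minimally: strip source attribution before review so in-group proposals get no hidden advantage. The proposal data and field names below are hypothetical:

```python
import random

# Blind evaluation sketch: remove the "source" field and shuffle order so
# evaluators see only content. (Proposals and field names are hypothetical.)
proposals = [
    {"source": "our team", "text": "Migrate billing to a service mesh"},
    {"source": "external vendor", "text": "Consolidate billing into one service"},
]

# Keep only an opaque id and the content itself.
blinded = [{"id": i, "text": p["text"]} for i, p in enumerate(proposals)]
random.shuffle(blinded)  # randomize order so position doesn't leak the source

# Evaluators score `blinded`; ids map back to sources only after scoring.
```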

Framing & Perception Biases


Framing Effect The same information leads to different decisions depending on how it's presented (gain vs. loss framing).
  • Signal: "90% success rate" vs. "10% failure rate" trigger different reactions to the same fact.
  • Fix: Reframe every option in multiple ways before deciding. Check if the decision changes.
Survivorship Bias Drawing conclusions from visible successes while ignoring invisible failures.
  • Signal: "Company X did Y and succeeded" — but how many companies did Y and failed?
  • Fix: Actively seek the failure cases. Ask "what don't we see because they didn't survive?"
Dunning-Kruger Effect Low competence in a domain produces overconfidence; high competence produces underconfidence.
  • Signal: Extreme certainty in a novel or complex domain. Or excessive hedging from a genuine expert.
  • Fix: Calibrate confidence against demonstrated track record in this specific domain.
Recency Bias Overweighting recent data and underweighting long-term patterns.
  • Signal: Last quarter's results dominate the analysis. Historical base rates are ignored.
  • Fix: Extend the time window. Look at multi-year trends, not just recent performance.
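The survivorship-bias check above reduces to an arithmetic correction: the rate among visible survivors overstates the true rate once invisible failures are counted. A sketch with hypothetical counts:

```python
# Survivorship bias correction. (All counts are hypothetical illustrations.)
visible_successes = 9     # companies that did Y and survived to be observed
invisible_failures = 291  # companies that did Y and quietly failed

# Looking only at survivors: "everyone who did Y succeeded."
survivor_only_rate = visible_successes / visible_successes

# Counting the full population that attempted Y.
true_rate = visible_successes / (visible_successes + invisible_failures)

print(f"survivor-only view: {survivor_only_rate:.0%}, "
      f"true base rate: {true_rate:.0%}")
```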


Output Format


🔍 Bias Scan Results


For each bias checked:
| Bias | Present? | Signal Observed | Severity |
|------|----------|-----------------|----------|
| Confirmation Bias | Yes / Possible / No | [Evidence] | Low / Med / High |
| Sunk Cost | Yes / Possible / No | [Evidence] | Low / Med / High |
| ... | | | |

⚠️ High-Risk Findings


For each high-severity bias detected:
  • Bias: Name and brief description
  • How it's showing up: Specific evidence in the reasoning or decision
  • What it's distorting: What conclusion or estimate is being skewed, and in which direction?
  • Debiasing move: Concrete action to correct or validate

🧹 Debiased Re-evaluation


After flagging biases, offer a corrected version of the analysis:
  • What changes if we remove the bias?
  • What evidence is actually strong vs. inflated by bias?
  • Does the conclusion still hold?

🎯 Confidence Calibration


  • What is the actual confidence level warranted by the evidence, absent bias?
  • What would need to be true to justify higher confidence?
  • What's the most important thing to validate before committing?
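Confidence calibration can be checked mechanically against a track record: if "90% sure" claims are right only half the time, that confidence is not warranted by the evidence. The prediction log below is a hypothetical illustration:

```python
# Calibration check: compare stated confidence with observed accuracy.
# (The prediction log is hypothetical.)
predictions = [  # (stated confidence, was the call correct?)
    (0.9, True), (0.9, False), (0.9, True), (0.9, False),
    (0.6, True), (0.6, True), (0.6, False),
]

# Look at the high-confidence claims and compute their actual hit rate.
high_conf = [correct for conf, correct in predictions if conf >= 0.8]
observed = sum(high_conf) / len(high_conf)

print(f"stated ~90% confidence, observed hit rate: {observed:.0%}")
```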


Meta-Check: Is Claude Biased Here?


This skill also applies to Claude's own analysis. When generating evaluations, check:
  • Am I confirming what the user wants to hear? (sycophancy / confirmation bias)
  • Am I anchoring to the first framing the user gave me?
  • Am I overweighting the most vivid or recent example?
  • Am I assuming the user's group/team/approach is better without evidence?
If yes to any — flag it and correct.


Thinking Triggers


  • "How would this analysis look if we had concluded the opposite from the start?"
  • "What's the strongest evidence against the current conclusion?"
  • "Are we continuing because it's right, or because we've invested too much to stop?"
  • "Who benefits from this conclusion, and are they also the ones evaluating it?"
  • "If a stranger reviewed this reasoning, what would they say we're missing?"