devils-advocate

Devil's Advocate

Systematically challenge ideas, assumptions, designs, and decisions by playing devil's advocate to identify weaknesses, blind spots, and alternative perspectives before committing to a course of action.

When to use me

Use this skill when:
  • A team seems to be converging on a solution without debate
  • Important decisions are being made based on consensus rather than evidence
  • You need to stress-test ideas before implementation
  • Identifying potential failure modes and risks
  • Preventing groupthink and confirmation bias
  • Evaluating multiple alternatives before choosing
  • Preparing for stakeholder challenges or objections
  • Building resilience against criticism and failure

What I do

1. Argument Deconstruction

  • Identify core claims and assumptions in proposals, designs, or decisions
  • Analyze supporting evidence for strength, relevance, and reliability
  • Map logical connections between premises and conclusions
  • Detect rhetorical fallacies and cognitive biases in reasoning
  • Surface implicit beliefs that aren't explicitly stated
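
The deconstruction steps above can be sketched as a data shape plus one check. This is a minimal illustration only: the field names, `Evidence` type, and `unsupportedClaims` helper are assumptions for this sketch, not the tool's actual schema or API.

```typescript
// Illustrative shape for a deconstructed argument (field names are
// assumptions, not the tool's real output format).
interface Evidence {
  claim: string;                          // which core claim this supports
  description: string;
  strength: "weak" | "moderate" | "strong";
}

interface DeconstructedArgument {
  coreClaims: string[];
  implicitAssumptions: string[];          // beliefs never explicitly stated
  evidence: Evidence[];
  fallacies: string[];                    // e.g. "appeal to popularity"
}

// Flag core claims that have no supporting evidence at all.
function unsupportedClaims(arg: DeconstructedArgument): string[] {
  const supported = new Set(arg.evidence.map((e) => e.claim));
  return arg.coreClaims.filter((claim) => !supported.has(claim));
}
```

With a structure like this, a claim such as "the system will be more resilient" that carries no evidence entry surfaces immediately as a gap to challenge.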

2. Counterargument Generation

  • Generate alternative explanations for the same evidence
  • Propose competing hypotheses that could also be true
  • Identify contradictory data or opposing viewpoints
  • Suggest different interpretations of facts and findings
  • Construct "what if" scenarios where current thinking is wrong

3. Weakness Identification

  • Find logical gaps in arguments and reasoning
  • Identify unexamined risks and potential failure modes
  • Spot overconfidence in predictions or estimates
  • Detect oversimplification of complex problems
  • Recognize missing perspectives or stakeholder views

4. Alternative Perspective Exploration

  • Adopt different stakeholder viewpoints (users, customers, regulators, competitors)
  • Consider opposite positions on controversial issues
  • Explore edge cases and boundary conditions
  • Apply different mental models or frameworks
  • Question fundamental premises rather than just conclusions
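
Stakeholder rotation, the first step above, can be sketched as a tiny prompt generator. The lens list and prompt wording here are illustrative assumptions, not a fixed part of the skill.

```typescript
// A minimal sketch of perspective rotation: one challenge prompt per
// stakeholder lens. Lenses mirror the examples above (users, customers,
// regulators, competitors); the wording is an assumption.
const lenses = ["user", "customer", "regulator", "competitor"] as const;

function challengePrompts(proposal: string): string[] {
  return lenses.map(
    (lens) => `As a ${lens}, what is your strongest objection to: ${proposal}`
  );
}
```

Rotating one proposal through every lens forces at least one objection from each perspective before any of them can be dismissed.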

Devil's Advocate Techniques

For Technical Decisions:

  • Challenge technology choices: "What if this framework becomes unmaintained?"
  • Question architecture decisions: "How does this handle ten times the expected load?"
  • Probe security assumptions: "What attack vectors are we not considering?"
  • Test scalability claims: "What breaks first under stress?"
  • Examine dependency risks: "What happens if this third-party service changes?"

For Product/Feature Decisions:

  • Challenge user assumptions: "What if users don't behave as we expect?"
  • Question market fit: "What evidence contradicts our market assumptions?"
  • Probe value propositions: "Why would customers choose alternatives?"
  • Test business models: "What assumptions make this revenue model fail?"
  • Examine competitive threats: "How could competitors easily undermine this?"

For Process/Operational Decisions:

  • Challenge workflow efficiency: "What makes this process fragile?"
  • Question measurement validity: "Are we measuring the right things?"
  • Probe team dynamics: "What interpersonal issues could derail this?"
  • Test communication plans: "Where could misunderstandings occur?"
  • Examine incentive alignment: "What perverse incentives does this create?"
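
The three technique lists above amount to a question bank keyed by decision type. The sketch below shows that idea with a couple of questions lifted from each list; the structure itself is an illustrative assumption, not the tool's actual configuration format.

```typescript
// Hypothetical question bank keyed by decision type. The questions come
// from the technique lists above; the lookup structure is an assumption.
const questionBank: Record<string, string[]> = {
  technical: [
    "What if this framework becomes unmaintained?",
    "What breaks first under stress?",
  ],
  product: [
    "What if users don't behave as we expect?",
    "Why would customers choose alternatives?",
  ],
  process: [
    "What makes this process fragile?",
    "What perverse incentives does this create?",
  ],
};

// Look up challenge questions for a decision type; unknown types fall
// back to an empty list rather than throwing.
function questionsFor(decisionType: string): string[] {
  return questionBank[decisionType] ?? [];
}
```

Keeping the questions in one keyed structure makes it easy to extend a category without touching the lookup logic.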

Examples

```bash
# Challenge a technical design decision
npm run devils-advocate:challenge -- --decision "use-microservices" --context "ecommerce-platform"

# Test a product assumption
npm run devils-advocate:test -- --assumption "users-want-mobile-app" --data "user-research.json"

# Identify weaknesses in a proposal
npm run devils-advocate:weaknesses -- --proposal "architecture-redesign.pdf" --perspective "security"

# Generate counterarguments for a business case
npm run devils-advocate:counterarguments -- --business-case "expand-to-europe.md" --stakeholder "competitor"

# Comprehensive devil's advocate review
npm run devils-advocate:review -- --document "project-plan.md" --thoroughness high
```

Output format

Devil's Advocate Analysis
──────────────────────────────
Subject: Microservices Architecture Proposal
Date: 2026-02-26
Analysis Duration: 45 minutes

Core Claims Identified:
1. "Microservices will improve development velocity by 40%"
2. "Team autonomy will increase with bounded contexts"
3. "System will be more resilient to failures"
4. "Scaling will be more efficient and cost-effective"
5. "Technology diversity will enable better tool selection"

Argument Analysis:

1. Claim: "Microservices improve development velocity by 40%"
   Supporting Evidence: Case studies from other companies
   
   Devil's Advocate Challenges:
   - Case studies may not apply to our context (different scale, team structure)
   - 40% improvement assumes optimal implementation and mature DevOps
   - Velocity gains may be offset by coordination overhead
   - Microservices introduce new failure modes (network, versioning, data consistency)
   - Learning curve could reduce velocity initially
   
   Alternative Explanation: 
   The perceived velocity improvement may come from better practices 
   (CI/CD, testing) that could be applied to monoliths too.

2. Claim: "Team autonomy increases with bounded contexts"
   Supporting Evidence: Conway's Law, team structure diagrams
   
   Devil's Advocate Challenges:
   - Bounded contexts require clear domain boundaries (do we have them?)
   - Teams need new skills (distributed systems, SRE practices)
   - Increased autonomy may lead to inconsistent standards
   - Cross-team coordination becomes more formal and slower
   - Some domains naturally span multiple services
   
   Alternative Perspective:
   Monolith with modular architecture might provide similar autonomy benefits 
   without operational complexity.

3. Claim: "System more resilient to failures"
   Supporting Evidence: Failure isolation diagrams, redundancy claims
   
   Devil's Advocate Challenges:
   - Distributed systems have MORE failure modes (network partitions, service discovery)
   - Resilience requires sophisticated infrastructure (circuit breakers, retries, fallbacks)
   - Debugging failures across services is harder
   - Data consistency becomes a major challenge
   - Cascading failures possible if dependencies not managed
   
   Contradictory Evidence:
   Many companies report increased operational complexity and failure rates 
   after microservices adoption without proper preparation.

Risk Assessment:
- High Risk: Data consistency across services
- Medium Risk: Operational complexity overwhelming team
- Medium Risk: Cross-team coordination overhead
- Low Risk: Technology choice limitations

Alternative Approaches Worth Considering:
1. Modular monolith with clear internal boundaries
2. Start with coarse-grained services, refine later
3. Hybrid approach: monolith for core domain, services for edge functions
4. Focus on DevOps maturity first, then evaluate service boundaries

Critical Questions Unanswered:
1. What specific metrics will measure "improved velocity"?
2. How will we handle distributed transactions?
3. What is the rollback plan if microservices don't deliver value?
4. How will team structure change to support service ownership?
5. What monitoring and observability investments are needed?

Recommendations:
1. Pilot with one non-critical service first
2. Define clear success metrics before proceeding
3. Invest in foundational capabilities (monitoring, deployment, testing)
4. Consider evolutionary architecture rather than big-bang rewrite
5. Document assumptions and revisit after 3 months

Analysis Value:
- Weaknesses identified: 8 significant, 3 critical
- Alternative perspectives generated: 5 viable alternatives
- Assumptions challenged: 12 core assumptions
- Risk areas highlighted: 3 high-risk areas needing mitigation
- Decision quality improvement: High (prevents potential costly mistake)

Notes

  • Devil's advocate is a role, not a personality – it's temporary and purposeful
  • The goal is better decisions, not winning arguments or blocking progress
  • Balance skepticism with constructive alternatives
  • Document challenges and responses for future reference
  • Use devil's advocate selectively for important decisions, not every discussion
  • The most valuable challenges are those that are hardest to answer
  • Pair devil's advocate with other perspectives for balanced analysis
  • Time-box devil's advocate analysis to prevent analysis paralysis
  • Focus on substance, not style – challenge ideas, not people
  • The best devil's advocate questions are those that make everyone think differently