debug-like-expert

<objective> Deep-analysis debugging mode for complex issues. This skill activates methodical investigation protocols with evidence gathering, hypothesis testing, and rigorous verification when standard troubleshooting has failed.
The skill emphasizes treating code you wrote with MORE skepticism than unfamiliar code, because cognitive biases about "how it should work" can blind you to actual implementation errors. Use the scientific method to systematically identify root causes rather than applying quick fixes. </objective>
<context_scan> Run on every invocation to detect domain-specific debugging expertise:

```bash
# What files are we debugging?
echo "FILE_TYPES:"
find . -maxdepth 2 -type f 2>/dev/null | grep -E '\.(py|js|jsx|ts|tsx|rs|swift|c|cpp|go|java)$' | head -10

# Check for domain indicators
[ -f "package.json" ] && echo "DETECTED: JavaScript/Node project"
[ -f "Cargo.toml" ] && echo "DETECTED: Rust project"
{ [ -f "setup.py" ] || [ -f "pyproject.toml" ]; } && echo "DETECTED: Python project"
{ ls *.xcodeproj >/dev/null 2>&1 || [ -f "Package.swift" ]; } && echo "DETECTED: Swift/macOS project"
[ -f "go.mod" ] && echo "DETECTED: Go project"

# Scan for available domain expertise
echo "EXPERTISE_SKILLS:"
ls ~/.claude/skills/expertise/ 2>/dev/null | head -5
```

**Present findings before starting investigation.**
</context_scan>

<domain_expertise>
**Domain-specific expertise lives in `~/.claude/skills/expertise/`**

Domain skills contain comprehensive knowledge including debugging, testing, performance, and common pitfalls. Before investigation, determine if domain expertise should be loaded.

<scan_domains>
```bash
ls ~/.claude/skills/expertise/ 2>/dev/null
```

This reveals available domain expertise (e.g., macos-apps, iphone-apps, python-games, unity-games).

If no expertise skills are found: proceed without domain expertise (graceful degradation). The skill works fine with the general debugging methodology. </scan_domains>
<inference_rules> If user's description or codebase contains domain keywords, INFER the domain:
| Keywords / Files | Domain Skill |
|---|---|
| "Python", "game", "pygame", `.py` + game loop | expertise/python-games |
| "React", "Next.js", `.jsx`/`.tsx` | expertise/nextjs-ecommerce |
| "Rust", "cargo", `.rs` files | expertise/rust-systems |
| "Swift", "macOS", `.swift` + AppKit/SwiftUI | expertise/macos-apps |
| "iOS", "iPhone", `.swift` + UIKit | expertise/iphone-apps |
| "Unity", `.cs` + Unity imports | expertise/unity-games |
| "SuperCollider", `.sc`, `.scd` | expertise/supercollider |
| "Agent SDK", "claude-agent" | expertise/with-agent-sdk |
If domain inferred, confirm:
Detected: [domain] issue → expertise/[skill-name]
Load this debugging expertise? (Y / see other options / none)
</inference_rules>
<no_inference> If no domain is obvious, present options:
What type of project are you debugging?

Available domain expertise:
1. macos-apps - macOS Swift (SwiftUI, AppKit, debugging, testing)
2. iphone-apps - iOS Swift (UIKit, debugging, performance)
3. python-games - Python games (Pygame, physics, performance)
4. unity-games - Unity (C#, debugging, optimization)
[... any others found in build/]

N. None - proceed with general debugging methodology
C. Create domain expertise for this domain

Select:
</no_inference>
<load_domain> When domain selected, READ all references from that skill:
```bash
cat ~/.claude/skills/expertise/[domain]/references/*.md 2>/dev/null
```
This loads comprehensive domain knowledge BEFORE investigation:
  • Common issues and error patterns
  • Domain-specific debugging tools and techniques
  • Testing and verification approaches
  • Performance profiling and optimization
  • Known pitfalls and anti-patterns
  • Platform-specific considerations
Announce: "Loaded [domain] expertise. Investigating with domain-specific context."
If domain skill not found: Inform user and offer to proceed with general methodology or create the expertise. </load_domain>
<when_to_load> Domain expertise should be loaded BEFORE investigation when domain is known.
Domain expertise is NOT needed for:
  • Pure logic bugs (domain-agnostic)
  • Generic algorithm issues
  • When user explicitly says "skip domain context" </when_to_load> </domain_expertise>
<context> This skill activates when standard troubleshooting has failed. The issue requires methodical investigation, not quick fixes. You are entering the mindset of a senior engineer who debugs with scientific rigor.
Important: If you wrote or modified any of the code being debugged, you have cognitive biases about how it works. Your mental model of "how it should work" may be wrong. Treat code you wrote with MORE skepticism than unfamiliar code - you're blind to your own assumptions. </context>
<core_principle> VERIFY, DON'T ASSUME. Every hypothesis must be tested. Every "fix" must be validated. No solutions without evidence.
ESPECIALLY: Code you designed or implemented is guilty until proven innocent. Your intent doesn't matter - only the code's actual behavior matters. Question your own design decisions as rigorously as you'd question anyone else's. </core_principle>
<quick_start>
<evidence_gathering>
Before proposing any solution:
A. Document Current State
  • What is the EXACT error message or unexpected behavior?
  • What are the EXACT steps to reproduce?
  • What is the ACTUAL output vs EXPECTED output?
  • When did this start working incorrectly (if known)?
B. Map the System
  • Trace the execution path from entry point to failure point
  • Identify all components involved
  • Read relevant source files completely, not just scanning
  • Note dependencies, imports, configurations affecting this area
C. Gather External Knowledge (when needed)
  • Use MCP servers for API documentation, library details, or domain knowledge
  • Use web search for error messages, framework-specific behaviors, or recent changes
  • Check official docs for intended behavior vs what you observe
  • Look for known issues, breaking changes, or version-specific quirks
See references/when-to-research.md for detailed guidance on research strategy.
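Step A above can be sketched as a small capture script. This is a minimal, hypothetical sketch: `REPRO` is a stand-in for your real reproduction command (here a deliberately failing `ls` so the sketch is self-contained), and the log file names are arbitrary.

```shell
# A minimal sketch of step A: freeze the EXACT behavior before theorizing.
# REPRO is a stand-in; substitute your real reproduction command.
REPRO='ls /nonexistent-path-for-demo'

{
  echo "== Reproduction at $(date -u +%FT%TZ) =="
  echo "command: $REPRO"
} > debug-evidence.log

# Keep stdout and stderr separate so the exact error text survives verbatim
$REPRO > stdout.txt 2> stderr.txt
status=$?
echo "exit status: $status" >> debug-evidence.log
echo "-- stderr --" >> debug-evidence.log
cat stderr.txt >> debug-evidence.log

cat debug-evidence.log
```

The point of the separation is that "ACTUAL output" means the verbatim stderr text and exit status, not a paraphrase from memory.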
</evidence_gathering>
<root_cause_analysis>
A. Form Hypotheses
Based on evidence, list possible causes:
  1. [Hypothesis 1] - because [specific evidence]
  2. [Hypothesis 2] - because [specific evidence]
  3. [Hypothesis 3] - because [specific evidence]
B. Test Each Hypothesis
For each hypothesis:
  • What would prove this true?
  • What would prove this false?
  • Design a minimal test
  • Execute and document results
See references/hypothesis-testing.md for scientific method application.
C. Eliminate or Confirm
Don't move forward until you can answer:
  • Which hypothesis is supported by evidence?
  • What evidence contradicts other hypotheses?
  • What additional information is needed?
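The hypothesis loop above can be made mechanical: each hypothesis becomes one command whose exit status decides the verdict. The harness below is a hedged sketch; the three probes are stand-ins (deliberately chosen so their outcomes are deterministic), not real diagnostics for any particular bug.

```shell
# Sketch: one falsifiable experiment per hypothesis. The probe command's
# exit status (0 = prediction holds) decides the verdict.
test_hypothesis() {
  name=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "SUPPORTED: $name"
  else
    echo "REFUTED: $name"
  fi
}

# Stand-in probes; replace with experiments derived from your hypotheses:
test_hypothesis "config file is missing"  test ! -f /definitely-missing.json
test_hypothesis "root filesystem mounted" test -d /
test_hypothesis "stale build artifact"    test -f /definitely-missing.o
```

Writing the probe forces you to state what would prove the hypothesis true or false before you run anything.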
</root_cause_analysis>
<solution_development>
Only after confirming root cause:
A. Design Solution
  • What is the MINIMAL change that addresses the root cause?
  • What are potential side effects?
  • What could this break?
B. Implement with Verification
  • Make the change
  • Add logging/debugging output if needed to verify behavior
  • Document why this change addresses the root cause
C. Test Thoroughly
  • Does the original issue still occur?
  • Do the reproduction steps now work?
  • Run relevant tests if they exist
  • Check for regressions in related functionality
See references/verification-patterns.md for comprehensive verification approaches.
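Step C reduces to a single conjunction, sketched below. Both functions are placeholders: `run_repro` stands for the exact reproduction steps from evidence gathering, and `run_tests` for whatever test command the project actually has.

```shell
# Sketch of step C: a fix is "verified" only if the original repro passes
# AND adjacent tests still pass. Both functions are stand-ins.
run_repro() { true; }   # e.g. the exact reproduction steps from step A
run_tests() { true; }   # e.g. the project's test suite

if run_repro && run_tests; then
  echo "VERIFIED: issue gone and no regressions detected"
else
  echo "NOT VERIFIED: keep investigating"
fi
```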
</solution_development>
</quick_start>
<critical_rules>
  1. NO DRIVE-BY FIXES: If you can't explain WHY a change works, don't make it
  2. VERIFY EVERYTHING: Test your assumptions. Read the actual code. Check the actual behavior
  3. USE ALL TOOLS:
    • MCP servers for external knowledge
    • Web search for error messages, docs, known issues
    • Extended thinking ("think deeply") for complex reasoning
    • File reading for complete context
  4. THINK OUT LOUD: Document your reasoning at each step
  5. ONE VARIABLE: Change one thing at a time, verify, then proceed
  6. COMPLETE READS: Don't skim code. Read entire relevant files
  7. CHASE DEPENDENCIES: If the issue involves libraries, configs, or external systems, investigate those too
  8. QUESTION PREVIOUS WORK: Maybe the earlier "fix" was wrong. Re-examine with fresh eyes
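Rule 5 ("ONE VARIABLE") can be sketched as a loop: apply one candidate change, verify, decide, and only then move on. Both functions here are hypothetical stand-ins for the real edit and the real verification command.

```shell
# Rule 5 as a loop: verify after EACH single change, never after a batch.
apply_change() { echo "applied: $1"; }  # stand-in for making one edit
verify() { true; }                      # stand-in for repro + tests

for change in "fix-null-check" "bump-timeout"; do
  apply_change "$change"
  if verify; then
    echo "kept: $change"
  else
    echo "reverted: $change"
  fi
done
```

Batching changes destroys the evidence trail: if three edits land together and the bug disappears, you no longer know which one mattered.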
</critical_rules>
<success_criteria>
Before starting:
  • Context scan executed to detect domain
  • Domain expertise loaded if available and relevant
During investigation:
  • Do you understand WHY the issue occurred?
  • Have you verified the fix actually works?
  • Have you tested the original reproduction steps?
  • Have you checked for side effects?
  • Can you explain the solution to someone else?
  • Would this fix survive code review?
If you can't answer "yes" to all of these, keep investigating.
CRITICAL: Do NOT mark debugging tasks as complete until this checklist passes.
</success_criteria>
<output_format>
```markdown
## Issue: [Problem Description]

### Evidence
[What you observed - exact errors, behaviors, outputs]

### Investigation
[What you checked, what you found, what you ruled out]

### Root Cause
[The actual underlying problem with evidence]

### Solution
[What you changed and WHY it addresses the root cause]

### Verification
[How you confirmed this works and doesn't break anything else]
```

</output_format>

<advanced_topics>

For deeper topics, see reference files:

**Debugging mindset**: [references/debugging-mindset.md](references/debugging-mindset.md)
- First principles thinking applied to debugging
- Cognitive biases that lead to bad fixes
- The discipline of systematic investigation
- When to stop and restart with fresh assumptions

**Investigation techniques**: [references/investigation-techniques.md](references/investigation-techniques.md)
- Binary search / divide and conquer
- Rubber duck debugging
- Minimal reproduction
- Working backwards from desired state
- Adding observability before changing code

**Hypothesis testing**: [references/hypothesis-testing.md](references/hypothesis-testing.md)
- Forming falsifiable hypotheses
- Designing experiments that prove/disprove
- What makes evidence strong vs weak
- Recovering from wrong hypotheses gracefully

**Verification patterns**: [references/verification-patterns.md](references/verification-patterns.md)
- Definition of "verified" (not just "it ran")
- Testing reproduction steps
- Regression testing adjacent functionality
- When to write tests before fixing

**Research strategy**: [references/when-to-research.md](references/when-to-research.md)
- Signals that you need external knowledge
- What to search for vs what to reason about
- Balancing research time vs experimentation

</advanced_topics>