
Testing Anti-Patterns Skill


Operator Context


This skill operates as an operator for test quality assessment, configuring Claude's behavior for identifying and fixing common testing mistakes. It provides "negative knowledge" -- patterns to AVOID. It complements test-driven-development by focusing on what goes wrong, not just what to do right.
Core principle: Tests should verify behavior, be reliable, run fast, and fail for the right reasons.

Hardcoded Behaviors (Always Apply)


  • CLAUDE.md Compliance: Read and follow repository CLAUDE.md files
  • Over-Engineering Prevention: Fix the specific anti-pattern; do not rewrite the entire test suite
  • Preserve Test Intent: When fixing anti-patterns, maintain what the test was trying to verify
  • Show Real Examples: Point to actual code when identifying anti-patterns, not abstract descriptions
  • Behavior Over Implementation: Always guide toward testing observable behavior, not internals

Default Behaviors (ON unless disabled)


  • Communication: Report anti-patterns with specific file:line references and concrete fixes
  • Severity Classification: Distinguish critical (flaky, order-dependent) from minor (naming) issues
  • Quick Wins First: Suggest fixes that improve reliability immediately
  • One Pattern at a Time: Address each anti-pattern individually with before/after

Optional Behaviors (OFF unless enabled)


  • Full Suite Audit: Scan entire test suite for anti-patterns (can be slow)
  • Refactoring Mode: Apply fixes automatically rather than just identifying them
  • Metrics Collection: Count anti-patterns by category for reporting

What This Skill CAN Do


  • Identify specific anti-patterns in test code with file:line references
  • Provide concrete before/after examples for fixes
  • Prioritize fixes by impact (flaky > order-dependent > slow > naming)
  • Explain WHY a pattern is problematic
  • Suggest incremental improvements without rewriting suites

What This Skill CANNOT Do


  • Fix fundamental architectural issues (use systematic-refactoring)
  • Write new tests from scratch (use test-driven-development)
  • Profile test performance (use actual profilers)
  • Guarantee test correctness (anti-patterns can exist in "working" tests)
  • Skip identification and jump straight to rewriting


Instructions


Phase 1: SCAN


Goal: Identify anti-patterns present in the target test code.
Step 1: Locate test files
Use Grep/Glob to find test files in the relevant area. If user pointed to specific files, start there. Common patterns:
  • Go: `*_test.go`
  • Python: `test_*.py` or `*_test.py`
  • JavaScript/TypeScript: `*.test.ts`, `*.spec.ts`, `*.test.js`, `*.spec.js`
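As a sketch of this step, the file conventions above can be collected with a single glob pass. The snippet below is illustrative, not part of the skill itself; `find_test_files` and `TEST_GLOBS` are invented names:

```python
from pathlib import Path

# Glob patterns mirroring the conventions listed above.
TEST_GLOBS = [
    "**/*_test.go",                    # Go
    "**/test_*.py", "**/*_test.py",    # Python
    "**/*.test.ts", "**/*.spec.ts",    # TypeScript
    "**/*.test.js", "**/*.spec.js",    # JavaScript
]

def find_test_files(root: str) -> list[Path]:
    """Collect test files under `root`, deduplicated and sorted."""
    base = Path(root)
    found: set[Path] = set()
    for pattern in TEST_GLOBS:
        found.update(base.glob(pattern))
    return sorted(found)
```

In practice the Grep/Glob tools do this directly; the sketch just makes the pattern set concrete.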
Step 2: Read CLAUDE.md
Check for project-specific testing conventions before flagging anti-patterns. Some projects intentionally deviate from general best practices.
Step 3: Classify anti-patterns
For each test file, scan for these 10 categories (detailed examples in `references/anti-pattern-catalog.md`):
| # | Anti-Pattern | Detection Signal |
|---|---|---|
| 1 | Testing implementation details | Asserts on private fields, internal regex, spy on private methods |
| 2 | Over-mocking / brittle selectors | Mock setup > 50% of test code, CSS nth-child selectors |
| 3 | Order-dependent tests | Shared mutable state, class-level variables, numbered test names |
| 4 | Incomplete assertions | `!= nil`, `> 0`, `toBeTruthy()`, no value checks |
| 5 | Over-specification | Exact timestamps, hardcoded IDs, asserting every default field |
| 6 | Ignored failures | `@skip`, `.skip`, `xit`, empty catch blocks, `_ = err` |
| 7 | Poor naming | `testFunc2`, `test_new`, `it('works')`, `it('handles case')` |
| 8 | Missing edge cases | Only happy path, no empty/null/boundary/error tests |
| 9 | Slow test suites | Full DB reset per test, no parallelization, no fixture sharing |
| 10 | Flaky tests | `sleep()`, `time.Sleep()`, `setTimeout()`, unsynchronized goroutines |
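Several of these detection signals are mechanically greppable. The sketch below shows one illustrative approach; the regexes are deliberate simplifications (a real scan would be language-aware), and the signal names are invented for the example:

```python
import re

# Simplified detection regexes for a few of the signals above.
SIGNALS = {
    "flaky-sleep": re.compile(r"\b(time\.)?[sS]leep\(|setTimeout\("),
    "ignored-failure": re.compile(r"@skip\b|\.skip\b|\bxit\(|_ = err"),
    "incomplete-assertion": re.compile(r"toBeTruthy\(\)|!= nil|\bassert \w+ is not None\s*$"),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, signal_name) pairs for suspicious lines."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SIGNALS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

A hit is a candidate for review, not a verdict; Step 2's CLAUDE.md check decides whether a flagged line is actually a problem in this project.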
Step 4: Document findings

```markdown
## Anti-Pattern Report

### [File:Line] - [Anti-Pattern Name]

- Severity: HIGH / MEDIUM / LOW
- Issue: [What is wrong]
- Impact: [Flaky / slow / false-confidence / maintenance burden]
```

**Gate**: At least one anti-pattern identified with file:line reference. Proceed only when gate passes.

Phase 2: PRIORITIZE


Goal: Rank findings by impact to fix the most damaging patterns first.
Priority order:
  1. HIGH - Flaky tests, order-dependent tests, ignored failures (erode trust in suite)
  2. MEDIUM - Over-mocking, incomplete assertions, missing edge cases (false confidence)
  3. LOW - Poor naming, over-specification, slow suites (maintenance burden)
Gate: Findings ranked. User agrees on scope of fixes. Proceed only when gate passes.
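The ranking above amounts to a sort by severity. A minimal sketch, assuming findings are represented as hypothetical (location, anti_pattern, severity) tuples:

```python
# Rank values follow the priority order above: HIGH first, LOW last.
SEVERITY_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def prioritize(findings):
    """Sort findings so the most damaging items are fixed first."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f[2]])
```

The sort is stable, so findings of equal severity keep their scan order.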

Phase 3: FIX


Goal: Apply targeted fixes to identified anti-patterns.
Step 1: For each anti-pattern (highest priority first):
```markdown
ANTI-PATTERN: [Name]
Location: [file:line]
Issue: [What is wrong]
Impact: [Flaky/slow/false-confidence/maintenance burden]

Current:
[problematic code snippet]

Fixed:
[improved code snippet]

Priority: [HIGH/MEDIUM/LOW]
```
Step 2: Apply fix
  • Change only what is needed to fix the anti-pattern
  • Preserve the original test's intent and coverage
  • One anti-pattern fix at a time
  • Consult `references/fix-strategies.md` for language-specific patterns
Step 3: Run tests after each fix
  • Run the specific fixed test first to confirm it passes
  • Run the full file or package to check for interactions
  • If a fix makes a previously-passing test fail, the test was likely depending on buggy behavior -- investigate before proceeding
Gate: Each fix verified individually. Tests pass after each change.

Phase 4: VERIFY


Goal: Confirm all fixes work together and suite is healthier.
Step 1: Run full test suite -- all pass
Step 2: Verify previously-flaky tests are now deterministic (run 3x if applicable)
  • Go: `go test -count=3 -run TestFixed ./...`
  • Python: `pytest --count=3 tests/test_fixed.py` (requires the pytest-repeat plugin)
  • JS: Run the test file 3 times sequentially
Step 3: Confirm no test was accidentally deleted or skipped
  • Compare test count before and after fixes
  • Search for any new `@skip` or `.skip` annotations introduced
Step 4: Summary report

```markdown
## Fix Summary

Anti-patterns fixed: [count]
Files modified: [list]
Tests affected: [count]
Suite status: all passing / [details]
Remaining issues: [any deferred items]
```

**Gate**: Full suite passes. All fixes verified. Summary delivered.

---

Examples


Example 1: Flaky Test Investigation


User says: "This test passes locally but fails randomly in CI"
Actions:
  1. Scan test for timing dependencies -- find `sleep(500)` (SCAN)
  2. Classify as Anti-Pattern 10: Flaky Test, severity HIGH (PRIORITIZE)
  3. Replace `sleep()` with `waitFor()` or inject fake clock (FIX)
  4. Run test 10x to confirm determinism, run full suite (VERIFY)
Result: Flaky test replaced with deterministic wait
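A minimal Python sketch of the step-3 fix: replace a fixed sleep with a bounded poll. `wait_for` and `Job` are illustrative names, not part of any framework:

```python
import time

def wait_for(condition, timeout=2.0, interval=0.01):
    """Poll `condition` until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

class Job:
    """Stand-in for work that finishes asynchronously."""
    def __init__(self):
        self.done = False
    def run(self):
        self.done = True

job = Job()
job.run()
# Before (flaky): time.sleep(0.5); assert job.done
# After: deterministic up to an explicit timeout.
assert wait_for(lambda: job.done)
```

The test now fails only when the condition genuinely never becomes true, instead of whenever CI happens to be slower than the hardcoded sleep.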

Example 2: Over-Mocked Test Suite


User says: "Every small refactor breaks dozens of tests"
Actions:
  1. Scan for mock density -- find tests with 5+ mocks each (SCAN)
  2. Classify as Anti-Pattern 2: Over-mocking, severity MEDIUM (PRIORITIZE)
  3. Replace mocks with real implementations at I/O boundaries (FIX)
  4. Verify suite passes, confirm refactoring no longer breaks tests (VERIFY)
Result: Tests verify behavior instead of mock wiring
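One way the step-3 fix can look in Python: a hand-rolled fake at the storage boundary instead of per-method mocks. `FakeUserRepo` and `UserService` are hypothetical names invented for this illustration:

```python
class FakeUserRepo:
    """In-memory stand-in for a database-backed repository."""
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

class UserService:
    """Real production logic; only its I/O dependency is injected."""
    def __init__(self, repo):
        self.repo = repo
    def rename(self, user_id, new_name):
        if self.repo.get(user_id) is None:
            raise KeyError(user_id)
        self.repo.save(user_id, new_name)

# The test exercises real service logic; only storage is faked.
repo = FakeUserRepo()
repo.save(1, "alice")
UserService(repo).rename(1, "bob")
assert repo.get(1) == "bob"
```

Because the test asserts on observable state rather than which internal methods were called, renaming or restructuring `UserService` internals no longer breaks it.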

Example 3: False Confidence


User says: "Tests all pass but we keep finding bugs in production"
Actions:
  1. Scan for incomplete assertions (`!= nil`, `toBeTruthy`) and missing edge cases (SCAN)
  2. Classify as Anti-Patterns 4+8, severity MEDIUM (PRIORITIZE)
  3. Add specific value assertions, add edge case tests (FIX)
  4. Verify new assertions catch known production bugs (VERIFY)
Result: Tests now catch real issues before deployment
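A small Python illustration of the false confidence being fixed here: a deliberately buggy function that weak assertions wave through. All names are invented for the example:

```python
def apply_discount(price, percent):
    # Deliberately buggy: subtracts the raw percent, not the percentage.
    return price - percent

result = apply_discount(200, 10)

# Incomplete assertions (anti-pattern 4) pass despite the bug:
assert result is not None
assert result > 0

# A specific expected value exposes it: 200 at 10% off should be 180,
# but the buggy function returns 190.
expected = 200 * (1 - 10 / 100)
assert result != expected  # the mismatch the weak checks never caught
```

The fix is to assert `result == expected` (which would fail here and surface the bug) and to add edge cases such as a 0% and 100% discount.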

Example 4: Order-Dependent Suite


User says: "Tests pass in sequence but fail when run in parallel or random order"
Actions:
  1. Scan for shared mutable state, class-level variables, global DB mutations (SCAN)
  2. Classify as Anti-Pattern 3: Order Dependence, severity HIGH (PRIORITIZE)
  3. Give each test its own setup/teardown, remove shared state (FIX)
  4. Run suite with randomized order 3x, confirm all pass (VERIFY)
Result: Tests are self-contained and parallelizable

Example 5: Skipped Test Audit


User says: "We have 40 skipped tests, are any still relevant?"
Actions:
  1. Grep for `@skip`, `.skip`, `xit`, `@pytest.mark.skip` across suite (SCAN)
  2. Classify each: outdated (delete), still relevant (fix), environment-specific (document) (PRIORITIZE)
  3. Delete dead tests, unskip and fix relevant ones, add reason annotations (FIX)
  4. Verify suite passes with formerly-skipped tests re-enabled (VERIFY)
Result: No unexplained skips remain; suite coverage restored


Error Handling


Error: "Cannot Determine if Pattern is Anti-Pattern"


Cause: Context-dependent -- the pattern may be valid in specific situations
Solution:
  1. Check if the test has a comment explaining the unusual approach
  2. Consider the testing layer (unit vs integration vs E2E)
  3. If a mock-heavy test covers a unit with many dependencies, suggest an integration test instead
  4. When in doubt, flag as MEDIUM and explain trade-offs

Error: "Fix Changes Test Behavior"


Cause: The anti-pattern was masking an actual test gap or testing the wrong thing
Solution:
  1. Identify what the test was originally trying to verify
  2. Write the correct assertion for that behavior
  3. If the original behavior was wrong, note it as a separate finding
  4. Do not silently change what a test covers

Error: "Suite Has Hundreds of Anti-Patterns"


Cause: Systemic test quality issues, not individual mistakes
Solution:
  1. Do NOT attempt to fix everything at once
  2. Focus on HIGH severity items only (flaky, order-dependent)
  3. Recommend adopting TDD going forward to prevent new anti-patterns
  4. Suggest an incremental cleanup strategy (fix on touch, not bulk rewrite)


Anti-Patterns (Meta)


Anti-Pattern 1: Rewriting Instead of Fixing


What it looks like: Deleting the entire test and writing a new one from scratch
Why wrong: Loses institutional knowledge of what was being tested; may reduce coverage
Do instead: Preserve intent, fix the specific anti-pattern, keep the test focused

Anti-Pattern 2: Fixing Style Without Fixing Substance


What it looks like: Renaming `test1` to `test_creates_user` but not fixing the incomplete assertion inside
Why wrong: Cosmetic improvement without reliability gain
Do instead: Fix reliability issues first (assertions, flakiness), then naming

Anti-Pattern 3: Adding Tests Without Removing Anti-Patterns


What it looks like: Writing new good tests alongside existing bad ones
Why wrong: Bad tests still produce false confidence and maintenance burden
Do instead: Fix or delete the anti-pattern test, then add proper coverage if needed

Anti-Pattern 4: Bulk Fixing Without Verification


What it looks like: Applying the same fix pattern to 50 tests without running them
Why wrong: Mechanical fixes miss context-specific nuances; may break tests
Do instead: Fix one, verify, fix the next. Batch only after the pattern is proven safe.


References


Quick Reference Table


| Anti-Pattern | Symptom | Fix |
|---|---|---|
| Testing implementation | Test breaks on refactor | Test behavior, not internals |
| Over-mocking | Mock setup > test logic | Integration test or mock only I/O |
| Order dependence | Tests fail in isolation | Each test owns its data |
| Incomplete assertions | `assert result != nil` | Assert specific expected values |
| Over-specification | Asserts on defaults/timestamps | Assert only what matters for this test |
| Ignored failures | `@skip`, empty catch | Delete or fix immediately |
| Poor naming | `testFunc2` | `Test{What}_{When}_{Expected}` |
| Missing edge cases | Only happy path | empty, null, boundary, error, large |
| Slow suite | 30s+ for simple tests | Parallelize, share fixtures, rollback |
| Flaky tests | Random failures | Control time, synchronize, no sleep |

Red Flags During Review


  • `@skip`, `@ignore`, `xit`, `.skip` without expiration date
  • `time.sleep()`, `setTimeout()` in test code
  • Test names with sequential numbers (`test1`, `test2`)
  • Global mutable state accessed by multiple tests
  • Mock setup spanning 20+ lines
  • Empty catch blocks in tests
  • Assertions like `!= nil`, `> 0`, `toBeTruthy()` without value checks

TDD Relationship


Strict TDD prevents most anti-patterns:
  1. RED phase catches incomplete assertions (test must fail first)
  2. GREEN phase minimum prevents over-specification
  3. Watch failure confirms you test behavior, not mocks
  4. Incremental cycles prevent test interdependence
  5. Refactor phase reveals tests coupled to implementation
If you find anti-patterns in a codebase, check if TDD discipline slipped.

Domain-Specific Anti-Rationalization


| Rationalization | Why It's Wrong | Required Action |
|---|---|---|
| "The test passes, so it's fine" | Passing with anti-patterns gives false confidence | Evaluate assertion quality, not just pass/fail |
| "We can fix test quality later" | Anti-patterns compound; flaky tests erode trust daily | Fix HIGH severity items now, defer LOW |
| "Just skip the flaky test for now" | Skipped tests become permanent blind spots | Diagnose root cause, fix or delete |
| "Mocking everything is faster" | Over-mocking tests mock wiring, not behavior | Mock only at architectural boundaries |
| "One big test covers everything" | Monolithic tests are fragile and hard to debug | Split into focused, independent tests |

Reference Files


  • `${CLAUDE_SKILL_DIR}/references/anti-pattern-catalog.md`: Detailed code examples for all 10 anti-patterns (Go, Python, JavaScript)
  • `${CLAUDE_SKILL_DIR}/references/fix-strategies.md`: Language-specific fix patterns and tooling
  • `${CLAUDE_SKILL_DIR}/references/blind-spot-taxonomy.md`: 6-category taxonomy of what high-coverage test suites commonly miss (concurrency, state, boundaries, security, integration, resilience)
  • `${CLAUDE_SKILL_DIR}/references/load-test-scenarios.md`: 6 load test scenario types (smoke, load, stress, spike, soak, breakpoint) with configurations and critical endpoint priorities