project-test-loop

Run automated TDD cycle: test → fix → refactor.
Note: Configure project-specific test/build commands in CLAUDE.md or .claude/rules/ for automatic detection.
Steps:
  1. Detect test command (if not already configured):
    • Check package.json for scripts.test (Node.js)
    • Check for pytest or python -m unittest (Python)
    • Check for cargo test (Rust)
    • Check for go test ./... (Go)
    • Check Makefile for a test target
    • If not found, ask the user how to run tests
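The detection order above can be sketched as a small shell function. The file names checked are the conventional ones, and the pytest heuristic is deliberately rough; real projects may need adjustment:

```bash
#!/bin/sh
# Sketch of step 1's detection order. Prints an empty string when nothing
# matches, which means "ask the user how to run tests".
detect_test_command() {
  if [ -f package.json ] && grep -q '"test"' package.json; then
    echo "npm test"
  elif [ -f pytest.ini ] || [ -f setup.py ] || [ -d tests ]; then
    echo "pytest"   # rough heuristic for a Python project
  elif [ -f Cargo.toml ]; then
    echo "cargo test"
  elif [ -f go.mod ]; then
    echo "go test ./..."
  elif [ -f Makefile ] && grep -q '^test:' Makefile; then
    echo "make test"
  else
    echo ""   # not found: fall back to asking the user
  fi
}
```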
  2. Run test suite:
    ```bash
    # Run detected test command
    [test_command]
    ```
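Since step 3 needs the failure text to parse, a sketch of this step would capture both the exit status and the combined output (`run_tests` is an illustrative helper, not part of the command):

```bash
#!/bin/sh
# Run the given test command, keeping stdout+stderr for step 3's analysis.
run_tests() {
  output=$("$@" 2>&1)   # combined output, parsed later for failing tests
  if [ $? -eq 0 ]; then
    echo "passed"
  else
    echo "failed"
  fi
}
```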
  3. Analyze results:
    If tests FAIL:
    • Parse failure output
    • Identify failing tests:
      • Which tests failed?
      • What assertions failed?
      • What was expected vs actual?
    • Identify root cause:
      • Bug in implementation?
      • Missing implementation?
      • Incorrect test?
    • Make minimal fix:
      • Fix only what's needed to pass the failing test
      • Don't add extra functionality
      • Don't fix tests (fix code instead)
    • Re-run tests to confirm fix
    • Loop back to step 2
    If tests PASS:
    • Check for refactoring opportunities:
      • Code duplication?
      • Unclear naming?
      • Long functions?
      • Complex logic that can be simplified?
      • Magic numbers/strings to extract?
    • If refactoring identified:
      • Refactor while keeping tests green
      • Re-run tests after each refactoring
      • Ensure tests still pass
    • If no refactoring needed:
      • Report success
      • Stop loop
  4. Repeat until:
    • All tests pass AND
    • No obvious refactoring opportunities
    OR stop if:
    • User intervention needed
    • Blocked by external dependency
    • Unclear how to fix failure
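The steps above can be sketched as a single control loop. The fix/refactor work done between runs is represented only by a comment, and the cycle cap is an assumed safety limit rather than one of the documented stop conditions:

```bash
#!/bin/sh
# Control-flow sketch of steps 2-4: run tests, fix on failure, stop on
# success or after an assumed safety cap of cycles.
run_loop() {
  test_cmd="$1"
  max_cycles="${2:-10}"   # assumed cap, not a documented condition
  cycle=0
  while [ "$cycle" -lt "$max_cycles" ]; do
    cycle=$((cycle + 1))
    if $test_cmd; then
      # step 3, passing branch: refactor while green, then report
      echo "PASS after $cycle cycle(s)"
      return 0
    fi
    # step 3, failing branch: parse the output and make a minimal fix
    # here (placeholder), then loop back to step 2
  done
  echo "gave up after $max_cycles cycles"   # needs user intervention
  return 1
}
```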
  5. Report results:
    🧪 Test Loop Results:
    
    Cycles: [N] iterations
    
    Fixes Applied:
    - [Fix 1]: [Brief description]
    - [Fix 2]: [Brief description]
    
    Refactorings Performed:
    - [Refactor 1]: [Brief description]
    - [Refactor 2]: [Brief description]
    
    Current Status:
    ✅ All tests pass
    ✅ Code refactored
    📝 Ready for commit
    
    OR
    
    ⚠️ Blocked: [Reason]
    📝 Next steps: [Recommendation]
TDD Cycle Details:

RED Phase (If starting new feature)

  1. Write failing test describing desired behavior
  2. Run tests → Should FAIL (expected)
  3. This command picks up from here

GREEN Phase (This command handles)

  1. Run tests
  2. If fail → Make minimal fix
  3. Re-run tests → Should PASS
  4. Loop until all pass

REFACTOR Phase (This command handles)

  1. Tests pass
  2. Look for improvements
  3. Refactor
  4. Re-run tests → Should STILL PASS
  5. Loop until no improvements
Common Failure Patterns:
Pattern: Missing Implementation
  • Symptom: undefined is not a function, NameError, etc.
  • Fix: Implement the missing function/class/method
  • Minimal: just the signature, returning a dummy value
Pattern: Wrong Return Value
  • Symptom: Expected X but got Y
  • Fix: Update implementation to return correct value
  • Minimal: Don't add extra logic, just fix the return
Pattern: Missing Edge Case
  • Symptom: Test fails for specific input
  • Fix: Handle the edge case
  • Minimal: Add condition for this case only
Pattern: Integration Issue
  • Symptom: Test fails when components interact
  • Fix: Fix the integration point
  • Minimal: Fix just the integration, not entire components
Refactoring Opportunities:
Look for:
  • Duplicated code → Extract to function
  • Magic numbers → Extract to constants
  • Long functions → Break into smaller functions
  • Complex conditionals → Extract to well-named functions
  • Unclear names → Rename to be descriptive
  • Comments explaining code → Refactor code to be self-explanatory
Don't:
  • Change behavior
  • Add new functionality
  • Skip test runs
  • Make tests pass by changing tests
Auto-Stop Conditions:
Stop and report if:
  • All tests pass + no refactoring needed (SUCCESS)
  • Same test fails 3 times in a row (STUCK)
  • Error in test command itself (TEST SETUP ISSUE)
  • External dependency unavailable (BLOCKED)
  • Unclear how to fix (NEEDS USER INPUT)
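The STUCK condition ("same test fails 3 times in a row") can be sketched by fingerprinting each run's failing tests and counting consecutive identical fingerprints. The `FAIL ` line prefix is an assumption about the test runner's output format:

```bash
#!/bin/sh
# Fingerprint the failing tests from one run (read on stdin) and count
# consecutive identical fingerprints; three in a row means STUCK.
same_failure_count=0
last_sig=""
verdict=""

check_stuck() {
  # 'FAIL' prefix is an assumed output format; adjust per test runner
  sig=$(grep '^FAIL' | sort | cksum)
  if [ "$sig" = "$last_sig" ]; then
    same_failure_count=$((same_failure_count + 1))
  else
    same_failure_count=1
  fi
  last_sig="$sig"
  if [ "$same_failure_count" -ge 3 ]; then
    verdict="STUCK"
  else
    verdict="CONTINUE"
  fi
}
```

In a real loop, each test run's output would be fed to `check_stuck` before attempting the next fix.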
Integration with Blueprint Development:
This command applies project-specific skills:
  • Testing strategies: Knows how to structure tests
  • Implementation guides: Knows how to implement fixes
  • Quality standards: Knows what to refactor
  • Architecture patterns: Knows where code should go