Test-driven development workflow with philosophy guide - plan → write tests → implement → validate
```
npx skill4agent add parcadei/continuous-claude-v3 tdd
```

NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST

Good: the test exercises real retry behavior and pins down the expected attempt count.

```typescript
test('retries failed operations 3 times', async () => {
  let attempts = 0;
  const operation = () => {
    attempts++;
    if (attempts < 3) throw new Error('fail');
    return 'success';
  };
  const result = await retryOperation(operation);
  expect(result).toBe('success');
  expect(attempts).toBe(3);
});
```

Bad: this version tests the mock rather than real behavior. Note it also expects three calls even though the mock is configured to resolve on the second attempt.

```typescript
test('retry works', async () => {
  const mock = jest.fn()
    .mockRejectedValueOnce(new Error())
    .mockResolvedValueOnce('success');
  await retryOperation(mock);
  expect(mock).toHaveBeenCalledTimes(3);
});
```

Run the tests and watch them fail before writing any implementation:

```
npm test path/to/test.test.ts
# or
pytest path/to/test_file.py
```

Minimal implementation - just enough to make the tests pass:

```typescript
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  for (let i = 0; i < 3; i++) {
    try {
      return await fn();
    } catch (e) {
      if (i === 2) throw e;
    }
  }
  throw new Error('unreachable');
}
```

Over-engineered - options nobody asked for:

```typescript
async function retryOperation<T>(
  fn: () => Promise<T>,
  options?: {
    maxRetries?: number;
    backoff?: 'linear' | 'exponential';
    onRetry?: (attempt: number) => void;
  }
): Promise<T> {
  // YAGNI - over-engineered
}
```

Run the suite again after implementing:

```
npm test path/to/test.test.ts
```

| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |

| Problem | Solution |
|---|---|
| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. |
| Must mock everything | Code too coupled. Use dependency injection. |
| Test setup huge | Extract helpers. Still complex? Simplify design. |

```
┌────────────┐    ┌──────────┐    ┌──────────┐    ┌───────────┐
│ plan-agent │───▶│ arbiter  │───▶│ kraken   │───▶│ arbiter   │
└────────────┘    └──────────┘    └──────────┘    └───────────┘
   Design           Write           Implement       Verify
   approach         failing         minimal         all tests
                    tests           code            pass
```

| # | Agent | Role | Output |
|---|---|---|---|
| 1 | plan-agent | Design test cases and implementation approach | Test plan |
| 2 | arbiter | Write failing tests (RED phase) | Test files |
| 3 | kraken | Implement minimal code to pass (GREEN phase) | Implementation |
| 4 | arbiter | Run all tests, verify nothing broken | Test report |
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST

Phase 1 - plan-agent designs the test cases:

```python
Task(
    subagent_type="plan-agent",
    prompt="""
    Design TDD approach for: [FEATURE_NAME]
    Define:
    1. What behaviors need to be tested
    2. Edge cases to cover
    3. Expected test structure
    DO NOT write any implementation code.
    Output: Test plan document
    """
)
```

Phase 2 - arbiter writes failing tests (RED):

```python
Task(
    subagent_type="arbiter",
    prompt="""
    Write failing tests for: [FEATURE_NAME]
    Test plan: [from phase 1]
    Requirements:
    - Write tests FIRST
    - Run tests to confirm they FAIL
    - Tests must fail because the feature is missing (not because of syntax errors)
    - Create clear test names describing expected behavior
    DO NOT write any implementation code.
    """
)
```

Phase 3 - kraken implements minimal code (GREEN):

```python
Task(
    subagent_type="kraken",
    prompt="""
    Implement MINIMAL code to pass tests: [FEATURE_NAME]
    Tests location: [test file path]
    Requirements:
    - Write ONLY enough code to make tests pass
    - No additional features beyond what tests require
    - No "improvements" or "enhancements"
    - Run tests after each change
    Follow Red-Green-Refactor strictly.
    """
)
```

Phase 4 - arbiter validates:

```python
Task(
    subagent_type="arbiter",
    prompt="""
    Validate TDD implementation: [FEATURE_NAME]
    - Run full test suite
    - Verify all new tests pass
    - Verify no existing tests broke
    - Check test coverage if available
    """
)
```

Example session:

User: /tdd Add email validation to the signup form
```
Claude: Starting /tdd workflow for email validation...

Phase 1: Planning test cases...
[Spawns plan-agent]
Test plan:
- Valid email formats
- Invalid email formats
- Empty email rejection
- Edge cases (unicode, long emails)

Phase 2: Writing failing tests (RED)...
[Spawns arbiter]
✅ 8 tests written, all failing as expected

Phase 3: Implementing minimal code (GREEN)...
[Spawns kraken]
✅ All 8 tests now passing

Phase 4: Validating...
[Spawns arbiter]
✅ 247 tests passing (8 new), 0 failing
```
TDD workflow complete!

Optional refactor phase - kraken cleans up while tests stay green:

```python
Task(
    subagent_type="kraken",
    prompt="""
    Refactor: [FEATURE_NAME]
    - Clean up code while keeping tests green
    - Remove duplication
    - Improve naming
    - Extract helpers if needed
    DO NOT add new behavior. Keep all tests passing.
    """
)
```