WIN: Complete Coverage Framework
Core Rule: Before writing any code, enumerate what could go wrong. Before calling anything done, verify nothing was missed. Before saying "looks good", prove every path is covered.
PHASE 1: PLANNING — Think Like an Attacker
When planning any feature, fix, or change, work through every section below. If a section doesn't apply, explicitly state why.
1.1 Input Space Analysis
For every input the feature accepts (user input, API params, DB data, URL params, file uploads, environment variables):
- Valid inputs: What are all the valid forms? (types, ranges, formats)
- Boundary values: Empty string, 0, -1, MAX_INT, max length, min length, exactly-at-limit
- Invalid inputs: Wrong type, null, undefined, NaN, Infinity, empty array, nested nulls
- Malicious inputs: SQL injection, XSS payloads, path traversal, oversized payloads, unicode edge cases (ZWJ, RTL, emoji)
- Concurrent inputs: Same user submitting twice, race between two users, stale data
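The boundary and invalid cases above can be collapsed into a single validator that returns an explicit result instead of throwing. This is a sketch under assumptions: the field name, the 3-32 character limits, and the rejected character classes are illustrative, not requirements.

```typescript
// Hypothetical example: a username validator. The limits and the rejected
// character class are assumptions chosen for illustration.
type ValidationResult =
  | { ok: true; value: string }
  | { ok: false; reason: string };

function validateUsername(input: unknown): ValidationResult {
  // Wrong type, null, and undefined all fail the same explicit check
  if (typeof input !== "string") return { ok: false, reason: "must be a string" };
  const trimmed = input.trim();
  if (trimmed.length === 0) return { ok: false, reason: "must not be empty" };
  if (trimmed.length < 3) return { ok: false, reason: "too short (min 3)" };
  if (trimmed.length > 32) return { ok: false, reason: "too long (max 32)" };
  // Reject control characters and zero-width joiners that break rendering
  if (/[\u0000-\u001f\u200d]/.test(trimmed)) {
    return { ok: false, reason: "contains disallowed characters" };
  }
  return { ok: true, value: trimmed };
}
```

Returning a discriminated result keeps the invalid path visible in the type system, so callers cannot forget to handle it.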
1.2 State Space Analysis
Map every state the system can be in during and after this change:
- Loading states: Initial load, refresh, pagination, background sync
- Empty states: No data yet, data deleted, filtered to nothing
- Error states: Network failure, auth expired, rate limited, server 500, partial failure
- Success states: Single result, many results, exactly-at-limit results
- Transition states: Optimistic update that rolls back, concurrent modification, mid-operation failure
- Stale states: Cached data that's outdated, browser tab left open, websocket disconnected
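One way to make this state space impossible to ignore is a discriminated union: the compiler then forces every consumer to handle each state. The variant names below are illustrative assumptions, not a standard.

```typescript
// Illustrative state model covering the states listed above
type RemoteData<T> =
  | { kind: "loading" }
  | { kind: "empty" }
  | { kind: "error"; message: string; retryable: boolean }
  | { kind: "success"; items: T[]; stale: boolean }
  | { kind: "partial"; items: T[]; failedCount: number };

// An exhaustive switch: adding a new state becomes a compile error here
function describe(state: RemoteData<string>): string {
  switch (state.kind) {
    case "loading": return "Loading...";
    case "empty": return "No results yet";
    case "error": return state.retryable ? `${state.message} (retry available)` : state.message;
    case "success": return `${state.items.length} items${state.stale ? " (stale)" : ""}`;
    case "partial": return `${state.items.length} loaded, ${state.failedCount} failed`;
  }
}
```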
1.3 Failure Mode Analysis
For every external dependency (API call, DB query, file read, third-party service):
- What if it's slow (5s, 30s, timeout)?
- What if it fails (network error, 500, malformed response)?
- What if it returns unexpected data (empty, wrong shape, extra fields, null where not expected)?
- What if it partially succeeds (3 of 5 items saved, then crash)?
- What if the user retries while the first attempt is still in-flight?
- What is the recovery path? Can the user retry? Is the data left in a consistent state?
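These questions can be answered in code by mapping every failure mode of an external call to an explicit outcome. A sketch under assumptions: `loadItems`, the response shape, and the outcome names are hypothetical, and the dependency is injected so the slow and failing paths can be exercised without a real network.

```typescript
// Every failure mode becomes a named outcome instead of an exception
type FetchOutcome =
  | { status: "ok"; items: string[] }
  | { status: "timeout" }
  | { status: "failed"; message: string }
  | { status: "bad-shape" };

async function loadItems(
  call: () => Promise<unknown>,
  timeoutMs: number
): Promise<FetchOutcome> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<"timeout">((resolve) => {
    timer = setTimeout(() => resolve("timeout"), timeoutMs);
  });
  try {
    const result = await Promise.race([call(), timeout]);
    if (result === "timeout") return { status: "timeout" };
    // Never trust the shape of external data: verify before accessing
    if (typeof result === "object" && result !== null) {
      const maybe = result as { items?: unknown };
      if (Array.isArray(maybe.items) && maybe.items.every((i) => typeof i === "string")) {
        return { status: "ok", items: maybe.items as string[] };
      }
    }
    return { status: "bad-shape" };
  } catch (e) {
    return { status: "failed", message: e instanceof Error ? e.message : "unknown error" };
  } finally {
    clearTimeout(timer);
  }
}
```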
1.4 Security Threat Model
- Authentication: Can this be accessed without login? With an expired token?
- Authorization: Can user A access user B's data? Can a free user access pro features?
- Data exposure: Are we returning more fields than the client needs? PII in logs?
- Injection: Any string concatenation in SQL, HTML, or shell commands?
- Rate limiting: Can this endpoint be abused? Is there a cost multiplier (e.g., triggers expensive AI call)?
1.5 Assumption Discovery
Before building or fixing anything, step back from the immediate task and ask: what is the current context assuming?
- Identify every assumption the current task, test case, or bug report is making about who uses this, what data they provide, how they use it, and in what context
- Challenge each assumption: What if a real user doesn't match this assumption? What breaks, degrades, or behaves unexpectedly?
- Enumerate what SHOULD work but isn't being considered — scenarios, user types, data types, or contexts beyond the one in front of you right now
- Enumerate what should NOT work — scenarios the app should explicitly reject, block, or handle gracefully. Are they actually handled, or would they silently break or accidentally be allowed?
Do not use a predefined checklist. The assumptions are different for every app and every feature. You must discover them from the specific context you're working in.
1.6 Impact Analysis
- What existing features could this break? Trace all callers and consumers.
- What data could this corrupt? Check migration safety, default values, nullable columns.
- What performance could this degrade? New queries without indexes? N+1? Large payloads?
- What's the rollback plan? If this goes wrong in production, how do we undo it?
1.7 Plan Completeness Checklist
Before finalizing any plan, verify:
- Every user-facing state is accounted for (loading, empty, error, success, partial)
- Every external call has error handling specified
- Every input has validation rules defined
- Edge cases are explicitly listed (not "handle edge cases" — LIST them)
- The testing strategy covers more than just the happy path
- Performance implications are noted for any new queries or API calls
- Security implications are noted for any new endpoints or data access
- Migration safety is confirmed (no data loss, backwards compatible)
- Assumptions about users, data, and context have been identified and challenged
PHASE 2: IMPLEMENTATION — Defensive by Default
2.1 Error Handling Rules
- Every async operation gets error handling — no unhandled promise rejections
- Every API/DB client call checks for errors before accessing response data
- Every array index access is bounds-checked — never assume the index is in range
- Every optional chain has a fallback — what happens when the value IS nullish?
- Network errors get user-facing messages — not silent failures or console.log only
- Partial failure states are handled — if step 2 of 3 fails, what happens to step 1's data?
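For the partial-failure rule, one pattern is to run the steps with `Promise.allSettled` and report both outcomes rather than discarding the successes. `saveItem` is a hypothetical dependency, injected here so the failure path is easy to exercise.

```typescript
// Surfacing partial failure instead of hiding it
interface SaveReport {
  saved: string[];
  failed: { id: string; reason: string }[];
}

async function saveAll(
  ids: string[],
  saveItem: (id: string) => Promise<void>
): Promise<SaveReport> {
  // allSettled never rejects: every step produces an explicit result
  const results = await Promise.allSettled(ids.map((id) => saveItem(id)));
  const report: SaveReport = { saved: [], failed: [] };
  results.forEach((r, i) => {
    if (r.status === "fulfilled") {
      report.saved.push(ids[i]);
    } else {
      report.failed.push({
        id: ids[i],
        reason: r.reason instanceof Error ? r.reason.message : String(r.reason),
      });
    }
  });
  return report;
}
```

The caller can then show the user exactly which items failed, offer a retry for only those, and leave the saved items intact.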
2.2 UI State Coverage
For every component that displays data, implement ALL states:
- Loading: Skeleton or spinner (not blank screen)
- Empty: Helpful message with action (not blank screen)
- Error: Specific message with retry option (not generic "Something went wrong")
- Success: The actual content
- Partial: Some data loaded, some failed (don't hide the successes)
- Stale: Visual indicator if data might be outdated
2.3 Implementation Completeness Check
After writing each piece of code, verify:
- What happens if called with null? undefined? empty string? empty array?
- What happens if the network call fails? Times out? Returns unexpected shape?
- What happens if the user navigates away mid-operation?
- What happens if two users do this simultaneously?
- Did I handle the error path in the UI, not just console.log it?
- Am I exposing any data the user shouldn't see?
- Is this implementation built around assumptions from the current test case? Would it break for a user in a different context?
- Are unsupported or invalid scenarios explicitly handled (clear rejection, error message), or will they silently fail or accidentally be allowed?
PHASE 3: TESTING — Prove It Works, Then Prove It Fails Gracefully
3.1 Test Categories (ALL Required)
For every feature or fix, address each category:
Happy Path Tests
- Basic functionality works with typical inputs
- Verify the exact output/behavior, not just "no errors"
Input Boundary Tests
- Empty inputs (empty string, empty array, null, undefined)
- Minimum valid input (1 character, single item)
- Maximum valid input (at the limit)
- Just over the limit (should reject gracefully)
- Special characters, unicode, very long strings
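The boundary categories above can be exercised with plain assertions, no test framework assumed. `clamp` is a stand-in for whatever function is under test.

```typescript
// Function under test (a hypothetical example)
function clamp(value: number, min: number, max: number): number {
  if (Number.isNaN(value)) throw new RangeError("value must not be NaN");
  return Math.min(max, Math.max(min, value));
}

function expectEqual(actual: number, expected: number, label: string): void {
  if (actual !== expected) throw new Error(`${label}: got ${actual}, wanted ${expected}`);
}

// Happy path: verify the exact output, not just "no errors"
expectEqual(clamp(5, 0, 10), 5, "typical input");
// Boundaries: exactly at the limits
expectEqual(clamp(0, 0, 10), 0, "at min");
expectEqual(clamp(10, 0, 10), 10, "at max");
// Just over the limit: must clamp, not pass through
expectEqual(clamp(11, 0, 10), 10, "over max");
expectEqual(clamp(-1, 0, 10), 0, "under min");
// Invalid input: reject loudly, never return garbage
let threw = false;
try { clamp(Number.NaN, 0, 10); } catch { threw = true; }
if (!threw) throw new Error("NaN should be rejected");
```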
Error Path Tests
- Network failure during operation
- Auth token expired mid-session
- Invalid data from API (missing fields, wrong types)
- Database constraint violations (duplicate key, foreign key)
- Rate limit hit
State Transition Tests
- Component handles loading → success correctly
- Component handles loading → error → retry → success
- State is consistent after error recovery
- Back button / navigation during async operation
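Transitions like loading → error → retry → success are easiest to prove when the transition logic is a pure function. The state and event names below are illustrative.

```typescript
// A minimal transition function: invalid events are no-ops, never crashes
type FetchState = "idle" | "loading" | "success" | "error";
type FetchEvent = "start" | "resolve" | "reject" | "retry";

function next(state: FetchState, event: FetchEvent): FetchState {
  switch (state) {
    case "idle": return event === "start" ? "loading" : state;
    case "loading":
      if (event === "resolve") return "success";
      if (event === "reject") return "error";
      return state;
    case "error": return event === "retry" ? "loading" : state;
    case "success": return event === "start" ? "loading" : state;
  }
}

// Replay the loading -> error -> retry -> success path as data
const path: FetchEvent[] = ["start", "reject", "retry", "resolve"];
const final = path.reduce(next, "idle" as FetchState);
```

Because `next` is pure, a whole transition sequence becomes a one-line `reduce`, and consistency after error recovery is a direct assertion.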
Security Tests
- Unauthorized access returns 401/403, not 500
- Cross-user data access is blocked
- SQL/XSS payloads in inputs are handled safely
- Sensitive data isn't leaked in error messages
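A sketch of the XSS case: assert that a hostile payload is neutralized before it can reach HTML. `escapeHtml` here is a minimal illustration, not a substitute for a vetted sanitization library.

```typescript
// Minimal HTML escaping for illustration only
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A classic payload must come out inert, not executable
const payload = "<script>alert('xss')</script>";
const escaped = escapeHtml(payload);
if (escaped.includes("<script")) throw new Error("XSS payload not neutralized");
```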
Assumption Tests
- Test with scenarios beyond the immediate test case — different types of users, data, or contexts the feature should support
- Test scenarios that should NOT work — verify they are properly rejected with clear feedback, not silently broken or accidentally allowed
Regression Tests
- Existing functionality still works after the change
- Related features aren't broken by side effects
3.2 Testing Completeness Check
Verify before moving on:
- Every public function/endpoint has at least one happy path test
- Every error code/message returned has a test that triggers it
- Every conditional branch in the code is exercised by a test
- Boundary values for every input are tested
- The test suite would CATCH a regression if someone broke this feature
PHASE 4: ANALYSIS & REVIEW — Audit Like a Senior Engineer
When analyzing existing code, reviewing a feature, investigating a bug, or auditing for improvements, apply this systematic framework. Do NOT give surface-level "looks fine" answers.
4.1 Existing Code Audit
When asked to analyze or review existing code:
- Read the full flow, not just the file mentioned — trace the data from entry point to database and back
- Map every conditional branch — is every if/else/switch case handled? Are there missing cases?
- Check every external call — does each API/DB/service call have proper error handling?
- Identify silent failures — places where errors are caught but swallowed (empty catch blocks, missing error states in UI)
- Check null/undefined paths — what happens when optional data is actually missing? Is the code assuming data always exists?
- Verify auth boundaries — is every endpoint/query properly scoped to the authenticated user?
- Challenge assumptions — is this code built on assumptions about who uses it, what data they provide, or in what context? Would it break for someone in a different situation?
4.2 Bug Investigation
When investigating a bug or unexpected behavior:
- Reproduce the full path — don't guess. Trace the exact code path from trigger to symptom
- Check ALL callers — if the bug is in a shared function, who else calls it? Are they affected too?
- Look for the systemic cause — a bug in one place often indicates the same pattern elsewhere. Search for similar patterns in the codebase
- Identify what ELSE could break — fixing the immediate bug is not enough. What related code has the same vulnerability?
- Check data integrity — has the bug corrupted any existing data? Does the fix need a data migration or backfill?
- Verify the fix prevents recurrence — will this same bug happen again for new users, new data, or new features? If yes, fix the root cause, not the symptom
- Challenge the assumption that caused the bug — did this bug exist because the code assumed a specific type of user, data, or context? Does the fix hold for all real-world scenarios, not just the reported case?
4.3 Feature Improvement Analysis
When asked to improve or optimize an existing feature:
- Benchmark the current state — what's the actual performance/behavior now? Don't optimize blindly
- Identify ALL consumers — who depends on this feature? Will improvements break any downstream consumers?
- Check for unhandled edge cases in the current implementation:
- What inputs/states does the current code NOT handle?
- What error paths are missing or incomplete?
- Are there race conditions or timing issues?
- Does it degrade gracefully under load?
- Assess the blast radius — how many files, functions, and features does this change touch?
- Propose with tradeoffs — every improvement has a cost (complexity, performance, migration). State the tradeoffs explicitly
- Check assumption coverage — does the current implementation only handle the scenario it was built and tested with, or does it account for the full range of real-world usage?
4.4 Architecture Review
When reviewing system design or architecture:
- Data flow completeness — trace data from user input through every layer to storage and back. Are there gaps?
- Error propagation — when something fails deep in the stack, does the error surface correctly to the user? Or does it get lost?
- Consistency guarantees — if a multi-step operation fails halfway, is the system in a valid state?
- Scaling bottlenecks — what happens at 10x current load? 100x? Where does it break first?
- Dependency risks — what happens if a third-party service goes down? Is there a fallback?
- Security surface area — every new endpoint, every new data flow is an attack surface. Map them all
4.5 Analysis Completeness Check
Before delivering any analysis or review, verify:
- I traced the complete data flow, not just the surface-level code
- I identified edge cases the current code does NOT handle
- I checked for the same pattern/bug elsewhere in the codebase
- I assessed security implications (auth, data exposure, injection)
- I considered performance under stress (concurrent users, large data)
- I noted specific file:line references for every finding
- My recommendations include tradeoffs, not just "do this"
- I answered "will this happen again?" and addressed recurrence prevention
PHASE 5: FINAL VERIFICATION — The "Ship It" Checklist
Before declaring ANY task complete, verify every item:
Code Quality
- TypeScript compiles with zero errors
- No lint warnings
- No any types, no ts-ignore or eslint-disable suppressions added
- No console.log/debug statements left in
- No hardcoded values that should be constants or config
Functional Completeness
- Every requirement from the original request is addressed
- Every edge case identified in planning has code handling it
- Error states have user-facing feedback, not just thrown exceptions
- The feature works on first use, not just in a happy-path demo
- The feature works beyond the specific scenario it was built and tested with — assumptions have been identified and addressed
Safety
- No new security vulnerabilities (check OWASP top 10)
- No data leaks in API responses or error messages
- Auth/authz is enforced on every new endpoint
- Database migrations are backwards compatible and reversible
Resilience
- External service failures are handled gracefully
- User can recover from errors without refreshing the page
- Data integrity is maintained even if operations are interrupted
- No race conditions in concurrent access scenarios