Plan and document QA deliverables — test plans, test cases, regression suites, Figma validations, and bug reports — in a structured format compatible with execution.
All artifacts follow this directory layout, shared with:
- Read `references/test_case_templates.md` to select the appropriate template variant (Functional, UI, Integration, Regression, Security, Performance).
- Assign each test case an ID following the naming scheme:
| Type | Prefix | Example |
|---|---|---|
| Functional | TC-FUNC- | TC-FUNC-001 |
| UI/Visual | TC-UI- | TC-UI-045 |
| Integration | TC-INT- | TC-INT-012 |
| Regression | TC-REG- | TC-REG-089 |
| Security | TC-SEC- | TC-SEC-005 |
| Performance | TC-PERF- | TC-PERF-023 |
| Smoke | SMOKE- | SMOKE-001 |
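The naming scheme above can be checked mechanically. A minimal sketch of a validator, assuming the three-digit zero-padded convention shown in the Example column (the skill does not mandate a digit count explicitly):

```python
import re

# Prefixes taken from the naming-scheme table; the fixed three-digit
# suffix is an assumption based on the examples (TC-FUNC-001, SMOKE-001).
TC_ID_PATTERN = re.compile(r"^(TC-(FUNC|UI|INT|REG|SEC|PERF)|SMOKE)-\d{3}$")

def is_valid_tc_id(tc_id: str) -> bool:
    """Return True if tc_id follows the test-case ID naming scheme."""
    return bool(TC_ID_PATTERN.match(tc_id))
```

A check like this could run as a pre-commit hook over `<qa-output-path>/qa/test-cases/` filenames.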
- Each test case must include:
- Priority: P0 (Critical) | P1 (High) | P2 (Medium) | P3 (Low).
- Objective: What is being validated and why.
- Preconditions: Setup requirements and test data.
- Test Steps: Numbered actions with an expected result for each.
- Edge Cases: Boundary values, null inputs, special characters.
- Automation Target: , , or .
- Automation Status: , , , or .
- Automation Command/Spec: Existing spec path or command when known.
- Automation Notes: Why the case should be automated, remain manual, or is blocked.
- Write each test case to `<qa-output-path>/qa/test-cases/<TC-ID>.md`.
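As a sketch, a test-case file combining the required fields might look like this (the ID, steps, and values are illustrative, not mandated by the templates):

```markdown
# TC-FUNC-001: Login with valid credentials

- Priority: P0
- Objective: Verify a registered user can sign in and reach the dashboard.
- Preconditions: Test account exists; app deployed to staging.
- Test Steps:
  1. Open the login page. Expected: the form renders.
  2. Submit valid credentials. Expected: redirect to the dashboard.
- Edge Cases: empty password, email with leading whitespace.
- Automation Target: (per the selected template variant)
- Automation Status: Missing
- Automation Command/Spec: (none yet)
- Automation Notes: High-traffic public flow; strong automation candidate.
```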
- When generating test cases interactively, execute `scripts/generate_test_cases.sh <qa-output-path>/test-cases`.
- Read `references/regression_testing.md` for suite structure and execution strategy.
- Classify tests into tiers:
| Suite | Duration | Frequency | Coverage |
|---|---|---|---|
| Smoke | 15-30 min | Daily/per-build | Critical paths only |
| Targeted | 30-60 min | Per change | Affected areas |
| Full | 2-4 hours | Weekly/Release | Comprehensive |
| Sanity | 10-15 min | After hotfix | Quick validation |
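The tier table above implies a selection rule per suite. A minimal sketch, assuming test cases carry `smoke`, `area`, and `priority` fields (the field names are illustrative, not fixed by the templates):

```python
def select_for_suite(cases, suite, changed_areas=frozenset()):
    """Pick the test cases a suite tier should run.

    Assumed mapping: Smoke -> smoke-tagged cases; Targeted -> cases in
    changed areas; Full -> everything; Sanity -> P0 only.
    """
    if suite == "smoke":
        return [c for c in cases if c.get("smoke")]
    if suite == "targeted":
        return [c for c in cases if c["area"] in changed_areas]
    if suite == "full":
        return list(cases)
    if suite == "sanity":
        return [c for c in cases if c["priority"] == "P0"]
    raise ValueError(f"unknown suite: {suite}")
```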
- Prioritize test cases using the shared priority scale:
- P0: Business-critical, security, revenue-impacting — must run always.
- P1: Major features, common flows — run weekly or more.
- P2: Minor features, edge cases — run at releases.
- Mark automation candidates explicitly:
- Tag changed or regression-critical P0 and P1 public flows as when the repository already has an E2E harness.
- Tag bug-driven public regressions as `Automation Status: Missing` until confirms the spec was added or updated.
- Tag exploratory, visual-judgment, or unsupported flows as or with a reason.
- Define execution order: Smoke first (if it fails, stop) → P0 → P1 → P2 → Exploratory.
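That gated ordering can be sketched as a small runner loop. Here `run_one` is a hypothetical stand-in for whatever actually executes a case; it is an assumption, not part of the skill:

```python
# Gated execution order: Smoke first (abort the run on any smoke
# failure), then P0 -> P1 -> P2 -> Exploratory.
ORDER = ["Smoke", "P0", "P1", "P2", "Exploratory"]

def run_in_order(grouped, run_one):
    """grouped maps phase name -> list of cases; run_one(case) -> bool."""
    results = {}
    for phase in ORDER:
        results[phase] = [run_one(case) for case in grouped.get(phase, [])]
        if phase == "Smoke" and not all(results[phase]):
            break  # a smoke failure stops the entire run
    return results
```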
- Define pass/fail criteria:
- PASS: All P0 pass, 90%+ P1 pass, no critical bugs open.
- FAIL: Any P0 fails, critical bug discovered, security vulnerability, data loss.
- CONDITIONAL: P1 failures with documented workarounds, fix plan in place.
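The three criteria reduce to a small decision function. A sketch, assuming the inputs come from the run report (the argument names are illustrative):

```python
def verdict(p0_pass_rate, p1_pass_rate, critical_bugs_open,
            security_vuln, data_loss, p1_failures_have_workarounds):
    """Map the pass/fail criteria to PASS, FAIL, or CONDITIONAL."""
    # FAIL: any P0 failure, critical bug, security vulnerability, or data loss.
    if p0_pass_rate < 1.0 or critical_bugs_open or security_vuln or data_loss:
        return "FAIL"
    # PASS: all P0 pass and 90%+ of P1 pass.
    if p1_pass_rate >= 0.9:
        return "PASS"
    # CONDITIONAL: P1 failures, but with documented workarounds in place.
    return "CONDITIONAL" if p1_failures_have_workarounds else "FAIL"
```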
- Write the suite document to `<qa-output-path>/qa/test-plans/<suite-name>-regression.md`.
Skip this step if Figma MCP is not configured.
The and skills share a common output directory and artifact format. The intended workflow:
When runs after , it reads test cases from `<qa-output-path>/qa/test-cases/` to inform its execution matrix, automation priorities, and reporting fields, then writes bugs to `<qa-output-path>/qa/issues/` using the same unified template.