qa-report
QA Test Planner
Plan and document QA deliverables — test plans, test cases, regression suites, Figma validations, and bug reports — in a structured format compatible with `qa-execution`.
Required Inputs
- `qa-output-path` (optional): Directory where all QA artifacts are stored. When provided, create the directory if it does not exist. When omitted, use the current working directory. This path must match the same argument passed to `qa-execution` when both skills are used together.
Shared Output Structure
All artifacts follow this directory layout, shared with `qa-execution`:
<qa-output-path>/qa/
├── test-plans/              # Test plan documents
├── test-cases/              # Individual test case files (TC-*.md)
├── issues/                  # Bug reports (BUG-*.md)
├── screenshots/             # Visual evidence and Figma comparisons
└── verification-report.md   # Generated by qa-execution
Procedures
Step 1: Resolve Output Directory
- If the user provided a `qa-output-path` argument, use that path.
- Otherwise, default to the current working directory.
- Create the `qa/` subdirectory under the resolved path, then create `qa/test-plans/`, `qa/test-cases/`, `qa/issues/`, and `qa/screenshots/` if they do not exist.
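Under the assumption that this step is scripted in shell, the resolution logic could be sketched as follows (`QA_OUTPUT_PATH` is a hypothetical variable standing in for the `qa-output-path` argument, not part of the skill):

```shell
# Sketch of Step 1: resolve the output directory, falling back to the
# current working directory when no argument was supplied.
QA_OUTPUT_PATH="${QA_OUTPUT_PATH:-$(pwd)}"

# Create the shared layout; mkdir -p is a no-op for directories that exist.
mkdir -p "$QA_OUTPUT_PATH/qa/test-plans" \
         "$QA_OUTPUT_PATH/qa/test-cases" \
         "$QA_OUTPUT_PATH/qa/issues" \
         "$QA_OUTPUT_PATH/qa/screenshots"
```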
Step 2: Identify the Deliverable Type
Parse the user request to determine which deliverable to generate:
| Request Pattern | Deliverable | Output Path |
|---|---|---|
| "Create test plan for..." | Test Plan | `qa/test-plans/<feature-slug>-test-plan.md` |
| "Generate test cases for..." | Test Cases | `qa/test-cases/<TC-ID>.md` |
| "Build regression suite..." | Regression Suite | `qa/test-plans/<suite-name>-regression.md` |
| "Compare with Figma..." | Figma Validation | `qa/test-cases/TC-UI-*.md` |
| "Document bug..." | Bug Report | `qa/issues/<BUG-ID>.md` |
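The dispatch above could be approximated by a shell sketch like the following; the skill itself interprets the user request directly, so `classify_request` and its keyword patterns are purely illustrative assumptions:

```shell
# Illustrative request-pattern dispatch for Step 2. Patterns are checked
# in order; the first match wins.
classify_request() {
  case "$1" in
    *"test plan"*)        echo "Test Plan" ;;
    *"test cases"*)       echo "Test Cases" ;;
    *"regression suite"*) echo "Regression Suite" ;;
    *Figma*)              echo "Figma Validation" ;;
    *bug*)                echo "Bug Report" ;;
    *)                    echo "Unknown" ;;
  esac
}

classify_request "Create test plan for checkout"   # → Test Plan
classify_request "Document bug in login flow"      # → Bug Report
```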
Step 3: Generate Test Plans
- Read `references/test_case_templates.md` for the test plan structure.
- Generate a test plan document with these mandatory sections:
- Executive summary with objectives and key risks.
- Scope definition (in-scope and out-of-scope).
- Test strategy and approach.
- Automation strategy covering which flows should become E2E, which remain manual-only, and which are blocked by environment gaps.
- Environment requirements (OS, browsers, devices).
- Entry criteria (what must be true before testing begins).
- Exit criteria (what must be true before testing ends, including pass-rate thresholds and automation follow-up expectations for critical flows).
- Risk assessment table (Risk, Probability, Impact, Mitigation).
- Timeline and deliverables.
- Write the plan to `<qa-output-path>/qa/test-plans/<feature-slug>-test-plan.md`.
Step 4: Generate Test Cases
- Read `references/test_case_templates.md` to select the appropriate template variant (Functional, UI, Integration, Regression, Security, Performance).
- Assign each test case an ID following the naming scheme:

  | Type | Prefix | Example |
  |---|---|---|
  | Functional | TC-FUNC- | TC-FUNC-001 |
  | UI/Visual | TC-UI- | TC-UI-045 |
  | Integration | TC-INT- | TC-INT-012 |
  | Regression | TC-REG- | TC-REG-089 |
  | Security | TC-SEC- | TC-SEC-005 |
  | Performance | TC-PERF- | TC-PERF-023 |
  | Smoke | SMOKE- | SMOKE-001 |

- Each test case must include:
  - Priority: P0 (Critical) | P1 (High) | P2 (Medium) | P3 (Low).
  - Objective: What is being validated and why.
  - Preconditions: Setup requirements and test data.
  - Test Steps: Numbered actions with an **Expected:** result for each.
  - Edge Cases: Boundary values, null inputs, special characters.
  - Automation Target: `E2E`, `Integration`, or `Manual-only`.
  - Automation Status: `Existing`, `Missing`, `Blocked`, or `N/A`.
  - Automation Command/Spec: Existing spec path or command when known.
  - Automation Notes: Why the case should be automated, remain manual, or is blocked.
- Write each test case to `<qa-output-path>/qa/test-cases/<TC-ID>.md`.
- When generating test cases interactively, execute `scripts/generate_test_cases.sh <qa-output-path>/test-cases`.
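As a hedged illustration of the required fields, a minimal test-case file might be scaffolded like this (the `TC-FUNC-001` ID and every field value are placeholders, not output of the real `scripts/generate_test_cases.sh`):

```shell
# Sketch of a Step 4 test-case skeleton with all mandatory fields.
QA_OUTPUT_PATH="${QA_OUTPUT_PATH:-$(pwd)}"
mkdir -p "$QA_OUTPUT_PATH/qa/test-cases"

# Quoted heredoc delimiter prevents any shell expansion in the template.
cat > "$QA_OUTPUT_PATH/qa/test-cases/TC-FUNC-001.md" <<'EOF'
# TC-FUNC-001: <short title>

- Priority: P1
- Objective: <what is being validated and why>
- Preconditions: <setup requirements and test data>

## Test Steps
1. <action>. **Expected:** <result>
2. <action>. **Expected:** <result>

## Edge Cases
- <boundary values, null inputs, special characters>

## Automation
- Automation Target: E2E
- Automation Status: Missing
- Automation Command/Spec: <spec path or command when known>
- Automation Notes: <why automate, stay manual, or blocked>
EOF
```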
Step 5: Build Regression Suites
- Read `references/regression_testing.md` for suite structure and execution strategy.
- Classify tests into tiers:

  | Suite | Duration | Frequency | Coverage |
  |---|---|---|---|
  | Smoke | 15-30 min | Daily/per-build | Critical paths only |
  | Targeted | 30-60 min | Per change | Affected areas |
  | Full | 2-4 hours | Weekly/Release | Comprehensive |
  | Sanity | 10-15 min | After hotfix | Quick validation |

- Prioritize test cases using the shared priority scale:
- P0: Business-critical, security, revenue-impacting — must run always.
- P1: Major features, common flows — run weekly or more.
- P2: Minor features, edge cases — run at releases.
- Mark automation candidates explicitly:
  - Tag changed or regression-critical P0 and P1 public flows as `Automation Target: E2E` when the repository already has an E2E harness.
  - Tag bug-driven public regressions as `Automation Status: Missing` until `qa-execution` confirms the spec was added or updated.
  - Tag exploratory, visual-judgment, or unsupported flows as `Manual-only` or `Blocked` with a reason.
- Define execution order: Smoke first (if fails, stop) → P0 → P1 → P2 → Exploratory.
- Define pass/fail criteria:
- PASS: All P0 pass, 90%+ P1 pass, no critical bugs open.
- FAIL: Any P0 fails, critical bug discovered, security vulnerability, data loss.
- CONDITIONAL: P1 failures with documented workarounds, fix plan in place.
- Write the suite document to `<qa-output-path>/qa/test-plans/<suite-name>-regression.md`.
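The pass/fail criteria can be expressed as a small decision function; this is a sketch under the assumption that result counts are already tallied (`suite_verdict` and its arguments are hypothetical names):

```shell
# Sketch of the Step 5 gate: FAIL on any P0 failure or critical bug,
# PASS at a 90%+ P1 pass rate, otherwise CONDITIONAL.
suite_verdict() {
  local p0_fail=$1 p1_total=$2 p1_pass=$3 critical_bugs=$4
  if [ "$p0_fail" -gt 0 ] || [ "$critical_bugs" -gt 0 ]; then
    echo "FAIL"
  elif [ $((p1_pass * 100)) -ge $((p1_total * 90)) ]; then
    echo "PASS"
  else
    echo "CONDITIONAL"   # requires documented workarounds and a fix plan
  fi
}

suite_verdict 0 20 19 0   # → PASS (95% P1 pass rate)
suite_verdict 1 20 20 0   # → FAIL (a P0 failed)
```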
Step 6: Validate Against Figma Designs
Skip this step if Figma MCP is not configured.
- Read `references/figma_validation.md` for the validation workflow.
- Extract design specifications from Figma using MCP queries:
- Dimensions (width, height).
- Colors (background, text, border — exact hex values).
- Typography (font family, size, weight, line-height, color).
- Spacing (padding, margin).
- Border radius, shadows.
- Interactive states (default, hover, active, focus, disabled).
- Generate UI test cases (TC-UI-*) that compare each property against the implementation.
- Test responsive behavior at these standard viewports:
- Mobile: 375px.
- Tablet: 768px.
- Desktop: 1280px.
- When validation reveals discrepancies, generate a bug report following Step 7.
- Use `agent-browser` (from the `qa-execution` companion skill) when browser-based verification is needed. The core loop is: open → snapshot → interact → re-snapshot → verify.
Step 7: Create Bug Reports
- Use the unified bug report format from `assets/issue-template.md`, shared with `qa-execution`.
- Assign a bug ID with the prefix `BUG-` (e.g., `BUG-001`).
- Every bug report must include:
- Severity: Critical | High | Medium | Low.
- Priority: P0 | P1 | P2 | P3.
- Environment: Build, OS, Browser, URL.
- Reproduction: Exact steps to reproduce.
- Expected vs Actual: Clear descriptions.
- Impact: Users affected, frequency, workaround.
- Related: TC-ID if discovered during test case execution, Figma URL if UI bug.
- Write each bug report to `<qa-output-path>/qa/issues/<BUG-ID>.md`.
- When creating bug reports interactively, execute `scripts/create_bug_report.sh <qa-output-path>/issues`.
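One possible way to assign the next sequential ID, assuming the zero-padded `BUG-001` scheme shown above (`next_bug_id` is a hypothetical helper, not part of the skill's scripts):

```shell
# Sketch: count existing BUG-* files and emit the next zero-padded ID.
QA_OUTPUT_PATH="${QA_OUTPUT_PATH:-$(pwd)}"
mkdir -p "$QA_OUTPUT_PATH/qa/issues"

next_bug_id() {
  local count
  count=$(ls "$QA_OUTPUT_PATH/qa/issues" 2>/dev/null | grep -c '^BUG-')
  printf 'BUG-%03d' $((count + 1))
}

next_bug_id   # prints BUG-001 when the issues directory is empty
```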
Step 8: Validate Completeness
- Verify all generated test cases have an expected result for each step.
- Verify all bug reports have reproducible steps.
- Verify traceability: test cases reference requirements, bugs reference test cases.
- Verify every planned critical flow has an explicit automation annotation and that `Missing` or `Blocked` states include a reason.
- Cross-reference against `../qa-execution/references/checklist.md` for coverage gaps when planning for later execution.
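The expected-result and automation-annotation checks could be mechanized roughly like this (a sketch assuming test cases use the `**Expected:**` and `Automation Target:` markers from Step 4):

```shell
# Sketch of Step 8 completeness checks over generated test cases.
QA_OUTPUT_PATH="${QA_OUTPUT_PATH:-$(pwd)}"
missing=0
for tc in "$QA_OUTPUT_PATH"/qa/test-cases/*.md; do
  [ -e "$tc" ] || continue   # glob matched nothing: no test cases yet
  grep -q '\*\*Expected:\*\*' "$tc" \
    || { echo "No expected result: $tc"; missing=1; }
  grep -q 'Automation Target:' "$tc" \
    || { echo "No automation annotation: $tc"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "Completeness check passed"
```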
Severity Definitions
| Level | Criteria | Examples |
|---|---|---|
| Critical | System crash, data loss, security breach | Payment fails, login broken |
| High | Major feature broken, no workaround | Search not working, checkout fails |
| Medium | Feature partial, workaround exists | Filter missing option, slow load |
| Low | Cosmetic, rare edge case | Typo, minor alignment |
Priority vs Severity Matrix
| Frequency | Low Impact | Medium | High | Critical |
|---|---|---|---|---|
| Rare | P3 | P3 | P2 | P1 |
| Sometimes | P3 | P2 | P1 | P0 |
| Often | P2 | P1 | P0 | P0 |
| Always | P2 | P1 | P0 | P0 |
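Reading the matrix as frequency rows against impact columns, a lookup sketch (with `bug_priority` as a hypothetical helper) would be:

```shell
# Sketch mapping (frequency, impact) to priority per the matrix above.
bug_priority() {
  case "$1:$2" in
    Rare:Low|Rare:Medium|Sometimes:Low)   echo P3 ;;
    Rare:High|Sometimes:Medium)           echo P2 ;;
    Often:Low|Always:Low)                 echo P2 ;;
    Rare:Critical|Sometimes:High)         echo P1 ;;
    Often:Medium|Always:Medium)           echo P1 ;;
    *)                                    echo P0 ;;  # remaining High/Critical cells
  esac
}

bug_priority Sometimes Critical   # → P0
bug_priority Rare High            # → P2
```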
Companion Skill: qa-execution
The `qa-report` and `qa-execution` skills share a common output directory and artifact format. The intended workflow:
- Plan first with `qa-report`: generate test plans, test cases, and regression suites.
- Execute with `qa-execution`: run verification gates, exercise flows end-to-end, discover bugs, and add or update E2E coverage when the repository already supports it.
- Document with `qa-report`: create structured bug reports for issues found during execution.
When `qa-execution` runs after `qa-report`, it reads test cases from `<qa-output-path>/qa/test-cases/` to inform its execution matrix, automation priorities, and reporting fields, then writes bugs to `<qa-output-path>/qa/issues/` using the same unified template.
Error Handling
- If the `qa-output-path` directory cannot be created, report the error and fall back to the current working directory.
- If Figma MCP is not configured, skip Figma validation steps and note the gap in the test plan.
- If `agent-browser` is not available for UI validation, generate test cases as documentation for manual execution and note the limitation.
- If the repository does not have a known E2E harness, mark affected cases as `Manual-only` or `Blocked` instead of inventing automation commands.
- If the user provides a feature description that is too vague to generate test cases, ask for specific requirements, user flows, or acceptance criteria before proceeding.