qa-report

QA Test Planner

Plan and document QA deliverables — test plans, test cases, regression suites, Figma validations, and bug reports — in a structured format compatible with `qa-execution`.

Required Inputs

  • `qa-output-path` (optional): Directory where all QA artifacts are stored. When provided, create the directory if it does not exist. When omitted, use the current working directory. This path must match the argument passed to `qa-execution` when both skills are used together.

Shared Output Structure

All artifacts follow this directory layout, shared with `qa-execution`:
<qa-output-path>/qa/
├── test-plans/          # Test plan documents
├── test-cases/          # Individual test case files (TC-*.md)
├── issues/              # Bug reports (BUG-*.md)
├── screenshots/         # Visual evidence and Figma comparisons
└── verification-report.md  # Generated by qa-execution

Procedures

Step 1: Resolve Output Directory
  1. If the user provided a `qa-output-path` argument, use that path.
  2. Otherwise, default to the current working directory.
  3. Create the `qa/` subdirectory under the resolved path, then create `qa/test-plans/`, `qa/test-cases/`, `qa/issues/`, and `qa/screenshots/` if they do not exist.
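The resolution logic above can be sketched in shell; treating `qa-output-path` as a positional argument is an assumption for illustration, since the real skill receives it however the host passes arguments:

```shell
#!/usr/bin/env sh
# Resolve the output directory (Step 1): use the first argument when given,
# otherwise fall back to the current working directory.
qa_output_path="${1:-$(pwd)}"

# Create qa/ and its four subdirectories; mkdir -p is a no-op when they exist.
for dir in test-plans test-cases issues screenshots; do
  mkdir -p "$qa_output_path/qa/$dir"
done
echo "QA artifacts root: $qa_output_path/qa"
```

Because `mkdir -p` is idempotent, re-running the sketch against an existing layout is safe.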
Step 2: Identify the Deliverable Type
Parse the user request to determine which deliverable to generate:
| Request Pattern | Deliverable | Output Path |
| --- | --- | --- |
| "Create test plan for..." | Test Plan | `test-plans/` |
| "Generate test cases for..." | Test Cases | `test-cases/` |
| "Build regression suite..." | Regression Suite | `test-plans/` |
| "Compare with Figma..." | Figma Validation | `test-cases/` (TC-UI-*) |
| "Document bug..." | Bug Report | `issues/` |
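As a sketch, the dispatch in the table above might look like this in shell; the literal prefix matches are an assumption, since real requests need looser intent matching:

```shell
#!/usr/bin/env sh
# Map a request phrase to the deliverable's output path (Step 2).
# Patterns mirror the table; a real implementation needs fuzzier matching.
deliverable_path() {
  case "$1" in
    "Create test plan"*|"Build regression suite"*) echo "test-plans/" ;;
    "Generate test cases"*|"Compare with Figma"*)  echo "test-cases/" ;;
    "Document bug"*)                               echo "issues/" ;;
    *)                                             echo "unknown" ;;
  esac
}

deliverable_path "Generate test cases for checkout"   # prints test-cases/
```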
Step 3: Generate Test Plans
  1. Read `references/test_case_templates.md` for the test plan structure.
  2. Generate a test plan document with these mandatory sections:
    • Executive summary with objectives and key risks.
    • Scope definition (in-scope and out-of-scope).
    • Test strategy and approach.
    • Automation strategy covering which flows should become E2E, which remain manual-only, and which are blocked by environment gaps.
    • Environment requirements (OS, browsers, devices).
    • Entry criteria (what must be true before testing begins).
    • Exit criteria (what must be true before testing ends, including pass-rate thresholds and automation follow-up expectations for critical flows).
    • Risk assessment table (Risk, Probability, Impact, Mitigation).
    • Timeline and deliverables.
  3. Write the plan to `<qa-output-path>/qa/test-plans/<feature-slug>-test-plan.md`.
Step 4: Generate Test Cases
  1. Read `references/test_case_templates.md` to select the appropriate template variant (Functional, UI, Integration, Regression, Security, Performance).
  2. Assign each test case an ID following the naming scheme:
    | Type | Prefix | Example |
    | --- | --- | --- |
    | Functional | TC-FUNC- | TC-FUNC-001 |
    | UI/Visual | TC-UI- | TC-UI-045 |
    | Integration | TC-INT- | TC-INT-012 |
    | Regression | TC-REG- | TC-REG-089 |
    | Security | TC-SEC- | TC-SEC-005 |
    | Performance | TC-PERF- | TC-PERF-023 |
    | Smoke | SMOKE- | SMOKE-001 |
  3. Each test case must include:
    • Priority: P0 (Critical) | P1 (High) | P2 (Medium) | P3 (Low).
    • Objective: What is being validated and why.
    • Preconditions: Setup requirements and test data.
    • Test Steps: Numbered actions with an **Expected:** result for each.
    • Edge Cases: Boundary values, null inputs, special characters.
    • Automation Target: `E2E`, `Integration`, or `Manual-only`.
    • Automation Status: `Existing`, `Missing`, `Blocked`, or `N/A`.
    • Automation Command/Spec: Existing spec path or command when known.
    • Automation Notes: Why the case should be automated, remain manual, or is blocked.
  4. Write each test case to `<qa-output-path>/qa/test-cases/<TC-ID>.md`.
  5. When generating test cases interactively, execute `scripts/generate_test_cases.sh <qa-output-path>/qa/test-cases`.
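A minimal test case file satisfying the field list above might be written like this; `TC-FUNC-001` and its login scenario are purely illustrative, not taken from a real project:

```shell
#!/usr/bin/env sh
# Write a skeleton test case file (Step 4). Scenario content is hypothetical.
qa="${qa_output_path:-.}"
mkdir -p "$qa/qa/test-cases"
cat > "$qa/qa/test-cases/TC-FUNC-001.md" <<'EOF'
# TC-FUNC-001: Login with valid credentials

- Priority: P0
- Objective: Verify a registered user can sign in and reach the dashboard.
- Preconditions: Test account exists; application is reachable.

## Test Steps
1. Open the login page.
   **Expected:** Form renders with email and password fields.
2. Submit valid credentials.
   **Expected:** User lands on the dashboard.

## Edge Cases
- Empty password; special characters in the email field.

- Automation Target: E2E
- Automation Status: Missing
- Automation Notes: Critical public flow; automate once a harness exists.
EOF
```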
Step 5: Build Regression Suites
  1. Read `references/regression_testing.md` for suite structure and execution strategy.
  2. Classify tests into tiers:
    | Suite | Duration | Frequency | Coverage |
    | --- | --- | --- | --- |
    | Smoke | 15-30 min | Daily/per-build | Critical paths only |
    | Targeted | 30-60 min | Per change | Affected areas |
    | Full | 2-4 hours | Weekly/Release | Comprehensive |
    | Sanity | 10-15 min | After hotfix | Quick validation |
  3. Prioritize test cases using the shared priority scale:
    • P0: Business-critical, security, revenue-impacting — must run always.
    • P1: Major features, common flows — run weekly or more.
    • P2: Minor features, edge cases — run at releases.
  4. Mark automation candidates explicitly:
    • Tag changed or regression-critical P0 and P1 public flows as `Automation Target: E2E` when the repository already has an E2E harness.
    • Tag bug-driven public regressions as `Automation Status: Missing` until `qa-execution` confirms the spec was added or updated.
    • Tag exploratory, visual-judgment, or unsupported flows as `Manual-only` or `Blocked` with a reason.
  5. Define execution order: Smoke first (if fails, stop) → P0 → P1 → P2 → Exploratory.
  6. Define pass/fail criteria:
    • PASS: All P0 pass, 90%+ P1 pass, no critical bugs open.
    • FAIL: Any P0 fails, critical bug discovered, security vulnerability, data loss.
    • CONDITIONAL: P1 failures with documented workarounds, fix plan in place.
  7. Write the suite document to `<qa-output-path>/qa/test-plans/<suite-name>-regression.md`.
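The pass/fail criteria in item 6 can be sketched as arithmetic over result counts; the input variables here are illustrative and assumed to come from a results parser that is out of scope:

```shell
#!/usr/bin/env sh
# Evaluate the Step 5 suite verdict from result counts (inputs illustrative).
p0_failed=0            # any P0 failure fails the suite outright
p1_total=20
p1_passed=19
critical_bugs_open=0

p1_rate=$(( p1_passed * 100 / p1_total ))   # whole-percent P1 pass rate
verdict=PASS
if [ "$p0_failed" -gt 0 ] || [ "$critical_bugs_open" -gt 0 ]; then
  verdict=FAIL
elif [ "$p1_rate" -lt 90 ]; then
  verdict=CONDITIONAL  # acceptable only with workarounds and a fix plan
fi
echo "$verdict (P1 pass rate: ${p1_rate}%)"
```

Note the sketch collapses the CONDITIONAL judgment to a threshold check; the documented-workaround and fix-plan conditions still require human review.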
Step 6: Validate Against Figma Designs
Skip this step if Figma MCP is not configured.
  1. Read `references/figma_validation.md` for the validation workflow.
  2. Extract design specifications from Figma using MCP queries:
    • Dimensions (width, height).
    • Colors (background, text, border — exact hex values).
    • Typography (font family, size, weight, line-height, color).
    • Spacing (padding, margin).
    • Border radius, shadows.
    • Interactive states (default, hover, active, focus, disabled).
  3. Generate UI test cases (TC-UI-*) that compare each property against the implementation.
  4. Test responsive behavior at these standard viewports:
    • Mobile: 375px.
    • Tablet: 768px.
    • Desktop: 1280px.
  5. When validation reveals discrepancies, generate a bug report following Step 7.
  6. Use `agent-browser` (from the `qa-execution` companion skill) when browser-based verification is needed. The core loop is: open → snapshot → interact → re-snapshot → verify.
Step 7: Create Bug Reports
  1. Use the unified bug report format from `assets/issue-template.md`, shared with `qa-execution`.
  2. Assign a bug ID with the `BUG-` prefix (e.g., `BUG-001`).
  3. Every bug report must include:
    • Severity: Critical | High | Medium | Low.
    • Priority: P0 | P1 | P2 | P3.
    • Environment: Build, OS, Browser, URL.
    • Reproduction: Exact steps to reproduce.
    • Expected vs Actual: Clear descriptions.
    • Impact: Users affected, frequency, workaround.
    • Related: TC-ID if discovered during test case execution, Figma URL if UI bug.
  4. Write each bug report to `<qa-output-path>/qa/issues/<BUG-ID>.md`.
  5. When creating bug reports interactively, execute `scripts/create_bug_report.sh <qa-output-path>/qa/issues`.
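Sequential `BUG-` ids can be derived from the files already in `qa/issues/`; this numbering helper is a sketch and is not claimed to be what `scripts/create_bug_report.sh` does internally:

```shell
#!/usr/bin/env sh
# Compute the next BUG-NNN id (Step 7) by counting existing reports.
qa="${qa_output_path:-.}"
mkdir -p "$qa/qa/issues"
count=$(ls "$qa/qa/issues" | grep -c '^BUG-[0-9][0-9]*\.md$')
next_id=$(printf 'BUG-%03d' $((count + 1)))
echo "$next_id"          # BUG-001 when the directory is empty
```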
Step 8: Validate Completeness
  1. Verify all generated test cases have an expected result for each step.
  2. Verify all bug reports have reproducible steps.
  3. Verify traceability: test cases reference requirements, bugs reference test cases.
  4. Verify every planned critical flow has an explicit automation annotation and that `Missing` or `Blocked` states include a reason.
  5. Cross-reference against `../qa-execution/references/checklist.md` for coverage gaps when planning for later execution.
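Check 1 above lends itself to a quick grep pass. This sketch assumes the `**Expected:**` marker format from Step 4, and only detects files with no expected result at all; pairing each numbered step with its Expected line would need a real parser:

```shell
#!/usr/bin/env sh
# Flag test cases that contain no Expected result at all (Step 8, check 1).
qa="${qa_output_path:-.}"
missing=0
for f in "$qa"/qa/test-cases/TC-*.md; do
  [ -e "$f" ] || continue                 # glob matched nothing
  if ! grep -q '\*\*Expected:\*\*' "$f"; then
    echo "no expected results in: $f"
    missing=$((missing + 1))
  fi
done
echo "files missing expected results: $missing"
```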

Severity Definitions

| Level | Criteria | Examples |
| --- | --- | --- |
| Critical | System crash, data loss, security breach | Payment fails, login broken |
| High | Major feature broken, no workaround | Search not working, checkout fails |
| Medium | Feature partial, workaround exists | Filter missing option, slow load |
| Low | Cosmetic, rare edge case | Typo, minor alignment |

Priority vs Severity Matrix

| | Low Impact | Medium | High | Critical |
| --- | --- | --- | --- | --- |
| Rare | P3 | P3 | P2 | P1 |
| Sometimes | P3 | P2 | P1 | P0 |
| Often | P2 | P1 | P0 | P0 |
| Always | P2 | P1 | P0 | P0 |
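The matrix can be encoded as a straight lookup; the lowercase frequency and impact tokens are an assumed convention for this sketch:

```shell
#!/usr/bin/env sh
# Derive a priority from frequency and impact per the matrix above.
# Frequency: rare|sometimes|often|always; impact: low|medium|high|critical.
priority() {
  case "$1/$2" in
    rare/low|rare/medium|sometimes/low)                     echo P3 ;;
    rare/high|sometimes/medium|often/low|always/low)        echo P2 ;;
    rare/critical|sometimes/high|often/medium|always/medium) echo P1 ;;
    *)                                                      echo P0 ;;
  esac
}

priority often critical   # prints P0
```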

Companion Skill: qa-execution

The `qa-report` and `qa-execution` skills share a common output directory and artifact format. The intended workflow:
  1. Plan first with `qa-report`: generate test plans, test cases, and regression suites.
  2. Execute with `qa-execution`: run verification gates, exercise flows end-to-end, discover bugs, and add or update E2E coverage when the repository already supports it.
  3. Document with `qa-report`: create structured bug reports for issues found during execution.
When `qa-execution` runs after `qa-report`, it reads test cases from `<qa-output-path>/qa/test-cases/` to inform its execution matrix, automation priorities, and reporting fields, then writes bugs to `<qa-output-path>/qa/issues/` using the same unified template.

Error Handling

  • If the `qa-output-path` directory cannot be created, report the error and fall back to the current working directory.
  • If Figma MCP is not configured, skip Figma validation steps and note the gap in the test plan.
  • If `agent-browser` is not available for UI validation, generate test cases as documentation for manual execution and note the limitation.
  • If the repository does not have a known E2E harness, mark affected cases as `Manual-only` or `Blocked` instead of inventing automation commands.
  • If the user provides a feature description that is too vague to generate test cases, ask for specific requirements, user flows, or acceptance criteria before proceeding.