
Review Playwright Tests

Systematically review Playwright test files for anti-patterns, missed best practices, and coverage gaps.

Input

`$ARGUMENTS` can be:
  • A file path: review that specific test file
  • A directory: review all test files in the directory
  • Empty: review all tests in the project's `testDir`

Steps

步骤

1. Gather Context

  • Read `playwright.config.ts` for project settings
  • List all `*.spec.ts` / `*.spec.js` files in scope
  • If reviewing a single file, also check related page objects and fixtures

2. Check Each File Against Anti-Patterns

Load `anti-patterns.md` from this skill directory. Check for all 20 anti-patterns.

Critical (must fix):
  1. `waitForTimeout()` usage
  2. Non-web-first assertions (`expect(await ...)`)
  3. Hardcoded URLs instead of `baseURL`
  4. CSS/XPath selectors when a role-based selector exists
  5. Missing `await` on Playwright calls
  6. Shared mutable state between tests
  7. Test execution order dependencies

Warning (should fix):
  8. Tests longer than 50 lines (consider splitting)
  9. Magic strings without named constants
  10. Missing error/edge-case tests
  11. `page.evaluate()` for things locators can do
  12. `test.describe()` nested more than 2 levels deep
  13. Generic test names ("should work", "test 1")

Info (consider):
  14. No page objects for pages with 5+ locators
  15. Inline test data instead of a factory/fixture
  16. Missing accessibility assertions
  17. No visual regression tests for UI-heavy pages
  18. Console errors not checked
  19. Network-idle waits instead of specific assertions
  20. Missing `test.describe()` grouping
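
As a rough illustration of what the first few critical checks look for, a line-by-line regex pass like the sketch below catches the obvious cases. The check names and patterns are illustrative only — the authoritative list lives in `anti-patterns.md`, and the real review judges code in context rather than by regex.

```javascript
// Illustrative patterns for critical anti-patterns 1-3; a regex pass is only
// a first cut, not a substitute for reading the test in context.
const CRITICAL_CHECKS = [
  { id: 1, name: 'waitForTimeout() usage', pattern: /\.waitForTimeout\s*\(/ },
  { id: 2, name: 'non-web-first assertion', pattern: /expect\s*\(\s*await\b/ },
  { id: 3, name: 'hardcoded URL instead of baseURL', pattern: /\.goto\s*\(\s*['"]https?:\/\// },
];

// Scan a spec file's source and report which line trips which check.
function findCriticalIssues(source) {
  const issues = [];
  source.split('\n').forEach((line, i) => {
    for (const check of CRITICAL_CHECKS) {
      if (check.pattern.test(line)) {
        issues.push({ line: i + 1, id: check.id, name: check.name });
      }
    }
  });
  return issues;
}
```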

3. Score Each File

Rate 1-10 based on:
  • 9-10: Production-ready, follows all golden rules
  • 7-8: Good, minor improvements possible
  • 5-6: Functional but has anti-patterns
  • 3-4: Significant issues, likely flaky
  • 1-2: Needs rewrite
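
One possible way to turn issue counts into the 1-10 scale is sketched below. The weights are an assumption — the rubric above leaves exact weighting to the reviewer's judgment.

```javascript
// Hypothetical weighting: each critical issue costs 3 points, each warning 1,
// each info item 0.5; the result is clamped to the 1-10 range.
function scoreFile({ critical = 0, warning = 0, info = 0 } = {}) {
  const penalty = critical * 3 + warning * 1 + info * 0.5;
  return Math.min(10, Math.max(1, Math.round(10 - penalty)));
}
```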

4. Generate Review Report

For each file:

<filename> — Score: X/10

Critical

  • Line 15: `waitForTimeout(2000)` → use `expect(locator).toBeVisible()`
  • Line 28: CSS selector `.btn-submit` → `getByRole('button', { name: 'Submit' })`

Warning

  • Line 42: Test name "test login" → "should redirect to dashboard after login"

Suggestions

  • Consider adding error case: what happens with invalid credentials?

5. For Project-Wide Review

If reviewing an entire test suite:
  • Spawn sub-agents per file for parallel review (up to 5 concurrent)
  • Or use `/batch` for very large suites
  • Aggregate results into a summary table
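
The aggregation step might look like the sketch below. The field names are illustrative, not a fixed schema.

```javascript
// Collapse per-file review results into the summary row reported at the end:
// total files, average score (one decimal), and total critical issues.
function summarize(results) {
  const totalFiles = results.length;
  const averageScore =
    totalFiles === 0
      ? 0
      : Number((results.reduce((sum, r) => sum + r.score, 0) / totalFiles).toFixed(1));
  const criticalIssues = results.reduce((sum, r) => sum + r.critical, 0);
  return { totalFiles, averageScore, criticalIssues };
}
```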

6. Offer Fixes

For each critical issue, provide the corrected code. Ask the user: "Apply these fixes? [Yes/No]"
If yes, apply all fixes using the `Edit` tool.

Output

  • File-by-file review with scores
  • Summary: total files, average score, critical issue count
  • Actionable fix list
  • Coverage gaps identified (pages/features with no tests)