w03-testing-and-diagnostics


Testing and diagnosis workflow, including unit tests and browser tests, with automatic diagnosis when tests fail. Suitable for test execution and troubleshooting after code changes.


NPX Install

npx skill4agent add qiao-925/qiao-skills w03-testing-and-diagnostics

SKILL.md Content (translated from Chinese)


Testing and Diagnosis Specification

Suitable for test execution and troubleshooting after code changes, ensuring reliable results and complete remediation.

⚠️ Core Mandatory Requirements

Step 1: Create Test Task Document

Before starting any testing, you must first create `agent-task-log/ongoing/TEST_[Date]_[Task].md`. After completion, archive it to `agent-task-log/archive/[Year-Month]/`.
```markdown
# Test Task: [Task Name]

## Current Status
**Phase**: 🔄 Executing Tests
**Next Step**: Run Unit Tests

## Progress

| Phase | Status |
|------|------|
| Executing Tests | 🔄 In Progress |
| Diagnosis (if needed) | ⬜ Pending |

## Test Records
(To be filled)
```
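
Creating this document could be scripted. The sketch below is a hypothetical helper (the function name is an assumption, not part of the skill's scripts) that writes the Step 1 template into `agent-task-log/ongoing/`:

```python
# Hypothetical helper: create the test task document from the template above.
from datetime import date
from pathlib import Path

TEMPLATE = """# Test Task: {task}

## Current Status
**Phase**: 🔄 Executing Tests
**Next Step**: Run Unit Tests

## Progress

| Phase | Status |
|------|------|
| Executing Tests | 🔄 In Progress |
| Diagnosis (if needed) | ⬜ Pending |

## Test Records
(To be filled)
"""

def create_test_doc(task: str, root: str = "agent-task-log") -> Path:
    """Create TEST_[Date]_[Task].md under agent-task-log/ongoing/."""
    ongoing = Path(root) / "ongoing"
    ongoing.mkdir(parents=True, exist_ok=True)
    doc = ongoing / f"TEST_{date.today():%Y-%m-%d}_{task}.md"
    doc.write_text(TEMPLATE.format(task=task), encoding="utf-8")
    return doc
```

After completion, the document would then be moved to `agent-task-log/archive/[Year-Month]/` as described above.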

Step 2: Execute Tests

  • Backend changes (`backend/**`) → unit tests
  • Frontend changes (`frontend/**`) → browser tests
  • Full-stack changes → run unit tests first, then browser tests
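
As a rough illustration, the selection rule above could be expressed as a small helper (the function name is an assumption, and a simple path-prefix check stands in for the `backend/**` and `frontend/**` globs):

```python
# Illustrative sketch: choose test suites from the list of changed file paths.
def select_tests(changed_paths: list[str]) -> list[str]:
    backend = any(p.startswith("backend/") for p in changed_paths)
    frontend = any(p.startswith("frontend/") for p in changed_paths)
    suites = []
    if backend:
        suites.append("unit")     # backend/** -> unit tests
    if frontend:
        suites.append("browser")  # frontend/** -> browser tests
    return suites                 # full-stack: unit tests first, then browser tests
```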

Step 3: Update Document

Update the document immediately after test completion and record the results

Baseline Constraints

  • Do not submit deliverables before testing is complete
  • If tests fail, fix the issues before proceeding
  • If tests cannot be executed, explain why and create a supplementary test plan

AI Agent Behavior Requirements

At the Start of Test Task

  1. First create the TEST_*.md document
  2. Select tests based on the type of change
  3. Execute tests and update the document
  4. Trigger diagnosis workflow if tests fail (maximum 3 rounds)

Diagnosis Workflow

Each round: Observe → Infer → Operate → Result

Escalation conditions:
  • No result after 3 rounds of troubleshooting
  • The issue is high-risk or involves architecture or security decisions
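
The bounded loop could be sketched like this; `run_tests` and `apply_fix` are hypothetical callables standing in for the skill's real scripts (scripts/run_test_workflow.py and scripts/auto_diagnose.py):

```python
# Sketch of the bounded diagnosis loop: each round observes (runs the tests),
# then infers and operates (applies a candidate fix). After the round limit
# the issue escalates to a human instead of looping forever.
def diagnose(run_tests, apply_fix, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        if run_tests():   # observe: tests pass, nothing left to diagnose
            return "fixed"
        apply_fix()       # infer a cause and operate on it
    return "fixed" if run_tests() else "escalate"  # result of the final round
```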

Resume Execution

New conversations check `agent-task-log/TEST_*.md` and continue from the "Current Status" section.
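
A minimal sketch of that resume check, assuming the Step 1 document layout (the helper name is illustrative; the recursive glob also covers the ongoing/ subdirectory):

```python
# Illustrative resume check: locate the latest TEST_*.md document and return
# the text under its "## Current Status" heading, or None if nothing is found.
from pathlib import Path
from typing import Optional

def find_resume_point(root: str = "agent-task-log") -> Optional[str]:
    docs = sorted(Path(root).glob("**/TEST_*.md"))
    if not docs:
        return None
    text = docs[-1].read_text(encoding="utf-8")
    _, _, rest = text.partition("## Current Status")
    status, _, _ = rest.partition("\n## ")  # cut at the next heading
    return status.strip() or None
```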

Human-AI Collaboration

AI cannot complete all tests 100% autonomously; human assistance is required in some scenarios.

Situations where human assistance may be requested:
  • Browser pages need to be opened or navigated manually
  • Visual effects require manual verification
  • Complex user interaction flows are involved
  • System resources are inaccessible to AI tools

Collaboration method:
  1. Clearly inform the user of the specific operations requiring assistance
  2. After the user completes them, the AI continues with the subsequent test steps
  3. Record collaboration points in the test document

Principle: semi-automated testing is still effective; the AI handles most of the work, while humans supplement the parts the AI finds difficult to handle.

Collaboration with W00 (Automatic + Manual)

  • Before entering testing, automatically call `w00-workflow-checkpoint checkpoint` to record the test starting point and next step.
  • When tests fail and diagnosis begins, automatically update the issue to `status:blocked` and record the blocker.
  • Users can manually run `/w00-workflow-checkpoint` to add test nodes and checkpoint information.

Prohibited Items

  • ❌ Skip or delay testing
  • ❌ Report completion without recording results
  • ❌ Submit changes after test failure

Tool Scripts

  • scripts/run_test_workflow.py - unit test workflow
  • scripts/run_browser_tests.py - browser test workflow
  • scripts/auto_diagnose.py - automatic diagnosis

Reference Materials

  • references/testing-workflow.md - detailed testing workflow description
  • references/browser-testing.md - detailed browser testing description
  • references/diagnosis-workflow.md - detailed diagnosis workflow description