w03-testing-and-diagnostics
Testing and diagnosis workflow, including unit tests and browser tests, with automatic diagnosis when tests fail. Suitable for test execution and troubleshooting after code changes.
Source: `qiao-925/qiao-skills`

Install: `npx skill4agent add qiao-925/qiao-skills w03-testing-and-diagnostics`
# Testing and Diagnosis Specification
Suitable for test execution and troubleshooting after code changes, ensuring reliable results and complete remediation.
## ⚠️ Core Mandatory Requirements
### Step 1: Create Test Task Document

Before starting testing, first create `agent-task-log/ongoing/TEST_[Date]_[Task].md`. After completion, archive it to `agent-task-log/archive/[Year-Month]/`.

```markdown
# Test Task: [Task Name]

## Current Status
**Phase**: 🔄 Executing Tests
**Next Step**: Run Unit Tests

## Progress
| Phase | Status |
|------|------|
| Executing Tests | 🔄 In Progress |
| Diagnosis (if needed) | ⬜ Pending |

## Test Records
(To be filled)
```

### Step 2: Execute Tests
- Backend changes (`backend/**`) → unit tests
- Frontend changes (`frontend/**`) → browser tests
- Full-stack changes → run unit tests first, then browser tests
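The change-type routing above can be sketched in a few lines; `select_tests` is a hypothetical helper for illustration, not part of the skill's scripts.

```python
def select_tests(changed_paths):
    """Pick test suites from changed file paths, unit tests first."""
    suites = []
    if any(p.startswith("backend/") for p in changed_paths):
        suites.append("unit")
    if any(p.startswith("frontend/") for p in changed_paths):
        suites.append("browser")
    return suites

# A full-stack change selects both suites, unit tests first:
print(select_tests(["backend/api.py", "frontend/app.tsx"]))
# ['unit', 'browser']
```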
### Step 3: Update Document

Update the document immediately after tests complete and record the results.
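As one illustration of the "update immediately" rule, a minimal sketch that rewrites the `**Phase**` line of a `TEST_*.md` document; the field name follows the template in Step 1, and `set_phase` is a hypothetical helper.

```python
import re

def set_phase(doc_text, new_phase):
    # Replace whatever follows "**Phase**: " on the first status line.
    return re.sub(r"(\*\*Phase\*\*: ).*",
                  lambda m: m.group(1) + new_phase,
                  doc_text, count=1)

doc = (
    "## Current Status\n"
    "**Phase**: 🔄 Executing Tests\n"
    "**Next Step**: Run Unit Tests\n"
)
print(set_phase(doc, "✅ Tests Passed"))
```

Using a replacement function instead of a replacement string avoids `re.sub` interpreting backslashes in the new phase text.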
## Baseline Constraints
- Do not submit delivery results before tests are complete
- If tests fail, fix the issues before proceeding
- If tests cannot be executed, explain why and make a supplementary test plan
## AI Agent Behavior Requirements

### At the Start of a Test Task
- First create the TEST_*.md document
- Select tests based on the type of change
- Execute tests and update the document
- Trigger diagnosis workflow if tests fail (maximum 3 rounds)
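The start-of-task sequence above can be sketched as a small driver. Every helper name here (`create_doc`, `select_suites`, `run_suite`, `diagnose`) is hypothetical, not part of the skill's scripts.

```python
def run_test_task(create_doc, select_suites, run_suite, diagnose):
    create_doc()                   # 1. create the TEST_*.md document first
    for suite in select_suites():  # 2. select tests by change type
        if not run_suite(suite):   # 3. execute tests, updating the document
            return diagnose()      # 4. on failure, enter the diagnosis workflow
    return "passed"

result = run_test_task(
    lambda: print("created TEST doc"),  # stand-in for document creation
    lambda: ["unit", "browser"],        # stand-in for change-type selection
    lambda suite: True,                 # every suite passes in this toy run
    lambda: "escalated to diagnosis",
)
print(result)  # "passed" when every suite succeeds
```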
### Diagnosis Workflow
Each round: Observe → Infer → Operate → Result
Escalation Conditions:
- No results after 3 rounds of troubleshooting
- Involves high-risk operations or architecture/security decisions
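A minimal sketch of the bounded diagnosis loop, assuming hypothetical `observe`/`infer`/`operate` callables supplied by the agent:

```python
def diagnose(observe, infer, operate, max_rounds=3):
    """Run up to max_rounds of Observe → Infer → Operate → Result."""
    for round_no in range(1, max_rounds + 1):
        symptom = observe()
        hypothesis = infer(symptom)
        if operate(hypothesis) == "fixed":
            return f"fixed in round {round_no}"
    # Escalation condition: no result after max_rounds of troubleshooting
    return f"escalate: no result after {max_rounds} rounds"

# Toy run where the second operation succeeds:
attempts = iter(["failed", "fixed"])
print(diagnose(lambda: "error log", lambda s: "likely cause",
               lambda h: next(attempts)))
# fixed in round 2
```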
### Resume Execution

New conversations check `agent-task-log/TEST_*.md` and continue from its "Current Status" section.

## Human-AI Collaboration
AI cannot complete all tests 100% autonomously; human assistance is required in some scenarios:
Situations where human assistance can be requested:
- Browser pages need to be manually opened or navigated
- Manual verification of visual effects is required
- Involves complex user interaction flows
- System resources inaccessible to AI tools
Collaboration Method:
- Clearly inform the user of the specific operations requiring assistance
- After the user completes the task, AI continues with subsequent test steps
- Record collaboration points in the test document
Principle: Semi-automated testing is also effective; AI handles most of the work, while humans supplement the parts that AI finds difficult to handle.
## Collaboration with W00 (Automatic + Manual)
- Before entering testing, automatically call `w00-workflow-checkpoint checkpoint` to record the test starting point and next step.
- When tests fail and diagnosis begins, automatically update the issue to `status:blocked` and record the blocker.
- Users can manually execute `/w00-workflow-checkpoint` to supplement test nodes and checkpoint information.
## Prohibited Items
- ❌ Skip or delay testing
- ❌ Report completion without recording results
- ❌ Submit changes after test failure
## Tool Scripts
- `scripts/run_test_workflow.py` - unit test workflow
- `scripts/run_browser_tests.py` - browser test workflow
- `scripts/auto_diagnose.py` - automatic diagnosis
## Reference Materials
- `references/testing-workflow.md` - detailed testing workflow description
- `references/browser-testing.md` - detailed browser testing description
- `references/diagnosis-workflow.md` - detailed diagnosis workflow description