# Code Testing Generation Skill
An AI-powered skill that generates comprehensive, workable unit tests for any programming language using a coordinated multi-agent pipeline.
## When to Use This Skill
Use this skill when you need to:
- Generate unit tests for an entire project or specific files
- Improve test coverage for existing codebases
- Create test files that follow project conventions
- Write tests that actually compile and pass
- Add tests for new features or untested code
## When Not to Use

- Running or executing existing tests (use the dedicated skill)
- Migrating between test frameworks (use migration skills)
- Writing tests specifically for MSTest patterns (use the dedicated skill)
- Debugging failing test logic
## How It Works
This skill coordinates multiple specialized agents in a Research → Plan → Implement pipeline:
### Pipeline Overview
```
┌─────────────────────────────────────────────────────────────┐
│                       TEST GENERATOR                        │
│       Coordinates the full pipeline and manages state       │
└─────────────────────────────┬───────────────────────────────┘
                              │
              ┌───────────────┼───────────────┐
              ▼               ▼               ▼
       ┌───────────┐    ┌───────────┐   ┌───────────────┐
       │ RESEARCHER│    │  PLANNER  │   │  IMPLEMENTER  │
       │           │    │           │   │               │
       │ Analyzes  │ →  │ Creates   │ → │ Writes tests  │
       │ codebase  │    │ phased    │   │ per phase     │
       │           │    │ plan      │   │               │
       └───────────┘    └───────────┘   └───────┬───────┘
                                                │
                        ┌───────────┬───────────┼───────────┐
                        ▼           ▼           ▼           ▼
                   ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
                   │ BUILDER │ │ TESTER  │ │  FIXER  │ │ LINTER  │
                   │         │ │         │ │         │ │         │
                   │ Compiles│ │ Runs    │ │ Fixes   │ │ Formats │
                   │ code    │ │ tests   │ │ errors  │ │ code    │
                   └─────────┘ └─────────┘ └─────────┘ └─────────┘
```
## Step-by-Step Instructions
### Step 1: Determine the user request

Make sure you understand what the user is asking for and the scope of the request.

When the user does not express strong requirements for test style, coverage goals, or conventions, source the guidelines from unit-test-generation.prompt.md. This prompt provides best practices for discovering conventions, parameterization strategies, coverage goals (aim for 80%), and language-specific patterns.
### Step 2: Invoke the Test Generator

Start by calling the Test Generator agent with your test generation request:
Generate unit tests for [path or description of what to test], following the [unit-test-generation.prompt.md](unit-test-generation.prompt.md) guidelines
The Test Generator will manage the entire pipeline automatically.
### Step 3: Research Phase (Automatic)

The Researcher agent analyzes your codebase to understand:
- Language & Framework: Detects C#, TypeScript, Python, Go, Rust, Java, etc.
- Testing Framework: Identifies MSTest, xUnit, Jest, pytest, go test, etc.
- Project Structure: Maps source files, existing tests, and dependencies
- Build Commands: Discovers how to build and test the project
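The language-and-framework detection above can be sketched as a marker-file lookup. The `MARKERS` table and `detect_stack` helper below are illustrative assumptions, not the skill's actual implementation:

```python
from pathlib import Path

# Marker files that commonly identify a project's ecosystem.
# (Illustrative subset; a real researcher agent would check many more.)
MARKERS = {
    "package.json": ("TypeScript/JavaScript", "jest"),
    "pyproject.toml": ("Python", "pytest"),
    "go.mod": ("Go", "go test"),
    "Cargo.toml": ("Rust", "cargo test"),
}

def detect_stack(project_root: str) -> tuple[str, str]:
    """Return (language, test framework) based on marker files found."""
    root = Path(project_root)
    for marker, (language, framework) in MARKERS.items():
        if (root / marker).exists():
            return language, framework
    return "unknown", "unknown"
```

In practice the researcher would also inspect existing test files and lockfiles, since a `package.json` alone does not distinguish Jest from Mocha or Vitest.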
### Step 4: Planning Phase (Automatic)

The Planner agent creates a structured implementation plan:
- Groups files into logical phases (2-5 phases typical)
- Prioritizes by complexity and dependencies
- Specifies test cases for each file
- Defines success criteria per phase
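A plan produced by this phase might look like the following sketch. The field names and file paths are assumptions for illustration, not the skill's actual schema:

```python
# Hypothetical shape of a phased test plan.
plan = {
    "phases": [
        {
            "name": "Phase 1: core utilities",
            "files": ["src/math_utils.py"],
            "test_cases": ["add returns sum", "divide raises on zero"],
            "success_criteria": "all new tests compile and pass",
        },
        {
            "name": "Phase 2: services",
            "files": ["src/user_service.py"],
            "test_cases": ["lookup returns user", "missing id raises"],
            "success_criteria": "all new tests compile and pass",
        },
    ]
}

# Phases are executed in order; each must succeed before the next starts.
for phase in plan["phases"]:
    print(phase["name"], "->", len(phase["files"]), "file(s)")
```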
### Step 5: Implementation Phase (Automatic)

The Implementer agent executes each phase sequentially:
1. Read source files to understand the API
2. Write test files following project patterns
3. Build using the Builder sub-agent to verify compilation
4. Test using the Tester sub-agent to verify the tests pass
5. Fix using the Fixer sub-agent if errors occur
6. Lint using the Linter sub-agent for code formatting
Each phase completes before the next begins, ensuring incremental progress.
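The per-phase loop can be sketched as follows. The callables stand in for sub-agent invocations and `max_attempts` is an assumed retry limit, not something the skill specifies:

```python
# Minimal sketch of the per-phase write -> build -> test -> fix loop.
def run_phase(write_tests, build, test, fix, lint, max_attempts=3):
    """Drive one phase: write tests, then iterate until build and tests pass."""
    write_tests()                    # Implementer: create the test files
    for _attempt in range(max_attempts):
        if not build():              # Builder sub-agent: compile the code
            fix("build error")       # Fixer sub-agent: repair, then retry
            continue
        if not test():               # Tester sub-agent: run the tests
            fix("test failure")
            continue
        lint()                       # Linter sub-agent: format the result
        return True                  # Phase succeeded
    return False                     # Phase failed after max_attempts
```

The key property is that a phase only reports success once the build, the tests, and the linter have all run cleanly, which is what makes the overall progress incremental.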
## Coverage Types
- Happy path: Valid inputs produce expected outputs
- Edge cases: Empty values, boundaries, special characters
- Error cases: Invalid inputs, null handling, exceptions
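The three coverage types can be illustrated with a small hypothetical function (the names here are examples, not part of the skill):

```python
# Hypothetical production function under test.
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_happy_path():
    assert divide(10, 2) == 5       # valid inputs produce expected output

def test_edge_case():
    assert divide(0, 5) == 0        # boundary value: zero numerator

def test_error_case():
    try:
        divide(1, 0)                # invalid input must raise
        assert False, "expected ValueError"
    except ValueError:
        pass
```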
## State Management

All pipeline state is stored in a dedicated state folder:

- Codebase analysis results
- Phased implementation plan
- Progress tracking (optional)
## Examples

### Example 1: Full Project Testing
Generate unit tests for my Calculator project at C:\src\Calculator
### Example 2: Specific File Testing
Generate unit tests for src/services/UserService.ts
### Example 3: Targeted Coverage
Add tests for the authentication module with focus on edge cases
## Agent Reference

| Agent | Purpose |
|---|---|
| Test Generator | Coordinates pipeline |
| Researcher | Analyzes codebase |
| Planner | Creates test plan |
| Implementer | Writes test files |
| Builder | Compiles code |
| Tester | Runs tests |
| Fixer | Fixes errors |
| Linter | Formats code |
## Requirements
- Project must have a build/test system configured
- Testing framework should be installed (or installable)
- VS Code with GitHub Copilot extension
## Troubleshooting

### Tests don't compile
The Fixer agent will attempt to resolve compilation errors. Check the implementation plan for the expected test structure, and check the skill's reference folder for language-specific error code documentation (e.g., compiler error codes for .NET).
### Tests fail
Most failures in generated tests are caused by wrong expected values in assertions, not by production code bugs:

- Read the actual test output
- Read the production code to understand the correct behavior
- Fix the assertion, not the production code
- Never mark tests as skipped or ignored just to make them pass
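A typical case, using a hypothetical `format_name` function to illustrate (the names are examples only):

```python
# Production code (correct by specification): formats names as "Last, First".
def format_name(first: str, last: str) -> str:
    return f"{last}, {first}"

# A generated test with a wrong expectation would fail:
#     assert format_name("Ada", "Lovelace") == "Ada Lovelace"   # wrong guess
# The fix is to correct the assertion, not to change format_name():
def test_format_name():
    assert format_name("Ada", "Lovelace") == "Lovelace, Ada"
```

Reading the production code first makes it clear that the original assertion, not the implementation, encoded the wrong behavior.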
### Wrong testing framework detected
Specify your preferred framework in the initial request: "Generate Jest tests for..."
### Environment-dependent tests fail
Tests that depend on external services, network endpoints, specific ports, or precise timing will fail in CI environments. Focus on unit tests with mocked dependencies instead.
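A sketch of the mocked-dependency approach, using Python's standard `unittest.mock`; `UserService` and `fetch_user` are hypothetical names for illustration:

```python
from unittest.mock import Mock

class UserService:
    def __init__(self, client):
        self.client = client                       # injected dependency

    def display_name(self, user_id: int) -> str:
        user = self.client.fetch_user(user_id)     # would hit the network
        return user["name"].title()

def test_display_name_uses_mocked_client():
    # Replace the network client with a mock so the test is hermetic.
    client = Mock()
    client.fetch_user.return_value = {"name": "ada lovelace"}
    service = UserService(client)
    assert service.display_name(1) == "Ada Lovelace"
    client.fetch_user.assert_called_once_with(1)
```

Because the dependency is injected rather than constructed internally, the test never touches the network and passes identically on any CI machine.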
### Build fails on full solution
During phase implementation, build only the specific test project for speed. After all phases, run a full non-incremental workspace build to catch cross-project errors.