Validate code quality, test coverage, performance, and security. Use when verifying implemented features meet all standards and requirements before marking complete.
## Installation

```bash
npx skill4agent add matteocervelli/llms validation
```

## Quality Checks

```bash
# Format check (Black)
black --check src/ tests/
# Type checking (mypy)
mypy src/
# Linting (flake8, if configured)
flake8 src/ tests/
# All checks together
make lint  # if a Makefile is configured
```

See quality-checklist.md for the full checklist.

```bash
# Use the validation script
python scripts/run_checks.py --quality
```
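When running the checks by hand, chaining them with `&&` stops at the first failure, which is typically what a `make lint` target does as well:

```bash
# Fail fast: each check runs only if the previous one passed
black --check src/ tests/ && mypy src/ && flake8 src/ tests/
```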
## Test Coverage

```bash
# Run all tests with coverage
pytest --cov=src --cov-report=html --cov-report=term-missing
# Check coverage threshold
pytest --cov=src --cov-fail-under=80
# View HTML coverage report
open htmlcov/index.html  # macOS; use xdg-open on Linux
# Show untested lines
pytest --cov=src --cov-report=term-missing
# Generate detailed HTML report
pytest --cov=src --cov-report=html
```
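The 80% threshold can also be persisted in coverage.py's own configuration so every run enforces it without the command-line flag. A minimal sketch, assuming the project keeps a `.coveragerc` at the repo root:

```ini
# .coveragerc — fail_under mirrors the --cov-fail-under=80 flag above
[report]
fail_under = 80
show_missing = True
```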
## Test Reliability

```bash
# Run tests 10 times to check for flaky tests
for i in {1..10}; do pytest || break; done
# Run in random order
pytest --random-order  # requires the pytest-random-order plugin
```

## Test Organization

```bash
# Verify no slow tests in the unit suite (marker setup sketched below)
pytest tests/unit/ -m "not slow"
# Run integration tests separately
pytest tests/integration/
```
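The `-m "not slow"` filter assumes a registered `slow` marker. A minimal sketch of how that marker might be declared and applied; the test name is hypothetical:

```python
# conftest.py — register the marker so pytest doesn't warn about it
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: long-running test, excluded from the unit suite")
```

```python
# tests/unit/test_batch.py — hypothetical test carrying the marker
import pytest

@pytest.mark.slow
def test_large_batch_processing():
    ...
```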
## Performance

See performance-benchmarks.md for target benchmarks.

```bash
# Run performance tests
pytest tests/performance/ -v
# Profile code
python -m cProfile -o profile.stats script.py
python -m pstats profile.stats
# Memory profiling
python -m memory_profiler script.py  # see the @profile sketch below
```
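`memory_profiler` reports line-by-line usage only for functions decorated with `@profile`; when the script runs under `python -m memory_profiler`, that decorator is injected automatically. A minimal sketch (the function is hypothetical):

```python
# script.py — run as: python -m memory_profiler script.py
@profile  # injected by memory_profiler's runner; no import needed
def load_data():
    data = [0] * 10_000_000  # allocate enough to make the report visible
    return data

if __name__ == "__main__":
    load_data()
```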
```python
# Example performance test
import time

def test_performance_requirement():
    """Verify operation meets performance requirement."""
    start = time.time()
    result = expensive_operation()  # the code under test
    duration = time.time() - start
    assert duration < 1.0, f"Took {duration}s, required < 1.0s"
```
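`time.time()` can jump if the system clock adjusts mid-test; `time.perf_counter()` is Python's high-resolution clock intended for interval timing, so a variant of the same assertion might use it:

```python
import time

def test_performance_requirement_perf_counter():
    start = time.perf_counter()
    result = expensive_operation()  # same code under test as above
    assert time.perf_counter() - start < 1.0
```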
## Security

See security-checklist.md for the full checklist.

```bash
# Check for vulnerable dependencies
pip-audit
# Or use safety
safety check --json
# Check for outdated dependencies
pip list --outdated
```
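If dependencies are pinned in a requirements file, `pip-audit` can scan the pinned set instead of the installed environment (adjust the path to the project's file):

```bash
pip-audit -r requirements.txt
```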
## CLI Testing

```bash
# Test CLI (if applicable)
python -m src.tools.feature.main --help
python -m src.tools.feature.main create --name test
# Test with sample data
python -m src.tools.feature.main --input samples/test.json
# Test error cases
python -m src.tools.feature.main --invalid-option  # should exit non-zero with a clear error
```
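These manual invocations can be automated with a small pytest module that shells out to the CLI. A sketch, assuming the module path above; exact exit codes depend on the CLI's argument parser (argparse exits with code 2 on unknown options):

```python
# tests/test_cli.py — hedged sketch of CLI smoke tests
import subprocess
import sys

def run_cli(*args):
    return subprocess.run(
        [sys.executable, "-m", "src.tools.feature.main", *args],
        capture_output=True, text=True,
    )

def test_cli_help_exits_cleanly():
    assert run_cli("--help").returncode == 0

def test_cli_rejects_invalid_option():
    result = run_cli("--invalid-option")
    assert result.returncode != 0
    assert result.stderr  # an error message is printed
```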
## Integration Tests

```bash
# Run integration tests
pytest tests/integration/ -v
# Test with real dependencies (in test environment)
pytest tests/integration/ --no-mock  # project-defined flag, not built into pytest (see sketch below)
```
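`--no-mock` is not a built-in pytest option, so the project presumably defines it in a `conftest.py`. A hedged sketch of that wiring; the fixture name is an assumption:

```python
# conftest.py — hedged sketch of a project-specific --no-mock flag
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--no-mock", action="store_true", default=False,
        help="Run integration tests against real dependencies",
    )

@pytest.fixture
def use_real_deps(request):
    # Tests can depend on this fixture to decide whether to mock
    return request.config.getoption("--no-mock")
```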
## End-to-End Tests

```bash
# Test complete workflows
pytest tests/e2e/ -v
# Manual E2E testing
./scripts/manual_test.sh
```
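The contents of `manual_test.sh` aren't shown here; a hedged sketch that replays the CLI smoke commands above and stops on the first failure:

```bash
#!/usr/bin/env bash
# scripts/manual_test.sh — hedged sketch; the real script's steps may differ
set -euo pipefail

python -m src.tools.feature.main --help
python -m src.tools.feature.main create --name test
python -m src.tools.feature.main --input samples/test.json
echo "Manual E2E smoke test passed"
```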
## Automated Validation

```bash
# Use automated validation script
python scripts/run_checks.py --all
# Or run individual checks
python scripts/run_checks.py --quality
python scripts/run_checks.py --tests
python scripts/run_checks.py --coverage
python scripts/run_checks.py --security
```

## Validation Report Template

```markdown
# Validation Report: [Feature Name]
## Quality ✅
- Black: PASS
- mypy: PASS
- flake8: PASS (0 errors, 0 warnings)
## Testing ✅
- Unit tests: 45 passed
- Integration tests: 12 passed
- Coverage: 87% (target: 80%)
## Performance ✅
- Response time (p95): 145ms (target: < 200ms)
- Throughput: 1200 req/s (target: 1000 req/s)
- Memory usage: 75MB (target: < 100MB)
## Security ✅
- No vulnerable dependencies
- Input validation: Complete
- Secrets management: Secure
## Requirements ✅
- All acceptance criteria met
- No regressions detected
## Documentation ✅
- Code documentation: Complete
- Technical docs: Complete
- CHANGELOG: Updated
## Status: READY FOR PR ✅
```

## scripts/run_checks.py

```bash
# Run all checks
python scripts/run_checks.py --all
# Run specific checks
python scripts/run_checks.py --quality
python scripts/run_checks.py --tests
python scripts/run_checks.py --coverage
python scripts/run_checks.py --security
python scripts/run_checks.py --performance
# Generate report
python scripts/run_checks.py --all --report validation-report.md
```
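The script itself isn't included here; a minimal sketch of a runner matching the flags above, shelling out to the same tools (the report format and internals are assumptions, not the actual script):

```python
#!/usr/bin/env python3
"""Hedged sketch of scripts/run_checks.py: runs the checks shown above."""
import argparse
import subprocess
import sys

# Each named check maps to the commands documented in this skill
CHECKS = {
    "quality": [
        ["black", "--check", "src/", "tests/"],
        ["mypy", "src/"],
        ["flake8", "src/", "tests/"],
    ],
    "tests": [["pytest"]],
    "coverage": [["pytest", "--cov=src", "--cov-fail-under=80"]],
    "security": [["pip-audit"]],
    "performance": [["pytest", "tests/performance/", "-v"]],
}

def run_check(name):
    ok = True
    for cmd in CHECKS[name]:
        print(f"$ {' '.join(cmd)}")
        ok &= subprocess.run(cmd).returncode == 0
    return ok

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--all", action="store_true")
    for name in CHECKS:
        parser.add_argument(f"--{name}", action="store_true")
    parser.add_argument("--report", help="write a pass/fail summary to this file")
    args = parser.parse_args()

    selected = [n for n in CHECKS if args.all or getattr(args, n)]
    results = {name: run_check(name) for name in selected}

    if args.report:
        with open(args.report, "w") as f:
            for name, passed in results.items():
                f.write(f"- {name}: {'PASS' if passed else 'FAIL'}\n")

    sys.exit(0 if all(results.values()) else 1)

if __name__ == "__main__":
    main()
```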