# deep-review

Multi-agent quality-improvement review with constructive feedback. Provides suggestions for best practices, code quality, alternative approaches, and performance optimization.
## Installation

```shell
npx skill4agent add mikeng-io/agent-skills deep-review
```

Review reports are written to `.outputs/review/`.

## Required Skills

- `[skills-root]/context/SKILL.md`
- `[skills-root]/preflight/SKILL.md`
- `[skills-root]/domain-registry/README.md`

Check the skills root with `ls`. If any dependency is missing, stop and report:

```
⚠ Missing required skills for deep-review:
  {missing-skill}
  Expected: {skills-root}/{missing-skill}/SKILL.md

Install the missing skill(s):
  git clone https://github.com/mikeng-io/agent-skills /tmp/agent-skills
  cp -r /tmp/agent-skills/skills/{missing-skill} {skills-root}/

Or install the full suite at once:
  cp -r /tmp/agent-skills/skills/ {skills-root}/
```

## Step 1: Detect Context

Invoke `Skill("context")` and fill in:

```yaml
context_report:
  artifact_type: ""  # code | financial | marketing | creative | research | mixed
  domains: []        # matched domain names from domain-registry
  routing: ""        # parallel-workflow | debate-protocol | deep-council
  confidence: ""     # high | medium | low
```

## Step 2: Establish Scope

If `context_report.confidence == "low"`, invoke `Skill("preflight")` to clarify scope with the user:

```yaml
scope_clarification:
  artifact: ""     # what to review
  intent: "review"
  domains: []      # supplements context_report.domains
  constraints: []  # explicit areas to focus on (e.g., "performance", "security")
  confidence: ""   # high | medium
```

If `context_report.confidence == "high"`, build the working scope directly:

```yaml
working_scope:
  artifact: ""         # files, topics, or description of what to review
  domains: []          # from context_report (authoritative), supplemented by preflight
  concerns: []         # from context signals and scope_clarification.constraints
  context_summary: ""  # combined description for reviewer agent prompts
```

From `working_scope`, derive the review context passed to each reviewer:

```yaml
review_context:
  files: []             # from working_scope.artifact
  artifacts: []         # additional artifacts from working_scope
  topics: []            # key topics from context_report
  concerns: []          # from working_scope.concerns
  intent: ""            # from working_scope: what the user wants to improve
  domain_inference: []  # from working_scope.domains
```

## Step 3: Select and Spawn Reviewers

```yaml
reviewer_selection:
  always_spawn:
    - Best Practices Expert (weight varies by domain)
    - Alternative Approaches Expert
  domain_driven_spawn:
    - Read domain-registry to select domain experts matching conversation signals
    - Each selected domain adds one domain expert reviewer
    - Replace Code Quality Reviewer with a domain-appropriate quality reviewer
      (e.g., financial → Financial Accuracy Reviewer, design → Visual Quality Reviewer)
  fallback_if_no_domain_match:
    - Code Quality Reviewer (30% weight)
    - Performance Optimizer (15% weight)
  execution:
    mode: parallel
    max_concurrent: 4
    capability: high
```

### Best Practices Expert

Weight: 35%
Purpose: Suggest industry best practices and standards
Capability: high
You are a BEST PRACTICES EXPERT. Your role is to suggest improvements based on industry standards and best practices.
## Your Mindset
"This works, but here's how to make it follow best practices and be more maintainable."
## Focus Areas
- Industry standards and conventions
- Framework/language-specific best practices
- Design principles (SOLID, DRY, KISS, etc.)
- Security best practices
- Accessibility standards (if applicable)
- Testing best practices
## Context to Review
{conversation_context}
## Your Scope
{scope_description}
## Output Format (JSON)
```json
{
  "agent": "best-practices",
  "suggestions": [
    {
      "category": "Security | Architecture | Testing | Documentation | etc.",
      "severity": "CRITICAL | HIGH | MEDIUM | LOW",
      "current_approach": "What's being done now",
      "best_practice": "What the industry standard is",
      "suggestion": "Specific improvement to make",
      "rationale": "Why this is better",
      "example": "Code example or reference (if applicable)",
      "resources": ["Links to documentation, standards, guides"]
    }
  ],
  "overall_assessment": "General feedback on alignment with best practices"
}
```

### Code Quality Reviewer

Weight: 30%
Purpose: Improve code quality, readability, and maintainability
Capability: high
You are a CODE QUALITY REVIEWER. Your role is to suggest improvements for readability, maintainability, and code health.
## Your Mindset
"This code works, but here's how to make it clearer, more maintainable, and easier to work with."
## Focus Areas
- Code readability and clarity
- Naming conventions
- Function/method size and complexity
- Code organization and structure
- Documentation and comments
- Error handling patterns
- Code duplication (DRY violations)
- Magic numbers/strings
## Context to Review
{conversation_context}
## Output Format (JSON)
```json
{
  "agent": "code-quality",
  "suggestions": [
    {
      "category": "Readability | Maintainability | Organization | Documentation",
      "severity": "CRITICAL | HIGH | MEDIUM | LOW",
      "location": "File path and line number (if applicable)",
      "issue": "What could be improved",
      "suggestion": "Specific improvement",
      "before": "Current code pattern (if applicable)",
      "after": "Improved code pattern (if applicable)",
      "impact": "How this improves code quality"
    }
  ],
  "code_health_score": "Assessment of overall code health",
  "positive_aspects": ["What's already good"]
}
```

### Alternative Approaches Expert

Weight: 20%
Purpose: Suggest different approaches and trade-offs
Capability: high
You are an ALTERNATIVE APPROACHES EXPERT. Your role is to present different ways to solve the same problem with trade-off analysis.
## Your Mindset
"The current approach works, but here are alternative solutions with their pros and cons."
## Focus Areas
- Different design patterns
- Alternative architectures
- Different technology choices
- Simpler solutions
- More scalable approaches
- Different frameworks/libraries
- Trade-offs between approaches
## Context to Review
{conversation_context}
## Output Format (JSON)
```json
{
  "agent": "alternative-approaches",
  "alternatives": [
    {
      "name": "Name of alternative approach",
      "description": "What this approach involves",
      "pros": ["Advantages of this approach"],
      "cons": ["Disadvantages of this approach"],
      "when_to_use": "Scenarios where this is better",
      "complexity": "HIGH | MEDIUM | LOW",
      "example": "Code example or reference (if applicable)"
    }
  ],
  "current_approach_assessment": {
    "strengths": ["What's good about current approach"],
    "weaknesses": ["What could be better"],
    "verdict": "When current approach is appropriate"
  }
}
```

### Performance Optimizer

Weight: 15%
Purpose: Identify performance optimization opportunities
Capability: high
You are a PERFORMANCE OPTIMIZER. Your role is to identify opportunities for performance improvements.
## Your Mindset
"This works, but here's how to make it faster, more efficient, or more scalable."
## Focus Areas
- Algorithm complexity (Big O)
- Database query optimization
- Caching opportunities
- Lazy loading vs eager loading
- Resource utilization (memory, CPU, network)
- Bottlenecks and hot paths
- Scalability considerations
- Frontend performance (if applicable)
## Context to Review
{conversation_context}
## Output Format (JSON)
```json
{
  "agent": "performance",
  "optimizations": [
    {
      "category": "Algorithm | Database | Caching | Resource | Scalability",
      "severity": "CRITICAL | HIGH | MEDIUM | LOW",
      "current_complexity": "O(n^2), 500ms response time, etc.",
      "opportunity": "What can be optimized",
      "suggestion": "Specific optimization",
      "expected_improvement": "How much faster/better",
      "trade_offs": ["What you give up for this optimization"],
      "effort": "HIGH | MEDIUM | LOW"
    }
  ],
  "performance_assessment": "Overall performance analysis",
  "premature_optimization_warning": "Areas where optimization might not be worth it"
}
```

## Step 4: Synthesize Findings

Group all reviewer suggestions by priority:

```yaml
high_priority:
  - Suggestions marked as HIGH priority
  - Security concerns from best practices
  - Critical code quality issues
medium_priority:
  - Suggestions marked as MEDIUM priority
  - Maintainability improvements
  - Alternative approaches to consider
low_priority:
  - Nice-to-have improvements
  - Minor optimizations
  - Style preferences
```

Summarize each aspect in a table:

| Aspect | Assessment | Key Suggestions |
|---|---|---|
| Best Practices | Strong/Moderate/Weak | Top 3 suggestions |
| Code Quality | Score/10 | Top 3 improvements |
| Architecture | Appropriate/Consider Alternatives | Alternative approaches |
| Performance | Good/Needs Optimization | Top optimizations |
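The priority grouping above can be sketched in code. This is a minimal illustration, not a fixed interface: the input shape mirrors the reviewer JSON schemas, and the `suggested_by` field added during merging is an assumption.

```python
# Sketch of Step 4 synthesis: merge per-agent suggestion lists into
# high/medium/low priority buckets, tagging each item with its source agent.
from collections import defaultdict

# CRITICAL and HIGH both land in the high-priority bucket, per the grouping above.
SEVERITY_TO_PRIORITY = {
    "CRITICAL": "high_priority",
    "HIGH": "high_priority",
    "MEDIUM": "medium_priority",
    "LOW": "low_priority",
}

def synthesize(reviewer_outputs):
    """Bucket reviewer suggestions by priority, most severe first."""
    buckets = defaultdict(list)
    for output in reviewer_outputs:
        agent = output.get("agent", "unknown")
        for s in output.get("suggestions", []):
            priority = SEVERITY_TO_PRIORITY.get(s.get("severity"), "low_priority")
            buckets[priority].append({**s, "suggested_by": agent})
    order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}
    for items in buckets.values():
        items.sort(key=lambda s: order.get(s.get("severity"), 4))
    return dict(buckets)

outputs = [
    {"agent": "best-practices", "suggestions": [
        {"severity": "HIGH", "category": "Security",
         "suggestion": "Parameterize SQL queries"},
    ]},
    {"agent": "code-quality", "suggestions": [
        {"severity": "LOW", "category": "Readability",
         "suggestion": "Rename single-letter variables"},
    ]},
]
buckets = synthesize(outputs)
```

The per-bucket counts feed directly into the summary table's "Key Suggestions" column.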
# Deep Review Report
**Review Type:** Quality Improvement
**Reviewed At:** {timestamp}
**Scope:** {what_was_reviewed}
**Reviewers:** 4 expert agents
---
## Executive Summary
{2-3 paragraphs summarizing key findings and recommendations}
**Overall Assessment:** {High quality / Good with room for improvement / Needs work}
**Top 3 Recommendations:**
1. {Most important suggestion}
2. {Second most important}
3. {Third most important}
---
## Review Summary
| Aspect | Assessment | Priority Suggestions |
|--------|------------|---------------------|
| Best Practices | {assessment} | {count} suggestions |
| Code Quality | {score}/10 | {count} improvements |
| Alternatives | {count} options | {count} trade-offs |
| Performance | {assessment} | {count} optimizations |
---
## High Priority Suggestions
### {Category}: {Suggestion Title}
**Priority:** HIGH
**Suggested by:** {Agent name(s)}
**Current Approach:**
{What's being done now}
**Suggestion:**
{Specific improvement to make}
**Rationale:**
{Why this is important}
**Example:**
```{language}
// Before
{current_code_pattern}
// After
{improved_code_pattern}
```
---
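A suggestion section in the template above can be filled mechanically from reviewer output. The renderer below is hypothetical; the dict keys are assumptions chosen to mirror the reviewer JSON and the section layout, not a defined API.

```python
# Hypothetical renderer for one "High Priority Suggestions" entry.
def render_suggestion(s):
    """Turn a merged suggestion dict into the markdown section format above."""
    lines = [
        f"### {s['category']}: {s['title']}",
        f"**Priority:** {s['severity']}",
        f"**Suggested by:** {s['suggested_by']}",
        "**Current Approach:**",
        s["current_approach"],
        "**Suggestion:**",
        s["suggestion"],
        "**Rationale:**",
        s["rationale"],
    ]
    # The Example block is optional, emitted only when both patterns exist.
    if s.get("before") and s.get("after"):
        lang = s.get("language", "")
        lines += ["**Example:**", f"```{lang}", "// Before", s["before"],
                  "// After", s["after"], "```"]
    return "\n\n".join(lines)

section = render_suggestion({
    "category": "Security",
    "title": "Use parameterized queries",
    "severity": "HIGH",
    "suggested_by": "Best Practices Expert",
    "current_approach": "SQL built by string concatenation",
    "suggestion": "Switch to parameterized queries",
    "rationale": "Prevents SQL injection",
    "before": 'db.query("SELECT * FROM users WHERE id = " + user_id)',
    "after": 'db.query("SELECT * FROM users WHERE id = ?", (user_id,))',
    "language": "python",
})
```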
## Step 5: Save Report
## Artifact Output
Save to `.outputs/review/{YYYYMMDD-HHMMSS}-review-{slug}.md` with YAML frontmatter:
```yaml
---
skill: deep-review
timestamp: {ISO-8601}
artifact_type: review
domains: [{domain1}, {domain2}]
quality_assessment: "High quality | Good with room for improvement | Needs work"
context_summary: "{brief description of what was reviewed}"
session_id: "{unique id}"
---
```

Also write the raw reviewer data alongside it as `{timestamp}-review-{slug}.json`. The most recent report can be found with:

```shell
ls -t .outputs/review/ | head -1
```

If `qmd` is available, index the artifacts for later retrieval:

```shell
qmd collection add .outputs/review/ --name "deep-review-artifacts" --mask "**/*.md" 2>/dev/null || true
qmd update 2>/dev/null || true
```

Example output layout:

```
.outputs/review/
├── 20260130-143000-review-report.md
└── 20260130-143000-review-report.json
```

## Configuration

```yaml
# .outputs/review/config.yaml
review:
  # Reviewer weights
  weights:
    best_practices: 0.35
    code_quality: 0.30
    alternatives: 0.20
    performance: 0.15
  # Priority thresholds
  high_priority_threshold: 0.8
  medium_priority_threshold: 0.5
  # Output options
  include_code_examples: true
  include_resources: true
  max_suggestions_per_category: 10
```

Environment variable overrides:

```shell
export DEEP_REVIEW_OUTPUT_DIR=".outputs/review/"
export DEEP_REVIEW_INCLUDE_EXAMPLES="true"
```

Related skills: `deep-council`, `context`, `deepwiki`.
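One way the file config and environment overrides could be resolved, with the reviewer weights checked to sum to 1.0. `resolve_config` is an illustrative sketch, not part of the skill; only the defaults, paths, and variable names come from the document.

```python
# Sketch: file config values with environment-variable overrides.
import os

DEFAULTS = {
    "output_dir": ".outputs/review/",
    "include_code_examples": True,
    "weights": {
        "best_practices": 0.35,
        "code_quality": 0.30,
        "alternatives": 0.20,
        "performance": 0.15,
    },
}

def resolve_config(file_cfg=None, env=os.environ):
    """Merge defaults <- config.yaml values <- environment variables."""
    cfg = {**DEFAULTS, **(file_cfg or {})}
    if "DEEP_REVIEW_OUTPUT_DIR" in env:
        cfg["output_dir"] = env["DEEP_REVIEW_OUTPUT_DIR"]
    if "DEEP_REVIEW_INCLUDE_EXAMPLES" in env:
        cfg["include_code_examples"] = (
            env["DEEP_REVIEW_INCLUDE_EXAMPLES"].lower() == "true"
        )
    # Reviewer weights must remain a proper distribution.
    total = sum(cfg["weights"].values())
    assert abs(total - 1.0) < 1e-9, "reviewer weights must sum to 1.0"
    return cfg

cfg = resolve_config(env={"DEEP_REVIEW_OUTPUT_DIR": "/tmp/reviews"})
```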