Mechanize Pattern 15 — the seven-pass adversarial review protocol for academic manuscripts. Spawns 7 forked subagents in parallel (abstract, intro, methods, results, robustness, prose, citations), then synthesizes a prioritized revision checklist. Use for submission-ready or R&R-stage papers where single-pass review isn't enough.
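The fan-out/synthesize shape is easy to sketch. Below is an illustrative Python sketch only, not the skill's implementation: the real skill dispatches Claude Code subagents, and `run_lens` / `seven_pass` here are hypothetical stand-ins.

```python
# Illustrative sketch of the fan-out/synthesize pattern:
# seven independent lens passes in parallel, one synthesis pass after.
from concurrent.futures import ThreadPoolExecutor

LENSES = [
    "abstract", "intro", "methods", "results",
    "robustness", "prose", "citations",
]

def run_lens(lens: str, manuscript: str) -> dict:
    # Stand-in for one forked subagent reviewing the manuscript
    # through a single lens.
    return {"lens": lens, "issues": []}

def seven_pass(manuscript: str) -> dict:
    # All seven lenses run in parallel; synthesis runs only after
    # every lens report is in.
    with ThreadPoolExecutor(max_workers=7) as pool:
        reports = list(pool.map(lambda l: run_lens(l, manuscript), LENSES))
    # Synthesis step: merge per-lens reports into one prioritized checklist.
    return {"reports": reports, "checklist": []}

result = seven_pass("paper.tex")
print(len(result["reports"]))  # prints 7
```

The point of the structure is that no lens sees another lens's findings, so the synthesizer can surface genuine disagreements between passes.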
```shell
npx skill4agent add pedrohcgs/claude-code-my-workflow seven-pass-review
```

**When to pick this over `/review-paper`:** This skill costs roughly 7× more tokens than `/review-paper` (default) and ~2× more than `/review-paper --adversarial`. Use it when the paper is submission-ready or at R&R stage and you need maximum lens coverage. For early drafts or iterative work, `/review-paper --adversarial` is the right tool. For a journal-simulation pressure test, use `/review-paper --peer <journal>` instead.
Pass the manuscript path as the argument (`$0`). Supported formats: `.tex`, `.qmd`, `.md`, `.pdf`.

| # | Lens | Focus | Agent type |
|---|---|---|---|
| 1 | Abstract audit | Does the abstract state the question, method, result, and contribution? Does it match the paper? | general-purpose |
| 2 | Intro structure | Does the intro follow Cochrane / Varian framework? Literature placement? Contribution clarity? | general-purpose |
| 3 | Methods / identification | Are assumptions stated? Is identification credible? Are alternatives addressed? | domain-reviewer |
| 4 | Results + tables | Do tables read standalone? Is magnitude + significance discussed? Units consistent? | general-purpose |
| 5 | Robustness | Are obvious threats pre-empted? Is the robustness section convincing or theatrical? | general-purpose |
| 6 | Prose quality | Sentence-level clarity, hedging, passive voice, paragraph cohesion | proofreader |
| 7 | Citation audit | Invokes `/validate-bib --semantic` on the reference list | general-purpose |
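Each lens writes its report to a predictably named file. A minimal sketch of the path convention, where the hyphenated lens slugs are illustrative guesses rather than the skill's exact names:

```python
# Build a per-lens report path following the
# quality_reports/seven_pass_[stem]/lens_[N]_[lens-name].md convention.
# Lens slugs below are illustrative, not the skill's exact filenames.
LENS_SLUGS = [
    "abstract-audit", "intro-structure", "methods-identification",
    "results-tables", "robustness", "prose-quality", "citation-audit",
]

def report_path(stem: str, n: int) -> str:
    # n is the 1-based lens number from the table above.
    return f"quality_reports/seven_pass_{stem}/lens_{n}_{LENS_SLUGS[n - 1]}.md"

print(report_path("paper", 3))
# quality_reports/seven_pass_paper/lens_3_methods-identification.md
```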
For `.pdf` manuscripts, the text is first extracted with `pdftotext -layout`. All output goes to `quality_reports/seven_pass_[stem]/`: each lens writes `lens_[N]_[lens-name].md`, and the synthesizer writes `_SYNTHESIS.md` in the following format:

# Seven-Pass Review: [Manuscript]
**Date:** YYYY-MM-DD
**Path:** [manuscript]
## Executive verdict
**Overall state:** [SUBMIT / REVISE-MINOR / REVISE-MAJOR / REJECT-AND-RESTART]
## Cross-lens CRITICAL issues
| # | Lens(es) | Issue | Recommendation |
|---|---|---|---|
## MAJOR issues (second-round)
| # | Lens(es) | Issue |
|---|---|---|
## MINOR polish
[bulleted]
## Per-lens scorecard
| Lens | Critical | Major | Minor | Score/10 |
|---|---|---|---|---|
| 1. Abstract | | | | |
| 2. Intro | | | | |
| 3. Methods | | | | |
| 4. Results | | | | |
| 5. Robustness | | | | |
| 6. Prose | | | | |
| 7. Citations | | | | |
| **Overall** | | | | |
## Revision plan (in recommended order)
1. [Highest-leverage fix — usually a lens with 2+ CRITICALs]
2. …
7. [Lowest-leverage polish]
## Contradictions between lenses
[If two lenses disagree, surface here. E.g., Lens 2 says "expand contribution" but Lens 6 says "trim intro".]

When it finishes, the skill reports:

Seven-pass review complete.
Subagents: 7 (parallel) + 1 synthesizer.
Approx token usage: ~80–120k (vs ~15k for single-pass /review-paper).
Runtime: ~3–5 min wall-clock.
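To make the output layout concrete, here is a shell sketch that recreates (with empty files) what a run on `paper.tex` would leave behind; the filenames are assumptions based on the conventions described above, not produced by the skill itself:

```shell
# Recreate, for illustration only, the report directory a run on
# paper.tex would produce. The skill writes these files itself;
# the lens-6 filename here is a hypothetical example.
stem=paper
dir="quality_reports/seven_pass_${stem}"
mkdir -p "$dir"
touch "$dir/lens_6_prose-quality.md" "$dir/_SYNTHESIS.md"
ls "$dir"
```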
For cheaper alternatives:
- Single-pass: `/review-paper`
- Iterative: `/review-paper --adversarial`

Related skills:
- `.claude/skills/review-paper/SKILL.md` (including the `--adversarial` and `--incremental` modes)
- `.claude/skills/validate-bib/SKILL.md`
- `.claude/skills/audit-reproducibility/SKILL.md`