Generate publication-quality academic illustrations through a local Codex app-server bridge that uses Codex native image generation. This is a separate experimental alternative to `paper-illustration`, intended for Claude Code users who want a GPT-image-style renderer without modifying the original skill.
Install:

```bash
npx skill4agent add wanshuiyin/auto-claude-code-research-in-sleep paper-illustration-image2
```

┌──────────────────────────────────────────────────────────────────────────┐
│ MULTI-STAGE ITERATIVE WORKFLOW │
├──────────────────────────────────────────────────────────────────────────┤
│ │
│ User Request │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ Claude │ ◄─── Step 1: Parse request, create initial prompt │
│ │ (Planner) │ - Extract components, labels, and data flow │
│ │ │ - Write a paper-ready figure brief │
│ └──────┬──────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │Claude/Codex │ ◄─── Step 2: Optimize layout description │
│ │ Layout │ - Refine component positioning │
│ │ Review │ - Optimize spacing and grouping │
│ └──────┬──────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │Claude/Codex │ ◄─── Step 3: CVPR/NeurIPS style verification │
│ │ Style │ - Check palette, arrows, and label standards │
│ │ Check │ - Tighten the prompt before rendering │
│ └──────┬──────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ codex-image2│ ◄─── Step 4: Native image generation via bridge │
│ │ MCP bridge │ - Call generate_start / generate_status │
│ │ + app-server│ - Accept only native imageGeneration output │
│ └──────┬──────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ Claude │ ◄─── Step 5: STRICT visual review + SCORE (1-10) │
│ │ (Reviewer) │ - Verify logic, labels, arrows, and aesthetics │
│ │ STRICT! │ - Reject unclear or non-paper-ready figures │
│ └──────┬──────┘ │
│ │ │
│ ▼ │
│ Score ≥ 9? ──YES──► Accept & Output │
│ │ │
│ NO │
│ │ │
│ ▼ │
│ Generate SPECIFIC improvement feedback ──► Loop back to Step 2 │
│ │
└──────────────────────────────────────────────────────────────────────────┘

Conventions: render only through the `codex-image2` bridge (never the plain `mcp__codex__codex` tool), write every artifact to `figures/ai_generated/`, keep all labels in English, review strictly, and accept only native `imageGeneration` output. All helper stages run through `python3 tools/paper_illustration_image2.py`.

| Figure Type | Quality | Examples |
|---|---|---|
| Architecture diagrams | Excellent | Model architecture, pipeline, encoder-decoder |
| Method illustrations | Excellent | Conceptual diagrams, algorithm flowcharts |
| Conceptual figures | Good | Comparison diagrams, taxonomy trees |
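The score-gated loop in the diagram reduces to plain control flow. A minimal sketch — `render`, `review`, and `refine` are hypothetical stand-ins for Steps 2-5 (layout/style refinement is folded into `refine`), not functions the skill provides:

```python
def run_figure_loop(request, render, review, refine, max_iters=3):
    """Iterate render -> strict review until the score reaches >= 9/10."""
    prompt = request                        # Step 1: initial figure brief
    for i in range(1, max_iters + 1):
        image = render(prompt, version=i)   # Step 4: codex-image2 bridge
        score, feedback = review(image)     # Step 5: strict review, 1-10
        if score >= 9:
            return image, score             # Accept & output
        prompt = refine(prompt, feedback)   # Loop back to Step 2
    return image, score                     # Best effort after max_iters
```

The hard `score >= 9` gate is what forces the "SPECIFIC improvement feedback" path: anything below threshold produces a revised prompt rather than an accepted figure.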
Slash commands: `/paper-figure`, `/figure-spec`

📋 paper-illustration-image2 integration checklist:
[ ] 1. python3 tools/paper_illustration_image2.py preflight --workspace <cwd> --json-out figures/ai_generated/preflight.json
[ ] 2. Confirm preflight JSON says ok=true before rendering
[ ] 3. Render via mcp__codex-image2__generate_start + generate_status
[ ] 4. Finalize via python3 tools/paper_illustration_image2.py finalize --workspace <cwd> --best-image <best_png>
[ ] 5. Verify artifacts via python3 tools/paper_illustration_image2.py verify --workspace <cwd> --json-out figures/ai_generated/verify.json

All artifacts are written to `figures/ai_generated/`.

Step 0 — preflight. Run the helper and confirm the receipt before rendering:

```bash
python3 tools/paper_illustration_image2.py preflight \
  --workspace <cwd> \
  --json-out figures/ai_generated/preflight.json
```

Proceed only if the preflight JSON reports `ok=true`. Never render with the generic `mcp__codex__codex` tool; this skill deliberately bypasses `mcp__codex__codex` in favor of the dedicated bridge.

Step 4 — rendering. Call `mcp__codex-image2__generate_start` with:

- `prompt` — the refined, style-checked figure prompt
- `cwd` — the workspace directory
- `outputPath` — e.g. `figures/ai_generated/figure_v1.png`
- `system` — e.g. "Academic paper figure. Prefer crisp English labels."
- `timeoutSeconds` — e.g. `180`

Poll `mcp__codex-image2__generate_status` until `done=true` with `status=completed`; `done=true` with `status=failed` means the render did not produce a native image. Later iterations write `figure_v2.png`, `figure_v3.png`, and so on, each driven by SPECIFIC improvement feedback such as:

- "Increase spacing between genome scan and scoring modules"
- "Make the off-target branch thinner and secondary"
- "Use cleaner English labels: \"Candidate sgRNA library\", not \"sgRNA library 23 bp\""

Once a figure passes strict review, finalize to produce `figure_final.png`, `latex_include.tex`, and `review_log.json`:

```bash
python3 tools/paper_illustration_image2.py finalize \
  --workspace <cwd> \
  --best-image figures/ai_generated/figure_vN.png \
  --score 9 \
  --review-summary "Accepted after strict review; labels and arrows are paper-ready."
```
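The start/poll handshake amounts to a simple loop: start a job, then poll until `done=true`, treating `status=completed` as success and `status=failed` as failure. A minimal sketch — `call_tool` and the `jobId` field are hypothetical stand-ins for whatever MCP client actually invokes the `mcp__codex-image2__*` tools, not part of the skill:

```python
import time

def render_via_bridge(call_tool, prompt, out_path, cwd=".", timeout_s=180, poll_s=2.0):
    """Start a codex-image2 render and poll until it completes or fails.

    `call_tool(name, args) -> dict` is a hypothetical MCP-client callable;
    the `jobId` key is an assumed job handle, not documented by the skill.
    """
    job = call_tool("mcp__codex-image2__generate_start", {
        "prompt": prompt,
        "cwd": cwd,
        "outputPath": out_path,
        "system": "Academic paper figure. Prefer crisp English labels.",
        "timeoutSeconds": timeout_s,
    })
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        st = call_tool("mcp__codex-image2__generate_status", {"jobId": job["jobId"]})
        if st.get("done"):
            if st.get("status") == "completed":
                return out_path                      # native image written
            raise RuntimeError(f"render failed: {st.get('status')}")
        time.sleep(poll_s)
    raise TimeoutError("generate_status never reported done=true")
```

The deadline mirrors `timeoutSeconds`, so a hung bridge surfaces as a `TimeoutError` instead of an infinite poll.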
Then verify the artifacts:

```bash
python3 tools/paper_illustration_image2.py verify \
  --workspace <cwd> \
  --json-out figures/ai_generated/verify.json
```

The emitted `latex_include.tex` contains a drop-in snippet:

```latex
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{figures/ai_generated/figure_final.png}
\caption{[Replace with a paper-ready caption].}
\label{fig:[replace-me]}
\end{figure*}
```

The `codex-image2` helpers `tools/paper_illustration_image2.py finalize` and `tools/paper_illustration_image2.py verify` also accept minimal invocations (score and review summary omitted):

```bash
python3 tools/paper_illustration_image2.py finalize \
  --workspace <cwd> \
  --best-image figures/ai_generated/figure_vN.png
```
```bash
python3 tools/paper_illustration_image2.py verify \
  --workspace <cwd> \
  --json-out figures/ai_generated/verify.json
```

Expected output layout:

```
figures/ai_generated/
├── preflight.json     # Helper preflight receipt
├── figure_v1.png      # Iteration 1
├── figure_v2.png      # Iteration 2
├── figure_v3.png      # Iteration 3
├── figure_final.png   # Accepted version (copy of best, score ≥ 9)
├── latex_include.tex  # LaTeX snippet
├── review_log.json    # Review notes and refinement history
└── verify.json        # Helper verification diagnostic
```

| Stage | Agent / Tool | Purpose |
|---|---|---|
| Step 0 | `tools/paper_illustration_image2.py preflight` | Observable activation predicate and preflight receipt |
| Step 1 | Claude | Parse request and create the initial figure prompt |
| Step 2 | Claude (+ optional Codex critique) | Refine layout, grouping, spacing, and arrow routing |
| Step 3 | Claude (+ optional Codex critique) | Verify academic visual style before rendering |
| Step 4 | `codex-image2` MCP bridge + Codex app-server | Native raster image generation through Codex app-server |
| Step 5 | Claude | Strict visual review and scoring |
| Step 7 | `tools/paper_illustration_image2.py finalize` / `verify` | Emit canonical artifacts and external verification receipt |
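The Step 0 and Step 7 receipts are plain JSON files, so an orchestration wrapper can gate each stage on them. A minimal sketch, assuming only that a receipt is a JSON object with a boolean `ok` field (as in "Confirm preflight JSON says ok=true"):

```python
import json
from pathlib import Path

def receipt_ok(path):
    """Return True iff the helper receipt exists, parses, and reports ok=true."""
    p = Path(path)
    if not p.is_file():
        return False
    try:
        data = json.loads(p.read_text())
    except json.JSONDecodeError:
        return False
    return data.get("ok") is True
```

A wrapper would call this on `figures/ai_generated/preflight.json` before rendering and on `figures/ai_generated/verify.json` before declaring the figure done.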