Generate publication-quality figures and tables from experiment results. Use when the user says "画图" / "作图" (i.e. "draw/make figures"), "generate figures", "paper figures", or needs plots for a paper.

Install:

```shell
npx skill4agent add wanshuiyin/auto-claude-code-research-in-sleep paper-figure
```

What can and cannot be auto-generated:

| Category | Can auto-generate? | Examples |
|---|---|---|
| Data-driven plots | ✅ Yes | Line plots (training curves), bar charts (method comparison), scatter plots, heatmaps, box/violin plots |
| Comparison tables | ✅ Yes | LaTeX tables comparing prior bounds, method features, ablation results |
| Multi-panel figures | ✅ Yes | Subfigure grids combining multiple plots (e.g., 3×3 dataset × method) |
| Architecture/pipeline diagrams | ❌ No — manual | Model architecture, data flow diagrams, system overviews. At best can generate a rough TikZ skeleton, but expect to draw these yourself using tools like draw.io, Figma, or TikZ |
| Generated image grids | ❌ No — manual | Grids of generated samples (e.g., GAN/diffusion outputs). These come from running your model, not from this skill |
| Photographs / screenshots | ❌ No — manual | Real-world images, UI screenshots, qualitative examples |
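For the architecture row, the most that can be emitted is a rough TikZ skeleton to hand-edit. A hypothetical starting point (node names and layout are placeholders, not a finished diagram):

```latex
% Rough pipeline skeleton; hand-edit node names, spacing, and styling.
\begin{tikzpicture}[node distance=1.6cm,
    box/.style={draw, rounded corners, minimum width=1.8cm, minimum height=0.8cm}]
  \node[box] (enc) {Encoder};
  \node[box, right of=enc, xshift=1cm] (lat) {Latent $z$};
  \node[box, right of=lat, xshift=1cm] (dec) {Decoder};
  \draw[->] (enc) -- (lat);
  \draw[->] (lat) -- (dec);
\end{tikzpicture}
```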
Start from a figure plan (typically produced with `/paper-plan` and consumed by `/paper-write`); generated assets live under `figures/`, as PDF for publication and PNG for poster/slide use:

| ID | Type | Description | Data Source | Priority |
|----|------|-------------|-------------|----------|
| Fig 1 | Architecture | ... | manual | HIGH |
| Fig 2 | Line plot | ... | figures/exp.json | HIGH |

Shared style file, imported by every figure script:

```python
# paper_plot_style.py — shared across all figure scripts
import matplotlib
import matplotlib.pyplot as plt

FONT_SIZE = 9       # adjust to the venue's recommended caption size
DPI = 300
FIG_DIR = 'figures'
FORMAT = 'pdf'

matplotlib.rcParams.update({
    'font.size': FONT_SIZE,
    'font.family': 'serif',
    'font.serif': ['Times New Roman', 'Times', 'DejaVu Serif'],
    'axes.labelsize': FONT_SIZE,
    'axes.titlesize': FONT_SIZE + 1,
    'xtick.labelsize': FONT_SIZE - 1,
    'ytick.labelsize': FONT_SIZE - 1,
    'legend.fontsize': FONT_SIZE - 1,
    'figure.dpi': DPI,
    'savefig.dpi': DPI,
    'savefig.bbox': 'tight',
    'savefig.pad_inches': 0.05,
    'axes.grid': False,
    'axes.spines.top': False,
    'axes.spines.right': False,
    'text.usetex': False,  # set True if LaTeX is available
    'mathtext.fontset': 'stix',
})

# Color palette
COLORS = plt.cm.tab10.colors  # or Set2, or a colorblind-safe palette

def save_fig(fig, name, fmt=FORMAT):
    """Save figure to FIG_DIR with consistent naming."""
    fig.savefig(f'{FIG_DIR}/{name}.{fmt}')
    print(f'Saved: {FIG_DIR}/{name}.{fmt}')
```

Choosing a figure type from the shape of the data:

| Data Pattern | Recommended Type | Size |
|---|---|---|
| X=time/steps, Y=metric | Line plot | 0.48\textwidth |
| Methods × 1 metric | Bar chart | 0.48\textwidth |
| Methods × multiple metrics | Grouped bar / radar | 0.95\textwidth |
| Two continuous variables | Scatter plot | 0.48\textwidth |
| Matrix / grid values | Heatmap | 0.48\textwidth |
| Distribution comparison | Box/violin plot | 0.48\textwidth |
| Multi-dataset results | Multi-panel (subfigure) | 0.95\textwidth |
| Prior work comparison | LaTeX table | — |
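For the multi-panel row, a minimal sketch of a one-row subfigure grid with a shared y-axis (the datasets and numbers below are illustrative placeholders, not real results):

```python
# Hypothetical multi-panel figure: one panel per dataset, shared y-axis.
import matplotlib
matplotlib.use('Agg')  # headless-safe backend
import matplotlib.pyplot as plt

# Illustrative data only; replace with real experiment results.
results = {
    'MNIST':    ([0, 1, 2, 3], [0.9, 0.6, 0.4, 0.3]),
    'CIFAR-10': ([0, 1, 2, 3], [1.2, 0.9, 0.7, 0.6]),
    'ImageNet': ([0, 1, 2, 3], [1.5, 1.2, 1.0, 0.9]),
}

fig, axes = plt.subplots(1, len(results), figsize=(9, 2.8), sharey=True)
for ax, (name, (xs, ys)) in zip(axes, results.items()):
    ax.plot(xs, ys)
    ax.set_title(name)      # per-panel titles are fine; the figure caption lives in LaTeX
    ax.set_xlabel('Epoch')
axes[0].set_ylabel('Loss')
fig.tight_layout()
# save_fig(fig, 'fig_multipanel')  # via paper_plot_style, once data is real
```

With `sharey=True`, only the leftmost panel needs a y-label, which keeps the grid compact at `0.95\textwidth`.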
One standalone script per figure, each importing the shared style:

```python
# gen_fig2_training_curves.py
from paper_plot_style import *
import json

with open('figures/exp_results.json') as f:
    data = json.load(f)

fig, ax = plt.subplots(1, 1, figsize=(5, 3.5))
ax.plot(data['steps'], data['fac_loss'], label='Factorized', color=COLORS[0])
ax.plot(data['steps'], data['crf_loss'], label='CRF-LR', color=COLORS[1])
ax.set_xlabel('Training Steps')
ax.set_ylabel('Cross-Entropy Loss')
ax.legend(frameon=False)
save_fig(fig, 'fig2_training_curves')
```

Bar chart with value labels on each bar:

```python
# gen_fig3_comparison.py
from paper_plot_style import *

fig, ax = plt.subplots(1, 1, figsize=(5, 3))
methods = ['Baseline', 'Method A', 'Method B', 'Ours']
values = [82.3, 85.1, 86.7, 89.2]
bars = ax.bar(methods, values, color=[COLORS[i] for i in range(len(methods))])
ax.set_ylabel('Accuracy (%)')

# Add value labels on bars
for bar, val in zip(bars, values):
    ax.text(bar.get_x() + bar.get_width() / 2, bar.get_height() + 0.3,
            f'{val:.1f}', ha='center', va='bottom', fontsize=FONT_SIZE - 1)
save_fig(fig, 'fig3_comparison')
```

Comparison table (LaTeX) for theory papers:

\begin{table}[t]
\centering
\caption{Comparison of estimation error bounds. $n$: sample size, $D$: ambient dim, $d$: latent dim, $K$: subspaces, $n_k$: modes.}
\label{tab:bounds}
\begin{tabular}{lccc}
\toprule
Method & Rate & Depends on $D$? & Multi-modal? \\
\midrule
\citet{MinimaxOkoAS23} & $n^{-s'/D}$ & Yes (curse) & No \\
\citet{ScoreMatchingdistributionrecovery} & $n^{-2/d}$ & No & No \\
\textbf{Ours} & $\sqrt{\sum n_k d_k / n}$ & No & Yes \\
\bottomrule
\end{tabular}
\end{table}

Figures that must be drawn manually still get an entry: drop the file into `figures/` by hand and keep an `\includegraphics[MANUAL]` placeholder for it in `latex_includes.tex`.

Run all figure generation scripts:

```shell
for script in gen_fig*.py; do
    python "$script"
done
```

Generated LaTeX snippets, one per figure:

% === Fig 2: Training Curves ===
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{figures/fig2_training_curves.pdf}
\caption{Training curves comparing factorized and CRF-LR denoising.}
\label{fig:training_curves}
\end{figure}

All snippets are collected in `figures/latex_includes.tex`. Optionally, have a stronger model review the figure plan before generating:

```yaml
mcp__codex__codex:
  model: gpt-5.4
  config: {"model_reasoning_effort": "xhigh"}
  prompt: |
    Review these figure/table plans for a [VENUE] submission.
    For each figure:
    1. Is the caption informative and self-contained?
    2. Does the figure type match the data being shown?
    3. Is the comparison fair and clear?
    4. Any missing baselines or ablations?
    5. Would a different visualization be more effective?
    [list all figures with captions and descriptions]
```

Common pitfalls to check before submitting: an empty `\caption{}`, raw variable names (e.g. `emp_rate`) left in axis labels, and `plt.title` calls (figure titles belong in the LaTeX caption, not the plot).

Final layout of the output directory:

figures/
├── paper_plot_style.py # shared style config
├── gen_fig1_architecture.py # per-figure scripts
├── gen_fig2_training_curves.py
├── gen_fig3_comparison.py
├── fig1_architecture.pdf # generated figures
├── fig2_training_curves.pdf
├── fig3_comparison.pdf
├── latex_includes.tex # LaTeX snippets for all figures
└── TABLE_*.tex              # standalone table LaTeX files

Figure type quick reference:

| Type | When to Use | Typical Size |
|---|---|---|
| Line plot | Training curves, scaling trends | 0.48\textwidth |
| Bar chart | Method comparison, ablation | 0.48\textwidth |
| Grouped bar | Multi-metric comparison | 0.95\textwidth |
| Scatter plot | Correlation analysis | 0.48\textwidth |
| Heatmap | Attention, confusion matrix | 0.48\textwidth |
| Box/violin | Distribution comparison | 0.48\textwidth |
| Architecture | System overview | 0.95\textwidth |
| Multi-panel | Combined results (subfigures) | 0.95\textwidth |
| Comparison table | Prior bounds vs. ours (theory) | full width |
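The heatmap row has no example above; a minimal sketch of an annotated heatmap (the labels and matrix below are illustrative placeholders for a real confusion matrix):

```python
# Hypothetical annotated heatmap, e.g. a confusion matrix.
import matplotlib
matplotlib.use('Agg')  # headless-safe backend
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data only; replace with a real confusion matrix.
labels = ['cat', 'dog', 'bird']
cm = np.array([[50, 3, 2],
               [4, 45, 6],
               [1, 5, 49]])

fig, ax = plt.subplots(figsize=(3.2, 3))
im = ax.imshow(cm, cmap='Blues')
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels)
ax.set_yticks(range(len(labels)))
ax.set_yticklabels(labels)
ax.set_xlabel('Predicted')
ax.set_ylabel('True')
# Annotate each cell, flipping text color on dark cells for readability
for i in range(cm.shape[0]):
    for j in range(cm.shape[1]):
        ax.text(j, i, str(cm[i, j]), ha='center', va='center',
                color='white' if cm[i, j] > cm.max() / 2 else 'black')
fig.colorbar(im, ax=ax, fraction=0.046)
fig.tight_layout()
```

Annotating the cells directly avoids forcing readers to match colors against the colorbar for exact values.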