Found 11 Skills
Autonomous multi-round research review loop. Repeatedly reviews via Codex MCP, implements fixes, and re-reviews until a positive assessment is reached or the maximum number of rounds is hit. Use when the user says "auto review loop", "review until it passes", or wants autonomous iterative improvement.
Autonomous multi-round research review loop using the MiniMax API. Use when you want MiniMax instead of Codex MCP for external review. Trigger with "auto review loop minimax" or "minimax review".
Use when main results pass result-to-claim (claim_supported=yes or partial) and ablation studies are needed for paper submission. Codex designs ablations from a reviewer's perspective; CC reviews feasibility and implements them.
Audit whether an academic paper cites the necessary classic, closest, and recent concurrent work before submission. Use this skill whenever the user worries that references are incomplete, wants missing citations found, needs related work coverage checked, asks whether a paper cites classic work or recent arXiv/OpenReview work, or wants a citation coverage report for ML/AI venues such as NeurIPS, ICML, ICLR, CVPR, ACL, EMNLP, or similar conferences.
Get a deep critical review of research from GPT via Codex MCP. Use when user says "review my research", "help me review", "get external review", or wants critical feedback on research ideas, papers, or experimental results.
Decide what an ML or AI paper should strategically sell before detailed writing or venue-specific polishing. Use this skill whenever the user has an idea, literature map, experiment results, figures, reviewer risks, or a draft and needs to choose the paper's primary contribution, claim scope, paper archetype, target audience, novelty framing, related-work boundary, title/abstract/main-figure story, or claims to avoid before using conference-writing-adapter.
Prepare a research artifact package for conference artifact evaluation, reproducibility review, badges, supplementary material, or post-acceptance artifact release. Use this skill whenever the user needs install instructions, reviewer-facing reproduction commands, Docker or environment checks, data/checkpoint packaging, hardware/runtime estimates, anonymized or public artifact metadata, artifact evaluation forms, or a claim-to-artifact reproducibility audit for ML/AI venues.
Turn a promising ML/AI research idea into a precise algorithm or method design before implementation. Use this skill whenever the user has an idea or project direction and wants to design the actual method, objective, architecture, inference procedure, assumptions, failure modes, ablations, implementation handoff, or method section plan before coding or experiment design.
Write structured experiment report documents from ML/research experiment notes, configs, logs, metrics, tables, and figures. Use this skill whenever the user asks to write an experiment report, research update, mentor update, weekly experiment summary, result analysis document, or presentation-ready experiment writeup, especially when the output should explain motivation, setup, algorithms, metrics, results, figures, interpretation, conclusions, limitations, and next steps.
Initialize, inspect, and maintain a hierarchical memory system for an ML research project across paper, code, worktrees, slides, reviewer simulation, rebuttal, experiments, claims, evidence, risks, and actions. Use this skill whenever the user wants cross-session project memory, project bootstrapping context, feedback-loop tracking, claim-evidence-risk-action alignment, worktree memory, or consistency between code results, paper writing, slides, reviews, and rebuttal.
Guides researchers through structured ideation frameworks to discover high-impact research directions. Use when exploring new problem spaces, pivoting between projects, or seeking novel angles on existing work.