Found 107 Skills
Audit a skill repository or installed skill collection for global consistency, lifecycle coverage, routing quality, documentation drift, memory writeback coverage, stale future-skill references, broken helper paths, and validation readiness. Use this skill whenever the user asks for a global consistency audit, skill taxonomy review, lifecycle audit, cross-skill routing audit, README or AGENTS inventory consistency check, or maintenance pass over a collection of agent skills.
Control a remote SSH server project from a local git repo with persistent project memory. Use when the user develops locally but runs remotely, wants the agent to understand remote repo mappings across sessions, needs safe local/remote git sync via GitHub, wants to inspect remote state, submit jobs, start interactive sessions, monitor logs, or recover project context at the start of a new coding session.
Finalize an accepted ML or AI paper for camera-ready submission after reviews, rebuttal, and acceptance. Use this skill whenever the user has an accepted paper, camera-ready deadline, final revision, acceptance email, meta-review, rebuttal promises, author-response commitments, de-anonymization tasks, supplement updates, code links, acknowledgements, final LaTeX checks, or needs to ensure the accepted paper's claims, figures, references, and artifacts are consistent before final submission.
Design hypothesis-driven ML/AI experiments before running them. Use this skill whenever the user wants to plan experiments, ablations, baselines, metrics, controls, seeds, logging, stop conditions, reviewer-proof evidence, or an experiment matrix for a paper claim before using run-experiment or writing results.
Maintain a paper-facing evidence board that aligns claims, experiments, figures, tables, sections, reviewer risks, and next actions during ML/AI paper writing. Use this skill whenever writing exposes missing experiments, new results require paper changes, reviewer simulation reveals evidence gaps, claims need support checks, figures/tables need mapping to claims, or the user wants a live paper evidence board before submission.
Review ML or AI experiment figures, tables, plots, captions, result narratives, and paper visual style before they are shown in a paper, advisor meeting, report, slide deck, rebuttal, or submission. Use this skill whenever the user has experimental results, plots, tables, metrics, screenshots, captions, draft result sections, or wants to audit figure style choices such as color, typography, markers, symbols, line widths, sizing, and venue-consistent visual conventions.
Build a retrospective or forward-looking work timeline from git commits, project docs, user notes, or chat records, then output a Markdown and/or HTML report with a Gantt chart or timeline visualization. Use when the user wants to review past work across one or more projects, explain time allocation to a mentor, summarize what was done in a period, or plan the next phase with a timeline.
Diagnose surprising, negative, unstable, or ambiguous ML/AI experiment results and decide whether to debug implementation, rerun experiments, change metrics or baselines, revise the algorithm, narrow the paper claim, park, or kill a direction. Use this skill whenever results do not match expectations, a method fails, metrics conflict, seeds vary, baselines beat the method, plots look suspicious, or the user asks what to do next after experimental results.
Record research provenance as a post-task epilogue, scanning conversation history at the end of a coding or research session to extract decisions, experiments, dead ends, claims, heuristics, and pivots, and writing them into the ara/ directory with user-vs-AI provenance tags. Use as a session epilogue — never during execution — to maintain a faithful, auditable trace of how a research project actually evolved.
Perform ARA Seal Level 2 semantic epistemic review on Agent-Native Research Artifacts, scoring six dimensions (evidence relevance, falsifiability, scope calibration, argument coherence, exploration integrity, methodological rigor) and producing a constructive, severity-ranked report with a Strong Accept-to-Reject recommendation. Use after Level 1 structural validation passes, when an ARA needs an objective epistemic critique before publication or release.
Plan, draft, and revise ML/AI limitations, scope, failure cases, ethics, broader impact, and conclusion caveats so they control claim boundaries without undermining the paper. Use when the user wants limitation wording, scope statements, failure-case interpretation, ethics/broader-impact text, or overclaim reduction.
Add items (research objects) to an existing research outline.