Fine-tunes and evaluates OpenVLA-OFT and OpenVLA-OFT+ policies for robot action generation, using continuous action heads, LoRA adaptation, and FiLM conditioning, on LIBERO simulation and ALOHA real-world setups. Use when reproducing OpenVLA-OFT paper results, training custom VLA action heads (L1 or diffusion), deploying server-client inference for ALOHA, or debugging normalization, LoRA merge, and cross-GPU issues.
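As a rough illustration of the LoRA-adaptation step this skill covers, the sketch below wraps a pretrained OpenVLA backbone with a peft adapter. The rank, alpha, and target-module choices are assumptions for illustration, not the skill's actual training configuration.

```python
# Minimal sketch: attaching a LoRA adapter to a pretrained VLA backbone via peft.
# Hyperparameters below are assumed values, not the skill's real config.
from transformers import AutoModelForVision2Seq
from peft import LoraConfig, get_peft_model

model = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", trust_remote_code=True
)
lora_config = LoraConfig(
    r=32,                        # adapter rank (assumed value)
    lora_alpha=16,               # scaling factor (assumed value)
    lora_dropout=0.0,
    target_modules="all-linear", # adapt every linear layer (assumed choice)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirm only adapter weights will train
```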
Fine-tune and serve Physical Intelligence OpenPI models (pi0, pi0-fast, pi0.5) using JAX or PyTorch backends for robot policy inference across ALOHA, DROID, and LIBERO environments. Use when adapting pi0 models to custom datasets, converting JAX checkpoints to PyTorch, running policy inference servers, or debugging norm stats and GPU memory issues.
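For the server-client inference mode, the sketch below follows the client pattern shown in the openpi README; the host, port, observation keys, and array shapes are assumptions that depend on your openpi version and environment adapter.

```python
# Sketch of querying a running OpenPI policy server over websockets.
# Observation keys/shapes below are stand-ins; match them to your env adapter.
import numpy as np
from openpi_client import websocket_client_policy

client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)
observation = {
    "observation/image": np.zeros((224, 224, 3), dtype=np.uint8),  # camera frame stub
    "observation/state": np.zeros(8, dtype=np.float32),            # proprio state stub
    "prompt": "pick up the red block",
}
action_chunk = client.infer(observation)["actions"]  # chunk of future actions
```

The policy server itself is launched separately from the openpi repo; this client only sends observations and receives action chunks.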
Evaluates NVIDIA Cosmos Policy on LIBERO and RoboCasa simulation environments. Use when setting up cosmos-policy for robot manipulation evaluation, running headless GPU evaluations with EGL rendering, or profiling inference latency on cluster or local GPU machines.
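A minimal sketch of the headless EGL setup this kind of evaluation relies on: the rendering backend is selected through environment variables that must be set before the simulator is imported. The device-ID line is optional and shown with an assumed value.

```python
# Force GPU offscreen rendering with EGL (no X server) for headless evaluation.
import os

os.environ["MUJOCO_GL"] = "egl"             # MuJoCo renders via EGL on the GPU
os.environ["PYOPENGL_PLATFORM"] = "egl"     # PyOpenGL-based renderers follow suit
os.environ["MUJOCO_EGL_DEVICE_ID"] = "0"    # pin rendering to one GPU (assumed id)

import mujoco  # import only after the variables above are set
```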
Provides guidance for automatically evolving and optimizing AI agents across any domain using LLM-driven evolution algorithms. Use when building self-improving agents, optimizing agent prompts and skills against benchmarks, or implementing automated agent evaluation loops.
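A hypothetical sketch of the evolution loop described here: an LLM proposes prompt variants, a benchmark scores them, and the best candidate survives each generation. Every name in this sketch is an illustrative placeholder, not the skill's API.

```python
# Illustrative evolution loop: mutate the best prompt, score candidates, keep the winner.
import random

def mutate_with_llm(prompt: str) -> str:
    # Placeholder: in a real loop, an LLM call rewrites the prompt here.
    return prompt + " (revised)"

def evaluate(prompt: str) -> float:
    # Placeholder: run the agent built from this prompt against a benchmark.
    return random.random()

def evolve(seed_prompt: str, generations: int = 5, population: int = 8) -> str:
    best_prompt, best_score = seed_prompt, evaluate(seed_prompt)
    for _ in range(generations):
        candidates = [mutate_with_llm(best_prompt) for _ in range(population)]
        for candidate in candidates:
            score = evaluate(candidate)
            if score > best_score:
                best_prompt, best_score = candidate, score
    return best_prompt

print(evolve("You are a careful coding agent."))
```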
Use this skill for "review this paper", "review this manuscript", "peer review", "review my paper", "critique this manuscript", "review this submission", "give me feedback on my paper", "check my methods", "review my statistics", "review as a peer reviewer", "evaluate this manuscript", "review this PDF", or when the user mentions manuscript review, peer review, paper critique, or methodological review.
Compiles any research input — PDF papers, GitHub repositories, experiment logs, code directories, or raw notes — into a complete Agent-Native Research Artifact (ARA) with cognitive layer (claims, concepts, heuristics), physical layer (configs, code stubs), exploration graph, and grounded evidence. Use when ingesting a paper or codebase into a structured, machine-executable knowledge package, building an ARA from scratch, or converting research outputs into a falsifiable, agent-traversable form.
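To make the layered structure concrete, here is a hypothetical sketch of what the cognitive layer might look like as plain dataclasses; the field names are guesses based on the description above, not the ARA's real schema.

```python
# Hypothetical ARA cognitive-layer schema; all field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)  # pointers into source material
    falsifiable_by: str = ""                           # experiment that could refute it

@dataclass
class ARACognitiveLayer:
    claims: list[Claim] = field(default_factory=list)
    concepts: dict[str, str] = field(default_factory=dict)  # term -> definition
    heuristics: list[str] = field(default_factory=list)     # rules of thumb
```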
This skill should be used when executing the epic-dev workflow, creating epic branches, managing sprint phases, working with git worktrees for phased feature development, or when the user mentions "epic workflow", "sprint phases", "phased development", or "git worktree workflow".
Decide what an ML or AI paper should strategically sell before detailed writing or venue-specific polishing. Use this skill whenever the user has an idea, literature map, experiment results, figures, reviewer risks, or a draft and needs to choose the paper's primary contribution, claim scope, paper archetype, target audience, novelty framing, related-work boundary, title/abstract/main-figure story, or claims to avoid before using conference-writing-adapter.
Sync verified experiment results from the code repo or a code worktree into the paper's daily experiments log and project memory. Use when results in code/docs/results, code/docs/reports, code/docs/runs, worktree docs, logs, or user-confirmed metrics should be promoted into paper-facing evidence.
Guide a focused CS or AI literature review sprint that turns a topic, idea, claim, or project direction into a ranked paper map, closest-work risk assessment, method taxonomy, novelty implications, baseline implications, and next actions. Use this skill whenever the user needs to survey a topic, check novelty, map related work, prepare a project, find canonical or recent papers, decide read/skim/ignore priority, or turn papers into a research direction.
Prepare a research artifact package for conference artifact evaluation, reproducibility review, badges, supplementary material, or post-acceptance artifact release. Use this skill whenever the user needs install instructions, reviewer-facing reproduction commands, Docker or environment checks, data/checkpoint packaging, hardware/runtime estimates, anonymized or public artifact metadata, artifact evaluation forms, or a claim-to-artifact reproducibility audit for ML/AI venues.
Create a new Git branch or code worktree for experiments, features, baselines, rebuttal fixes, or method revisions. Use when starting an isolated code direction, creating a branch, creating a project-aware code worktree under a project control root, or setting up a worktree with UV sync, IDE config copying, linked assets, and worktree memory.
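A minimal sketch of the branch-plus-worktree step driven from Python, assuming standard git and uv CLIs; the branch name and path layout are illustrative, and the skill's project-aware extras (IDE config copying, linked assets, worktree memory) are not reproduced here.

```python
# Create an isolated worktree on a new branch, then build its environment with uv.
import subprocess

branch = "exp/new-baseline"                  # assumed branch naming scheme
worktree_path = "../worktrees/new-baseline"  # assumed control-root layout

subprocess.run(["git", "worktree", "add", "-b", branch, worktree_path], check=True)
subprocess.run(["uv", "sync"], cwd=worktree_path, check=True)  # sync deps in the new tree
```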