Total 44,246 skills; the AI & Machine Learning category has 7,035, of which 12 are shown below.
Vector search indexing and querying workflows using MCP Vector Search, including setup, reindexing, auto-index strategies, and MCP integration.
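The MCP Vector Search API itself isn't shown here, so the sketch below only illustrates the generic index-then-query loop the skill automates, using sentence-transformers and brute-force cosine search; all names are illustrative.

```python
# Generic sketch of the index/query loop; not the MCP Vector Search API.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# "Reindexing" here means re-embedding every document when the corpus changes.
docs = ["def connect(): ...", "class Cache: ...", "README: setup steps"]
index = model.encode(docs, normalize_embeddings=True)  # shape (n_docs, dim)

def query(text: str, k: int = 2) -> list[str]:
    q = model.encode([text], normalize_embeddings=True)
    scores = (index @ q.T).ravel()            # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(query("how do I set up the project?"))
```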
Analyzes a codebase with parallel mapper agents to produce codebase documents under .planning.
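A minimal sketch of the fan-out pattern, assuming one mapper per top-level package; run_mapper_agent is a hypothetical stand-in for the actual agent invocation.

```python
# Fan out one "mapper" per package, collect results into .planning/.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run_mapper_agent(package: Path) -> str:
    # Hypothetical stand-in: a real mapper agent would read and summarize.
    files = list(package.rglob("*.py"))
    return f"{package.name}: {len(files)} Python files"

packages = [p for p in Path("src").iterdir() if p.is_dir()]
out = Path(".planning")
out.mkdir(exist_ok=True)
with ThreadPoolExecutor() as pool:
    for pkg, doc in zip(packages, pool.map(run_mapper_agent, packages)):
        (out / f"{pkg.name}.md").write_text(doc)
```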
Fact-forcing gate that blocks Edit/Write/Bash (including MultiEdit) and demands concrete investigation (importers, data schemas, user instructions) before allowing the action. Measurably improves output quality by +2.25 points vs ungated agents.
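One plausible implementation sketch as a Claude Code PreToolUse hook, assuming the documented hook contract (tool-call JSON on stdin; exit code 2 blocks the call and returns stderr to the model); the evidence check itself is illustrative.

```python
#!/usr/bin/env python3
# Sketch of a fact-forcing gate as a PreToolUse hook.
import json
import sys
from pathlib import Path

event = json.load(sys.stdin)
if event.get("tool_name") in {"Edit", "Write", "MultiEdit", "Bash"}:
    # Illustrative evidence marker: require a prior investigation note.
    if not Path(".claude/evidence.md").exists():
        print(
            "Blocked: investigate first (importers, data schemas, user "
            "instructions) and record findings in .claude/evidence.md.",
            file=sys.stderr,
        )
        sys.exit(2)  # exit code 2 blocks the tool call
sys.exit(0)  # allow everything else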
List, inspect, and run Glean AI agents. Use when discovering available agents, viewing agent schemas, or invoking agents programmatically.
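The real Glean endpoints and payloads aren't shown here; the sketch below only illustrates the list, inspect, and run flow, and every path and field is a hypothetical placeholder, not Glean's actual API.

```python
# Entirely hypothetical sketch of list -> inspect -> run.
import os
import requests

BASE = "https://example-company.glean.com/rest/api/v1"   # hypothetical base URL
HEADERS = {"Authorization": f"Bearer {os.environ['GLEAN_API_TOKEN']}"}

agents = requests.get(f"{BASE}/agents", headers=HEADERS).json()        # list
schema = requests.get(f"{BASE}/agents/{agents[0]['id']}",              # inspect
                      headers=HEADERS).json()
run = requests.post(                                                   # invoke
    f"{BASE}/agents/{agents[0]['id']}/run",
    headers=HEADERS,
    json={"input": {"question": "Summarize open incidents"}},
)
print(run.json())
```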
Audit your Claude Code setup for token waste and context bloat. Use when the user says "audit my context", "check my settings", "why is Claude so slow", "token optimization", "context audit", or runs /context-audit. Starts by running /context to see real overhead, then audits MCP servers, CLAUDE.md rules, skills, settings, and file permissions. Returns a health score with specific fixes.
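A rough sketch of the token-weight pass such an audit might run after /context; the four-characters-per-token estimate is a heuristic, and the scanned paths are common defaults, not a guaranteed layout.

```python
# Rank context files by estimated token weight.
from pathlib import Path

def est_tokens(p: Path) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(p.read_text(errors="ignore")) // 4

candidates = [Path("CLAUDE.md"), *Path(".claude").rglob("*.md")]
report = sorted(
    ((est_tokens(p), p) for p in candidates if p.is_file()), reverse=True
)
for tokens, path in report:
    print(f"{tokens:>7} tok  {path}")
```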
Generates structured literature survey reports from collected papers using a multi-stage pipeline: outline generation (query-type adaptive) → draft survey → section-by-section expansion → summary section refinement → final assembly. Produces survey-grade output with taxonomy-based method analysis, LaTeX formalizations, comparative tables, and dense citations. Use when: user wants a literature review, research survey, field overview, or systematic synthesis of multiple papers. Do NOT use for finding/searching papers (use paper-navigator), generating research ideas (use research-ideation), or writing a paper's Related Work section (use paper-writing).
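The pipeline's shape, sketched with a hypothetical llm() helper standing in for whatever model call the skill actually makes at each stage.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in; replace with the skill's actual model client.
    return f"[model output for: {prompt[:40]}...]"

def generate_survey(papers: list[str], query: str) -> str:
    outline = llm(f"Outline a survey for: {query}\nPapers:\n" + "\n".join(papers))
    draft = llm(f"Draft a survey following this outline:\n{outline}")
    sections = [
        llm(f"Expand with taxonomy, tables, and citations:\n{s}")
        for s in draft.split("\n## ")            # section-by-section expansion
    ]
    summary = llm("Refine the summary section:\n" + sections[0])
    return "\n## ".join([summary, *sections[1:]])  # final assembly
```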
Use when an agent needs to interact with PolyBaskets prediction market baskets on Vara Network — create baskets, place bets, query state, claim payouts, or understand the protocol. Do not use for building Sails programs or general Vara development (use vara-skills for that).
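PolyBaskets' actual program interface on Vara isn't shown here; the stub below only sketches the agent-facing operation surface named above, and every name and signature is hypothetical.

```python
# Hypothetical interface sketch, not PolyBaskets' real on-chain API.
from dataclasses import dataclass

@dataclass
class BasketClient:  # hypothetical wrapper around the on-chain program
    program_id: str

    def create_basket(self, outcomes: list[str], close_at: int) -> str: ...
    def place_bet(self, basket_id: str, outcome: str, amount: int) -> None: ...
    def query_state(self, basket_id: str) -> dict: ...
    def claim_payout(self, basket_id: str) -> int: ...
```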
Use this skill whenever deciding what features to extract from raw marketplace assets (listing photos, owner-entered listing metadata, sitter wizard responses) to power item-to-item (similar listings), user-to-item (homefeed ranking), or user-to-user (mutual-fit matching) recommenders in a two-sided trust marketplace. Covers: asset auditing; first-principles feature decomposition from the decision the user is making; vision-feature extraction (CLIP, room-type classification, amenity detection, aesthetic and quality scoring); listing text and metadata encoding (categoricals, multi-hot amenities, H3 geo-hashing, sentence-transformer description embeddings, structured pet triples); sitter wizard design (information-gain ordering, multiple choice over free text, genuine skippability, hard constraints versus soft preferences); derived-composition patterns for i2i/u2i/u2u (precomputed ANN shelves, multi-modal fusion, two-tower affinity, symmetric mutual-fit scoring, interpretable subscores); feature quality governance (single registry, training-serving parity, coverage and drift alarms, PII scrubbing, schema versioning); and incremental value proof (one feature at a time, ablation A/B tests, kill reviews, an exploration slice, a permanent feature-free baseline). Triggers even when the user does not explicitly say "feature engineering" but asks how to get more signal out of listing photos, listing metadata, or the sitter onboarding wizard, or how to improve i2i/u2i/u2u quality without blindly ingesting a new model.
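A minimal sketch of one listing's feature row under this decomposition, assuming the h3 v4 API and sentence-transformers' CLIP wrapper; the amenity vocabulary and H3 resolution are illustrative choices.

```python
# One listing -> image embedding + text embedding + multi-hot amenities + geo cell.
import h3
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

img_model = SentenceTransformer("clip-ViT-B-32")       # CLIP image encoder
txt_model = SentenceTransformer("all-MiniLM-L6-v2")    # description encoder
AMENITIES = ["yard", "crate", "pool", "other_pets"]    # illustrative vocabulary

def listing_features(photos, description, amenities, lat, lng):
    img_vec = img_model.encode([Image.open(p) for p in photos]).mean(axis=0)
    txt_vec = txt_model.encode(description)
    multi_hot = np.array([a in amenities for a in AMENITIES], dtype=np.float32)
    geo_cell = h3.latlng_to_cell(lat, lng, 7)  # neighborhood-scale resolution
    return {"image": img_vec, "text": txt_vec,
            "amenities": multi_hot, "h3_r7": geo_cell}
```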
ElevenLabs voice changer - transform any voice to a different voice while preserving speech content and emotion via inference.sh CLI. Models: eleven_multilingual_sts_v2 (70+ languages), eleven_english_sts_v2. Capabilities: speech-to-speech, voice transformation, accent change, voice disguise. Use for: content creation, voice acting, privacy, dubbing, character voices. Triggers: voice changer, speech to speech, voice transformation, change voice, voice swap, voice conversion, voice disguise, eleven labs voice changer, elevenlabs sts, transform voice, ai voice changer, voice modifier
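The skill drives this through the inference.sh CLI; for reference, the underlying call looks roughly like the following with the official ElevenLabs Python SDK (treat method and parameter names as approximate).

```python
# Speech-to-speech: keep the words and emotion, swap the voice.
import os
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])
audio = client.speech_to_speech.convert(
    voice_id="TARGET_VOICE_ID",             # placeholder target voice
    audio=open("input.mp3", "rb"),
    model_id="eleven_multilingual_sts_v2",  # 70+ languages
)
with open("converted.mp3", "wb") as f:
    for chunk in audio:                     # SDK streams audio in chunks
        f.write(chunk)
```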
ElevenLabs multi-speaker dialogue generation - create conversations with different voices in a single audio file via inference.sh CLI. Capabilities: multi-voice dialogue, script-based generation, voice direction, conversation audio. Use for: podcasts, audiobooks, explainers, tutorials, character dialogue, video scripts. Triggers: elevenlabs dialogue, eleven labs dialogue, multi speaker, conversation audio, dialogue generation, text to dialogue, multi voice, voice acting, podcast dialogue, character voices, script to audio, elevenlabs conversation, two speakers
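The skill produces the conversation in one call via inference.sh; as a minimal fallback, this sketch instead stitches per-line TTS clips together with the ElevenLabs SDK and pydub (voice IDs are placeholders).

```python
# Build a two-speaker dialogue by concatenating per-line TTS clips.
import io
import os
from elevenlabs.client import ElevenLabs
from pydub import AudioSegment

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])
script = [("VOICE_ID_A", "Did you ship the fix?"),
          ("VOICE_ID_B", "Deployed an hour ago.")]

dialogue = AudioSegment.empty()
for voice_id, line in script:
    audio = client.text_to_speech.convert(
        voice_id=voice_id, text=line, model_id="eleven_multilingual_v2")
    dialogue += AudioSegment.from_file(io.BytesIO(b"".join(audio)), format="mp3")
dialogue.export("dialogue.mp3", format="mp3")
```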
ElevenLabs speech-to-text with Scribe models and forced alignment via inference.sh CLI. Models: Scribe v1/v2 (98%+ accuracy, 90+ languages). Capabilities: transcription, speaker diarization, audio event tagging, word-level timestamps, forced alignment, subtitle generation. Use for: meeting transcription, subtitles, podcast transcripts, lip-sync timing, karaoke. Triggers: elevenlabs stt, elevenlabs transcription, scribe, elevenlabs speech to text, forced alignment, word alignment, subtitle timing, diarization, speaker identification, audio event detection, eleven labs transcribe
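Again, the skill goes through inference.sh; below is a reference sketch of the direct SDK call, with parameter and result field names treated as approximate.

```python
# Transcribe with Scribe, with speaker labels and word-level timestamps.
import os
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])
result = client.speech_to_text.convert(
    file=open("meeting.mp3", "rb"),
    model_id="scribe_v1",
    diarize=True,                # label who is speaking
)
print(result.text)
for w in result.words[:10]:      # timestamps for subtitles / lip-sync timing
    print(w.start, w.end, w.text)
```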
This skill should be used when the user asks to "offload context to files", "implement dynamic context discovery", "use filesystem for agent memory", "reduce context window bloat", or mentions file-based context management, tool output persistence, agent scratch pads, or just-in-time context loading. A core context engineering skill — also activates when the user mentions "context engineering" or "context-engineering" in the context of extending context beyond the window via filesystem strategies.
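A minimal sketch of the offload pattern itself: persist a large tool output to disk, keep only a pointer plus a short preview in the context window, and reload just in time; the paths and preview length are illustrative choices.

```python
# Filesystem as agent memory: write big outputs out, read them back on demand.
from pathlib import Path

SCRATCH = Path(".agent_scratch")
SCRATCH.mkdir(exist_ok=True)

def offload(name: str, big_output: str, preview_chars: int = 200) -> str:
    path = SCRATCH / f"{name}.txt"
    path.write_text(big_output)
    # Only this small pointer + preview re-enters the context window.
    return f"[saved to {path}] {big_output[:preview_chars]}..."

def recall(name: str) -> str:
    # Just-in-time loading, only when the content is actually needed.
    return (SCRATCH / f"{name}.txt").read_text()
```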