Perform 12-Factor Agents compliance analysis on any codebase. Use when evaluating agent architecture, reviewing LLM-powered systems, or auditing agentic applications against the 12-Factor methodology.
Operational prompt engineering for production LLM apps: structured outputs (JSON/schema), deterministic extractors, RAG grounding/citations, tool/agent workflows, prompt safety (injection/exfiltration), and prompt evaluation/regression testing. Use when designing, debugging, or standardizing prompts for Codex CLI, Claude Code, and OpenAI/Anthropic/Gemini APIs.
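As a concrete illustration of the structured-output piece, here is a minimal sketch assuming the OpenAI Python SDK's JSON-schema response format; the SupportTicket schema and its fields are invented for the example, not part of the skill.

```python
# Minimal sketch: force a model to return schema-conforming JSON, then parse it.
# Assumes the OpenAI Python SDK; the SupportTicket schema and field names are
# hypothetical examples.
import json
from openai import OpenAI

client = OpenAI()

ticket_schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "feature_request"]},
        "summary": {"type": "string"},
        "urgent": {"type": "boolean"},
    },
    "required": ["category", "summary", "urgent"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract a support ticket as JSON matching the schema."},
        {"role": "user", "content": "The invoice page crashes every time I open it. Please fix ASAP."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "SupportTicket", "schema": ticket_schema, "strict": True},
    },
)

ticket = json.loads(resp.choices[0].message.content)  # deterministic, parseable output
print(ticket["category"], ticket["urgent"])
```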
Expert skill for AI model quantization and optimization. Covers 4-bit/8-bit quantization, GGUF conversion, memory optimization, and quality-performance tradeoffs for deploying LLMs in resource-constrained JARVIS environments.
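For the 4-bit path, a minimal sketch assuming transformers with bitsandbytes on a CUDA device; the model id and config values are example choices, not requirements of the skill.

```python
# Minimal sketch: load a causal LM in 4-bit NF4 with bitsandbytes via transformers.
# Assumes transformers + bitsandbytes are installed; the model id is an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model, not prescribed by the skill

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights (~4x memory reduction vs fp16)
    bnb_4bit_quant_type="nf4",              # NF4 usually preserves quality better than fp4
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/stability
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("Summarize why 4-bit quantization saves memory:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```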
Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when "build agent, AI agent, autonomous agent, tool use, function calling, multi-agent, agent memory, agent planning, langchain agent, langchain, crewai, autogen, claude agent sdk, llm, orchestration" is mentioned.
Designs robust function/tool calling schemas for LLMs with JSON schemas, validation strategies, typed interfaces, and example calls. Use when implementing "function calling", "tool use", "LLM tools", or "agent actions".
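A minimal sketch of one such schema in the OpenAI tools format, with local argument validation via the jsonschema library; the get_weather tool and its fields are hypothetical.

```python
# Minimal sketch: a tool/function schema plus validation of model-produced
# arguments before execution. The "get_weather" tool is a hypothetical example.
import json
from jsonschema import validate, ValidationError

get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "default": "celsius"},
            },
            "required": ["city"],
            "additionalProperties": False,
        },
    },
}

def run_tool_call(raw_arguments: str) -> dict:
    """Validate model-produced arguments against the schema before dispatching."""
    args = json.loads(raw_arguments)
    try:
        validate(instance=args, schema=get_weather_tool["function"]["parameters"])
    except ValidationError as err:
        # Feed the error back to the model instead of executing a malformed call.
        return {"error": f"invalid tool arguments: {err.message}"}
    return {"city": args["city"], "temp_c": 21.0}  # stubbed result

print(run_tool_call('{"city": "Berlin", "unit": "celsius"}'))
```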
Configure Tavus CVI personas with custom LLMs, TTS engines, perception, and turn-taking. Use when customizing AI behavior, bringing your own LLM, configuring voice/TTS, enabling vision with Raven, or tuning conversation flow with Sparrow.
Design MCP resources to expose content for LLM consumption. Use when creating static or dynamic resources in xmcp.
PocketFlow framework for building LLM applications with graph-based abstractions, design patterns, and agentic coding workflows.
Debug LLM applications using the Phoenix CLI. Fetch traces, analyze errors, review experiments, and inspect datasets. Use when debugging AI/LLM applications, analyzing trace data, working with Phoenix observability, or investigating LLM performance issues.
Detect and flag AI-generated content markers in documentation and prose. Use when reviewing documentation for AI markers, cleaning up LLM-generated content, or auditing prose quality. Do not use when generating new content (use doc-generator) or learning writing styles (use style-learner).
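A minimal sketch of the flagging idea as a plain regex pass; the marker phrases below are illustrative, not the skill's actual ruleset.

```python
# Minimal sketch: flag common LLM "tells" in prose with regexes.
# The marker list is illustrative only.
import re

MARKERS = [
    r"\bas an ai language model\b",
    r"\bdelve into\b",
    r"\bin today's fast-paced world\b",
    r"\bit is important to note that\b",
    r"\bin conclusion\b",
]

def flag_ai_markers(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_phrase) pairs for suspected AI markers."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in MARKERS:
            match = re.search(pattern, line, flags=re.IGNORECASE)
            if match:
                hits.append((lineno, match.group(0)))
    return hits

sample = "In conclusion, it is important to note that we should delve into the results."
print(flag_ai_markers(sample))
```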
Break LLM name defaults with external entropy. Use when character names cluster around statistical medians (Chen, Patel, Maya, Marcus), when cast has collision risks, or when fantasy cultures need phonologically consistent naming.
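A minimal sketch of the external-entropy idea: draw names from an outside list using OS-level randomness rather than the model's own priors; the candidate names are placeholders, not the skill's data.

```python
# Minimal sketch: sample character names from an external list with OS entropy
# instead of letting the model default to its statistical medians.
# The candidate names here are placeholders.
import secrets

def draw_names(candidate_names: list[str], count: int) -> list[str]:
    """Sample distinct names using OS entropy (secrets), not the LLM's priors."""
    rng = secrets.SystemRandom()
    return rng.sample(candidate_names, count)

# In practice the candidates would come from an external source (census data,
# a constructed-language phonology generator, etc.), not from the model.
candidates = ["Oluwaseun", "Tuyet", "Ragnhild", "Dariusz", "Itzel", "Kekoa"]
print(draw_names(candidates, 3))
```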
Interact with Google's Gemini model via CLI. Use when needing a second opinion from another LLM, cross-validation, or leveraging Gemini's Google Search grounding. Supports multi-turn conversations with session management.