Found 771 Skills
Reduce your AI API bill. Use when AI costs are too high, API calls are too expensive, you want to use cheaper models, optimize token usage, reduce LLM spending, route easy questions to cheap models, or make your AI feature more cost-effective. Covers DSPy cost optimization — cheaper models, smart routing, per-module LMs, fine-tuning, caching, and prompt reduction.
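A minimal sketch of the routing idea in DSPy, assuming DSPy >= 2.5 and OpenAI model names; the difficulty-classifier signature and the "easy"/"hard" labels are illustrative, not taken from the skill itself:

```python
import dspy

cheap = dspy.LM("openai/gpt-4o-mini")   # low-cost default model
strong = dspy.LM("openai/gpt-4o")       # reserved for hard questions
dspy.configure(lm=cheap)

router = dspy.Predict("question -> difficulty: str")   # expected: "easy" or "hard"
answer = dspy.ChainOfThought("question -> answer")

def ask(question: str) -> str:
    # Classify with the cheap model; escalate only when it says "hard".
    if router(question=question).difficulty.strip().lower() == "hard":
        with dspy.context(lm=strong):   # per-call LM override
            return answer(question=question).answer
    return answer(question=question).answer   # served by the cheap default
```

Most traffic stays on the cheap model; only questions the router flags pay the strong model's rates.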
Control interactive terminal applications like vim, git rebase -i, git add -i, git add -p, apt, rclone config, sudo, w3m, and TUI apps. Can also supervise another CLI LLM (cursor-agent, codex, etc.): approve or reject its actions by pressing y/n at confirmation prompts. Use when you need to interact with applications that require keyboard input, show prompts, menus, or have full-screen interfaces. Also use when commands fail or hang with errors like "Input is not a terminal" or "Output is not a terminal". Better than application-specific hacks such as GIT_SEQUENCE_EDITOR or bypassing interactivity through file use.
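A sketch of what driving such a prompt looks like with pexpect (one common way to script interactive terminals; the hunk-prompt regex is illustrative):

```python
import pexpect

child = pexpect.spawn("git add -p", encoding="utf-8")
while True:
    # git's hunk prompt looks like "Stage this hunk [y,n,q,a,d,e,?]?"
    matched = child.expect([r"Stage this hunk.*\?", pexpect.EOF])
    if matched == 1:        # EOF: no more hunks (or nothing to stage)
        break
    child.sendline("y")     # approve the hunk; send "n" to reject instead
```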
Expert prompt optimization for LLMs and AI systems. Use when building AI features, improving agent performance, crafting system prompts, or optimizing LLM interactions. Masters prompt patterns and techniques.
Analyze AI/ML technical content (papers, articles, blog posts) and extract actionable insights filtered through an enterprise AI engineering lens. Use when the user provides a URL/document for AI/ML content analysis, asks to "review this paper", or mentions technical content in domains like RAG, embeddings, fine-tuning, prompt engineering, or LLM deployment.
Audit LLM token cost estimates against actual API usage. Activate on 'cost verification', 'token estimate accuracy', 'API cost audit', 'estimation variance'. NOT for pricing lookups, budget planning, or cost optimization strategies.
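A minimal audit sketch, assuming tiktoken for the pre-call estimate and an OpenAI-style usage object (prompt_tokens, completion_tokens) for the actuals:

```python
import tiktoken

def estimate_tokens(text: str, model: str = "gpt-4o") -> int:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

def estimate_variance(estimated: int, actual: int) -> float:
    # Signed relative variance: positive means the estimate ran high.
    return (estimated - actual) / actual

# After each call, compare estimate_tokens(prompt) against the response's
# usage.prompt_tokens and flag drift beyond a tolerance you choose, e.g. 10%.
```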
AI-led stakeholder interviews using LLMREI research-backed patterns. Conducts structured interviews to elicit requirements through context-adaptive questioning, active listening, and systematic requirement extraction.
Comprehensive patterns for building AI-powered code generation tools, code assistants, automated refactoring, code review, and structured output generation using LLMs with function calling and tool use. Use when "code generation", "AI code assistant", "function calling", "structured output", "code review AI", "automated refactoring", "tool use", "code completion", or "agent code" is mentioned.
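A hedged sketch of the function-calling pattern with the OpenAI Chat Completions API; the rename_symbol tool and its schema are hypothetical, invented here for illustration:

```python
import json
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "rename_symbol",   # hypothetical refactoring tool
        "description": "Rename a symbol across the codebase.",
        "parameters": {
            "type": "object",
            "properties": {
                "old_name": {"type": "string"},
                "new_name": {"type": "string"},
            },
            "required": ["old_name", "new_name"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Rename getData to fetchUser."}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)   # structured, schema-shaped arguments
```

The model returns arguments matching the declared JSON schema, which is what makes the output safe to hand to an automated refactoring step.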
Persistent memory systems for LLM conversations, including short-term, long-term, and entity-based memory. Use when: conversation memory, remember, memory persistence, long-term memory, chat history.
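A minimal sketch of the three tiers named above; the storage and eviction choices here (a bounded deque, a plain dict) are illustrative, not the skill's design:

```python
from collections import deque

class ConversationMemory:
    def __init__(self, short_term_limit: int = 20):
        self.short_term = deque(maxlen=short_term_limit)  # recent turns, auto-evicted
        self.long_term: list[str] = []                    # distilled summaries
        self.entities: dict[str, str] = {}                # facts keyed by entity

    def add_turn(self, role: str, text: str) -> None:
        self.short_term.append((role, text))

    def remember_entity(self, name: str, fact: str) -> None:
        self.entities[name] = fact                        # e.g. "Ada" -> "prefers Rust"

    def context(self) -> str:
        # Assemble the block that gets prepended to the next prompt.
        facts = "; ".join(f"{k}: {v}" for k, v in self.entities.items())
        turns = "\n".join(f"{r}: {t}" for r, t in self.short_term)
        return f"Known entities: {facts}\n{turns}"
```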
Structure Python so LLMs can understand it in 50 lines.
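As one illustration of the kind of structure meant here (the conventions shown are common practice, not quoted from the skill): a single-purpose module whose docstring, type hints, and narrow function let a model grasp it without reading anything else:

```python
"""Parse ISO-8601 timestamps from log lines."""
from datetime import datetime

def parse_timestamp(line: str) -> datetime | None:
    """Return the leading timestamp of a log line, or None if absent."""
    try:
        return datetime.fromisoformat(line.split(" ", 1)[0])
    except ValueError:
        return None
```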
Expert in building comprehensive AI systems, integrating LLMs, RAG architectures, and autonomous agents into production applications. Use when building AI-powered features, implementing LLM integrations, designing RAG pipelines, or deploying AI systems.
LlamaIndex data framework for LLMs. Use for RAG applications.
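A minimal RAG sketch with LlamaIndex, assuming llama-index >= 0.10 and documents under ./data; the query is illustrative:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # ingest local files
index = VectorStoreIndex.from_documents(documents)      # embed and index them
query_engine = index.as_query_engine()                  # retrieval + synthesis
print(query_engine.query("What does the design doc say about caching?"))
```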
Analyzes and improves LLM prompts and agent instructions for token efficiency, determinism, and clarity. Use when (1) writing a new system prompt, skill, or CLAUDE.md file, (2) reviewing or improving an existing prompt for clarity and efficiency, (3) diagnosing why a prompt produces inconsistent or unexpected results, (4) converting natural language instructions into imperative LLM directives, or (5) evaluating prompt anti-patterns and suggesting fixes. Applies to all LLM platforms (Claude, GPT, Gemini, Llama).