Found 19 Skills
Guides LLM agents through large-scale coding tasks using a spec-driven, phase-by-phase methodology covering requirement definition, planning, algorithm design, and implementation with OOP principles and language-specific coding standards. Use when starting a new software project, implementing a complex feature, refactoring existing code, or when you need a disciplined step-by-step approach to any non-trivial coding task.
Orchestrate a configurable, multi-member CLI planning council (Codex, Claude Code, Gemini, OpenCode, or custom) to produce independent implementation plans, anonymize and randomize them, then judge and merge them into one final plan. Use when you need a robust, bias-resistant planning workflow, structured JSON outputs, retries, and failure handling across multiple CLI agents.
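A minimal sketch of the council loop, assuming hypothetical member commands and flags (the skill's real configuration, JSON schema, and judge prompt are not reproduced here):

```python
import random
import subprocess

# Hypothetical member commands; real names, flags, and output schemas
# come from the skill's configuration.
MEMBERS = {
    "codex": ["codex", "exec"],
    "claude": ["claude", "-p"],
}

def gather_plans(task: str) -> list[str]:
    plans = []
    for cmd in MEMBERS.values():
        try:
            out = subprocess.run(cmd + [task], capture_output=True,
                                 text=True, timeout=300, check=True)
            plans.append(out.stdout)
        except (subprocess.SubprocessError, OSError):
            continue  # failure handling: drop members that error or time out
    random.shuffle(plans)  # anonymize: judge sees plans without attribution
    return plans
```

A judge step would then feed the shuffled, unlabeled plans to one more CLI call and merge the verdict into a single final plan.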
Testing and benchmarking LLM agents, including behavioral testing, capability assessment, reliability metrics, and production monitoring, in a domain where even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent.
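As a hedged illustration of behavioral testing plus a reliability metric, assuming a stand-in run_agent hook (not part of any named framework):

```python
import pytest

def run_agent(prompt: str) -> str:
    raise NotImplementedError  # stand-in: wire up your agent here

# Behavioral cases: each prompt should surface the expected intent keyword.
CASES = [("Cancel order #1234", "cancel"),
         ("What's your refund policy?", "refund")]

@pytest.mark.parametrize("prompt,expected", CASES)
def test_behavior(prompt, expected):
    assert expected in run_agent(prompt).lower()

def pass_rate(prompt: str, expected: str, k: int = 5) -> float:
    # Reliability: fraction of k independent runs that succeed, since a
    # single lucky run overstates how dependable the agent is.
    return sum(expected in run_agent(prompt).lower() for _ in range(k)) / k
```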
Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling and multi-provider support.
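The worker-pipeline shape, sketched with plain asyncio; transcribe, think, and speak are hypothetical adapters for your STT, LLM, and TTS providers:

```python
import asyncio

async def pipeline(audio_q: asyncio.Queue, transcribe, think, speak):
    speaking: asyncio.Task | None = None
    while True:
        chunk = await audio_q.get()
        if speaking and not speaking.done():
            speaking.cancel()            # barge-in: user spoke, cut TTS off
        text = await transcribe(chunk)   # streaming STT in a real engine
        reply = await think(text)        # LLM agent turn
        speaking = asyncio.create_task(speak(reply))  # TTS runs concurrently
```

Keeping synthesis in its own task is what makes interruption cheap: cancelling one task stops playback without tearing down the rest of the pipeline.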
Avoid common mistakes and debug issues in PydanticAI agents. Use when encountering errors, unexpected behavior, or when reviewing agent implementations.
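One representative pitfall, sketched under the assumption of a recent pydantic_ai release (method and attribute names have shifted across versions):

```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o")

# Calling agent.run_sync() from inside an already-running event loop
# (e.g. a web handler) fails; use the async API there instead.
async def handler(question: str) -> str:
    result = await agent.run(question)
    return result.output  # older releases exposed this as result.data
```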
Build new agent skills. Use when creating diagnostic frameworks, CLI tools, or data-driven generators that follow the established skill patterns.
Detects common LLM coding agent artifacts in codebases. Identifies test quality issues, dead code, over-abstraction, and verbose LLM style patterns. Use when cleaning up AI-generated code or reviewing for agent-introduced cruft.
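A small, self-contained heuristic in the same spirit, flagging functions a file defines but never calls (one of the dead-code patterns such a detector looks for):

```python
import ast
import pathlib

def unused_functions(path: str) -> list[str]:
    tree = ast.parse(pathlib.Path(path).read_text())
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    called = {n.func.id for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
    return sorted(defined - called)  # crude: misses cross-file references
```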
Tool and function calling patterns with LangChain4j. Define tools, handle function calls, and integrate with LLM agents. Use when building agentic applications that interact with tools.
Integration patterns and best practices for adding persistent memory to LLM agents using the Letta Learning SDK.
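The shape of the pattern, shown generically rather than with the Letta SDK's actual classes (this sketch does not reproduce its API):

```python
import json
import pathlib

class MemoryStore:
    """Persistent memory blocks that survive across agent sessions."""

    def __init__(self, path: str = "agent_memory.json"):
        self.path = pathlib.Path(path)
        self.blocks = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key: str, value: str) -> None:
        self.blocks[key] = value
        self.path.write_text(json.dumps(self.blocks))

    def recall(self) -> str:
        # Injected into the agent's system prompt at the start of a session.
        return "\n".join(f"{k}: {v}" for k, v in self.blocks.items())
```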
Use this when you need to evaluate, improve, or optimize an existing LLM agent's output quality, including improving tool selection accuracy, improving answer quality, reducing costs, or fixing issues where the agent gives wrong or incomplete responses. Evaluates agents systematically using MLflow evaluation with datasets, scorers, and tracing. Covers the end-to-end evaluation workflow or individual components (tracing setup, dataset creation, scorer definition, evaluation execution).
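A hedged sketch against MLflow's classic LLM-evaluate entry point; the skill's own workflow (scorers, tracing) may target the newer mlflow.genai APIs instead, and my_agent is a stand-in:

```python
import mlflow
import pandas as pd

def my_agent(question: str) -> str:
    return "MLflow is an open-source ML lifecycle platform."  # stand-in

eval_df = pd.DataFrame({
    "inputs": ["What is MLflow?"],
    "ground_truth": ["An open-source platform for the ML lifecycle."],
})

def predict(df: pd.DataFrame) -> list[str]:
    return [my_agent(q) for q in df["inputs"]]

results = mlflow.evaluate(
    model=predict,
    data=eval_df,
    targets="ground_truth",
    extra_metrics=[mlflow.metrics.genai.answer_correctness()],  # LLM judge
)
print(results.metrics)
```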
Event attribution and explanation. Use this skill whenever the user asks for the reason behind a price move. Trigger phrases include: "why did X crash", "what just happened", "why is it pumping", "what caused". MCP tools: news_events_get_latest_events, info_marketsnapshot_get_market_snapshot, news_events_get_event_detail, info_onchain_get_token_onchain, news_feed_search_news.
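The flow those tools imply, with mcp_call as a hypothetical stand-in for however your runtime invokes MCP tools (the argument shapes below are assumptions, not the real schemas):

```python
def mcp_call(tool: str, args: dict) -> dict:
    raise NotImplementedError  # stand-in for your MCP client

def explain_move(token: str) -> dict:
    snapshot = mcp_call("info_marketsnapshot_get_market_snapshot", {"token": token})
    events = mcp_call("news_events_get_latest_events", {"token": token})
    # Pull detail on the event nearest the price move, then cross-check
    # on-chain flows before attributing the move to a cause.
    detail = mcp_call("news_events_get_event_detail", {"id": events["events"][0]["id"]})
    onchain = mcp_call("info_onchain_get_token_onchain", {"token": token})
    return {"snapshot": snapshot, "event": detail, "onchain": onchain}
```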
Build, debug, and deploy Google Agent Development Kit (ADK) applications in Go using the exact adk-go v0.6.0 APIs and patterns. Use when a task involves ADK Go agent architecture, llmagent configuration, tools/toolsets, sessions/state, memory/artifacts, workflow agents, A2A/REST/web serving, telemetry/plugins, or migration/troubleshooting for google.golang.org/adk@v0.6.0.