Found 1,279 Skills
Comprehensive LLM audit. Model currency, prompt quality, evals, observability, CI/CD. Ensures all LLM-powered features follow best practices and are properly instrumented. Auto-invoke when: model names/versions mentioned, AI provider config, prompt changes, .env with AI keys, aiProviders.ts or prompts.ts modified, AI-related PRs. CRITICAL: Training data lags by months. ALWAYS run a web search before making LLM decisions.
Build LLM applications using Dify's visual workflow platform. Use when creating AI chatbots, implementing RAG pipelines, developing agents with tools, managing knowledge bases, deploying LLM apps, or building workflows with drag-and-drop. Supports hundreds of LLMs, Docker/Kubernetes deployment.
BullMQ queue system reference for Redis-backed job queues, workers, flows, and schedulers. Use when: (1) creating queues and workers with BullMQ, (2) adding jobs (delayed, prioritized, repeatable, deduplicated), (3) setting up FlowProducer parent-child job hierarchies, (4) configuring retry strategies, rate limiting, or concurrency, (5) implementing job schedulers with cron/interval patterns, (6) preparing BullMQ for production (graceful shutdown, Redis config, monitoring), or (7) debugging stalled jobs or connection issues
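As a rough illustration of items (1) and (2) in the BullMQ entry above, here is a minimal sketch using the Python port of BullMQ (pip install bullmq); the queue name, job data, Redis URL, and job options are assumptions for the example, not part of the skill, and the Node.js API is analogous.

```python
import asyncio
from bullmq import Queue, Worker

async def process(job, job_token):
    # Worker callback: runs for each job pulled from the "emails" queue.
    print(f"processing {job.name} with data {job.data}")
    return "done"

async def main():
    # Assumed local Redis; pass the same connection to queue and worker.
    opts = {"connection": "redis://localhost:6379"}
    queue = Queue("emails", opts)
    worker = Worker("emails", process, opts)

    # Delayed job that retries up to 3 times with exponential backoff.
    await queue.add(
        "welcome-email",
        {"to": "user@example.com"},
        {"delay": 5000, "attempts": 3, "backoff": {"type": "exponential", "delay": 1000}},
    )

    await asyncio.sleep(10)  # give the worker time to drain the job
    await worker.close()
    await queue.close()

asyncio.run(main())
```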
Configure a Mac mini as a reliable local LLM server with remote access, observability, and power-safe operation.
Optimize websites for AI assistant recommendations (ChatGPT, Gemini, Perplexity, Claude). Get cited in AI answers.
Generates llms.txt documentation optimized for LLMs. Use when the user says "create llms.txt", "document for AI", "create documentation for LLMs", "generate docs for models", or wants to make the repo readable for Claude/AI.
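As a sketch of what such a skill might emit, the snippet below writes a minimal llms.txt following the llmstxt.org layout (H1 title, blockquote summary, sections of links); the project name, summary, and links are placeholders, not output of the actual skill.

```python
from pathlib import Path

def write_llms_txt(root: Path, title: str, summary: str, docs: dict[str, str]) -> None:
    """Emit a minimal llms.txt: H1 title, blockquote summary, then a Docs section of links."""
    lines = [f"# {title}", "", f"> {summary}", "", "## Docs", ""]
    for name, url in docs.items():
        lines.append(f"- [{name}]({url})")
    (root / "llms.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")

# Hypothetical repo metadata; a real generator would pull this from the README and docs tree.
write_llms_txt(
    Path("."),
    title="my-project",
    summary="Short description of what the repo does, written for LLM consumers.",
    docs={"README": "https://example.com/readme.md", "API reference": "https://example.com/api.md"},
)
```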
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
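One way to ground the automated-metrics piece of the evaluation entry above: a tiny exact-match loop over a fixed test set. The generate callable and the cases are placeholders for whatever application is under test.

```python
from typing import Callable

def exact_match_eval(generate: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Run each prompt through the app and score 1.0 when the normalized output matches the reference."""
    hits = 0
    for prompt, expected in cases:
        output = generate(prompt)
        hits += output.strip().lower() == expected.strip().lower()
    return hits / len(cases)

# Hypothetical test set; in practice this comes from a curated benchmark file.
cases = [
    ("What is the capital of France?", "Paris"),
    ("2 + 2 =", "4"),
]
score = exact_match_eval(lambda p: "Paris" if "France" in p else "4", cases)
print(f"exact-match accuracy: {score:.2f}")
```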
Enterprise LLM Fine-Tuning with LoRA, QLoRA, and PEFT techniques
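For orientation, a minimal LoRA setup with Hugging Face peft and transformers; the base model name and target modules are assumptions that vary per architecture, and QLoRA would additionally load the base model in 4-bit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # assumed base model; pick per project
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Low-rank adapters on the attention projections; only these small matrices are trained.
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total weights
```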
Use when adding LangChain-based LLM routes or services in Python or Next.js stacks; pair with architect-stack-selector.
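On the Python side, a LangChain-backed route might look roughly like this FastAPI sketch; the model name, endpoint path, and env-provided OPENAI_API_KEY are assumptions, and a Next.js stack would use the equivalent LangChain.js route handler instead.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

app = FastAPI()
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # reads OPENAI_API_KEY from the environment

class Ask(BaseModel):
    question: str

@app.post("/ask")
async def ask(body: Ask) -> dict:
    # Single-turn call; a real route would add system prompts, retrieval, and tracing.
    reply = await llm.ainvoke(body.question)
    return {"answer": reply.content}
```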
Use when you want rubric-based LLM quality scoring on generated outputs; pair with addon-deterministic-eval-suite.
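A bare-bones version of rubric-based scoring: ask a judge model to grade an output against named criteria and return JSON. The judge callable and rubric criteria are placeholders; the parsed scores would sit alongside the deterministic suite mentioned in the entry above.

```python
import json
from typing import Callable

RUBRIC = ["accuracy", "completeness", "tone"]  # example criteria, not from the listing

def rubric_score(judge: Callable[[str], str], output: str) -> dict[str, int]:
    """Ask a judge LLM to rate the output 1-5 per criterion and return the parsed scores."""
    prompt = (
        "Rate the following answer from 1 (poor) to 5 (excellent) on "
        f"{', '.join(RUBRIC)}. Reply with JSON only, e.g. {{\"accuracy\": 4}}.\n\n"
        f"Answer:\n{output}"
    )
    return json.loads(judge(prompt))

# `judge` would wrap a real model call; a canned response keeps the sketch self-contained.
fake_judge = lambda _: '{"accuracy": 4, "completeness": 3, "tone": 5}'
print(rubric_score(fake_judge, "The mitochondria is the powerhouse of the cell."))
```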
LLM fine-tuning expert for LoRA, QLoRA, dataset preparation, and training optimization
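Dataset preparation for the fine-tuning entry above usually means normalizing raw pairs into an instruction-style JSONL the trainer can stream; the prompt/completion field names below follow a common convention and are an assumption, not a fixed standard.

```python
import json
from pathlib import Path

def to_jsonl(pairs: list[tuple[str, str]], out: Path) -> None:
    """Write (instruction, response) pairs as one JSON object per line for SFT trainers."""
    with out.open("w", encoding="utf-8") as f:
        for instruction, response in pairs:
            f.write(json.dumps({"prompt": instruction, "completion": response}) + "\n")

# Toy pairs; real preparation adds deduplication, length filtering, and train/val splits.
to_jsonl(
    [("Translate to French: hello", "bonjour")],
    Path("train.jsonl"),
)
```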
Auto-generates an LLM usage monitoring page in a PM admin dashboard: Tokuin CLI-based token/cost/latency tracking, a user ranking system, inactive-user tracking, data-driven PM insights, Cmd+K global search, and per-user drilldown navigation. Supports OpenAI/Anthropic/Gemini/OpenRouter.