Found 389 Skills
Instrument, trace, evaluate, and monitor LLM applications and AI agents with LangSmith. Use when setting up observability for LLM pipelines, running offline or online evaluations, managing prompts in the Prompt Hub, creating datasets for regression testing, or deploying agent servers. Triggers on: langsmith, langchain tracing, llm tracing, llm observability, llm evaluation, trace llm calls, @traceable, wrap_openai, langsmith evaluate, langsmith dataset, langsmith feedback, langsmith prompt hub, langsmith project, llm monitoring, llm debugging, llm quality, openevals, langsmith cli, langsmith experiment, annotate llm, llm judge.
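A minimal sketch of the kind of instrumentation this skill covers, using the public LangSmith Python helpers `traceable` and `wrap_openai`; the model name and function are illustrative, and it assumes the LangSmith API key and tracing environment variables are already set.

```python
# Minimal LangSmith tracing sketch; assumes LANGSMITH_API_KEY (and tracing env vars)
# are configured and the langsmith + openai packages are installed.
from langsmith import traceable
from langsmith.wrappers import wrap_openai
from openai import OpenAI

client = wrap_openai(OpenAI())  # wraps the client so every completion call is traced

@traceable  # records this function as a run in the active LangSmith project
def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```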
Must-use skill for debugging and inspecting LLM/AI agent traces using PostHog's MCP tools. Use when the user pastes a trace URL (e.g. /llm-observability/traces/<id>), asks to debug a trace, figure out what went wrong, check if an agent used a tool correctly, verify context/files were surfaced, inspect subagent behavior, investigate LLM decisions, or analyze token usage and costs.
Build search applications and query log analytics data with OpenSearch. Use this skill when the user mentions OpenSearch, search app, index setup, search architecture, semantic search, vector search, hybrid search, BM25, dense vector, sparse vector, agentic search, RAG, embeddings, KNN, PDF ingestion, document processing, or any related search topic. Also use for log analytics and observability — when the user wants to set up log ingestion, query logs with PPL, analyze error patterns, set up index lifecycle policies, investigate traces, or check stack health. Activate even if the user says log analysis, Fluent Bit, Fluentd, Logstash, syslog, traceId, OpenTelemetry, or log analytics without mentioning OpenSearch.
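A minimal sketch of the vector-search setup this skill describes, using the opensearch-py client; the host, index name, field names, and embedding dimension are illustrative assumptions, not part of any particular deployment.

```python
# Minimal k-NN (vector) index and query with opensearch-py; assumes a local
# cluster on port 9200 and embeddings produced elsewhere (dimension 384 here).
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}], use_ssl=False)

client.indices.create(
    index="docs",
    body={
        "settings": {"index": {"knn": True}},  # enable k-NN for this index
        "mappings": {
            "properties": {
                "text": {"type": "text"},                      # BM25-searchable field
                "embedding": {"type": "knn_vector", "dimension": 384},
            }
        },
    },
)

# k-NN query: return the 5 documents whose embeddings are closest to the query vector
query_vector = [0.1] * 384  # placeholder embedding
results = client.search(
    index="docs",
    body={"query": {"knn": {"embedding": {"vector": query_vector, "k": 5}}}},
)
```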
Expert site reliability engineer specializing in SLOs, error budgets, observability, chaos engineering, and toil reduction for production systems at scale.
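To make the error-budget concept concrete, here is a toy calculation under assumed numbers (a 99.9% availability SLO over a 30-day window); the figures are illustrative only.

```python
# Toy error-budget arithmetic: budget = (1 - SLO) * window length.
slo = 0.999
window_minutes = 30 * 24 * 60                 # 43,200 minutes in a 30-day window
budget_minutes = (1 - slo) * window_minutes   # ~43.2 minutes of allowed downtime

observed_downtime = 12.0                      # minutes of downtime measured so far (example)
budget_remaining = budget_minutes - observed_downtime
print(f"Error budget: {budget_minutes:.1f} min, remaining: {budget_remaining:.1f} min")
```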
Query and analyze Coralogix Real User Monitoring (RUM) data. Use this skill when the user asks about frontend errors, page load times, web vitals, user interactions, browser errors, mobile crashes, Core Web Vitals (LCP, CLS, FID, INP, TTFB), JavaScript exceptions, page performance, session errors, RUM data, real user monitoring, or any frontend/client-side observability question - even if they don't explicitly say "RUM".
Deploy Tinybird pipes and datasources for enter.pollinations.ai observability. Validates and pushes changes to Tinybird Cloud.
Instrument applications with OpenTelemetry SDK and validate telemetry using Kopai. Use when setting up observability, adding tracing/logging/metrics, testing instrumentation, or debugging missing telemetry data.
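A minimal OpenTelemetry tracing setup with the Python SDK, as context for what the skill instruments; the service name, span name, attribute, and OTLP endpoint are assumptions, and the collector Kopai validates against may live elsewhere.

```python
# Minimal OpenTelemetry tracing setup (Python SDK) exporting spans over OTLP/gRPC.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-service")  # illustrative service name

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("user.id", "demo")  # example attribute on the span
```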
Professional DevOps engineering skill for creating CI/CD pipelines, implementing infrastructure as code, managing environments, and establishing monitoring and observability across all deployment stages.
Use this skill when you need to work with monitoring and observability. It provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".
Setup Spanora AI observability in any project (JavaScript/TypeScript or Python). Use when user asks to "add spanora", "setup spanora", "integrate spanora", "add AI observability", "monitor LLM calls with spanora", "track AI costs", or mentions spanora in the context of adding observability to their project. Detects the language and installed AI SDKs (Vercel AI, Anthropic, OpenAI, LangChain) and configures the optimal integration pattern.
Guide for implementing HolmesGPT - an AI agent for troubleshooting cloud-native environments. Use when investigating Kubernetes issues, analyzing alerts from Prometheus/AlertManager/PagerDuty, performing root cause analysis, configuring HolmesGPT installations (CLI/Helm/Docker), setting up AI providers (OpenAI/Anthropic/Azure), creating custom toolsets, or integrating with observability platforms (Grafana, Loki, Tempo, DataDog).
Instruments Python and TypeScript code with MLflow Tracing for observability. Triggers on questions about adding tracing, instrumenting agents/LLM apps, getting started with MLflow tracing, or tracing specific frameworks (LangGraph, LangChain, OpenAI, DSPy, CrewAI, AutoGen). Examples - "How do I add tracing?", "How to instrument my agent?", "How to trace my LangChain app?", "Getting started with MLflow tracing", "Trace my TypeScript app"
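A minimal sketch of the MLflow Tracing instrumentation this skill produces, using the Python API (`mlflow.openai.autolog()` and the `@mlflow.trace` decorator); the tracking URI, experiment name, and function are illustrative assumptions.

```python
# Minimal MLflow Tracing sketch (MLflow >= 2.14); tracking server URI and
# experiment name are placeholders for illustration.
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("llm-app")

mlflow.openai.autolog()  # automatically traces calls made through the OpenAI SDK

@mlflow.trace  # records this function as a span in the trace
def summarize(text: str) -> str:
    # Placeholder logic standing in for an LLM call
    return text[:100]
```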