Set up Sentry AI Agent Monitoring in any project. Use when asked to monitor LLM calls, track AI agents, or instrument OpenAI/Anthropic/Vercel AI/LangChain/Google GenAI. Detects installed AI SDKs and configures the appropriate integrations.
Install:

```shell
npx skill4agent add getsentry/agent-skills sentry-setup-ai-monitoring
```

## Detect installed AI SDKs

```shell
# JavaScript
grep -E '"(openai|@anthropic-ai/sdk|ai|@langchain|@google/genai)"' package.json

# Python
grep -E '(openai|anthropic|langchain|huggingface)' requirements.txt pyproject.toml 2>/dev/null
```

## JavaScript integrations

| Package | Integration | Min Sentry SDK | Auto? |
|---|---|---|---|
| `openai` | `openAIIntegration` | 10.2.0 | Yes |
| `@anthropic-ai/sdk` | `anthropicAIIntegration` | 10.12.0 | Yes |
| `ai` (Vercel AI) | `vercelAIIntegration` | 10.6.0 | Node only* |
| `@langchain/*` | `langChainIntegration` | 10.22.0 | Yes |
| `@langchain/langgraph` | `langGraphIntegration` | 10.25.0 | Yes |
| `@google/genai` | `googleGenAIIntegration` | 10.14.0 | Yes |
## Python integrations

| Package | Install | Min SDK |
|---|---|---|
| `openai` | `pip install "sentry-sdk[openai]"` | 2.41.0 |
| `anthropic` | `pip install "sentry-sdk[anthropic]"` | 2.x |
| `langchain` | `pip install "sentry-sdk[langchain]"` | 2.x |
| `huggingface_hub` | `pip install "sentry-sdk[huggingface_hub]"` | 2.x |
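The grep-based SDK detection can also be done in-process; a minimal Python sketch that mirrors the grep patterns above (the helper name and package tuple are illustrative, not part of any Sentry or skill API):

```python
import re
from pathlib import Path

# Package names mirrored from the grep-based Python detection above.
AI_PACKAGES = ("openai", "anthropic", "langchain", "huggingface")

def detect_ai_sdks(project_dir: str) -> set[str]:
    """Scan requirements.txt and pyproject.toml for known AI SDK names."""
    found = set()
    for fname in ("requirements.txt", "pyproject.toml"):
        path = Path(project_dir) / fname
        if not path.is_file():
            continue
        text = path.read_text(encoding="utf-8")
        found.update(pkg for pkg in AI_PACKAGES if re.search(re.escape(pkg), text))
    return found
```

On a project that pins `openai` and `langchain-core`, this returns `{"openai", "langchain"}`; substring matching deliberately behaves like the grep commands.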
## Configure (JavaScript)

```javascript
Sentry.init({
  dsn: "YOUR_DSN",
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.openAIIntegration({ recordInputs: true, recordOutputs: true }),
  ],
});
```

Or wrap the client manually:

```javascript
import OpenAI from "openai";
import * as Sentry from "@sentry/nextjs";

const openai = Sentry.instrumentOpenAiClient(new OpenAI());
// Use 'openai' client as normal
```

LangChain / LangGraph:

```javascript
integrations: [
  Sentry.langChainIntegration({ recordInputs: true, recordOutputs: true }),
  Sentry.langGraphIntegration({ recordInputs: true, recordOutputs: true }),
],
```

Vercel AI (in Next.js, also register the integration in `sentry.edge.config.ts`):

```javascript
integrations: [Sentry.vercelAIIntegration()],
```

Then enable telemetry on each call:

```javascript
await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello",
  experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
});
```
## Configure (Python)

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration  # or anthropic, langchain

sentry_sdk.init(
    dsn="YOUR_DSN",
    traces_sample_rate=1.0,
    send_default_pii=True,  # Required for prompt capture
    integrations=[OpenAIIntegration(include_prompts=True)],
)
```

## Manual instrumentation

Span ops:

| Op | Purpose |
|---|---|
| `gen_ai.request` | Individual LLM calls |
| `gen_ai.invoke_agent` | Agent execution lifecycle |
| `gen_ai.execute_tool` | Tool/function calls |
| `gen_ai.handoff` | Agent-to-agent transitions |
```javascript
await Sentry.startSpan({
  op: "gen_ai.request",
  name: "LLM request gpt-4o",
  attributes: { "gen_ai.request.model": "gpt-4o" },
}, async (span) => {
  span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));
  const result = await llmClient.complete(prompt);
  span.setAttribute("gen_ai.usage.input_tokens", result.inputTokens);
  span.setAttribute("gen_ai.usage.output_tokens", result.outputTokens);
  return result;
});
```

Key attributes:

| Attribute | Description |
|---|---|
| `gen_ai.request.model` | Model identifier |
| `gen_ai.request.messages` | JSON-encoded input messages |
| `gen_ai.usage.input_tokens` | Input token count |
| `gen_ai.usage.output_tokens` | Output token count |
| `gen_ai.agent.name` | Agent identifier |
| `gen_ai.tool.name` | Tool identifier |
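To keep manually instrumented spans consistent, the attribute dict can come from a single helper; a minimal sketch (the helper itself is illustrative, only the `gen_ai.*` keys come from the table above):

```python
import json

def gen_ai_attributes(model, *, messages=None, input_tokens=None,
                      output_tokens=None, agent=None, tool=None):
    """Assemble gen_ai.* span attributes, skipping fields that were not set."""
    attrs = {"gen_ai.request.model": model}
    if messages is not None:
        attrs["gen_ai.request.messages"] = json.dumps(messages)
    if input_tokens is not None:
        attrs["gen_ai.usage.input_tokens"] = input_tokens
    if output_tokens is not None:
        attrs["gen_ai.usage.output_tokens"] = output_tokens
    if agent is not None:
        attrs["gen_ai.agent.name"] = agent
    if tool is not None:
        attrs["gen_ai.tool.name"] = tool
    return attrs
```

The result can then be passed as the attributes of a manually started span, so every span uses the same key spellings.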
## Capturing prompts and responses

JavaScript integrations take `recordInputs: true, recordOutputs: true`; Python integrations take `include_prompts=True` and additionally require `send_default_pii=True` in `sentry_sdk.init`.

## Troubleshooting

| Issue | Solution |
|---|---|
| AI spans not appearing | Verify `tracesSampleRate > 0` |
| Token counts missing | Some providers don't return token counts for streaming responses |
| Prompts not captured | Enable `recordInputs`/`recordOutputs` (JS) or `include_prompts=True` plus `send_default_pii=True` (Python) |
| Vercel AI not working | Add `experimental_telemetry: { isEnabled: true }` to each call and register the integration in `sentry.edge.config.ts` |
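The sampling and prompt-capture issues above can be caught before deploy with a tiny config check; a hedged Python sketch over the `sentry_sdk.init` keyword arguments (the checker itself is illustrative, not part of Sentry):

```python
def check_ai_monitoring_config(init_kwargs: dict) -> list[str]:
    """Flag init options that commonly break AI monitoring."""
    problems = []
    # Without tracing, no gen_ai spans are ever sent.
    if init_kwargs.get("traces_sample_rate", 0) <= 0:
        problems.append("traces_sample_rate must be > 0 or no AI spans are sent")
    # Prompt/response capture in Python additionally requires PII sending.
    if not init_kwargs.get("send_default_pii", False):
        problems.append("send_default_pii=True is required for prompt capture")
    return problems
```

Run it against the exact kwargs you pass to `sentry_sdk.init` and fail CI on a non-empty result.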