# Setup Sentry AI Agent Monitoring

Configure Sentry to track LLM calls, agent executions, tool usage, and token consumption.
## Invoke This Skill When

- User asks to "monitor AI/LLM calls" or "track OpenAI/Anthropic usage"
- User wants "AI observability" or "agent monitoring"
- User asks about token usage, model latency, or AI costs
## Prerequisites

AI monitoring requires tracing enabled (`tracesSampleRate > 0`).
## Detection First

Always detect installed AI SDKs before configuring.

JavaScript:

```bash
grep -E '"(openai|@anthropic-ai/sdk|ai|@langchain|@google/genai)"' package.json
```

Python:

```bash
grep -E '(openai|anthropic|langchain|huggingface)' requirements.txt pyproject.toml 2>/dev/null
```
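The detection step can also be scripted. A minimal sketch in Python; the function names and the exact package list are assumptions mirroring the grep commands, not part of any Sentry tooling:

```python
import json
import re

# AI SDK names to look for (same sets as the grep commands above)
JS_AI_PACKAGES = {"openai", "@anthropic-ai/sdk", "ai", "@google/genai"}
PY_AI_PATTERN = re.compile(r"\b(openai|anthropic|langchain|huggingface)", re.IGNORECASE)


def detect_js_ai_sdks(package_json_text: str) -> set:
    """Return AI SDK names found in a package.json's dependency sections."""
    data = json.loads(package_json_text)
    deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    return {name for name in deps
            if name in JS_AI_PACKAGES or name.startswith("@langchain/")}


def detect_py_ai_sdks(requirements_text: str) -> set:
    """Return AI SDK names mentioned in a requirements.txt-style listing."""
    return {m.group(1).lower() for m in PY_AI_PATTERN.finditer(requirements_text)}
```

This is useful when wiring the check into a setup script rather than running grep by hand.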
## Supported SDKs

### JavaScript

| Package | Integration | Min Sentry SDK | Auto? |
|---|---|---|---|
| | | 10.2.0 | Yes |
| | | 10.12.0 | Yes |
| | | 10.6.0 | Node only* |
| | | 10.22.0 | Yes |
| | | 10.25.0 | Yes |
| | | 10.14.0 | Yes |

*Vercel AI requires explicit setup for Edge runtime and per-call `experimental_telemetry`.
### Python

| Package | Install | Min SDK |
|---|---|---|
| | | 2.41.0 |
| | | 2.x |
| | | 2.x |
| | | 2.x |
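Before relying on an integration, it can help to confirm the installed Sentry SDK meets the minimum version from the tables above. A small generic sketch (the helper name is hypothetical; it ignores pre-release tags and non-numeric parts such as `x`):

```python
def version_at_least(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, left to right."""
    def parts(version: str):
        # Keep only purely numeric components: "2.x" -> [2]
        return [int(p) for p in version.split(".") if p.isdigit()]
    return parts(installed) >= parts(minimum)
```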
## JavaScript Configuration

### Auto-enabled integrations (OpenAI, Anthropic, Google GenAI, LangChain)

Just ensure tracing is enabled. To capture prompts/outputs:

```javascript
Sentry.init({
  dsn: "YOUR_DSN",
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.openAIIntegration({ recordInputs: true, recordOutputs: true }),
  ],
});
```

### Next.js OpenAI (additional step required)
For Next.js projects using OpenAI, you must wrap the client:
```javascript
import OpenAI from "openai";
import * as Sentry from "@sentry/nextjs";

const openai = Sentry.instrumentOpenAiClient(new OpenAI());
// Use the 'openai' client as normal
```

### LangChain / LangGraph (explicit)
```javascript
integrations: [
  Sentry.langChainIntegration({ recordInputs: true, recordOutputs: true }),
  Sentry.langGraphIntegration({ recordInputs: true, recordOutputs: true }),
],
```

### Vercel AI SDK
Add to `sentry.edge.config.ts` for Edge runtime:

```javascript
integrations: [Sentry.vercelAIIntegration()],
```

Enable telemetry per-call:

```javascript
await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello",
  experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
});
```

## Python Configuration
```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration  # or anthropic, langchain

sentry_sdk.init(
    dsn="YOUR_DSN",
    traces_sample_rate=1.0,
    send_default_pii=True,  # Required for prompt capture
    integrations=[OpenAIIntegration(include_prompts=True)],
)
```

## Manual Instrumentation
Use when no supported SDK is detected.
### Span Types

| `op` | Purpose |
|---|---|
| | Individual LLM calls |
| | Agent execution lifecycle |
| | Tool/function calls |
| | Agent-to-agent transitions |
### Example (JavaScript)

```javascript
await Sentry.startSpan({
  op: "gen_ai.request",
  name: "LLM request gpt-4o",
  attributes: { "gen_ai.request.model": "gpt-4o" },
}, async (span) => {
  span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));
  const result = await llmClient.complete(prompt);
  span.setAttribute("gen_ai.usage.input_tokens", result.inputTokens);
  span.setAttribute("gen_ai.usage.output_tokens", result.outputTokens);
  return result;
});
```

### Key Attributes
| Attribute | Description |
|---|---|
| `gen_ai.request.model` | Model identifier |
| `gen_ai.request.messages` | JSON input messages |
| `gen_ai.usage.input_tokens` | Input token count |
| `gen_ai.usage.output_tokens` | Output token count |
| | Agent identifier |
| | Tool identifier |
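When instrumenting manually, centralizing attribute construction keeps every span using consistent keys. A hedged Python sketch; the helper name is hypothetical, and the attribute keys are the ones shown in the example and table above:

```python
import json


def gen_ai_attributes(model, messages=None, input_tokens=None, output_tokens=None):
    """Build a gen_ai.* attribute dict for a manual Sentry span.

    Only attributes with values are included, so optional fields
    (e.g. token counts missing for streaming) are simply omitted.
    """
    attrs = {"gen_ai.request.model": model}
    if messages is not None:
        attrs["gen_ai.request.messages"] = json.dumps(messages)
    if input_tokens is not None:
        attrs["gen_ai.usage.input_tokens"] = input_tokens
    if output_tokens is not None:
        attrs["gen_ai.usage.output_tokens"] = output_tokens
    return attrs
```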
## PII Considerations

Prompts/outputs are PII. To capture:

- JS: per-integration `recordInputs: true, recordOutputs: true`
- Python: `include_prompts=True` + `send_default_pii=True`
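If you do record inputs, consider masking obvious identifiers before they leave the process. A minimal sketch; the function, the regex, and the idea of pre-scrubbing messages are assumptions for illustration, not a Sentry API:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def scrub_messages(messages):
    """Return a copy of chat messages with e-mail addresses masked."""
    scrubbed = []
    for msg in messages:
        content = EMAIL_RE.sub("[email redacted]", msg.get("content", ""))
        scrubbed.append({**msg, "content": content})
    return scrubbed
```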
## Troubleshooting

| Issue | Solution |
|---|---|
| AI spans not appearing | Verify `tracesSampleRate > 0` |
| Token counts missing | Some providers don't return tokens for streaming |
| Prompts not captured | Enable `recordInputs`/`recordOutputs` (JS) or `include_prompts=True` (Python) |
| Vercel AI not working | Add `experimental_telemetry: { isEnabled: true }` per call |