# Instrument — Add Opik Tracing to a Codebase


You are instrumenting an existing codebase with Opik observability. Follow these steps precisely.

## Step 1 — Scope


If `$ARGUMENTS` is provided, scope your work to those files or directories. Otherwise, discover the project root and instrument the main application code.

## Step 2 — Detect Language & Frameworks


Scan the codebase to determine:

1. **Language**: Python (look for `*.py`, `pyproject.toml`, `requirements.txt`) or TypeScript (look for `*.ts`, `*.tsx`, `package.json`)
2. **LLM frameworks in use** — search imports for these patterns:

| Import pattern | Framework | Integration |
| --- | --- | --- |
| `from openai` / `import OpenAI` | OpenAI | `track_openai` |
| `import anthropic` | Anthropic | `track_anthropic` |
| `from langchain` / `@langchain` | LangChain | `OpikTracer` callback |
| `from langgraph` | LangGraph | `OpikTracer` with `graph=` |
| `from crewai` | CrewAI | `track_crewai` |
| `import dspy` | DSPy | `OpikCallback` |
| `from google` + `genai` | Google Gemini | `track_genai` |
| `import boto3` + `bedrock` | AWS Bedrock | `track_bedrock` |
| `from llama_index` | LlamaIndex | `LlamaIndexCallbackHandler` |
| `import litellm` | LiteLLM | `OpikLogger` callback |
| `from pydantic_ai` | Pydantic AI | Logfire OTLP bridge |
| `from opik.integrations.adk` / `from google.adk` | Google ADK | `track_adk_agent_recursive` |
| `import ollama` | Ollama | `track_openai` with localhost `base_url`, or manual `@opik.track` |
| `from agents import` / `from openai.agents` | OpenAI Agents SDK | `OpikTracingProcessor` |
| `from haystack` | Haystack | `OpikConnector` |
| `opik-openai` / `trackOpenAI` (TS) | OpenAI (TS) | `trackOpenAI` |
| `opik-vercel` / `OpikExporter` (TS) | Vercel AI SDK | `OpikExporter` |
| `opik-langchain` / `OpikCallbackHandler` (TS) | LangChain.js | `OpikCallbackHandler` |
| `opik-gemini` / `trackGemini` (TS) | Gemini (TS) | `trackGemini` |

3. **Existing Opik usage** — check whether `opik` or `@opik.track` is already imported. If so, audit rather than re-instrument.
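The import-pattern scan described above can be sketched with a few stdlib regexes. The pattern map below covers only a handful of rows from the table and is purely illustrative — it is not part of the Opik SDK:

```python
import re

# Illustrative subset of the import-pattern table above (not the Opik SDK).
FRAMEWORK_PATTERNS = {
    r"^\s*(?:from openai\b|import OpenAI\b)": "OpenAI",
    r"^\s*import anthropic\b": "Anthropic",
    r"^\s*from langchain\b": "LangChain",
    r"^\s*from langgraph\b": "LangGraph",
    r"^\s*import litellm\b": "LiteLLM",
}

def detect_frameworks(source: str) -> set:
    """Return the set of frameworks whose import patterns appear in source."""
    return {
        framework
        for pattern, framework in FRAMEWORK_PATTERNS.items()
        if re.search(pattern, source, flags=re.MULTILINE)
    }

sample = "import litellm\nfrom langchain.chat_models import init_chat_model\n"
print(sorted(detect_frameworks(sample)))  # ['LangChain', 'LiteLLM']
```

In practice you would run this over every `*.py` file in scope and union the results.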

## Step 3 — Identify the Call Graph


Find:

- **Entrypoint**: the top-level function that kicks off the agent (e.g., `main`, `run`, `agent`, `handle_message`, a route handler, or whatever the user's main orchestration function is)
- **LLM call sites**: functions that call an LLM provider directly
- **Tool functions**: retrieval, search, API calls, or other tool-like operations
- **Existing config classes**: dataclasses, Pydantic models, or plain classes holding model names, temperatures, prompts, or other tunable parameters
找到以下内容:
  • 入口点:启动Agent的顶层函数(例如
    main
    run
    agent
    handle_message
    、路由处理函数,或是用户的核心编排函数)
  • LLM调用位点:直接调用LLM服务商接口的函数
  • 工具函数:检索、搜索、API调用或其他类似工具的操作函数
  • 现有配置类:存储模型名称、温度、提示词或其他可调参数的数据类、Pydantic模型或普通类
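On the Python side, a first pass over the call graph can be automated with the stdlib `ast` module. This sketch only resolves direct calls to plain names (no methods or attributes), which is usually enough to spot the entrypoint and the tool functions it fans out to:

```python
import ast

def call_graph(source: str) -> dict:
    """Map each top-level (async) function to the plain-name calls inside it."""
    graph = {}
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            graph[node.name] = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
    return graph

sample = """
def search(query):
    return [query]

def main(query):
    return search(query)
"""
print(call_graph(sample))  # {'search': [], 'main': ['search']}
```

Functions that nothing else calls are entrypoint candidates; functions called by many others are helper or tool candidates.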

## Step 4 — Add Framework Integrations


For each detected framework, add the appropriate integration at the module level. See the integration table above and `references/integrations.md` for the exact patterns.

**Python examples:**

```python
# OpenAI
from opik.integrations.openai import track_openai
client = track_openai(OpenAI())  # wrap the existing client

# Anthropic
from opik.integrations.anthropic import track_anthropic
client = track_anthropic(anthropic.Anthropic())

# LangChain / LangGraph
from opik.integrations.langchain import OpikTracer
tracer = OpikTracer()
# pass config={"callbacks": [tracer]} to invoke()

# LiteLLM inside @opik.track — CRITICAL: pass span context
from opik.opik_context import get_current_span_data
# in every litellm.completion() call, add:
#   metadata={"opik": {"current_span_data": get_current_span_data()}}
```
**TypeScript examples:**

```typescript
// OpenAI
import { trackOpenAI } from "opik-openai";
const trackedClient = trackOpenAI(openai);

// Vercel AI SDK
import { OpikExporter } from "opik-vercel";
// set up NodeSDK with OpikExporter
```

## Step 5 — Add `@opik.track` Decorators (Python) or Client Tracing (TypeScript)


### Python

Add `import opik` at the top of each file you instrument.

| Function role | Decorator |
| --- | --- |
| Entrypoint (top-level agent) | `@opik.track(entrypoint=True, name="<agent-name>")` |
| LLM call | `@opik.track(type="llm")` |
| Tool / retrieval | `@opik.track(type="tool")` |
| Guardrail / validation | `@opik.track(type="guardrail")` |
| Other helper in the call chain | `@opik.track` |

- Place the decorator above any existing decorators (e.g., above `@app.route`)
- For async functions, `@opik.track` works the same way — no changes needed
- If the function is a script entrypoint (not a long-running server), call `opik.flush_tracker()` after the top-level call
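`@opik.track` appears in the table both bare and with arguments. The dual-mode decorator pattern behind that kind of API can be sketched with the stdlib alone — the `track` below is a stand-in that records calls into a list, not the real Opik decorator:

```python
import functools

spans = []  # stand-in trace log; the real SDK ships spans to Opik

def track(func=None, *, type="general", name=None):
    """Usable bare (@track) or parameterized (@track(type=...)), like @opik.track."""
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            spans.append({"name": name or f.__name__, "type": type})
            return f(*args, **kwargs)
        return wrapper
    return decorate if func is None else decorate(func)

@track(type="tool")
def search(query: str) -> str:
    return f"results for {query}"

@track
def helper(x: int) -> int:
    return x + 1

search("opik")
helper(1)
print(spans)  # [{'name': 'search', 'type': 'tool'}, {'name': 'helper', 'type': 'general'}]
```

Because the wrapper preserves the function's signature and return value, adding or removing the decorator never changes application behavior — the same property that makes `@opik.track` safe to layer onto existing code.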

### TypeScript

Use the client-based approach:

```typescript
import { Opik } from "opik";
const client = new Opik({ projectName: "<project-name>" });

// In the entrypoint function:
const trace = client.trace({ name: "<agent-name>", input: { ... } });
const span = trace.span({ name: "<operation>", type: "tool", input: { ... } });
// ... logic
span.end({ output: { ... } });
trace.end({ output: { ... } });
await client.flush();
```

For entrypoints that should be discoverable by `opik connect`:

```typescript
import { track } from "opik";

const myAgent = track(
  { name: "<agent-name>", entrypoint: true, params: [{ name: "query", type: "string" }] },
  async (query: string) => { /* ... */ }
);
```

## Step 6 — Conversational Agents: Add `thread_id`


If the agent handles multi-turn conversations (chat bots, support agents, multi-step assistants), wire up `thread_id`:

```python
@opik.track(entrypoint=True)
def handle_message(session_id: str, message: str) -> str:
    opik.update_current_trace(thread_id=session_id)
    return generate_response(session_id, message)
```
Skip this for single-shot agents or batch processing.

## Step 7 — Environment Config


Follow the setup decision tree from the main opik skill:

1. If the project has `.env` / `.env.local` → append `OPIK_API_KEY`, `OPIK_WORKSPACE`, `OPIK_URL_OVERRIDE` (if missing)
2. If no `.env` exists → Python: create/update `~/.opik.config`; TypeScript: create `.env` or `.env.local`
3. Never introduce a second config mechanism
4. Never overwrite existing values
5. Update `.env.example` / `.env.sample` if one exists
6. Set `project_name` in code, not in env files
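The append-if-missing rule in items 1 and 4 can be sketched as follows. `append_missing_keys` is a hypothetical helper name, and a real run should insert actual values rather than blank placeholders:

```python
import os
import tempfile

OPIK_KEYS = ("OPIK_API_KEY", "OPIK_WORKSPACE", "OPIK_URL_OVERRIDE")

def append_missing_keys(env_path: str) -> list:
    """Append blank entries for missing Opik keys; never touch existing lines."""
    existing = set()
    if os.path.exists(env_path):
        with open(env_path) as f:
            existing = {line.partition("=")[0].strip() for line in f}
    added = [k for k in OPIK_KEYS if k not in existing]
    with open(env_path, "a") as f:  # append mode: existing values are preserved
        for key in added:
            f.write(f"{key}=\n")
    return added

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, ".env")
    with open(path, "w") as f:
        f.write("OPIK_API_KEY=existing-value\n")
    print(append_missing_keys(path))  # ['OPIK_WORKSPACE', 'OPIK_URL_OVERRIDE']
```

Opening in append mode and diffing keys first is what makes the operation idempotent and non-destructive, satisfying rule 4.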

## Step 8 — Install Dependencies


Print the install command but do NOT run it automatically. Let the user decide.

**Python:** `pip install opik`, plus any integration packages if needed (most are included in `opik`).

**TypeScript:** `npm install opik`, plus framework-specific packages as needed: `opik-openai`, `opik-vercel`, `opik-langchain`, `opik-gemini`.

## Step 9 — Verify


After instrumentation, do a quick audit:

- Every LLM call site is traced (via an integration wrapper or `@opik.track`)
- Exactly one function has `entrypoint=True`
- Script entrypoints call `opik.flush_tracker()` (Python) or `await client.flush()` (TypeScript)
- LiteLLM calls inside `@opik.track` pass `current_span_data` via metadata
- No hardcoded API keys were introduced
- Existing tests still import correctly (no circular imports introduced)
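The single-entrypoint check in the audit can be scripted with a stdlib regex sweep. This is an illustrative sketch over in-memory sources, not an Opik tool:

```python
import re

# Matches @opik.track(entrypoint=True, ...) regardless of internal spacing.
ENTRYPOINT_RE = re.compile(r"@opik\.track\(\s*entrypoint\s*=\s*True")

def entrypoint_files(sources: dict) -> list:
    """Return the files that declare entrypoint=True; exactly one is expected."""
    return [path for path, text in sources.items() if ENTRYPOINT_RE.search(text)]

files = {
    "agent.py": "@opik.track(entrypoint=True, name='bot')\ndef main(): ...",
    "tools.py": "@opik.track(type='tool')\ndef search(q): ...",
}
hits = entrypoint_files(files)
assert len(hits) == 1, f"expected exactly one entrypoint, found {hits}"
print(hits)  # ['agent.py']
```

The same sweep, with different patterns, covers the flush and hardcoded-key checks.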

## Anti-Patterns to Avoid


- **Double-wrapping**: Don't add `@opik.track(type="llm")` to a function that already uses a framework integration (e.g., `track_openai`). The integration handles tracing.
- **Orphaned LiteLLM traces**: Always pass `current_span_data` when `OpikLogger` is used inside `@opik.track` code.
- **Missing entrypoint**: Without `entrypoint=True`, the Local Runner (`opik connect`) won't discover the agent.
- **Missing flush**: Scripts that exit without flushing lose trace data.
- **Overwriting config**: Check before writing to `.env` or `~/.opik.config`.

## References


For detailed API signatures and advanced patterns, see:

- `../opik/references/tracing-python.md` — Python SDK reference
- `../opik/references/tracing-typescript.md` — TypeScript SDK reference
- `../opik/references/integrations.md` — all 40+ framework integrations
- `../opik/references/observability.md` — core concepts (traces, spans, threads)