
# Instrument with Logfire

## When to Use This Skill

Invoke this skill when:
  • User asks to "add logfire", "add observability", "add tracing", or "add monitoring"
  • User wants to instrument an app with structured logging or tracing (Python, JS/TS, or Rust)
  • User mentions Logfire in any context
  • User asks to "add logging" or "see what my app is doing"
  • User wants to monitor AI/LLM calls (PydanticAI, OpenAI, Anthropic)
  • User asks to add observability to an AI agent or LLM pipeline

## How Logfire Works

Logfire is an observability platform built on OpenTelemetry. It captures traces, logs, and metrics from applications. Logfire has native SDKs for Python, JavaScript/TypeScript, and Rust, plus support for any language via OpenTelemetry.

The reason this skill exists is that Claude tends to get a few things subtly wrong with Logfire - especially the ordering of `configure()` vs `instrument_*()` calls, the structured logging syntax, and which extras to install. These matter because a misconfigured setup silently drops traces.

## Step 1: Detect Language and Frameworks

Identify the project language and instrumentable libraries:
  • Python: Read `pyproject.toml` or `requirements.txt`. Common instrumentable libraries: FastAPI, httpx, asyncpg, SQLAlchemy, psycopg, Redis, Celery, Django, Flask, requests, PydanticAI.
  • JavaScript/TypeScript: Read `package.json`. Common frameworks: Express, Next.js, Fastify. Also check for Cloudflare Workers or Deno.
  • Rust: Read `Cargo.toml`.

Then follow the language-specific steps below.
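The detection step above can be sketched as a small helper. This is a simplification (real dependency specifiers can be more complex), and the names checked are the Python libraries listed above:

```python
import re
from pathlib import Path

# Python libraries this skill knows how to instrument (see the list above).
KNOWN_LIBS = {
    'fastapi', 'httpx', 'asyncpg', 'sqlalchemy', 'psycopg', 'redis',
    'celery', 'django', 'flask', 'requests', 'pydantic-ai',
}


def detect_python_libs(project_dir: str) -> set[str]:
    """Return the known instrumentable libraries named in dependency files."""
    found = set()
    for fname in ('pyproject.toml', 'requirements.txt'):
        path = Path(project_dir) / fname
        if not path.exists():
            continue
        text = path.read_text().lower()
        for lib in KNOWN_LIBS:
            # Whole-word match so e.g. 'redis' does not match 'hiredis'.
            if re.search(rf'\b{re.escape(lib)}\b', text):
                found.add(lib)
    return found
```

A real scan would parse the TOML dependency tables rather than pattern-match, but this is enough to decide which extras to install.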


## Python

### Install with Extras

Install `logfire` with extras matching the detected frameworks. Each instrumented library needs its corresponding extra - without it, the `instrument_*()` call will fail at runtime with a missing dependency error.

```bash
uv add 'logfire[fastapi,httpx,asyncpg]'
```
The full list of available extras: `fastapi`, `starlette`, `django`, `flask`, `httpx`, `requests`, `asyncpg`, `psycopg`, `psycopg2`, `sqlalchemy`, `redis`, `pymongo`, `mysql`, `sqlite3`, `celery`, `aiohttp`, `aws-lambda`, `system-metrics`, `litellm`, `dspy`, `google-genai`.
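Given a detected set of libraries, the extras string is just the sorted, comma-joined names inside the brackets. A hypothetical helper to illustrate:

```python
def build_install_cmd(detected: list[str]) -> str:
    # Extras must come from the list above; unknown names make the
    # install fail to resolve.
    extras = ','.join(sorted(set(detected)))
    return f"uv add 'logfire[{extras}]'"
```

For example, `build_install_cmd(['httpx', 'fastapi', 'asyncpg'])` produces the command shown earlier.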

### Configure and Instrument

This is where ordering matters. `logfire.configure()` initializes the SDK and must come before everything else. The `instrument_*()` calls register hooks into each library. If you call `instrument_*()` before `configure()`, the hooks register but traces go nowhere.
```python
from fastapi import FastAPI

import logfire

app = FastAPI()

# 1. Configure first - always
logfire.configure()

# 2. Instrument libraries - after configure, before app starts
logfire.instrument_fastapi(app)
logfire.instrument_httpx()
logfire.instrument_asyncpg()
```

Placement rules:
- `logfire.configure()` goes in the application entry point (`main.py`, or the module that creates the app)
- Call it **once per process** - not inside request handlers, not in library code
- `instrument_*()` calls go right after `configure()`
- Web framework instrumentors (`instrument_fastapi`, `instrument_flask`, `instrument_django`) need the app instance as an argument. HTTP client and database instrumentors (`instrument_httpx`, `instrument_asyncpg`) are global and take no arguments.
- In **Gunicorn** deployments, call `logfire.configure()` inside the `post_fork` hook, not at module level - each worker is a separate process
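The Gunicorn rule can be sketched as a `gunicorn.conf.py` fragment (`post_fork` is Gunicorn's standard per-worker server hook):

```python
# gunicorn.conf.py
import logfire


def post_fork(server, worker):
    # Each worker is a separate process, so configure Logfire here,
    # not at module level in the app.
    logfire.configure()
```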

### Structured Logging

Replace `print()` and `logging.*()` calls with Logfire's structured logging. The key pattern: use `{key}` placeholders with keyword arguments, never f-strings.
```python
import logfire

uid = 123

# Correct - each {key} becomes a searchable attribute in the Logfire UI
logfire.info('Created user {user_id}', user_id=uid)
logfire.error('Payment failed {amount} {currency}', amount=100, currency='USD')

# Wrong - creates a flat string, nothing is searchable
logfire.info(f'Created user {uid}')
```
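To see what the `{key}` form preserves, here is a minimal stand-in (not the Logfire SDK) that splits a template into a rendered message plus structured attributes, the way the backend treats it:

```python
import string


def parse_log_call(template: str, **attrs) -> dict:
    # Field names in the template become structured attributes; the
    # template itself stays constant, so similar logs group together.
    fields = [f for _, f, _, _ in string.Formatter().parse(template) if f]
    return {
        'message': template.format(**attrs),
        'template': template,
        'attributes': {k: attrs[k] for k in fields},
    }
```

An f-string, by contrast, hands the SDK only the final string - the template and attribute values are already merged by the time `logfire.info()` runs.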

For grouping related operations and measuring duration, use spans:

```python
import logfire


async def process_order(order_id: int):
    ...


async def handle_order(order_id: int):
    with logfire.span('Processing order {order_id}', order_id=order_id):
        total = 100
        logfire.info('Calculated total {total}', total=total)
```
For exceptions, use `logfire.exception()`, which automatically captures the traceback:

```python
import logfire


async def process_order(order_id: int):
    ...


async def handle_order(order_id: int):
    try:
        await process_order(order_id)
    except Exception:
        logfire.exception('Failed to process order {order_id}', order_id=order_id)
        raise
```

### AI/LLM Instrumentation (Python)

Logfire auto-instruments AI libraries to capture LLM calls, token usage, tool invocations, and agent runs.

```bash
uv add 'logfire[pydantic-ai]'
# or: uv add 'logfire[openai]' / uv add 'logfire[anthropic]'
```


Available AI extras: `pydantic-ai`, `openai`, `anthropic`, `litellm`, `dspy`, `google-genai`.

```python
import logfire

logfire.configure()
logfire.instrument_pydantic_ai()  # captures agent runs, tool calls, LLM request/response

# or:
logfire.instrument_openai()     # captures chat completions, embeddings, token counts
logfire.instrument_anthropic()  # captures messages, token usage
```

For PydanticAI, each agent run becomes a parent span containing child spans for every tool call and LLM request.

---

## JavaScript / TypeScript

### Install

```bash
# Node.js
npm install @pydantic/logfire-node

# Cloudflare Workers
npm install @pydantic/logfire-cf-workers logfire

# Next.js / generic
npm install logfire
```

### Configure

**Node.js (Express, Fastify, etc.)** - create an `instrumentation.ts` loaded before your app:

```typescript
import * as logfire from '@pydantic/logfire-node'

logfire.configure()
```

Launch with:

```bash
node --require ./instrumentation.js app.js
```

The SDK auto-instruments common libraries when loaded before the app. Set `LOGFIRE_TOKEN` in your environment or pass `token` to `configure()`.

**Cloudflare Workers** - wrap your handler with `instrument()`:

```typescript
import { instrument } from '@pydantic/logfire-cf-workers'

export default instrument(handler, {
  service: { name: 'my-worker', version: '1.0.0' }
})
```

**Next.js** - set environment variables for OpenTelemetry export:

```
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://logfire-api.pydantic.dev/v1/traces
OTEL_EXPORTER_OTLP_HEADERS=Authorization=<your-write-token>
```

### Structured Logging (JS/TS)

```typescript
// Structured attributes as second argument
logfire.info('Created user', { user_id: uid })
logfire.error('Payment failed', { amount: 100, currency: 'USD' })

// Spans
logfire.span('Processing order', { order_id }, {}, async () => {
  logfire.info('Processing step completed')
})

// Error reporting
logfire.reportError('order processing', error)
```

Log levels: `trace`, `debug`, `info`, `notice`, `warn`, `error`, `fatal`.


## Rust

### Install

```toml
[dependencies]
logfire = "0.6"
```

### Configure

```rust
let shutdown_handler = logfire::configure()
    .install_panic_handler()
    .finish()?;
```

Set `LOGFIRE_TOKEN` in your environment or use the Logfire CLI to select a project.

### Structured Logging (Rust)

The Rust SDK is built on `tracing` and `opentelemetry` - existing `tracing` macros work automatically.

```rust
// Spans
logfire::span!("processing order", order_id = order_id).in_scope(|| {
    // traced code
});

// Events
logfire::info!("Created user {user_id}", user_id = uid);
```

Always call `shutdown_handler.shutdown()` before program exit to flush data.


## Verify

After instrumentation, verify the setup works:
  1. Run `logfire auth` to check authentication (or set `LOGFIRE_TOKEN`)
  2. Start the app and trigger a request
  3. Check https://logfire.pydantic.dev/ for traces

If traces aren't appearing: check that `configure()` is called before `instrument_*()` (Python), check that `LOGFIRE_TOKEN` is set, and check that the correct packages/extras are installed.
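The token check in step 1 can be automated with a small pre-flight sketch (the `LOGFIRE_TOKEN` variable is the one named above; the message text is illustrative):

```python
import os
import sys


def check_logfire_token() -> bool:
    # An unset or empty token means the SDK has nowhere to send traces.
    if not os.environ.get('LOGFIRE_TOKEN'):
        print('LOGFIRE_TOKEN is not set - traces will not be exported',
              file=sys.stderr)
        return False
    return True
```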

## References

Detailed patterns and integration tables, organized by language:
  • Python: logging patterns (log levels, spans, stdlib integration, metrics, capfire testing) and integrations (full instrumentor table with extras)
  • JavaScript/TypeScript: patterns (log levels, spans, error handling, config) and frameworks (Node.js, Cloudflare Workers, Next.js, Deno setup)
  • Rust: patterns (macros, spans, tracing/log crate integration, async, shutdown)