integrate-flowlines-sdk-python
# Flowlines SDK for Python — Agent Skill
## What is Flowlines

Flowlines is an observability SDK for LLM-powered Python applications. It instruments LLM provider APIs using OpenTelemetry, automatically capturing requests, responses, timing, and errors. It filters telemetry to only LLM-related spans and exports them via OTLP/HTTP to the Flowlines backend.

Supported LLM providers: OpenAI, Anthropic, AWS Bedrock, Cohere, Google Generative AI, Vertex AI, Together AI.

Supported frameworks/tools: LangChain, LlamaIndex, MCP, Pinecone, ChromaDB, Qdrant.
## Installation

Requires Python 3.11+.

```bash
pip install flowlines
```

Then install instrumentation extras for the providers used in the project.

Single provider:

```bash
pip install flowlines[openai]
```

Multiple providers:

```bash
pip install flowlines[openai,anthropic]
```

All supported providers:

```bash
pip install flowlines[all]
```

Available extras: `openai`, `anthropic`, `bedrock`, `cohere`, `google-generativeai`, `vertexai`, `together`, `pinecone`, `chromadb`, `qdrant`, `langchain`, `llamaindex`, `mcp`.

## Integration

There are three integration modes. Pick the one that matches the project's OpenTelemetry situation.
### Mode A — No existing OpenTelemetry setup (default)

Use this when the project does NOT already have its own OpenTelemetry `TracerProvider`. This is the most common case.

```python
from flowlines import Flowlines

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>")
```

This single call:

- Creates an OpenTelemetry `TracerProvider`
- Auto-detects which LLM libraries are installed and instruments them
- Filters spans to only export LLM-related telemetry
- Sends data to the Flowlines backend via OTLP/HTTP
### Mode B1 — Existing OpenTelemetry setup (`has_external_otel=True`)

Use this when the project already manages its own `TracerProvider`.

```python
from flowlines import Flowlines
from opentelemetry.sdk.trace import TracerProvider

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>", has_external_otel=True)
provider = TracerProvider()
```

Add the Flowlines span processor to the existing provider:

```python
processor = flowlines.create_span_processor()
provider.add_span_processor(processor)
```

Instrument providers using the Flowlines instrumentor registry:

```python
for instrumentor in flowlines.get_instrumentors():
    instrumentor.instrument(tracer_provider=provider)
```

- `create_span_processor()` must be called exactly once.
- `get_instrumentors()` returns instrumentor instances only for libraries that are currently installed.
### Mode B2 — Traceloop already initialized (`has_traceloop=True`)
Use this when the Traceloop SDK is already initialized. Traceloop must be initialized BEFORE Flowlines.

```python
from flowlines import Flowlines

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>", has_traceloop=True)
```

Flowlines adds its span processor to the existing Traceloop `TracerProvider`. No instrumentor registration is needed.

## Critical rules
- Initialize Flowlines BEFORE creating LLM clients. The `Flowlines()` constructor must run before any LLM provider client is instantiated (e.g., `OpenAI()`, `Anthropic()`). If the client is created first, its calls will not be captured.
- Flowlines is a singleton. Only one `Flowlines()` instance may exist. A second call raises `RuntimeError`. Store the instance and reuse it; do NOT instantiate it multiple times.
- `has_external_otel` and `has_traceloop` are mutually exclusive. Setting both to `True` raises `ValueError`.
- `user_id` is mandatory in `context()`. The context manager requires `user_id` as a keyword argument; `session_id` and `agent_id` are optional.
- Context does not auto-propagate to child threads/tasks. If using threads or async tasks, set context in each thread/task explicitly.
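The last rule is ordinary Python `contextvars` behavior: a new thread starts with a fresh context, so values set in the parent thread never cross the boundary on their own. A plain-Python sketch of the failure and the fix (`user_id_var` is a stand-in, not Flowlines internals):

```python
import contextvars
import threading

# Stand-in for an SDK-style context slot (illustrative only).
user_id_var = contextvars.ContextVar("user_id", default=None)

user_id_var.set("user-42")  # set in the main thread
seen = {}

def worker_without_context():
    # A new Thread starts with a fresh context, so the value set in the
    # main thread is invisible here and the default (None) comes back.
    seen["implicit"] = user_id_var.get()

def worker_with_context(uid):
    # Fix: set the context explicitly inside each thread/task.
    user_id_var.set(uid)
    seen["explicit"] = user_id_var.get()

for t in (threading.Thread(target=worker_without_context),
          threading.Thread(target=worker_with_context, args=("user-42",))):
    t.start()
    t.join()

print(seen)  # {'implicit': None, 'explicit': 'user-42'}
```

The same discipline applies with `flowlines.context()`: enter the context manager (or call `set_context()`) inside the worker, not outside it.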
## User, session, and agent tracking

Tag LLM calls with user/session/agent IDs using the context manager:

```python
with flowlines.context(user_id="user-42", session_id="sess-abc", agent_id="agent-1"):
    client.chat.completions.create(...)  # this span gets user_id, session_id, and agent_id
```

`session_id` and `agent_id` are optional:

```python
with flowlines.context(user_id="user-42"):
    client.chat.completions.create(...)
```

For cases where a context manager doesn't fit (e.g., across request boundaries in web frameworks), use the imperative API:

```python
token = Flowlines.set_context(user_id="user-42", session_id="sess-abc", agent_id="agent-1")
try:
    client.chat.completions.create(...)
finally:
    Flowlines.clear_context(token)
```

`set_context()` returns a token; `clear_context()` uses it to restore the previous context.
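The token returned by `set_context()` matches the standard `contextvars` save/restore pattern. A plain-Python sketch of that mechanism (illustrative only, not Flowlines internals):

```python
import contextvars

user_id_var = contextvars.ContextVar("user_id", default="anonymous")

def set_ctx(user_id: str):
    # set() returns a Token remembering the previous value.
    return user_id_var.set(user_id)

def clear_ctx(token) -> None:
    # reset() restores whatever was current before the matching set().
    user_id_var.reset(token)

token = set_ctx("user-42")
try:
    during = user_id_var.get()  # "user-42" while the request is handled
finally:
    clear_ctx(token)
after = user_id_var.get()       # back to "anonymous"
print(during, after)  # user-42 anonymous
```

The try/finally is what makes this pattern safe across request boundaries: the token is released even if the handler raises.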
## Context integration guidance
When integrating `flowlines.context()`, you MUST wrap LLM calls with context. Follow these steps:

1. Identify existing data in the codebase that maps to `user_id`, `session_id`, and `agent_id`:
   - `user_id`: the end-user making the request (e.g., authenticated user ID, email, API key owner)
   - `session_id`: the conversation or session grouping multiple interactions (e.g., chat thread ID, session token, conversation UUID)
   - `agent_id`: the AI agent or assistant handling the request (e.g., agent name, bot identifier, assistant ID)
2. If obvious mappings exist, use them directly. For example, if the app has `request.user.id` and a `thread_id`, wire them in:

   ```python
   with flowlines.context(user_id=request.user.id, session_id=thread_id):
       ...
   ```
3. If mappings are unclear, ask the user which variables or fields should be used for `user_id`, `session_id`, and `agent_id`.
4. If no data is available yet, propose placeholder values with TODO comments so the integration is functional and easy to complete later:

   ```python
   with flowlines.context(
       user_id="anonymous",  # TODO: replace with actual user identifier
       session_id=f"sess-{uuid.uuid4().hex[:8]}",  # TODO: replace with actual session/conversation ID
       agent_id="my-agent",  # TODO: replace with actual agent identifier
   ):
       ...
   ```

Only include fields that are relevant. `session_id` and `agent_id` can be omitted entirely if not applicable.
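Steps 2 and 4 above can be combined into a small helper that prefers real application data and falls back to labeled placeholders. Everything here is hypothetical glue code, not part of the Flowlines API:

```python
import uuid

def resolve_context_ids(request_user_id=None, thread_id=None):
    """Hypothetical helper: map whatever the app already has onto the
    context fields, falling back to placeholders when data is missing."""
    return {
        "user_id": request_user_id or "anonymous",                  # TODO: real user identifier
        "session_id": thread_id or f"sess-{uuid.uuid4().hex[:8]}",  # TODO: real session/conversation ID
    }

ids = resolve_context_ids(request_user_id="user-42")
# then, e.g.: with flowlines.context(**ids): ...
print(ids["user_id"])  # user-42
```

`agent_id` is omitted here entirely, matching the rule that inapplicable fields should be left out rather than filled with noise.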
## Constructor parameters

```python
Flowlines(
    api_key: str,                                   # Required. The Flowlines API key.
    endpoint: str = "https://ingest.flowlines.ai",  # Backend URL.
    has_external_otel: bool = False,                # True if project has its own TracerProvider.
    has_traceloop: bool = False,                    # True if Traceloop is already initialized.
    verbose: bool = False,                          # True to enable debug logging to stderr.
)
```

## Public API summary
| Method / attribute | Description |
|---|---|
| `Flowlines(...)` | Constructor. Initializes the SDK (singleton). |
| `context(...)` | Context manager to tag spans with user/session/agent. |
| `set_context(...)` | Static. Imperative context setting; returns a token. |
| `clear_context(token)` | Static. Restores previous context using the token. |
| `create_span_processor()` | Returns a span processor to add to an external `TracerProvider`. |
| `get_instrumentors()` | Returns a list of available instrumentor instances. |
| `shutdown()` | Flush and shut down. Called automatically via `atexit`. |
## Imports

The public API is exported from the top-level package:

```python
from flowlines import Flowlines
from flowlines import FlowlinesExporter  # only needed for advanced use
```

## Verbose / debug mode
Pass `verbose=True` to print debug information to stderr:

```python
flowlines = Flowlines(api_key="...", verbose=True)
```

This logs instrumentor discovery, span filtering, and export results.
## Shutdown

Call `flowlines.shutdown()` to flush pending spans and shut down the SDK. It is also registered via `atexit`, so it runs automatically on normal interpreter exit.

## Common mistakes to avoid
- Do NOT create the LLM client before initializing Flowlines — spans will be missed.
- Do NOT instantiate `Flowlines()` more than once — it raises `RuntimeError`.
- Do NOT set both `has_external_otel=True` and `has_traceloop=True`.
- Do NOT forget to install the instrumentation extras for the providers you use (e.g., `flowlines[openai]`).
- Do NOT assume context propagates to child threads — set it explicitly in each thread/task.