# Spanora Setup Agent Skill

You are integrating Spanora AI observability into the user's project. Follow this guide step by step.

## 1. When to Invoke

Activate this skill when the user says any of:

- "add spanora", "setup spanora", "integrate spanora"
- "add AI observability", "add LLM monitoring"
- "monitor LLM calls with spanora", "track AI costs"
- "instrument my agent", "add tracing to my agent"
- mentions "spanora" in the context of adding observability

## 2. Public Documentation — Source of Truth

The official Spanora documentation at https://spanora.ai/docs is always up to date and is the canonical source of truth. The bundled `references/` files in this skill are the primary step-by-step guide, but if you encounter ambiguity, an unfamiliar API, edge cases, or something that doesn't match what you see in the user's code — fetch the relevant doc page using WebFetch. If the public docs contradict a bundled reference, the public docs win.

Key pages by integration pattern can be found in the public docs at https://spanora.ai/docs.

You do not need to fetch docs on every run — only when something is unclear or you suspect the bundled references may be stale.

## 3. Prerequisites

The user must have a Spanora API key (starts with `ak_`). Never ask the user to paste their API key into the conversation.

1. Check if `SPANORA_API_KEY` is already set in `.env` (or `.env.local`) or as a shell environment variable. Only check for presence — do not output or log the value.
2. If already set, proceed to the next step.
3. If not set, instruct the user to add it themselves:
   - Tell them: "Please add your Spanora API key to your `.env` file as `SPANORA_API_KEY=ak_...`. You can find your key at https://spanora.ai/settings."
   - Do not accept the key in conversation or write the key value to any file.
   - Wait for the user to confirm they have set it before proceeding.
4. If `.env` is not in `.gitignore`, remind the user to add it.
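
The presence-only check can be sketched as a small helper. This is an illustrative sketch, not part of the Spanora SDK: `hasSpanoraKey` and the injected `readFile` are names invented here, and a simple `grep` against `.env` works just as well.

```typescript
// Presence check only: the key's value is never printed or logged.
// `readFile` returns a file's text, or null if it does not exist; it is
// injected so the logic can be exercised without touching the real FS.
function hasSpanoraKey(
  env: Record<string, string | undefined>,
  readFile: (path: string) => string | null,
): boolean {
  if (env.SPANORA_API_KEY) return true; // shell environment variable
  for (const file of [".env", ".env.local"]) {
    const text = readFile(file);
    if (text !== null && /^SPANORA_API_KEY=/m.test(text)) return true;
  }
  return false;
}
```

In a real run you would pass `process.env` and a reader built on `node:fs` (`existsSync` plus `readFileSync`), reporting only "set" or "not set" to the user.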

## 4. Language Detection

Determine the project language by checking for config files in the project root:

| File found | Language |
| --- | --- |
| `package.json` | JavaScript / TypeScript |
| `pyproject.toml` | Python |
| `setup.py` | Python |
| `requirements.txt` | Python |

If both JS and Python files are present, ask the user which part of the project to instrument.
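
The detection rule above can be sketched as follows (the helper and its injected `exists` predicate are illustrative, not part of any SDK):

```typescript
// Language detection per the table above. `exists` is injected so the
// same logic works against the real filesystem or a test double.
type ProjectLanguage = "js-ts" | "python" | "both" | "unknown";

const PYTHON_FILES = ["pyproject.toml", "setup.py", "requirements.txt"];

function detectLanguage(exists: (file: string) => boolean): ProjectLanguage {
  const js = exists("package.json");
  const py = PYTHON_FILES.some(exists);
  if (js && py) return "both"; // ask the user which part to instrument
  if (js) return "js-ts";
  if (py) return "python";
  return "unknown";
}
```

With `node:fs`, this would be called from the project root as `detectLanguage((f) => existsSync(f))`.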

## 5. Detection — Determine the Integration Pattern

### JavaScript / TypeScript

Read `package.json` and check `dependencies` and `devDependencies`:

| Dependency found | Pattern to use |
| --- | --- |
| `ai` | Pattern A — Vercel AI SDK |
| `@anthropic-ai/sdk` | Pattern B — Anthropic SDK |
| `openai` | Pattern C — OpenAI SDK |
| None of the above | Pattern D — Raw Core SDK |

If multiple are present, prefer in order: A > B > C. Use the pattern matching the SDK the user's code actually calls. If unsure, ask.
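
The table and its tie-break rule reduce to an ordered check. A sketch (the function name is illustrative; the dependency names are exactly those listed above):

```typescript
// Pattern selection from package.json, honoring the A > B > C preference
// when more than one SDK is present.
type Pkg = {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
};

function choosePattern(pkg: Pkg): string {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  if ("ai" in deps) return "A: Vercel AI SDK";
  if ("@anthropic-ai/sdk" in deps) return "B: Anthropic SDK";
  if ("openai" in deps) return "C: OpenAI SDK";
  return "D: Raw Core SDK";
}
```

Remember the caveat above: if the detected SDK is not the one the code actually calls, ask the user rather than trusting the table.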

### Python

Read `pyproject.toml` (or `requirements.txt` / `setup.py`) and check dependencies:

| Dependency found | Pattern to use |
| --- | --- |
| `langchain` | Pattern E — LangChain / LangGraph |

More Python patterns may be added in the future. If the user's Python project does not use LangChain, inform them that Spanora supports any Python framework via raw OpenTelemetry — refer them to the LangChain reference as a template for OTEL setup.

## 6. Package Manager Detection

### JavaScript / TypeScript

| File found | Package manager |
| --- | --- |
| `pnpm-lock.yaml` | pnpm |
| `yarn.lock` | yarn |
| `bun.lockb` | bun |
| `package-lock.json` | npm |

### Python

| File found | Package manager |
| --- | --- |
| `uv.lock` | uv |
| `poetry.lock` | poetry |
| `Pipfile.lock` | pipenv |
| Otherwise | pip |
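
Both lockfile tables reduce to an ordered first-match scan. A sketch (note one assumption: the tables above only specify a fallback for Python, so the npm fallback for JS here is an assumption, not documented behavior):

```typescript
// Ordered lockfile scan: the first file that exists decides the manager.
const JS_LOCKFILES: Array<[string, string]> = [
  ["pnpm-lock.yaml", "pnpm"],
  ["yarn.lock", "yarn"],
  ["bun.lockb", "bun"],
  ["package-lock.json", "npm"],
];

const PY_LOCKFILES: Array<[string, string]> = [
  ["uv.lock", "uv"],
  ["poetry.lock", "poetry"],
  ["Pipfile.lock", "pipenv"],
];

function detectPackageManager(
  exists: (file: string) => boolean,
  lang: "js" | "python",
): string {
  const table = lang === "js" ? JS_LOCKFILES : PY_LOCKFILES;
  const hit = table.find(([file]) => exists(file));
  if (hit) return hit[1];
  return lang === "python" ? "pip" : "npm"; // npm fallback is an assumption
}
```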

## 7. Install

### JavaScript / TypeScript

```bash
pnpm add @spanora-ai/sdk
```

or: `npm install @spanora-ai/sdk` / `yarn add @spanora-ai/sdk` / `bun add @spanora-ai/sdk`


### Python (LangChain)

```bash
pip install opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-instrumentation-langchain langgraph
```

or: `uv add ...` / `poetry add ...` / `pipenv install ...`

No Spanora SDK is needed for Python — tracing uses standard OpenTelemetry.

## 8. Integration — Read the Matching Reference

Based on the detected pattern, read the corresponding reference file for code examples and API usage:

JavaScript / TypeScript:

- Pattern A (Vercel AI SDK): read `references/vercel-ai.md`
- Pattern B (Anthropic SDK): read `references/anthropic.md`
- Pattern C (OpenAI SDK): read `references/openai.md`
- Pattern D (Raw Core SDK): read `references/core-sdk.md`

Python:

- Pattern E (LangChain / LangGraph): read `references/langchain-python.md`

**For JS/TS patterns, always also read `references/common.md`** for shared patterns: `init()`, `shutdown()`, tool tracking (`trackToolHandler`, `runTool`), multi-agent shared context, agent naming guidance, API key setup, and the migration checklist. Python patterns are self-contained in their reference file.

Apply the patterns from the reference files to the user's code. The reference files contain production-ready examples verified against the SDK source and integration tests.

## 9. Ensure Full Instrumentation Coverage

Every AI execution must produce at least one trace. For each LLM call site in the user's code, use the highest-fidelity approach available:

1. Auto-telemetry — `experimental_telemetry` for the Vercel AI SDK, auto-instrumentation for LangChain. Preferred when available — zero manual work.
2. Provider wrappers — `trackOpenAI`, `trackAnthropic`, `trackVercelAI` / `trackVercelAIStream`. Use when auto-telemetry is unavailable for a call site (e.g. tool-loop agents, custom agent patterns).
3. Core SDK functions — `trackLlm`, `trackLlmStream`, `recordLlm`. Fallback for any LLM call not covered by the above.

After applying the base integration, scan the user's code for any LLM call that would not produce a span. If found, wrap it with the appropriate tracking function from the list above. Do not leave blind spots.
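
The wrapping step can be illustrated with a generic sketch. Note that `trackLlm` below is a local stand-in defined in this snippet, not the real export from `@spanora-ai/sdk`; the real signature is documented in `references/core-sdk.md`. Only the shape of the fix matters: an uncovered call gets wrapped so it still emits exactly one span.

```typescript
// Local stand-in for a tracking wrapper (NOT the real @spanora-ai/sdk API):
// it runs the call and records one span, even when the call throws.
const spans: string[] = [];

async function trackLlm<T>(
  meta: { model: string; prompt: string },
  call: () => Promise<T>,
): Promise<T> {
  try {
    return await call();
  } finally {
    spans.push(meta.model); // stand-in for exporting a real span
  }
}

// An LLM call site that auto-telemetry does not cover...
async function rawCompletion(prompt: string): Promise<string> {
  return `echo: ${prompt}`; // placeholder for a real provider call
}

// ...wrapped so it no longer leaves a blind spot.
async function coveredCompletion(prompt: string): Promise<string> {
  return trackLlm({ model: "gpt-4o", prompt }, () => rawCompletion(prompt));
}
```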

## 10. Offer Optional Enrichments

After applying the base integration, mention these optional features to the user. Do not add them by default — only include them if the user's code has the relevant context available or the user asks for them:

- User & org context — `userId`, `orgId`, `agentSessionId` on `track()` calls. Links traces to end users, tenants, and sessions in the dashboard. Only add if the code has access to these values (e.g. from a request context, auth session, or API input).
- Operation type — `operation` on LLM meta (`trackLlm`, `trackOpenAI`, `trackAnthropic`, `recordLlm`). Defaults to `"chat"`. Set to `"embeddings"` for embedding calls or `"text_completion"` for completion calls. Only relevant when the user's code makes non-chat LLM calls.

Field name reference:

- `track()` uses `agent` (not `agentName`) for the agent name
- LLM tracking functions use `prompt` (not `promptInput`) for the input prompt
- LLM result/extractors use `output` (not `promptOutput`) for the output text

Each reference file has an "Optional Enrichments" section with code examples for these features.
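
The field-name rules can be captured in a small illustrative snippet. The interfaces below are local sketches derived only from the names documented in this section; the SDK's real types live in the reference files.

```typescript
// Local sketches of the documented field names (NOT the SDK's real types).
interface TrackOptions {
  agent: string;           // "agent", not "agentName"
  userId?: string;         // optional enrichment
  orgId?: string;          // optional enrichment
  agentSessionId?: string; // optional enrichment
}

interface LlmMeta {
  prompt: string;                                        // "prompt", not "promptInput"
  operation?: "chat" | "embeddings" | "text_completion"; // defaults to "chat"
}

interface LlmResult {
  output: string; // "output", not "promptOutput"
}

const opts: TrackOptions = { agent: "billing-agent", userId: "user_123" };
const meta: LlmMeta = { prompt: "Summarize this invoice" };
const result: LlmResult = { output: "The invoice totals $42." };
```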