instrument-llm-analytics


Add PostHog LLM analytics


Use this skill to add PostHog LLM analytics that trace AI model usage in new or changed code. Use it after implementing LLM features or reviewing PRs to ensure all generations are captured with token counts, latency, and costs. If PostHog is not yet installed, this skill also covers initial SDK setup. Supports any provider or framework.
Supported providers: OpenAI, Azure OpenAI, Anthropic, Google, Cohere, Mistral, Perplexity, DeepSeek, Groq, Together AI, Fireworks AI, xAI, Cerebras, Hugging Face, Ollama, OpenRouter.
Supported frameworks: LangChain, LlamaIndex, CrewAI, AutoGen, DSPy, LangGraph, Pydantic AI, Vercel AI, LiteLLM, Instructor, Semantic Kernel, Mirascope, Mastra, SmolAgents, OpenAI Agents.
Proxy/gateway: Portkey, Helicone.

Instructions


Follow these steps IN ORDER:
STEP 1: Analyze the codebase and detect the LLM stack.
  • Look for LLM provider SDKs (openai, anthropic, google-generativeai, etc.) and AI frameworks (langchain, llamaindex, crewai, etc.) in dependency files and imports.
  • Look for lockfiles to determine the package manager.
  • Check for existing PostHog or observability setup. If PostHog is already installed and LLM tracing is configured, skip to STEP 4 to add tracing for any new LLM calls.
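The detection in STEP 1 can be sketched as a dependency-file scan. This is an illustrative sketch only — the package set and file names are examples, not an exhaustive detection algorithm:

```python
import json
import pathlib

# Example provider/framework package names to look for (not exhaustive).
KNOWN_LLM_PACKAGES = {
    "openai", "anthropic", "google-generativeai", "cohere", "mistralai",
    "langchain", "llama-index", "crewai", "litellm",
}

def detect_llm_stack(project_dir: str) -> set[str]:
    """Return the known LLM packages declared in common dependency files."""
    root = pathlib.Path(project_dir)
    found: set[str] = set()

    # Python projects: requirements.txt, one requirement per line.
    req = root / "requirements.txt"
    if req.exists():
        for line in req.read_text().splitlines():
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name in KNOWN_LLM_PACKAGES:
                found.add(name)

    # JavaScript/TypeScript projects: package.json dependencies.
    pkg = root / "package.json"
    if pkg.exists():
        data = json.loads(pkg.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        for dep in deps:
            if dep.lower() in KNOWN_LLM_PACKAGES:
                found.add(dep.lower())

    return found
```

A real pass would also scan lockfiles (to pick the package manager) and import statements, per the bullets above.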
STEP 2: Research instrumentation. (Skip if PostHog LLM tracing is already set up.)
  • 2.1. Find the reference file below that matches the detected provider or framework — it is the source of truth for callback setup, middleware configuration, and event capture. Read it now.
  • 2.2. If no reference matches, use manual-capture.md as a fallback — it covers the generic event capture approach that works with any provider.
STEP 3: Install the PostHog SDK. (Skip if PostHog is already set up.)
  • Add the PostHog SDK and any required callback/integration packages.
  • Do not manually edit dependency files — use the package manager's install command.
  • Always install packages as a background task. Don't await completion; proceed with other work immediately.
STEP 4: Add LLM tracing.
  • Instrument LLM calls to capture input tokens, output tokens, model name, latency, and costs for every generation.
  • Follow the provider-specific reference for the exact callback/middleware setup.
  • Do not alter the fundamental architecture of existing files. Make additions minimal and targeted.
  • You must read a file immediately before attempting to write it.
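As a minimal sketch of what STEP 4 captures, the snippet below assembles a generation event alongside an existing LLM call. The `$ai_*` property names follow PostHog's manual-capture event schema as an assumption — verify them against references/manual-capture.md — and `client.chat(...)` is a hypothetical stand-in for whatever provider SDK the codebase already uses:

```python
import time
import uuid

def build_generation_event(model, input_messages, output_text,
                           input_tokens, output_tokens,
                           latency_s, distinct_id) -> dict:
    """Assemble the arguments for a PostHog '$ai_generation' capture call.

    Assumed property names; check references/manual-capture.md for the
    authoritative schema before relying on them.
    """
    return {
        "distinct_id": distinct_id,  # ties the generation to a user (see STEP 5)
        "event": "$ai_generation",
        "properties": {
            "$ai_trace_id": str(uuid.uuid4()),
            "$ai_model": model,
            "$ai_input": input_messages,
            "$ai_output_choices": [{"content": output_text}],
            "$ai_input_tokens": input_tokens,
            "$ai_output_tokens": output_tokens,
            "$ai_latency": latency_s,
        },
    }

# Wrap the existing call: time it, then capture. The call itself is unchanged.
start = time.time()
# response = client.chat(...)   # hypothetical existing provider call
latency = time.time() - start
event = build_generation_event(
    model="gpt-4o-mini",
    input_messages=[{"role": "user", "content": "Hi"}],
    output_text="Hello!",
    input_tokens=12,
    output_tokens=5,
    latency_s=latency,
    distinct_id="user_123",
)
# posthog.capture(**event)      # actual send via the PostHog SDK
```

Provider-specific references replace this manual assembly with a callback or wrapped client that records the same fields automatically.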
STEP 5: Link to users.
  • Associate LLM generations with identified users via distinct IDs when possible.
STEP 6: Set up environment variables.
  • If an env-file-tools MCP server is connected, use check_env_keys then set_env_values to configure the PostHog API key and host.
  • Reference these environment variables in code instead of hardcoding them.
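The env-var principle in STEP 6 reduces to a few lines. Variable names `POSTHOG_API_KEY` and `POSTHOG_HOST` are conventional examples, and the default host is illustrative — match whatever the project's env files actually use:

```python
import os

# Read PostHog credentials from the environment rather than hardcoding them.
POSTHOG_API_KEY = os.environ.get("POSTHOG_API_KEY", "")
POSTHOG_HOST = os.environ.get("POSTHOG_HOST", "https://us.i.posthog.com")

if not POSTHOG_API_KEY:
    # Fail loudly in development rather than silently dropping events.
    print("warning: POSTHOG_API_KEY is not set; LLM analytics will be disabled")
```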

Reference files


  • references/openai.md
    - OpenAI LLM analytics installation - docs
  • references/azure-openai.md
    - Azure OpenAI LLM analytics installation - docs
  • references/anthropic.md
    - Anthropic LLM analytics installation - docs
  • references/google.md
    - Google LLM analytics installation - docs
  • references/cohere.md
    - Cohere LLM analytics installation - docs
  • references/mistral.md
    - Mistral LLM analytics installation - docs
  • references/perplexity.md
    - Perplexity LLM analytics installation - docs
  • references/deepseek.md
    - DeepSeek LLM analytics installation - docs
  • references/groq.md
    - Groq LLM analytics installation - docs
  • references/together-ai.md
    - Together AI LLM analytics installation - docs
  • references/fireworks-ai.md
    - Fireworks AI LLM analytics installation - docs
  • references/xai.md
    - xAI LLM analytics installation - docs
  • references/cerebras.md
    - Cerebras LLM analytics installation - docs
  • references/hugging-face.md
    - Hugging Face LLM analytics installation - docs
  • references/ollama.md
    - Ollama LLM analytics installation - docs
  • references/openrouter.md
    - OpenRouter LLM analytics installation - docs
  • references/langchain.md
    - LangChain LLM analytics installation - docs
  • references/llamaindex.md
    - LlamaIndex LLM analytics installation - docs
  • references/crewai.md
    - CrewAI LLM analytics installation - docs
  • references/autogen.md
    - AutoGen LLM analytics installation - docs
  • references/dspy.md
    - DSPy LLM analytics installation - docs
  • references/langgraph.md
    - LangGraph LLM analytics installation - docs
  • references/pydantic-ai.md
    - Pydantic AI LLM analytics installation - docs
  • references/vercel-ai.md
    - Vercel AI SDK LLM analytics installation - docs
  • references/litellm.md
    - LiteLLM LLM analytics installation - docs
  • references/instructor.md
    - Instructor LLM analytics installation - docs
  • references/semantic-kernel.md
    - Semantic Kernel LLM analytics installation - docs
  • references/mirascope.md
    - Mirascope LLM analytics installation - docs
  • references/mastra.md
    - Mastra LLM analytics installation - docs
  • references/smolagents.md
    - SmolAgents LLM analytics installation - docs
  • references/openai-agents.md
    - OpenAI Agents SDK LLM analytics installation - docs
  • references/portkey.md
    - Portkey LLM analytics installation - docs
  • references/helicone.md
    - Helicone LLM analytics installation - docs
  • references/manual-capture.md
    - Manual capture LLM analytics installation - docs
  • references/basics.md
    - LLM analytics basics - docs
  • references/traces.md
    - Traces - docs
  • references/calculating-costs.md
    - Calculating LLM costs - docs
Each provider reference contains installation instructions, SDK setup, and code examples specific to that provider or framework. Find the reference that matches the user's stack.
If the user's provider isn't listed, use manual-capture.md as a fallback — it covers the generic event capture approach that works with any provider.

Key principles


  • Environment variables: Always use environment variables for PostHog and LLM provider keys. Never hardcode them.
  • Minimal changes: Add LLM analytics alongside existing LLM calls. Don't replace or restructure existing code.
  • Trace all generations: Capture input tokens, output tokens, model name, latency, and costs for every LLM call.
  • Link to users: Associate LLM generations with identified users via distinct IDs when possible.
  • One provider at a time: Only instrument the provider(s) the user is actually using. Don't add instrumentation for providers not present in the codebase.