Integrate Olakai into Existing AI Code


This skill guides you through adding Olakai monitoring to an existing AI agent or LLM-powered application with minimal code changes.
For full SDK documentation, see: https://app.olakai.ai/llms.txt

Prerequisites

  • Existing working AI agent/application using OpenAI, Anthropic, or another LLM
  • Olakai CLI installed and authenticated (npm install -g olakai-cli && olakai login)
  • Olakai API key for your agent (get via CLI: olakai agents get AGENT_ID --json | jq '.apiKey')
  • Node.js 18+ (for TypeScript) or Python 3.7+ (for Python)
Note: Each agent can have its own API key. Create one with olakai agents create --name "Name" --with-api-key

Why Custom KPIs Are Essential

Adding monitoring is only the first step. The real value of Olakai comes from tracking custom KPIs specific to your agent's business purpose.
Without KPIs configured:
  • Only basic token counts and request data
  • No aggregated business KPIs on dashboard
  • No alerting capabilities
  • No ROI tracking
With KPIs configured:
  • Custom KPIs (items processed, success rates, quality scores)
  • Trend analysis and performance dashboards
  • Threshold-based alerting
  • Business value calculations
Plan to configure at least 2-4 KPIs that answer: "How do I know this agent is performing well?"
KPIs are unique per agent. If adding monitoring to an agent that needs the same KPIs as another already-configured agent, you must still create new KPI definitions for this agent. KPIs cannot be shared or reused across agents.

Understanding the customData to KPI Pipeline

Before adding monitoring, understand how custom data flows through Olakai:
SDK customData → CustomDataConfig (Schema) → Context Variable → KPI Formula → kpiData
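As a concrete sketch of that flow (the field and KPI names here are illustrative examples, not part of any real agent):

```typescript
// Illustrative walk-through of the customData-to-KPI pipeline.
// 1. The SDK sends a customData field on an event:
const customData = { StepCount: 3 };

// 2. A CustomDataConfig registers that field for the agent (via CLI):
//    olakai custom-data create --agent-id ID --name "StepCount" --type NUMBER

// 3. The registered field becomes a context variable that a KPI formula
//    can reference, e.g. --formula "StepCount" with AVERAGE aggregation.

// 4. Olakai evaluates the formula per event, producing kpiData:
const kpiData = { "Avg Steps per Document": customData.StepCount };
```

If step 2 were skipped, the field would still be stored but step 3 would never see it, and the KPI value would come back null.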
添加监控前,请先了解自定义数据在Olakai中的流转路径:
SDK customData → CustomDataConfig( Schema)→ 上下文变量 → KPI公式 → kpiData

Critical Rules

Rule and consequence:
  • Only CustomDataConfig fields become variables: unregistered customData fields are NOT usable in KPIs.
  • Formula evaluation is case-insensitive: stepCount, STEPCOUNT, and StepCount all work in formulas.
  • NUMBER configs need numeric values: don't send "5" (a string), send 5 (a number).
IMPORTANT: The SDK accepts any JSON in customData, but only fields registered as CustomDataConfigs are processed. Unregistered fields are stored but cannot be used in KPIs.
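One way to enforce the NUMBER rule at the call site is a small coercion helper. This helper is our own sketch, not part of the SDK:

```typescript
// Guard for NUMBER-typed CustomDataConfigs: coerce numeric strings like
// "5" to 5, and fail loudly on non-numeric values rather than silently
// sending a string the KPI engine cannot use.
function asNumber(value: unknown): number {
  const n = typeof value === "number" ? value : Number(value);
  if (!Number.isFinite(n)) {
    throw new TypeError(`customData value is not numeric: ${String(value)}`);
  }
  return n;
}

// Usage sketch: customData: { StepCount: asNumber(stepCountFromConfig) }
```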

Quick Start (5-Minute Integration)

For TypeScript/JavaScript

1. Install the SDK:
bash
npm install @olakai/sdk
2. Add tracking after your LLM call:
Before:
typescript
import OpenAI from "openai";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userMessage }],
});
After:
typescript
import OpenAI from "openai";
import { olakaiConfig, olakai } from "@olakai/sdk";

olakaiConfig({ apiKey: process.env.OLAKAI_API_KEY });

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userMessage }],
});

// Track the interaction (fire-and-forget)
olakai("event", "ai_activity", {
  prompt: userMessage,
  response: response.choices[0].message.content,
  tokens: response.usage?.total_tokens,
  userEmail: user.email,
  task: "Customer Experience",
});

For Python

1. Install the SDK:
bash
pip install olakai-sdk
2. Add tracking after your LLM call:
Before:
python
import os
from openai import OpenAI
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_message}],
)
After:
python
import os
from openai import OpenAI
from olakaisdk import olakai_config, olakai, OlakaiEventParams

olakai_config(os.getenv("OLAKAI_API_KEY"))
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_message}],
)

Track the interaction:
python
olakai("event", "ai_activity", OlakaiEventParams(
    prompt=user_message,
    response=response.choices[0].message.content,
    tokens=response.usage.total_tokens,
    userEmail=user.email,
    task="Customer Experience",
))

---

Detailed Integration Guide

Step 1: Identify Your Integration Pattern

Pattern A: Single LLM Client. You have one OpenAI/Anthropic client used throughout your app. Use the fire-and-forget olakai() call after each completion.
Pattern B: Multiple LLM Calls per Request. Your agent makes several LLM calls to complete one task. Use manual event tracking to aggregate calls into a single event.
Pattern C: Streaming Responses. You stream LLM responses to users. Track after the stream completes with the full accumulated response.
Pattern D: Third-Party LLM (not OpenAI/Anthropic). You use Perplexity, Groq, local models, etc. Use manual event tracking via olakai() or olakai_event().

Step 2: Install and Configure

TypeScript Setup

typescript
// lib/olakai.ts - Initialize once at app startup
import { olakaiConfig } from "@olakai/sdk";

olakaiConfig({
  apiKey: process.env.OLAKAI_API_KEY!,
  debug: process.env.NODE_ENV === "development",
});

Python Setup

python
# lib/olakai.py - Initialize once at app startup
import os
from olakaisdk import olakai_config

olakai_config(
    api_key=os.getenv("OLAKAI_API_KEY"),
    debug=os.getenv("DEBUG") == "true",
)

Step 3: Add Context to Calls

Adding User Information

TypeScript:
typescript
olakai("event", "ai_activity", {
  prompt: userMessage,
  response: aiResponse,
  userEmail: user.email,
  task: "Customer Experience",
});
Python:
python
olakai("event", "ai_activity", OlakaiEventParams(
    prompt=user_message,
    response=ai_response,
    userEmail=user.email,
    task="Customer Experience",
))

Grouping Events by Conversation (chatId)

For assistive AI (chatbots/copilots), use chatId to group multiple turns of a conversation together. This is required for CHAT-scoped KPIs that analyze the full conversation.
typescript
olakai("event", "ai_activity", {
  prompt: userMessage,
  response: aiResponse,
  chatId: conversationId,  // groups turns in the same conversation
  userEmail: user.email,
});
When to use chatId: if your agent handles multi-turn conversations and you want KPIs that evaluate the entire conversation (e.g., sentiment scoring, satisfaction), pass a consistent chatId across all turns.
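The pattern can be sketched end to end. The `track` function below is a stand-in for the real `olakai("event", ...)` call, so the chatId handling is visible and checkable:

```typescript
// Stand-in for olakai("event", "ai_activity", ...), used only to show
// that every turn of one conversation carries the same chatId.
type TurnEvent = { prompt: string; response: string; chatId: string };
const sent: TurnEvent[] = [];
const track = (event: TurnEvent) => sent.push(event);

// Generate the conversation id once, then reuse it for every turn.
const conversationId = "conv-42"; // in practice e.g. crypto.randomUUID()
const turns: Array<[string, string]> = [
  ["Hi", "Hello! How can I help?"],
  ["Reset my password", "Done. Check your email."],
];
for (const [prompt, response] of turns) {
  track({ prompt, response, chatId: conversationId });
}
```

Generating a fresh chatId per turn would split one conversation into many single-turn "conversations" and break CHAT-scoped KPIs.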
Adding Custom Data

IMPORTANT: Only send fields you've registered as CustomDataConfigs (Step 5.3). Unregistered fields are stored but cannot be used in KPIs.
Only send data you'll use in KPIs or for filtering. Don't duplicate fields already tracked by the platform (session ID, agent ID, user email, timestamps, token count, model, provider — all tracked automatically).
TypeScript:
typescript
olakai("event", "ai_activity", {
  prompt: userMessage,
  response: aiResponse,
  userEmail: user.email,
  customData: {
    // Only include fields registered as CustomDataConfigs
    Department: user.department,
    ProjectId: currentProject.id,
    Priority: ticket.priority,
  },
});

Step 4: Handle Agentic Workflows

If your agent makes multiple LLM calls per task, aggregate them into a single event.
taskExecutionId is critical for multi-agent workflows. If multiple agents collaborate on the same task, the orchestrator must generate ONE taskExecutionId and pass it to all agents. This is how Olakai correlates cross-agent work as a single logical task.
typescript
async function processDocument(doc: Document): Promise<string> {
  const startTime = Date.now();
  let totalTokens = 0;

  // Step 1: Extract
  const extraction = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: `Extract from: ${doc.content}` }],
  });
  totalTokens += extraction.usage?.total_tokens ?? 0;

  // Step 2: Analyze
  const analysis = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: `Analyze: ${extraction.choices[0].message.content}` }],
  });
  totalTokens += analysis.usage?.total_tokens ?? 0;

  const result = analysis.choices[0].message.content ?? "";

  // Track the complete workflow as ONE event
  olakai("event", "ai_activity", {
    prompt: `Process document: ${doc.title}`,
    response: result,
    tokens: totalTokens,
    requestTime: Date.now() - startTime,
    taskExecutionId: crypto.randomUUID(),
    task: "Data Processing & Analysis",
    customData: {
      DocumentType: doc.type,
      StepCount: 2,
      Success: 1,
    },
  });

  return result;
}
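For multi-agent collaboration, the orchestrator owns the id. A minimal sketch (the agent functions here are hypothetical placeholders; each would pass the id into its own `olakai("event", ...)` call alongside its prompts and responses):

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical worker agents: each reports which taskExecutionId it used.
async function extractorAgent(taskExecutionId: string) {
  // ...its own LLM calls and olakai("event", ...) tracking go here...
  return { agent: "extractor", taskExecutionId };
}
async function analyzerAgent(taskExecutionId: string) {
  return { agent: "analyzer", taskExecutionId };
}

// The orchestrator generates ONE id and hands the same value to every
// collaborator, so Olakai can correlate their events as one logical task.
async function orchestrate() {
  const taskExecutionId = randomUUID();
  return Promise.all([
    extractorAgent(taskExecutionId),
    analyzerAgent(taskExecutionId),
  ]);
}
```

The common mistake is each agent calling randomUUID() itself, which produces unrelated ids and unrelated events.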

Step 5: Configure Custom KPIs (Essential for Value)

This step is required to get real value from Olakai. Without KPIs, you're only tracking events — not gaining actionable insights.

5.1 Install CLI (if not already)

bash
npm install -g olakai-cli
olakai login

5.2 Register Your Agent

bash
olakai agents create \
  --name "Document Processor" \
  --description "Processes and summarizes documents" \
  --workflow WORKFLOW_ID \
  --with-api-key

5.2.1 Ensure Agent Has a Workflow

Every agent MUST belong to a workflow, even if it's the only agent.
bash
# Check if the agent has a workflow
olakai agents get YOUR_AGENT_ID --json | jq '.workflowId'

# If null, create a workflow and associate it
olakai workflows create --name "Your Workflow Name" --json
olakai agents update YOUR_AGENT_ID --workflow WORKFLOW_ID

5.3 Create Custom Data Configs FIRST

IMPORTANT: Create configs for ALL fields you send in customData. Only registered fields can be used in KPIs. CustomDataConfigs are agent-scoped.
bash
olakai custom-data create --agent-id YOUR_AGENT_ID --name "DocumentType" --type STRING
olakai custom-data create --agent-id YOUR_AGENT_ID --name "StepCount" --type NUMBER
olakai custom-data create --agent-id YOUR_AGENT_ID --name "Success" --type NUMBER

# Verify all configs exist for this agent
olakai custom-data list --agent-id YOUR_AGENT_ID

5.4 Create KPIs

bash
olakai kpis create \
  --name "Documents Processed" \
  --agent-id YOUR_AGENT_ID \
  --calculator-id formula \
  --formula "IF(Success = 1, 1, 0)" \
  --aggregation SUM

olakai kpis create \
  --name "Avg Steps per Document" \
  --agent-id YOUR_AGENT_ID \
  --calculator-id formula \
  --formula "StepCount" \
  --aggregation AVERAGE

5.5 Update SDK Code to Match

After creating configs, ensure your SDK code sends exactly those field names:
typescript
customData: {
  DocumentType: doc.type,     // Matches CustomDataConfig "DocumentType"
  StepCount: 2,               // Matches CustomDataConfig "StepCount"
  Success: succeeded ? 1 : 0, // Matches CustomDataConfig "Success"; `succeeded` is your own boolean
}

Framework-Specific Integrations

Next.js API Routes

typescript
// app/api/chat/route.ts
import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { olakai } from "@olakai/sdk";
import { auth } from "@/auth";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: NextRequest) {
  const session = await auth();
  if (!session?.user) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const { message } = await req.json();

  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: message }],
  });

  olakai("event", "ai_activity", {
    prompt: message,
    response: response.choices[0].message.content,
    userEmail: session.user.email!,
    task: "Customer Experience",
  });

  return NextResponse.json({ reply: response.choices[0].message.content });
}

FastAPI (Python)

python
import os
from fastapi import FastAPI, Depends
from openai import OpenAI
from olakaisdk import olakai_config, olakai, OlakaiEventParams

app = FastAPI()

@app.on_event("startup")
async def startup():
    olakai_config(os.getenv("OLAKAI_API_KEY"))

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

@app.post("/chat")
async def chat(message: str, user: User = Depends(get_current_user)):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": message}]
    )

    olakai("event", "ai_activity", OlakaiEventParams(
        prompt=message,
        response=response.choices[0].message.content,
        userEmail=user.email,
        task="Customer Experience",
    ))

    return {"reply": response.choices[0].message.content}

Handling Edge Cases

Streaming Responses

Track after the stream completes with the full response:
typescript
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userMessage }],
  stream: true,
});

let fullResponse = "";
for await (const chunk of stream) {
  fullResponse += chunk.choices[0]?.delta?.content ?? "";
  res.write(chunk.choices[0]?.delta?.content ?? "");
}

// Track after stream completes
olakai("event", "ai_activity", {
  prompt: userMessage,
  response: fullResponse,
  userEmail: user.email,
});

Error Handling

typescript
try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
  });
  return response.choices[0].message.content;
} catch (error) {
  // Track the failed attempt
  olakai("event", "ai_activity", {
    prompt: messages[messages.length - 1].content,
    response: `Error: ${error instanceof Error ? error.message : "Unknown"}`,
    task: "Software Development",
    customData: { Success: 0 },
  });
  throw error;
}

Non-OpenAI Providers

For Anthropic, Perplexity, or other providers, use manual tracking:
typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function callClaude(prompt: string): Promise<string> {
  const startTime = Date.now();

  const response = await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });

  const content = response.content[0].type === "text" ? response.content[0].text : "";

  olakai("event", "ai_activity", {
    prompt,
    response: content,
    tokens: response.usage.input_tokens + response.usage.output_tokens,
    requestTime: Date.now() - startTime,
    task: "Content Development",
  });

  return content;
}

Test-Validate-Iterate Cycle

Never assume your integration is working. Always validate by generating a test event and inspecting the actual data.

Step 1: Generate a Test Event

Run your application to trigger at least one LLM call.

Step 2: Fetch and Inspect the Event

bash
olakai activity list --limit 1 --json
olakai activity get EVENT_ID --json

Step 3: Validate Each Component

Check customData is present:
bash
olakai activity get EVENT_ID --json | jq '.customData'
Check KPIs are numeric (not strings or null):
bash
olakai activity get EVENT_ID --json | jq '.kpiData'
CORRECT:
json
{ "My KPI": 42 }
WRONG (formula stored as string):
json
{ "My KPI": "MyVariable" }
Fix:
olakai kpis update KPI_ID --formula "MyVariable"
WRONG (null value):
json
{ "My KPI": null }
Fix by ensuring:
  1. CustomDataConfig exists:
    olakai custom-data create --agent-id ID --name "MyVariable" --type NUMBER
  2. Field name case matches exactly (case-sensitive)
  3. SDK actually sends the field in customData
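These two failure modes can be checked mechanically. A hedged helper (our own sketch, operating on the kpiData object from `olakai activity get EVENT_ID --json`):

```typescript
// Inspect an event's kpiData and report the two failure modes above:
// a string value (formula stored as text) or a null value (missing or
// mismatched CustomDataConfig).
function checkKpiData(kpiData: Record<string, unknown>): string[] {
  const problems: string[] = [];
  for (const [name, value] of Object.entries(kpiData)) {
    if (value === null) {
      problems.push(`${name}: null (create the CustomDataConfig, check field-name case)`);
    } else if (typeof value === "string") {
      problems.push(`${name}: stored as string (fix the KPI formula via kpis update)`);
    }
  }
  return problems;
}
```

An empty result means every KPI evaluated to a number; anything else points at the matching fix above.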

Validation Flow

1. Trigger LLM call (generate event)
2. Fetch: olakai activity get ID --json
3. Event exists? NO → Check API key, SDK init, debug mode
4. customData correct? NO → Fix SDK customData parameter
5. kpiData numeric? NO → olakai kpis update ID --formula "X"
6. kpiData not null? NO → Create CustomDataConfig, check field name case
✅ Integration validated

KPI Formula Reference

Supported Operators

  • Arithmetic: +, -, *, /
  • Comparison: <, <=, =, <>, >=, >
  • Logical: AND, OR, NOT
  • Conditional: IF(condition, true_val, false_val)
  • Null handling: ISNA(value), ISDEFINED(value)

Common Formula Patterns

bash
--formula "StepCount"                          # passthrough
--formula "SuccessRate * 100"                  # percentage conversion
--formula "IF(Success = 1, 1, 0)"              # conditional counting
--formula "IF(PII detected, 1, 0)"             # built-in variable
--formula "IF(ISDEFINED(MyField), MyField, 0)" # null-safe

Aggregation Types

  • SUM: totals, counts
  • AVERAGE: rates, percentages

Quick Reference

typescript
// TypeScript — initialize once
import { olakaiConfig, olakai } from "@olakai/sdk";
olakaiConfig({ apiKey: process.env.OLAKAI_API_KEY });

// Track any interaction
olakai("event", "ai_activity", {
  prompt: "input",
  response: "output",
  tokens: 1500,
  requestTime: 5000,
  userEmail: "user@example.com",
  chatId: "conversation-id",
  taskExecutionId: "uuid-shared-across-agents",
  task: "Data Processing & Analysis",
  customData: { StepCount: 3, Success: 1 },
});

python
# Python — initialize once
import os
from olakaisdk import olakai_config, olakai, OlakaiEventParams
olakai_config(os.getenv("OLAKAI_API_KEY"))

# Track any interaction
olakai("event", "ai_activity", OlakaiEventParams(
    prompt="input",
    response="output",
    tokens=1500,
    requestTime=5000,
    userEmail="user@example.com",
    chatId="conversation-id",
    taskExecutionId="uuid-shared-across-agents",
    task="Data Processing & Analysis",
    customData={"StepCount": 3, "Success": 1},
))