pica-openai-agents


# PICA MCP Integration with the OpenAI Agents SDK

PICA provides a unified API platform that connects AI agents to third-party services (CRMs, email, calendars, databases, etc.) through MCP tool calling.

## PICA MCP Server

PICA exposes its capabilities through an MCP server distributed as `@picahq/mcp`. It uses stdio transport, running as a local subprocess via `npx`.

### MCP Configuration

```json
{
  "mcpServers": {
    "pica": {
      "command": "npx",
      "args": ["@picahq/mcp"],
      "env": {
        "PICA_SECRET": "your-pica-secret-key"
      }
    }
  }
}
```

- **Package:** `@picahq/mcp` (run via `npx`, no install needed)
- **Auth:** the `PICA_SECRET` environment variable (obtain from the PICA dashboard: https://app.picaos.com/settings/api-keys)
- **Transport:** stdio (standard input/output)

### Environment Variable

Always store the PICA secret in an environment variable; never hardcode it:

```bash
PICA_SECRET=sk_test_...
OPENAI_API_KEY=sk-...
```

Add them to `.env.local` (or equivalent) and document them in `.env.example`.
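As a hypothetical illustration, a matching `.env.example` could look like the following (placeholder values only; the real keys belong in `.env.local`, which stays out of version control):

```
# .env.example: copy to .env.local and fill in real values
OPENAI_API_KEY=sk-...
PICA_SECRET=sk_test_...
```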

## Using PICA with the OpenAI Agents SDK

The OpenAI Agents SDK (`@openai/agents`) has first-class MCP support via `MCPServerStdio`. No additional MCP client package is needed; the SDK handles tool discovery, conversion, and execution automatically.

### Required packages

```bash
pnpm add @openai/agents zod
```

- `@openai/agents`: the main SDK (includes `MCPServerStdio`, `Agent`, and `run`)
- `zod`: required by the SDK (v4+)

### Before implementing: look up the latest docs

The OpenAI Agents SDK API may change between versions. Always check the latest documentation before implementing.

### Integration pattern

1. Create an MCP server using `MCPServerStdio` with `command: "npx"` and `args: ["@picahq/mcp"]`.
2. Connect the server via `await mcpServer.connect()`.
3. Create an `Agent` with `mcpServers: [mcpServer]`; tools are discovered automatically.
4. Run the agent with `run(agent, input, { stream: true })`; the SDK handles the full agent loop (tool calls, execution, multi-step runs).
5. Stream events by iterating the result: handle `raw_model_stream_event` for text deltas and `run_item_stream_event` for tool calls.
6. Close the MCP server when done via `await mcpServer.close()`.

When passing environment variables, spread `process.env` so the subprocess inherits `PATH` and other system variables:

```typescript
env: {
  ...(process.env as Record<string, string>),
  PICA_SECRET: process.env.PICA_SECRET!,
}
```

### Minimal example

```typescript
import { Agent, run, MCPServerStdio } from "@openai/agents";

const mcpServer = new MCPServerStdio({
  name: "PICA MCP Server",
  command: "npx",
  args: ["@picahq/mcp"],
  env: {
    ...(process.env as Record<string, string>),
    PICA_SECRET: process.env.PICA_SECRET!,
  },
});

await mcpServer.connect();

try {
  const agent = new Agent({
    name: "PICA Assistant",
    model: "gpt-4o-mini",
    instructions: "You are a helpful assistant.",
    mcpServers: [mcpServer],
  });

  // Non-streaming
  const result = await run(agent, "List my connected integrations");
  console.log(result.finalOutput);

  // Streaming
  const streamResult = await run(agent, "List my connected integrations", {
    stream: true,
  });
  for await (const event of streamResult) {
    if (event.type === "raw_model_stream_event") {
      const data = event.data as Record<string, unknown>;
      if (data.type === "response.output_text.delta") {
        process.stdout.write(data.delta as string);
      }
    }
  }
  await streamResult.completed;
} finally {
  await mcpServer.close();
}
```

## Streaming SSE events for a chat UI

When building a Next.js API route, stream responses as SSE events using a `ReadableStream`. Emit events in this format for compatibility with the `PythonChat` frontend component:

- `{ type: "text", content: "..." }`: streamed text chunks
- `{ type: "tool_start", name: "tool_name", input: "..." }`: tool execution starting
- `{ type: "tool_end", name: "tool_name", output: "..." }`: tool execution result
- `{ type: "error", content: "..." }`: error messages
- `data: [DONE]`: stream finished
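A small serialization helper keeps the route code tidy. This is a sketch, not part of the SDK or the `PythonChat` component; the event shapes simply follow the list above, and the helper names are our own:

```typescript
// Chat event shapes matching the SSE format described above.
type ChatEvent =
  | { type: "text"; content: string }
  | { type: "tool_start"; name: string; input: string }
  | { type: "tool_end"; name: string; output: string }
  | { type: "error"; content: string };

// Serialize one event as an SSE "data:" frame (blank line terminates the frame).
function sseFrame(event: ChatEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// The terminal frame the frontend watches for.
const SSE_DONE = "data: [DONE]\n\n";

// Sketch: inside a route handler, enqueue encoded frames on a ReadableStream.
const encoder = new TextEncoder();
const stream = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(encoder.encode(sseFrame({ type: "text", content: "Hello" })));
    controller.enqueue(encoder.encode(SSE_DONE));
    controller.close();
  },
});
```

The route would then return `new Response(stream, { headers: { "Content-Type": "text/event-stream" } })`.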

## Handling streaming events

The SDK emits three event types when streaming:

| Event Type | Purpose | Key Fields |
| --- | --- | --- |
| `raw_model_stream_event` | Raw model token deltas | `data.type`, `data.delta` |
| `run_item_stream_event` | Tool calls, outputs, messages | `item.rawItem.type`, `item.rawItem.*` |
| `agent_updated_stream_event` | Agent switched (handoff) | `agent.name` |

For text streaming, match `data.type === "response.output_text.delta"` and read `data.delta`.

For tool events, check `item.rawItem.type`:

- `"function_call"`: the tool was invoked (has `call_id`, `name`, `arguments`)
- `"function_call_output"`: the tool returned (has `call_id` and `output`, but no `name`; track names via a `Map<call_id, name>`)

**Important:** `run_item_stream_event` may fire multiple times for the same tool call (created, in-progress, completed). Use a `Set<call_id>` to deduplicate `tool_start` events.

**Fallback:** after the stream loop completes, check `result.finalOutput`. If no text deltas were streamed (e.g., the model returned a single non-streamed response), send `finalOutput` as a text event.
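The name tracking and deduplication described above can be sketched as a small reducer over run items. The types here are simplified stand-ins for the SDK's payloads (field names per the description above), so treat the exact shapes as assumptions:

```typescript
// Simplified stand-in for the SDK's rawItem payloads (assumed shapes).
type RawItem =
  | { type: "function_call"; call_id: string; name: string; arguments: string }
  | { type: "function_call_output"; call_id: string; output: string };

type ToolEvent =
  | { type: "tool_start"; name: string; input: string }
  | { type: "tool_end"; name: string; output: string };

// Convert a sequence of run items into deduplicated tool events.
function toToolEvents(items: RawItem[]): ToolEvent[] {
  const names = new Map<string, string>(); // call_id -> tool name
  const started = new Set<string>();       // call_ids already announced
  const events: ToolEvent[] = [];

  for (const item of items) {
    if (item.type === "function_call") {
      names.set(item.call_id, item.name);
      // The same call may fire multiple times (created, in-progress, completed);
      // only emit tool_start once per call_id.
      if (!started.has(item.call_id)) {
        started.add(item.call_id);
        events.push({ type: "tool_start", name: item.name, input: item.arguments });
      }
    } else {
      // Output events lack `name`; recover it from the map by call_id.
      const name = names.get(item.call_id) ?? "unknown";
      events.push({ type: "tool_end", name, output: item.output });
    }
  }
  return events;
}
```

Feeding this reducer the items from each `run_item_stream_event` yields exactly one `tool_start` and one `tool_end` per call, with the tool name attached to both.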

## Multi-turn input format

Pass conversation history as an array of message objects:

```typescript
const input = messages.map((m: { role: string; content: string }) => ({
  role: m.role as "user" | "assistant",
  content: m.content,
}));

const result = await run(agent, input, { stream: true });
```

## Checklist

When setting up PICA MCP with the OpenAI Agents SDK, verify that:

- `@openai/agents` is installed
- `zod` (v4+) is installed
- `OPENAI_API_KEY` is set in `.env.local`
- `PICA_SECRET` is set in `.env.local`
- `.env.example` documents both `OPENAI_API_KEY` and `PICA_SECRET`
- `MCPServerStdio` uses `command: "npx"` and `args: ["@picahq/mcp"]`
- The full `process.env` is spread into the MCP server's `env` option
- `mcpServer.connect()` is called before creating the agent
- The agent has `mcpServers: [mcpServer]` so tools are auto-discovered
- `run()` is called with `{ stream: true }` for streaming responses
- `result.completed` is awaited after iterating the stream
- The code falls back to `result.finalOutput` if no text deltas were streamed
- Tool call names are tracked by `call_id` (output events lack `name`)
- Tool start events are deduplicated with a `Set<call_id>`
- `mcpServer.close()` is called in a `finally` block