pica-openai-agents
PICA MCP Integration with the OpenAI Agents SDK
PICA provides a unified API platform that connects AI agents to third-party services (CRMs, email, calendars, databases, etc.) through MCP tool calling.
PICA MCP Server
PICA exposes its capabilities through an MCP server distributed as `@picahq/mcp`. It uses stdio transport — it runs as a local subprocess via `npx`.

MCP Configuration
```json
{
  "mcpServers": {
    "pica": {
      "command": "npx",
      "args": ["@picahq/mcp"],
      "env": {
        "PICA_SECRET": "your-pica-secret-key"
      }
    }
  }
}
```

- Package: `@picahq/mcp` (run via `npx`, no install needed)
- Auth: `PICA_SECRET` environment variable (obtain from the PICA dashboard: https://app.picaos.com/settings/api-keys)
- Transport: stdio (standard input/output)
Environment Variable
Always store the PICA secret in an environment variable, never hardcode it:

```
PICA_SECRET=sk_test_...
OPENAI_API_KEY=sk-...
```

Add them to `.env.local` (or equivalent) and document them in `.env.example`.
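As a sketch of failing fast on a missing key at startup (the `requireEnv` helper is illustrative, not part of either SDK):

```typescript
// Illustrative startup guard: throw immediately if a required
// environment variable is missing, instead of failing mid-request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Call once at startup, before constructing the MCP server or agent:
// const picaSecret = requireEnv("PICA_SECRET");
// const openaiKey = requireEnv("OPENAI_API_KEY");
```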
Using PICA with the OpenAI Agents SDK
The OpenAI Agents SDK (`@openai/agents`) has first-class MCP support via `MCPServerStdio`. No additional MCP client package is needed — the SDK handles tool discovery, conversion, and execution automatically.

Required packages
```bash
pnpm add @openai/agents zod
```

- `@openai/agents`: Main SDK (includes `Agent`, `run`, `MCPServerStdio`)
- `zod`: Required by the SDK (v4+)
Before implementing: look up the latest docs

The OpenAI Agents SDK API may change between versions. Always check the latest docs first.
Integration pattern

- Create an MCP server using `MCPServerStdio` with `command: "npx"` and `args: ["@picahq/mcp"]`
- Connect the server via `await mcpServer.connect()`
- Create an `Agent` with `mcpServers: [mcpServer]` — tools are discovered automatically
- Run the agent with `run(agent, input, { stream: true })` — the SDK handles the full agent loop (tool calls, execution, multi-step)
- Stream events by iterating the result — handle `raw_model_stream_event` for text deltas and `run_item_stream_event` for tool calls
- Close the MCP server when done via `await mcpServer.close()`

When passing environment variables, spread `process.env` so the subprocess inherits `PATH` and other system vars:

```typescript
env: {
  ...(process.env as Record<string, string>),
  PICA_SECRET: process.env.PICA_SECRET!,
}
```

Minimal example
```typescript
import { Agent, run, MCPServerStdio } from "@openai/agents";

const mcpServer = new MCPServerStdio({
  name: "PICA MCP Server",
  command: "npx",
  args: ["@picahq/mcp"],
  env: {
    ...(process.env as Record<string, string>),
    PICA_SECRET: process.env.PICA_SECRET!,
  },
});

await mcpServer.connect();

try {
  const agent = new Agent({
    name: "PICA Assistant",
    model: "gpt-4o-mini",
    instructions: "You are a helpful assistant.",
    mcpServers: [mcpServer],
  });

  // Non-streaming
  const result = await run(agent, "List my connected integrations");
  console.log(result.finalOutput);

  // Streaming
  const streamResult = await run(agent, "List my connected integrations", {
    stream: true,
  });
  for await (const event of streamResult) {
    if (event.type === "raw_model_stream_event") {
      const data = event.data as Record<string, unknown>;
      if (data.type === "response.output_text.delta") {
        process.stdout.write(data.delta as string);
      }
    }
  }
  await streamResult.completed;
} finally {
  await mcpServer.close();
}
```
Streaming SSE events for a chat UI
When building a Next.js API route, stream responses as SSE events using a `ReadableStream`. Emit events in this format for compatibility with the frontend `PythonChat` component:

- `{ type: "text", content: "..." }` — streamed text chunks
- `{ type: "tool_start", name: "tool_name", input: "..." }` — tool execution starting
- `{ type: "tool_end", name: "tool_name", output: "..." }` — tool execution result
- `{ type: "error", content: "..." }` — error messages
- `data: [DONE]` — stream finished
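As a sketch, a route handler emitting this format might look like the following (the `SseEvent` type and handler shape are illustrative; the agent loop that produces events is elided):

```typescript
// Sketch of a Next.js App Router route that streams SSE events in the
// format listed above. Only the framing and headers are shown.
type SseEvent =
  | { type: "text"; content: string }
  | { type: "tool_start"; name: string; input: string }
  | { type: "tool_end"; name: string; output: string }
  | { type: "error"; content: string };

const encoder = new TextEncoder();

function encodeSse(event: SseEvent): Uint8Array {
  // One SSE message per event: "data: <json>\n\n"
  return encoder.encode(`data: ${JSON.stringify(event)}\n\n`);
}

export async function POST(_req: Request): Promise<Response> {
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        // Run the agent here and enqueue one event per chunk, e.g.:
        controller.enqueue(encodeSse({ type: "text", content: "Hello" }));
      } catch (err) {
        controller.enqueue(encodeSse({ type: "error", content: String(err) }));
      } finally {
        // Terminate the stream with the [DONE] sentinel.
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      }
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```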
Handling streaming events

The SDK emits three event types when streaming:

| Event Type | Purpose | Key Fields |
|---|---|---|
| `raw_model_stream_event` | Raw model token deltas | `data.type`, `data.delta` |
| `run_item_stream_event` | Tool calls, outputs, messages | `item.rawItem` |
| `agent_updated_stream_event` | Agent switched (handoff) | `agent` |
For text streaming, match `data.type === "response.output_text.delta"` and read `data.delta`.

For tool events, check `item.rawItem.type`:

- `"function_call"` — tool was invoked (has `call_id`, `name`, `arguments`)
- `"function_call_output"` — tool returned (has `call_id` and `output`, but no `name` — track names via a `Map<call_id, name>`)

Important: `run_item_stream_event` may fire multiple times for the same tool call (created, in-progress, completed). Use a `Set<call_id>` to deduplicate `tool_start` events.

Fallback: After the stream loop completes, check `result.finalOutput` — if no text deltas were streamed (e.g., the model returned a single non-streamed response), send `finalOutput` as a text event.
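The bookkeeping described above can be sketched as follows (`onToolItem` is an illustrative helper, not an SDK API; the field names follow the list above):

```typescript
// Sketch: recover tool names for output events (which lack `name`)
// via a Map keyed by call_id, and deduplicate repeated
// function_call events with a Set.
type ToolEvent = { type: "tool_start" | "tool_end"; name: string };

const toolNames = new Map<string, string>(); // call_id -> tool name
const startedCalls = new Set<string>();      // call_ids already announced

function onToolItem(rawItem: Record<string, unknown>): ToolEvent | null {
  const callId = rawItem.call_id as string;
  if (rawItem.type === "function_call") {
    const name = rawItem.name as string;
    toolNames.set(callId, name);
    // The same call may surface as created/in-progress/completed events.
    if (startedCalls.has(callId)) return null;
    startedCalls.add(callId);
    return { type: "tool_start", name };
  }
  if (rawItem.type === "function_call_output") {
    return { type: "tool_end", name: toolNames.get(callId) ?? "unknown" };
  }
  return null;
}
```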
Multi-turn input format
Pass conversation history as an array of message objects:

```typescript
const input = messages.map((m: { role: string; content: string }) => ({
  role: m.role as "user" | "assistant",
  content: m.content,
}));

const result = await run(agent, input, { stream: true });
```
Checklist
When setting up PICA MCP with the OpenAI Agents SDK:

- `@openai/agents` is installed
- `zod` (v4+) is installed
- `OPENAI_API_KEY` is set in `.env.local`
- `PICA_SECRET` is set in `.env.local`
- `.env.example` documents both `OPENAI_API_KEY` and `PICA_SECRET`
- `MCPServerStdio` uses `command: "npx"` and `args: ["@picahq/mcp"]`
- Full `process.env` is spread into the MCP server's `env` option
- `mcpServer.connect()` is called before creating the agent
- Agent has `mcpServers: [mcpServer]` — tools are auto-discovered
- `run()` is called with `{ stream: true }` for streaming responses
- `result.completed` is awaited after iterating the stream
- Fallback to `result.finalOutput` if no text deltas were streamed
- Tool call names are tracked by `call_id` (output events lack `name`)
- Tool start events are deduplicated with a `Set<call_id>`
- `mcpServer.close()` is called in a `finally` block