openai-responses
OpenAI Responses API
Status: Production Ready
Last Updated: 2026-01-21
API Launch: March 2025
Dependencies: openai@6.16.0 (Node.js) or fetch API (Cloudflare Workers)
What Is the Responses API?
OpenAI's unified interface for agentic applications, launched March 2025. Provides stateful conversations with preserved reasoning state across turns.
Key Innovation: Unlike Chat Completions (reasoning discarded between turns), Responses preserves the model's reasoning notebook, improving performance by 5% on TAUBench and enabling better multi-turn interactions.
vs Chat Completions:
| Feature | Chat Completions | Responses API |
|---|---|---|
| State | Manual history tracking | Automatic (conversation IDs) |
| Reasoning | Dropped between turns | Preserved across turns (+5% TAUBench) |
| Tools | Client-side round trips | Server-side hosted |
| Output | Single message | Polymorphic (8 types) |
| Cache | Baseline | 40-80% better utilization |
| MCP | Manual | Built-in |
Quick Start
```bash
npm install openai@6.16.0
```

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'What are the 5 Ds of dodgeball?',
});

console.log(response.output_text);
```

Key differences from Chat Completions:
- Endpoint: `/v1/responses` (not `/v1/chat/completions`)
- Parameter: `input` (not `messages`)
- Role: `developer` (not `system`)
- Output: `response.output_text` (not `choices[0].message.content`)
When to Use Responses vs Chat Completions
Use Responses:
- Agentic applications (reasoning + actions)
- Multi-turn conversations (preserved reasoning = +5% TAUBench)
- Built-in tools (Code Interpreter, File Search, Web Search, MCP)
- Background processing (60s standard, 10min extended timeout)
Use Chat Completions:
- Simple one-off generation
- Fully stateless interactions
- Legacy integrations
Stateful Conversations
Automatic State Management using conversation IDs:

```typescript
// Create conversation
const conv = await openai.conversations.create({
  metadata: { user_id: 'user_123' },
});

// First turn
const response1 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id,
  input: 'What are the 5 Ds of dodgeball?',
});

// Second turn - model remembers context + reasoning
const response2 = await openai.responses.create({
  model: 'gpt-5',
  conversation: conv.id,
  input: 'Tell me more about the first one',
});
```

Benefits: No manual history tracking, reasoning preserved, 40-80% better cache utilization

Conversation Limits: 90-day expiration
Built-in Tools (Server-Side)
Server-side hosted tools eliminate backend round trips:

| Tool | Purpose | Notes |
|---|---|---|
| Code Interpreter | Execute Python code | Sandboxed, 30s timeout (use `background: true` for longer) |
| File Search | RAG without vector stores | Max 512MB per file, supports PDF/Word/Markdown/HTML/code |
| Web Search | Real-time web information | Automatic source citations |
| Image Generation | DALL-E integration | DALL-E 3 default |
| MCP | Connect external tools | OAuth supported, tokens NOT stored |

Usage:

```typescript
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Calculate mean of: 10, 20, 30, 40, 50',
  tools: [{ type: 'code_interpreter' }],
});
```
Web Search TypeScript Note
TypeScript Limitation: The `web_search` tool's `external_web_access` option is missing from SDK types (as of v6.16.0).

Workaround:

```typescript
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Search for recent news',
  tools: [{
    type: 'web_search',
    external_web_access: true,
  } as any], // ✅ Type assertion to suppress error
});
```

Source: GitHub Issue #1716
MCP Server Integration
Built-in support for Model Context Protocol (MCP) servers to connect external tools (Stripe, databases, custom APIs).
User Approval Requirement
By default, explicit user approval is required before any data is shared with a remote MCP server (security feature).
Handling Approval:

```typescript
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Get my Stripe balance',
  tools: [{
    type: 'mcp',
    server_label: 'stripe',
    server_url: 'https://mcp.stripe.com',
    authorization: process.env.STRIPE_TOKEN,
  }],
});

if (response.status === 'requires_approval') {
  // Show user: "This action requires sharing data with Stripe. Approve?"
  // After user approves, retry with approval token
}
```

Alternative: Pre-approve MCP servers in the OpenAI dashboard (users configure trusted servers via settings)

Source: Official MCP Guide
Basic MCP Usage
```typescript
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Roll 2d6 dice',
  tools: [{
    type: 'mcp',
    server_label: 'dice',
    server_url: 'https://example.com/mcp',
    authorization: process.env.TOKEN, // ⚠️ NOT stored, required each request
  }],
});
```

MCP Output Types:
- `mcp_list_tools` - Tools discovered on server
- `mcp_call` - Tool invocation + result
- `message` - Final response
Reasoning Preservation
Key Innovation: Model's internal reasoning state survives across turns (unlike Chat Completions which discards it).
Visual Analogy:
- Chat Completions: Model tears out scratchpad page before responding
- Responses API: Scratchpad stays open for next turn
Performance: +5% on TAUBench (GPT-5) purely from preserved reasoning
Reasoning Summaries (free):

```typescript
response.output.forEach(item => {
  if (item.type === 'reasoning') console.log(item.summary[0].text);
  if (item.type === 'message') console.log(item.content[0].text);
});
```

Important: Reasoning Traces Privacy
What You Get: Reasoning summaries (not full internal traces)
What OpenAI Keeps: Full chain-of-thought reasoning (proprietary, for security/privacy)
For GPT-5-Thinking models:
- OpenAI preserves reasoning internally in their backend
- This preserved reasoning improves multi-turn performance (+5% TAUBench)
- But developers only receive summaries, not the actual chain-of-thought
- Full reasoning traces are not exposed (OpenAI's IP protection)
Source: Sean Goedecke Analysis
Background Mode
For long-running tasks, use `background: true`:

```typescript
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Analyze 500-page document',
  background: true,
  tools: [{ type: 'file_search', file_ids: [fileId] }],
});

// Poll for completion (check every 5s)
const result = await openai.responses.retrieve(response.id);
if (result.status === 'completed') console.log(result.output_text);
```

Timeout Limits:
- Standard: 60 seconds
- Background: 10 minutes
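The polling step above can be wrapped in a small helper that re-checks on the recommended 5-second interval. A minimal sketch, where `pollUntilDone` and its options are hypothetical names and the retrieve call is injected so the loop itself has no SDK dependency:

```typescript
// Sketch of a background-mode poller. `retrieve` would typically be
// () => openai.responses.retrieve(response.id).
type HasStatus = { status: string };

async function pollUntilDone<T extends HasStatus>(
  retrieve: () => Promise<T>,
  { intervalMs = 5000, maxAttempts = 120 } = {},
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await retrieve();
    // Treat 'completed' and 'failed' as terminal; keep polling otherwise
    if (result.status === 'completed' || result.status === 'failed') {
      return result;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Response not finished after ${maxAttempts} attempts`);
}
```

Usage might look like `await pollUntilDone(() => openai.responses.retrieve(response.id))`; the defaults keep total polling under the 10-minute background limit.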
Performance Considerations
Time-to-First-Token (TTFT) Latency:
Background mode currently has higher TTFT compared to synchronous responses. OpenAI is working to reduce this gap.
Recommendation:
- For user-facing real-time responses, use sync mode (lower latency)
- For long-running async tasks, use background mode (latency acceptable)
Source: OpenAI Background Mode Docs
Data Retention and Privacy
Default Retention: 30 days when `store: true` (default)

Zero Data Retention (ZDR): Organizations with ZDR automatically enforce `store: false`

Background Mode: NOT ZDR compatible (stores data ~10 minutes for polling)

Timeline:
- September 26, 2025: OpenAI court-ordered retention ended
- Current: 30-day default retention with `store: true`

Control Storage:

```typescript
// Disable storage (no retention)
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Hello!',
  store: false, // ✅ No retention
});

// ZDR organizations: store always treated as false
const response2 = await openai.responses.create({
  model: 'gpt-5',
  input: 'Hello!',
  store: true, // ⚠️ Ignored by OpenAI for ZDR orgs, treated as false
});
```

ZDR Compliance:
- Avoid background mode (requires temporary storage)
- Explicitly set `store: false` for clarity
- Note: 60s timeout applies in sync mode

Source: OpenAI Data Controls
Polymorphic Outputs
Returns 8 output types instead of a single message:

| Type | Example |
|---|---|
| `message` | Final answer, explanation |
| `reasoning` | Step-by-step thought process (free!) |
| `code_interpreter_call` | Python code + results |
| `mcp_call` | Tool name, args, output |
| `mcp_list_tools` | Tool definitions from MCP server |
| `file_search_call` | Matched chunks, citations |
| `web_search_call` | URLs, snippets |
| `image_generation_call` | Image URL |

Processing:

```typescript
response.output.forEach(item => {
  if (item.type === 'reasoning') console.log(item.summary[0].text);
  if (item.type === 'web_search_call') console.log(item.results);
  if (item.type === 'message') console.log(item.content[0].text);
});

// Or use helper for text-only
console.log(response.output_text);
```
Migration from Chat Completions
Breaking Changes:

| Feature | Chat Completions | Responses API |
|---|---|---|
| Endpoint | `/v1/chat/completions` | `/v1/responses` |
| Parameter | `messages` | `input` |
| Role | `system` | `developer` |
| Output | `choices[0].message.content` | `response.output_text` |
| State | Manual array | Automatic (conversation ID) |
| Streaming | Content deltas (SSE) | SSE with 8 item types |
Example:

```typescript
// Before (Chat Completions)
const response = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});
console.log(response.choices[0].message.content);

// After (Responses API)
const response2 = await openai.responses.create({
  model: 'gpt-5',
  input: [
    { role: 'developer', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
});
console.log(response2.output_text);
```
Migration from Assistants API
CRITICAL: Assistants API Sunset Timeline
- August 26, 2025: Assistants API officially deprecated
- 2025-2026: OpenAI providing migration utilities
- August 26, 2026: Assistants API sunset (stops working)
Migrate before August 26, 2026 to avoid breaking changes.
Key Breaking Changes:
| Assistants API | Responses API |
|---|---|
| Assistants (created via API) | Prompts (created in dashboard) |
| Threads | Conversations (store items, not just messages) |
| Runs (server-side lifecycle) | Responses (stateless calls) |
| Run-Steps | Items (polymorphic outputs) |
Migration Example:

```typescript
// Before (Assistants API - deprecated)
const assistant = await openai.beta.assistants.create({
  model: 'gpt-4',
  instructions: 'You are helpful.',
});
const thread = await openai.beta.threads.create();
const run = await openai.beta.threads.runs.create(thread.id, {
  assistant_id: assistant.id,
});

// After (Responses API - current)
const conversation = await openai.conversations.create({
  metadata: { purpose: 'customer_support' },
});
const response = await openai.responses.create({
  model: 'gpt-5',
  conversation: conversation.id,
  input: [
    { role: 'developer', content: 'You are helpful.' },
    { role: 'user', content: 'Hello!' },
  ],
});
```

Migration Guide: Official Assistants Migration Docs
Known Issues Prevention
This skill prevents 11 documented errors:
1. Session State Not Persisting
- Cause: Not using conversation IDs, or using different IDs per turn
- Fix: Create the conversation once (`const conv = await openai.conversations.create()`) and reuse `conv.id` for all turns
2. MCP Server Connection Failed (`mcp_connection_error`)
- Causes: Invalid URL, missing/expired auth token, server down
- Fix: Verify the URL is correct, test manually with `fetch()`, check token expiration
3. Code Interpreter Timeout (`code_interpreter_timeout`)
- Cause: Code runs longer than 30 seconds
- Fix: Use `background: true` for extended timeout (up to 10 min)
4. Image Generation Rate Limit (`rate_limit_error`)
- Cause: Too many DALL-E requests
- Fix: Implement exponential backoff retry (e.g. 1s, 2s, 4s delays)
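The backoff fix can be sketched as a generic wrapper. `withBackoff` is a hypothetical helper name, and the error shape (a `code` field matching `rate_limit_error`) follows the error names used in this document rather than a guaranteed SDK type:

```typescript
// Retry an async call with exponential backoff on rate-limit errors.
// Delays double each attempt: baseDelayMs, 2x, 4x, ...
async function withBackoff<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 1000 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const code = (err as { code?: string }).code;
      // Rethrow non-rate-limit errors, or give up once retries are exhausted
      if (code !== 'rate_limit_error' || attempt >= retries) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Wrapping an image-generation request would then look like `await withBackoff(() => openai.responses.create({ ... }))`.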
5. File Search Relevance Issues
- Cause: Vague queries return irrelevant results
- Fix: Use specific queries ("pricing in Q4 2024" not "find pricing"), filter by `chunk.score > 0.7`
6. Cost Tracking Confusion
- Cause: Responses bills for input + output + tools + stored conversations (vs Chat Completions: input + output only)
- Fix: Set `store: false` if storage is not needed, monitor `response.usage.tool_tokens`
7. Conversation Not Found (`invalid_request_error`)
- Causes: ID typo, conversation deleted, or expired (90-day limit)
- Fix: Verify the conversation exists with `openai.conversations.list()` before using it
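A defensive check before reusing a stored conversation ID might look like this sketch. `conversationExists` is a hypothetical helper; the retrieve function is injected (e.g. `(id) => openai.conversations.retrieve(id)`) on the assumption that unknown or expired IDs cause the call to reject:

```typescript
// Returns false when the retrieve call rejects (typo, deleted, or
// expired conversation), true when it resolves.
async function conversationExists(
  retrieve: (id: string) => Promise<unknown>,
  id: string,
): Promise<boolean> {
  try {
    await retrieve(id);
    return true;
  } catch {
    return false;
  }
}
```

When the check fails, fall back to creating a fresh conversation rather than sending the stale ID.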
8. Tool Output Parsing Failed
- Cause: Accessing the wrong output structure
- Fix: Use the `response.output_text` helper, or iterate with `response.output.forEach(item => ...)` checking `item.type`
9. Zod v4 Incompatibility with Structured Outputs
- Error: `Invalid schema for response_format 'name': schema must be a JSON Schema of 'type: "object"', got 'type: "string"'.`
- Source: GitHub Issue #1597
- Why It Happens: The SDK's vendored `zod-to-json-schema` library doesn't support Zod v4 (missing `ZodFirstPartyTypeKind` export)
- Prevention: Pin to Zod v3 (`"zod": "^3.23.8"`) or use a custom `zodTextFormat` with `z.toJSONSchema({ target: "draft-7" })`

```json
// Workaround: Pin to Zod v3 (recommended)
{
  "dependencies": {
    "openai": "^6.16.0",
    "zod": "^3.23.8" // DO NOT upgrade to v4 yet
  }
}
```

10. Background Mode Web Search Missing Sources
- Error: `web_search_call` output items contain the query but no sources/results
- Source: GitHub Issue #1676
- Why It Happens: When using `background: true` + the `web_search` tool, OpenAI doesn't return sources in the response
- Prevention: Use synchronous mode (`background: false`) when web search sources are needed
```typescript
// ✅ Sources available in sync mode
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Latest AI news?',
  background: false, // Required for sources
  tools: [{ type: 'web_search' }],
});
```

11. Streaming Mode Missing output_text Helper
- Error: `finalResponse().output_text` is `undefined` in streaming mode
- Source: GitHub Issue #1662
- Why It Happens: `stream.finalResponse()` doesn't include the `output_text` convenience field (only available in non-streaming responses)
- Prevention: Listen for the `output_text.done` event, or manually extract from `output` items
```typescript
// Workaround: Listen for event
const stream = openai.responses.stream({ model: 'gpt-5', input: 'Hello!' });
let outputText = '';
for await (const event of stream) {
  if (event.type === 'output_text.done') {
    outputText = event.output_text; // ✅ Available in event
  }
}
```
}✅ 推荐做法:
- 多轮对话使用对话ID(缓存利用率提升40-80%)
- 处理多态响应中的所有8种输出类型
- 任务超过30秒时使用
background: true - 每次请求都提供MCP令牌(不会被存储,必须每次提供)
authorization - 监控以控制成本
response.usage.total_tokens
❌ 禁止做法:
- 在客户端代码中暴露API密钥
- 假设输出仅为单一消息(使用辅助方法)
response.output_text - 跨用户复用对话ID(安全风险)
- 忽略错误类型(需专门处理、
rate_limit_error)mcp_connection_error - 后台任务轮询间隔小于1秒(建议使用5秒间隔)
Critical Patterns
✅ Always:
- Use conversation IDs for multi-turn (40-80% better cache)
- Handle all 8 output types in polymorphic responses
- Use `background: true` for tasks >30s
- Provide MCP `authorization` tokens (NOT stored, required each request)
- Monitor `response.usage.total_tokens` for cost control

❌ Never:
- Expose API keys in client-side code
- Assume single message output (use the `response.output_text` helper)
- Reuse conversation IDs across users (security risk)
- Ignore error types (handle `rate_limit_error` and `mcp_connection_error` specifically)
- Poll faster than 1s for background tasks (use 5s intervals)
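The last "Never" item can be made concrete with a small classifier. The `code`/`status` fields and the three routing buckets below are illustrative assumptions based on the error names used in this document, not an official SDK contract:

```typescript
// Route Responses API failures: retry rate limits, re-check MCP
// connectivity, and surface everything else.
interface ApiLikeError { code?: string; status?: number }

function classifyError(err: ApiLikeError): 'retry' | 'reconnect' | 'fatal' {
  if (err.code === 'rate_limit_error' || err.status === 429) return 'retry';
  if (err.code === 'mcp_connection_error') return 'reconnect';
  return 'fatal';
}
```

A catch block can then dispatch on the result: back off and retry, re-validate the MCP server URL and token, or propagate the error.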
References
Official Docs:
- Responses API Guide: https://platform.openai.com/docs/guides/responses
- API Reference: https://platform.openai.com/docs/api-reference/responses
- MCP Integration: https://platform.openai.com/docs/guides/tools-connectors-mcp
- Blog Post: https://developers.openai.com/blog/responses-api/
- Starter App: https://github.com/openai/openai-responses-starter-app
Skill Resources: `templates/`, `references/responses-vs-chat-completions.md`, `references/mcp-integration-guide.md`, `references/built-in-tools-guide.md`, `references/migration-guide.md`, `references/top-errors.md`

Last verified: 2026-01-21 | Skill version: 2.1.0 | Changes: Added 3 TIER 1 issues (Zod v4, background web search, streaming output_text), 2 TIER 2 findings (MCP approval, reasoning privacy), Data Retention & ZDR section, Assistants API sunset timeline, background mode TTFT note, web search TypeScript limitation. Updated SDK version to 6.16.0.