openai-assistants


OpenAI Assistants API v2


Status: Production Ready (⚠️ Deprecated - sunset August 26, 2026)
Package: openai@6.16.0
Last Updated: 2026-01-21
v1 Deprecated: December 18, 2024
v2 Sunset: August 26, 2026 (migrate to the Responses API)


⚠️ Deprecation Notice


OpenAI is deprecating the Assistants API in favor of the Responses API.
Timeline: v1 deprecated Dec 18, 2024 | v2 sunset August 26, 2026
Use this skill if: maintaining legacy apps or migrating existing code (12-18 month window)
Don't use if: starting new projects (use the openai-responses skill instead)
Migration: see references/migration-to-responses.md


Quick Start


```bash
npm install openai@6.16.0
```

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// 1. Create assistant
const assistant = await openai.beta.assistants.create({
  name: "Math Tutor",
  instructions: "You are a math tutor. Use code interpreter for calculations.",
  tools: [{ type: "code_interpreter" }],
  model: "gpt-5",
});

// 2. Create thread
const thread = await openai.beta.threads.create();

// 3. Add message
await openai.beta.threads.messages.create(thread.id, {
  role: "user",
  content: "Solve: 3x + 11 = 14",
});

// 4. Run assistant
const run = await openai.beta.threads.runs.create(thread.id, {
  assistant_id: assistant.id,
});

// 5. Poll for completion (exit on any terminal state, not just 'completed',
// or a failed/expired run would loop forever)
let status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
while (['queued', 'in_progress'].includes(status.status)) {
  await new Promise(r => setTimeout(r, 1000));
  status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}

// 6. Get response
const messages = await openai.beta.threads.messages.list(thread.id);
console.log(messages.data[0].content[0].text.value);
```


Core Concepts


Four Main Objects:
  1. Assistants: Configured AI with instructions (max 256k chars in v2, was 32k in v1), model, tools, metadata
  2. Threads: Conversation containers with persistent message history (max 100k messages)
  3. Messages: User/assistant messages with optional file attachments
  4. Runs: Async execution with states (queued, in_progress, requires_action, completed, failed, expired)


Key API Patterns


Assistants


```typescript
const assistant = await openai.beta.assistants.create({
  model: "gpt-5",
  instructions: "System prompt (max 256k chars in v2)",
  tools: [{ type: "code_interpreter" }, { type: "file_search" }],
  tool_resources: { file_search: { vector_store_ids: ["vs_123"] } },
});
```
Key Limits: 256k instruction chars (v2), 128 tools max, 16 metadata pairs
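A pre-flight check can fail fast against the limits just listed before calling `assistants.create`. A minimal sketch; `AssistantDraft` and `validateAssistantDraft` are made-up names, not SDK exports:

```typescript
// Hypothetical client-side guard for the v2 limits above
// (256k instruction chars, 128 tools, 16 metadata pairs).
interface AssistantDraft {
  instructions?: string;
  tools?: unknown[];
  metadata?: Record<string, string>;
}

function validateAssistantDraft(draft: AssistantDraft): string[] {
  const errors: string[] = [];
  if ((draft.instructions?.length ?? 0) > 256_000) {
    errors.push("instructions exceed 256k characters (v2 limit)");
  }
  if ((draft.tools?.length ?? 0) > 128) {
    errors.push("more than 128 tools");
  }
  if (Object.keys(draft.metadata ?? {}).length > 16) {
    errors.push("more than 16 metadata pairs");
  }
  return errors;
}
```

Running the checks locally avoids a round-trip that the API would reject anyway.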

Threads & Messages


```typescript
// Create thread with messages
const thread = await openai.beta.threads.create({
  messages: [{ role: "user", content: "Hello" }],
});

// Add message with attachments
await openai.beta.threads.messages.create(thread.id, {
  role: "user",
  content: "Analyze this",
  attachments: [{ file_id: "file_123", tools: [{ type: "code_interpreter" }] }],
});

// List messages
const msgs = await openai.beta.threads.messages.list(thread.id);
```
Key Limits: 100k messages per thread
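Reading a reply as `messages.data[0].content[0].text.value` breaks when the first content part is an image rather than text. A defensive sketch; the interfaces below are simplified stand-ins for the SDK's message types:

```typescript
// Simplified shapes for a listed message (data[0] is newest by default).
interface TextPart { type: "text"; text: { value: string } }
interface ImagePart { type: "image_file"; image_file: { file_id: string } }
type ContentPart = TextPart | ImagePart;
interface ThreadMessage { role: string; content: ContentPart[] }

// Join all text parts of the newest message, skipping image parts.
function latestText(messages: ThreadMessage[]): string {
  const newest = messages[0];
  if (!newest) return "";
  return newest.content
    .filter((p): p is TextPart => p.type === "text")
    .map(p => p.text.value)
    .join("\n");
}
```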


Runs


```typescript
// Create run with optional overrides
const run = await openai.beta.threads.runs.create(thread.id, {
  assistant_id: "asst_123",
  additional_messages: [{ role: "user", content: "Question" }],
  max_prompt_tokens: 1000,
  max_completion_tokens: 500,
});

// Poll until complete
let status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
while (['queued', 'in_progress'].includes(status.status)) {
  await new Promise(r => setTimeout(r, 1000));
  status = await openai.beta.threads.runs.retrieve(thread.id, run.id);
}
```
Run States: `queued` → `in_progress` → `requires_action` (function calling) / `completed` / `failed` / `cancelled` / `expired` (10 min max)
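For polling, it helps to separate terminal states from in-flight ones. A small sketch over the states above (plus `incomplete`, which appears under Known Issues); `isTerminal` is not an SDK helper:

```typescript
// Run states from which the run will never progress further.
const TERMINAL_STATES = new Set([
  "completed", "failed", "cancelled", "expired", "incomplete",
]);

// True once a polling loop can stop waiting.
function isTerminal(status: string): boolean {
  return TERMINAL_STATES.has(status);
}
```

Note that `requires_action` is deliberately non-terminal: the run is paused waiting for tool outputs.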


Streaming


```typescript
const stream = await openai.beta.threads.runs.stream(thread.id, { assistant_id });

for await (const event of stream) {
  if (event.event === 'thread.message.delta') {
    process.stdout.write(event.data.delta.content?.[0]?.text?.value || '');
  }
}
```
Key Events: `thread.run.created`, `thread.message.delta` (streaming content), `thread.run.step.delta` (tool progress), `thread.run.completed`, `thread.run.requires_action` (function calling)
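The delta-handling in the loop above can be factored into a pure reducer over buffered events, which is easier to unit-test; the `DeltaEvent` shape is a simplified assumption about the v2 streaming payloads:

```typescript
// Simplified event shape: only the fields the reducer reads.
interface DeltaEvent {
  event: string;
  data: { delta?: { content?: { type: string; text?: { value?: string } }[] } };
}

// Concatenate the text of every thread.message.delta event, in order.
function collectText(events: DeltaEvent[]): string {
  let out = "";
  for (const e of events) {
    if (e.event !== "thread.message.delta") continue;
    for (const part of e.data.delta?.content ?? []) {
      if (part.type === "text") out += part.text?.value ?? "";
    }
  }
  return out;
}
```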


Tools


Code Interpreter


Runs Python code in sandbox. Generates charts, processes files (CSV, JSON, PDF, images). Max 512MB per file.
```typescript
// Attach file to message
attachments: [{ file_id: "file_123", tools: [{ type: "code_interpreter" }] }]

// Access generated files
for (const content of message.content) {
  if (content.type === 'image_file') {
    const fileContent = await openai.files.content(content.image_file.file_id);
  }
}
```

File Search (RAG)


Semantic search with vector stores. 10,000 files max (v2, was 20 in v1). Pricing: $0.10/GB/day (1GB free).
```typescript
// Create vector store
const vs = await openai.beta.vectorStores.create({ name: "Docs" });
await openai.beta.vectorStores.files.create(vs.id, { file_id: "file_123" });

// Wait for indexing
let store = await openai.beta.vectorStores.retrieve(vs.id);
while (store.status === 'in_progress') {
  await new Promise(r => setTimeout(r, 2000));
  store = await openai.beta.vectorStores.retrieve(vs.id);
}

// Use in assistant
tool_resources: { file_search: { vector_store_ids: [vs.id] } }
```
⚠️ Wait for `status: 'completed'` before using.
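From the pricing quoted above ($0.10/GB/day with the first 1 GB free), a back-of-envelope daily cost estimate; this is illustrative arithmetic, not an official billing formula:

```typescript
// Estimate daily vector store cost: $0.10 per GB per day, first 1 GB free.
function vectorStoreDailyCostUSD(totalGB: number): number {
  const billableGB = Math.max(0, totalGB - 1); // free tier
  return +(billableGB * 0.10).toFixed(2);
}
```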

Function Calling


Submit tool outputs when `run.status === 'requires_action'`:
```typescript
if (run.status === 'requires_action') {
  const toolCalls = run.required_action.submit_tool_outputs.tool_calls;
  const outputs = toolCalls.map(tc => ({
    tool_call_id: tc.id,
    output: JSON.stringify(yourFunction(JSON.parse(tc.function.arguments))),
  }));

  run = await openai.beta.threads.runs.submitToolOutputs(thread.id, run.id, {
    tool_outputs: outputs,
  });
}
```
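A run only reaches `requires_action` when the assistant was created with a function tool. The sketch below pairs a tool definition with a local dispatch table; `get_weather`, its schema, and `runTool` are invented for illustration:

```typescript
// A function tool definition, as passed in the assistant's `tools` array.
const weatherTool = {
  type: "function" as const,
  function: {
    name: "get_weather",
    description: "Get the current temperature for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

// Map tool-call names to local implementations (stubbed here).
const handlers: Record<string, (args: any) => unknown> = {
  get_weather: ({ city }) => ({ city, tempC: 21 }),
};

// Resolve one tool call: parse its JSON arguments, run the handler,
// and serialize the result for submitToolOutputs.
function runTool(name: string, rawArgs: string): string {
  const handler = handlers[name];
  if (!handler) throw new Error(`no handler for tool ${name}`);
  return JSON.stringify(handler(JSON.parse(rawArgs)));
}
```

In the snippet above, `yourFunction` would be replaced by a lookup like `runTool(tc.function.name, tc.function.arguments)`.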

File Formats


Code Interpreter: .c, .cpp, .csv, .docx, .html, .java, .json, .md, .pdf, .php, .pptx, .py, .rb, .tex, .txt, .css, .jpeg, .jpg, .js, .gif, .png, .tar, .ts, .xlsx, .xml, .zip (512MB max)
File Search: .c, .cpp, .docx, .html, .java, .json, .md, .pdf, .php, .pptx, .py, .rb, .tex, .txt, .css, .js, .ts, .go (512MB max)
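Because unsupported formats fail silently (see Known Issues), validating extensions client-side before upload is cheap insurance. A sketch over the File Search list above; `supportedForFileSearch` is a made-up helper:

```typescript
// File Search extensions, per the list above.
const FILE_SEARCH_EXTS = new Set([
  ".c", ".cpp", ".docx", ".html", ".java", ".json", ".md", ".pdf", ".php",
  ".pptx", ".py", ".rb", ".tex", ".txt", ".css", ".js", ".ts", ".go",
]);

// Check a filename's extension against the supported set.
function supportedForFileSearch(filename: string): boolean {
  const dot = filename.lastIndexOf(".");
  return dot !== -1 && FILE_SEARCH_EXTS.has(filename.slice(dot).toLowerCase());
}
```

Note the lists differ: `.csv` and `.xlsx` work with Code Interpreter but not File Search.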


Known Issues


1. Thread Already Has Active Run
Error: `400 Can't add messages to thread_xxx while a run run_xxx is active.`
Fix: Cancel the active run first: `await openai.beta.threads.runs.cancel(threadId, runId)`

2. Run Polling Timeout / Incomplete Status
Error: `OpenAIError: Final run has not been received`
Why It Happens: Long-running tasks may exceed polling windows or finish with `incomplete` status.
Prevention: Handle incomplete runs gracefully:
```typescript
try {
  const stream = await openai.beta.threads.runs.stream(thread.id, { assistant_id });
  for await (const event of stream) {
    if (event.event === 'thread.message.delta') {
      process.stdout.write(event.data.delta.content?.[0]?.text?.value || '');
    }
  }
} catch (error) {
  if (error.message?.includes('Final run has not been received')) {
    // Run ended with 'incomplete' status - thread can continue
    const run = await openai.beta.threads.runs.retrieve(thread.id, runId);
    if (run.status === 'incomplete') {
      // Handle: prompt user to continue, reduce max_completion_tokens, etc.
    }
  }
}
```
3. Vector Store Not Ready
Using a vector store before indexing completes. Fix: Poll `vectorStores.retrieve()` until `status === 'completed'` (see the File Search section).

4. File Upload Format Issues
Unsupported file formats cause silent failures. Fix: Validate file extensions before upload (see the File Formats section).

5. Vector Store Upload Documentation Incorrect
Error: `No 'files' provided to process`
Why It Happens: Official documentation shows incorrect usage of `uploadAndPoll`.
Prevention: Wrap file streams in a `{ files: [...] }` object:
```typescript
// ✅ Correct
await openai.beta.vectorStores.fileBatches.uploadAndPoll(vectorStoreId, {
  files: fileStreams
});

// ❌ Wrong (shown in official docs)
await openai.beta.vectorStores.fileBatches.uploadAndPoll(vectorStoreId, fileStreams);
```
6. Reasoning Models Reject Temperature Parameter
Error: `Unsupported parameter: 'temperature' is not supported with this model`
Why It Happens: When updating an assistant to o3-mini/o1-preview/o1-mini, old temperature settings persist.
Prevention: Explicitly set temperature to `null`:
```typescript
await openai.beta.assistants.update(assistantId, {
  model: 'o3-mini',
  reasoning_effort: 'medium',
  temperature: null,  // ✅ Must explicitly clear
  top_p: null
});
```
7. uploadAndPoll Returns Vector Store ID Instead of Batch ID
Error: `Invalid 'batch_id': 'vs_...'. Expected an ID that begins with 'vsfb_'.`
Why It Happens: `uploadAndPoll` returns the vector store object instead of the batch object.
Prevention: Use alternative methods to get the batch ID:
```typescript
// Option 1: Use createAndPoll after a separate upload
const batch = await openai.vectorStores.fileBatches.createAndPoll(
  vectorStoreId,
  { file_ids: uploadedFileIds }
);

// Option 2: List batches to find the correct ID
const batches = await openai.vectorStores.fileBatches.list(vectorStoreId);
const batchId = batches.data[0].id; // starts with 'vsfb_'
```
8. Vector Store File Delete Affects All Stores
Warning: Deleting a file from one vector store removes it from ALL vector stores.
```typescript
// ❌ This deletes the file from VS_A, VS_B, AND VS_C
await openai.vectorStores.files.delete('VS_A', 'file-xxx');
```
Why It Happens: SDK or API bug - the delete operation has a global effect.
Prevention: Avoid sharing files across multiple vector stores if selective deletion is needed.
Source: GitHub Issue #1710

9. Memory Leak in Large File Uploads (Community-sourced)
Source: GitHub Issue #1052 | Status: OPEN
Impact: ~44MB leaked per 22MB file upload in long-running servers
Why It Happens: When uploading large files from streams (S3, etc.) using `vectorStores.fileBatches.uploadAndPoll`, memory may not be released after the upload completes.
Verified: Maintainer acknowledged; reduced in v4.58.1 but not eliminated.
Workaround: Monitor memory usage in long-lived servers; restart periodically or use separate worker processes.

10. Thread Already Has Active Run - Race Condition (Community-sourced)
Enhancement to Issue #1: When canceling an active run, a race condition may occur if the run completes before cancellation:
```typescript
async function createRunSafely(threadId: string, assistantId: string) {
  // Check for active runs first
  const runs = await openai.beta.threads.runs.list(threadId, { limit: 1 });
  const activeRun = runs.data.find(r =>
    ['queued', 'in_progress', 'requires_action'].includes(r.status)
  );

  if (activeRun) {
    try {
      await openai.beta.threads.runs.cancel(threadId, activeRun.id);

      // Wait for cancellation to complete
      let run = await openai.beta.threads.runs.retrieve(threadId, activeRun.id);
      while (run.status === 'cancelling') {
        await new Promise(r => setTimeout(r, 500));
        run = await openai.beta.threads.runs.retrieve(threadId, activeRun.id);
      }
    } catch (error) {
      // Ignore "already completed" errors - run finished naturally
      if (!error.message?.includes('completed')) throw error;
    }
  }

  return openai.beta.threads.runs.create(threadId, { assistant_id: assistantId });
}
```
See references/top-errors.md for the complete catalog.

Relationship to Other Skills


openai-api (Chat Completions): Stateless, manual history, direct responses. Use for simple generation.
openai-responses (Responses API): ✅ Recommended for new projects. Better reasoning, modern MCP integration, active development.
openai-assistants: ⚠️ Deprecated; sunset August 26, 2026. Use for legacy apps only. Migration: references/migration-to-responses.md


v1 to v2 Migration


v1 deprecated: Dec 18, 2024
Key Changes: `retrieval` renamed to `file_search`; vector stores (10k files vs 20); 256k instructions (vs 32k); message-level file attachments.
See references/migration-from-v1.md
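The rename is easiest to see as before/after create payloads; the v1 model name and the IDs below are placeholders, not recommendations:

```typescript
// v1: retrieval tool, files attached directly to the assistant.
const v1Payload = {
  model: "gpt-4-turbo", // placeholder v1-era model
  tools: [{ type: "retrieval" }],
  file_ids: ["file_123"],
};

// v2: file_search tool, files attached via a vector store.
const v2Payload = {
  model: "gpt-5",
  tools: [{ type: "file_search" }],
  tool_resources: { file_search: { vector_store_ids: ["vs_123"] } },
};
```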

Templates: templates/basic-assistant.ts, code-interpreter-assistant.ts, file-search-assistant.ts, function-calling-assistant.ts, streaming-assistant.ts
References: references/top-errors.md, thread-lifecycle.md, vector-stores.md, migration-to-responses.md, migration-from-v1.md
Related Skills: openai-responses (recommended), openai-api

Last Updated: 2026-01-21 | Package: openai@6.16.0 | Status: Production Ready (⚠️ Deprecated - sunset August 26, 2026)
Changes: Added 6 new known issues (vector store upload bugs, o3-mini temperature, memory leak); enhanced streaming error handling.