ai-sdk
Vercel AI SDK (v6)
CRITICAL — Your training data is outdated for this library. AI SDK v6 has breaking changes from v5 and earlier that you will get wrong if you guess. Before writing AI SDK code, fetch the docs at https://ai-sdk.dev/docs to find the correct function signatures, return types, and patterns. Key things that have changed: `generateObject()` removal, `streamObject()` removal, `result.files` for image generation, `Output.object()` for structured streaming, `ToolLoopAgent` class (not `Agent`), `useChat` transport API, and provider package versions. Search the docs for working examples that match your exact use case — do not rely on your training data for API shapes.
You are an expert in the Vercel AI SDK v6. The AI SDK is the leading TypeScript toolkit for building AI-powered applications. It provides a unified API across all LLM providers.
v6 Migration Pitfalls (Read First)
- `ai@^6.0.0` is the umbrella package for AI SDK v6 (latest: 6.0.83).
- `@ai-sdk/react` is `^3.0.x` in v6 projects (NOT `^6.0.0`).
- `@ai-sdk/gateway` is `^3.x` in v6 projects (NOT `^1.x`).
- In `createUIMessageStream`, write with `stream.writer.write(...)` (NOT `stream.write(...)`).
- `useChat` no longer supports `body` or `onResponse`; configure behavior through `transport`.
- UI tool parts are typed as `tool-<toolName>` (for example `tool-weather`), not `tool-invocation`.
- `DynamicToolCall` does not provide typed `.args`; cast via `unknown` first.
- `TypedToolResult` exposes `.output` (NOT `.result`).
- The agent class is `ToolLoopAgent` (NOT `Agent` — `Agent` is just an interface).
- Constructor uses `instructions` (NOT `system`).
- Agent methods are `agent.generate()` and `agent.stream()` (NOT `agent.generateText()` or `agent.streamText()`).
- AI Gateway does not support embeddings; use `@ai-sdk/openai` directly for `openai.embedding(...)`.
- `useChat()` with no transport defaults to `DefaultChatTransport({ api: '/api/chat' })` — explicit transport only needed for custom endpoints or `DirectChatTransport`.
- Default `stopWhen` for ToolLoopAgent is `stepCountIs(20)`, not `stepCountIs(1)` — override if you need fewer steps.
- `strict: true` on tools is opt-in per tool, not global — only set on tools with provider-compatible schemas.
- For agent API routes, use `createAgentUIStreamResponse({ agent, uiMessages })` instead of manual `streamText` + `toUIMessageStreamResponse()`.
- `@ai-sdk/azure` now uses the Responses API by default — use `azure.chat()` for the previous Chat Completions API behavior.
- `@ai-sdk/azure` uses `azure` (not `openai`) as the key for `providerMetadata` and `providerOptions`.
- `@ai-sdk/google-vertex` uses `vertex` (not `google`) as the key for `providerMetadata` and `providerOptions`.
- `@ai-sdk/anthropic` supports native structured outputs via the `structuredOutputMode` option (Claude Sonnet 4.5+).
Installation
```bash
npm install ai@^6.0.0 @ai-sdk/react@^3.0.0
npm install @ai-sdk/openai@^3.0.41 # Optional: required for embeddings
npm install @ai-sdk/anthropic@^3.0.58 # Optional: direct Anthropic provider access
npm install @ai-sdk/vercel@^2.0.37 # Optional: v0 model provider (v0-1.0-md)
```

`@ai-sdk/react` is a separate package — it is NOT included in the `ai` package. For v6 projects, install `@ai-sdk/react@^3.0.x` alongside `ai@^6.0.0`.

If you install `@ai-sdk/gateway` directly, use `@ai-sdk/gateway@^3.x` (NOT `^1.x`).

Only install a direct provider SDK (e.g., `@ai-sdk/anthropic`) if you need provider-specific features not exposed through the gateway.
What AI SDK Can Do
AI SDK is not just text — it handles text, images, structured data, tool calling, and agents through one unified API:
| Need | How |
|---|---|
| Text generation / chat | `generateText` / `streamText` |
| Image generation | `generateText` with a multimodal image model, or `experimental_generateImage` |
| Structured JSON output | `generateText` with `output: Output.object()` |
| Tool calling / agents | `tool()` / `ToolLoopAgent` |
| Embeddings | `embed` / `embedMany` via a direct provider SDK |

If the product needs generated images (portraits, posters, cover art, illustrations, comics, diagrams), use `generateText` with an image model — do NOT use placeholder images or skip image generation.
Setup for AI Projects
For the smoothest experience, link to a Vercel project so AI Gateway credentials are auto-provisioned via OIDC:
```bash
vercel link # Connect to your Vercel project
```

Enable AI Gateway at https://vercel.com/{team}/{project}/settings → AI Gateway
```bash
vercel env pull .env.local # Provisions VERCEL_OIDC_TOKEN automatically
npm install ai@^6.0.0 # Gateway is built in
npx ai-elements # Required: install AI text rendering components
```
This gives you AI Gateway access with OIDC authentication, cost tracking, failover, and observability — no manual API keys needed.
**OIDC is the default auth**: `vercel env pull` provisions a `VERCEL_OIDC_TOKEN` (short-lived JWT, ~24h). The `@ai-sdk/gateway` reads it automatically via `@vercel/oidc`. On Vercel deployments, tokens auto-refresh. For local dev, re-run `vercel env pull` when the token expires. No `AI_GATEWAY_API_KEY` or provider-specific keys needed.
Global Provider System (AI Gateway — Default)
In AI SDK 6, pass a `"provider/model"` string to the `model` parameter — it automatically routes through the Vercel AI Gateway:

```ts
import { generateText } from "ai";

const { text } = await generateText({
  model: "openai/gpt-5.4", // plain string — routes through AI Gateway automatically
  prompt: "Hello!",
});
```

No `gateway()` wrapper needed — plain `"provider/model"` strings are the simplest approach and are what the official Vercel docs recommend. The `gateway()` function is an optional explicit wrapper (useful when you need `providerOptions.gateway` for routing, failover, or tags):

```ts
import { generateText, gateway } from "ai";

// Explicit gateway() — only needed for advanced providerOptions
const { text } = await generateText({
  model: gateway("openai/gpt-5.4"),
  providerOptions: { gateway: { order: ["openai", "azure-openai"] } },
});
```

Both approaches provide failover, cost tracking, and observability on Vercel.

Model slug rules: Always use `provider/model` format. Version numbers use dots, not hyphens: `anthropic/claude-sonnet-4.6` (not `claude-sonnet-4-6`). Default to `openai/gpt-5.4` or `anthropic/claude-sonnet-4.6`. Never use outdated models like `gpt-4o`.

AI Gateway does not support embeddings. Use a direct provider SDK such as `@ai-sdk/openai` for embeddings.

Direct provider SDKs (`@ai-sdk/openai`, `@ai-sdk/anthropic`, etc.) are only needed for provider-specific features not exposed through the gateway (e.g., Anthropic computer use, OpenAI fine-tuned model endpoints).
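The slug rules above can be sketched as a small validation helper. This is purely illustrative — `isValidModelSlug` is a hypothetical function, not part of the AI SDK:

```ts
// Hypothetical helper — NOT an AI SDK export. Illustrates the
// "provider/model" slug rules described above.
function isValidModelSlug(slug: string): boolean {
  // Must be exactly "provider/model"
  const parts = slug.split("/");
  if (parts.length !== 2 || !parts[0] || !parts[1]) return false;
  // Versions use dots ("claude-sonnet-4.6"), not hyphens ("claude-sonnet-4-6"):
  // reject a trailing hyphen-separated version like "-4-6"
  if (/-\d+-\d+$/.test(parts[1])) return false;
  return true;
}

console.log(isValidModelSlug("anthropic/claude-sonnet-4.6")); // true
console.log(isValidModelSlug("gpt-4o")); // false — no provider prefix
```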
Core Functions
Text Generation
```ts
import { generateText, streamText } from "ai";

// Non-streaming
const { text } = await generateText({
  model: "openai/gpt-5.4",
  prompt: "Explain quantum computing in simple terms.",
});

// Streaming
const result = streamText({
  model: "openai/gpt-5.4",
  prompt: "Write a poem about coding.",
});
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
Structured Output
`generateObject` was removed in AI SDK v6. Use `generateText` with `output: Output.object()` instead. Do not import `generateObject` — it no longer exists.

```ts
import { generateText, Output } from "ai";
import { z } from "zod";

const { output } = await generateText({
  model: "openai/gpt-5.4",
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({
            name: z.string(),
            amount: z.string(),
          }),
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: "Generate a recipe for chocolate chip cookies.",
});
```
Tool Calling (MCP-Aligned)
In AI SDK 6, tools use `inputSchema` (not `parameters`) and `output` / `outputSchema` (not `result`), aligned with the MCP specification. Per-tool `strict` mode ensures providers only generate valid tool calls matching your schema.

```ts
import { generateText, tool } from "ai";
import { z } from "zod";

const result = await generateText({
  model: "openai/gpt-5.4",
  tools: {
    weather: tool({
      description: "Get the weather for a location",
      inputSchema: z.object({
        city: z.string().describe("The city name"),
      }),
      outputSchema: z.object({
        temperature: z.number(),
        condition: z.string(),
      }),
      strict: true, // Providers generate only schema-valid tool calls
      execute: async ({ city }) => {
        const data = await fetchWeather(city);
        return { temperature: data.temp, condition: data.condition };
      },
    }),
  },
  prompt: "What is the weather in San Francisco?",
});
```
Dynamic Tools (MCP Integration)
For tools with schemas not known at compile time (e.g., MCP server tools):
```ts
import { dynamicTool } from "ai";

const tools = {
  unknownTool: dynamicTool({
    description: "A tool discovered at runtime",
    execute: async (input) => {
      // Handle dynamically
      return { result: "done" };
    },
  }),
};
```
Agents
The `ToolLoopAgent` class wraps `generateText` / `streamText` with an agentic tool-calling loop. Default `stopWhen` is `stepCountIs(20)` (up to 20 tool-calling steps). `Agent` is an interface — `ToolLoopAgent` is the concrete implementation.

```ts
import { ToolLoopAgent, stepCountIs, hasToolCall } from "ai";

const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6",
  tools: { weather, search, calculator, finalAnswer },
  instructions: "You are a helpful assistant.",
  // Default: stepCountIs(20). Override to stop on a terminal tool or custom logic:
  stopWhen: hasToolCall("finalAnswer"),
  prepareStep: (context) => ({
    // Customize each step — swap models, compress messages, limit tools
    toolChoice: context.steps.length > 5 ? "none" : "auto",
  }),
});

const { text } = await agent.generate({
  prompt:
    "Research the weather in Tokyo and calculate the average temperature this week.",
});
```
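To make the stop conditions concrete, here is a simplified, dependency-free sketch of how predicates like `stepCountIs` and `hasToolCall` combine in a tool loop. This is an illustration of the `stopWhen` contract under assumed, simplified types — not the real `ai` implementation:

```ts
// Simplified model of agent-loop stop conditions (illustrative only).
type Step = { toolCalls: string[] };
type StopCondition = (steps: Step[]) => boolean;

const stepCountIs =
  (n: number): StopCondition =>
  (steps) =>
    steps.length >= n;

const hasToolCall =
  (name: string): StopCondition =>
  (steps) =>
    steps.some((s) => s.toolCalls.includes(name));

// The loop stops as soon as any condition is satisfied.
function shouldStop(steps: Step[], conditions: StopCondition[]): boolean {
  return conditions.some((c) => c(steps));
}

const steps: Step[] = [{ toolCalls: ["search"] }, { toolCalls: ["finalAnswer"] }];
console.log(shouldStop(steps, [stepCountIs(20), hasToolCall("finalAnswer")])); // true
```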
MCP Client
Connect to any MCP server and use its tools:
```ts
import { generateText } from "ai";
import { createMCPClient } from "@ai-sdk/mcp";

const mcpClient = await createMCPClient({
  transport: {
    type: "sse",
    url: "https://my-mcp-server.com/sse",
  },
});

const tools = await mcpClient.tools();

const result = await generateText({
  model: "openai/gpt-5.4",
  tools,
  prompt: "Use the available tools to help the user.",
});

await mcpClient.close();
```

MCP OAuth for remote servers is handled automatically by `@ai-sdk/mcp`.
Tool Approval (Human-in-the-Loop)
Set `needsApproval` on any tool to require user confirmation before execution. The tool pauses in `approval-requested` state until the client responds.

```ts
import { streamText, tool } from "ai";
import { z } from "zod";

const result = streamText({
  model: "openai/gpt-5.4",
  tools: {
    deleteUser: tool({
      description: "Delete a user account",
      inputSchema: z.object({ userId: z.string() }),
      needsApproval: true, // Always require approval
      execute: async ({ userId }) => {
        await db.users.delete(userId);
        return { deleted: true };
      },
    }),
    processPayment: tool({
      description: "Process a payment",
      inputSchema: z.object({ amount: z.number(), recipient: z.string() }),
      // Conditional: only approve large amounts
      needsApproval: async ({ amount }) => amount > 1000,
      execute: async ({ amount, recipient }) => {
        return await processPayment(amount, recipient);
      },
    }),
  },
  prompt: "Delete user 123",
});
```

Client-side approval with `useChat`:

```tsx
"use client";
import { useChat } from "@ai-sdk/react";

function Chat() {
  const { messages, addToolApprovalResponse } = useChat();
  return messages.map((m) =>
    m.parts?.map((part, i) => {
      // Tool parts in approval-requested state need user action
      if (part.type.startsWith("tool-") && part.approval?.state === "approval-requested") {
        return (
          <div key={i}>
            <p>Tool wants to run: {JSON.stringify(part.args)}</p>
            <button onClick={() => addToolApprovalResponse({ id: part.approval.id, approved: true })}>
              Approve
            </button>
            <button onClick={() => addToolApprovalResponse({ id: part.approval.id, approved: false })}>
              Deny
            </button>
          </div>
        );
      }
      return null;
    }),
  );
}
```

Tool part states: `input-streaming` → `input-available` → `approval-requested` (if `needsApproval`) → `output-available` | `output-error`
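The lifecycle above can be sketched as a transition table. This is a hypothetical helper for reasoning about UI rendering, not an SDK export; the state names are taken from the list above:

```ts
// Hypothetical transition map for the tool-part states listed above.
const transitions: Record<string, string[]> = {
  "input-streaming": ["input-available"],
  "input-available": ["approval-requested", "output-available", "output-error"],
  "approval-requested": ["output-available", "output-error"],
  "output-available": [], // terminal
  "output-error": [], // terminal
};

function canTransition(from: string, to: string): boolean {
  return transitions[from]?.includes(to) ?? false;
}

console.log(canTransition("input-available", "approval-requested")); // true
console.log(canTransition("output-available", "input-streaming")); // false
```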
Embeddings & Reranking
Use a direct provider SDK for embeddings. AI Gateway does not support embedding models.
```ts
import { embed, embedMany, rerank } from "ai";
import { openai } from "@ai-sdk/openai";
import { cohere } from "@ai-sdk/cohere"; // needed for the reranker below

// Single embedding
const { embedding } = await embed({
  model: openai.embedding("text-embedding-3-small"),
  value: "The quick brown fox",
});

// Batch embeddings
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: ["text 1", "text 2", "text 3"],
});

// Rerank search results by relevance
const { results } = await rerank({
  model: cohere.reranker("rerank-v3.5"),
  query: "What is quantum computing?",
  documents: searchResults,
});
```
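For downstream similarity search over these vectors, the usual metric is cosine similarity (the `ai` package also ships a `cosineSimilarity` helper). The math is simple enough to sketch standalone:

```ts
// Standalone cosine similarity over embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```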
Image Generation & Editing
AI Gateway supports image generation. Use the `google/gemini-3.1-flash-image-preview` model — it is significantly better than older models like `gemini-2.0-flash-exp-image-generation` or `gemini-2.0-flash-001`.

Always use `google/gemini-3.1-flash-image-preview` for image generation. Do NOT use older models (`gemini-2.0-*`, `gemini-2.5-*`) — they produce much worse results and some do not support image output at all.

Multimodal LLMs (recommended — use `generateText` / `streamText`)

```ts
import { generateText, streamText } from "ai";

// generateText — images returned in result.files
const result = await generateText({
  model: "google/gemini-3.1-flash-image-preview",
  prompt: "A futuristic cityscape at sunset",
});

const imageFiles = result.files.filter((f) => f.mediaType?.startsWith("image/"));

// Convert to data URL for display
const imageFile = imageFiles[0];
const dataUrl = `data:${imageFile.mediaType};base64,${Buffer.from(imageFile.data).toString("base64")}`;

// streamText — stream text, then access images after completion
const stream = streamText({
  model: "google/gemini-3.1-flash-image-preview",
  prompt: "A futuristic cityscape at sunset",
});

for await (const delta of stream.fullStream) {
  if (delta.type === "text-delta") process.stdout.write(delta.text);
}

const finalResult = await stream;
console.log(`Generated ${finalResult.files.length} image(s)`);
```

Default image model: `google/gemini-3.1-flash-image-preview` — fast, high-quality. This is the ONLY recommended model for image generation.
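The files-to-data-URL step shown above can be factored into a small helper. This is an illustrative sketch — the `GeneratedFileLike` shape is assumed from the snippet, not taken from the SDK's type definitions:

```ts
// Illustrative helper: turn a generated file (as in result.files)
// into a data URL suitable for an <img src=...> attribute.
type GeneratedFileLike = { mediaType?: string; data: Uint8Array };

function toDataUrl(file: GeneratedFileLike): string {
  const mediaType = file.mediaType ?? "image/png"; // assume PNG when untyped
  const base64 = Buffer.from(file.data).toString("base64");
  return `data:${mediaType};base64,${base64}`;
}

const fake: GeneratedFileLike = { mediaType: "image/png", data: new Uint8Array([1, 2, 3]) };
console.log(toDataUrl(fake)); // data:image/png;base64,AQID
```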
Image-only models (use `experimental_generateImage`)

```ts
import { experimental_generateImage as generateImage } from "ai";

const { images } = await generateImage({
  model: "google/imagen-4.0-generate-001",
  prompt: "A futuristic cityscape at sunset",
  aspectRatio: "16:9",
});
```

Other image-only models: `google/imagen-4.0-ultra-generate-001`, `bfl/flux-2-pro`, `bfl/flux-kontext-max`, `xai/grok-imagine-image-pro`.
Saving generated images
```ts
import fs from "node:fs";

// From multimodal LLMs (result.files)
for (const [i, file] of imageFiles.entries()) {
  const ext = file.mediaType?.split("/")[1] || "png";
  await fs.promises.writeFile(`output-${i}.${ext}`, file.uint8Array);
}

// From image-only models (result.images)
for (const [i, image] of images.entries()) {
  const buffer = Buffer.from(image.base64, "base64");
  await fs.promises.writeFile(`output-${i}.png`, buffer);
}
```
UI Hooks (React)
MANDATORY — Always use AI Elements for AI text: AI SDK models always produce markdown — even short prose contains `**bold**`, `##` headings, `` `code` ``, and `---`. There is no "plain text" mode. Every AI-generated string displayed in a browser MUST be rendered through AI Elements.

- Chat messages: Use AI Elements `<Message message={message} />` — handles text, tool calls, code blocks, reasoning, streaming.
- Any other AI text (streaming panels, workflow events, reports, briefings, narratives, summaries, perspectives): Use `<MessageResponse>{text}</MessageResponse>` from `@/components/ai-elements/message`.
- `<MessageResponse>` wraps Streamdown with code highlighting, math, mermaid, and CJK plugins — works for any markdown string, including streamed text.
- Never render AI output as raw `{text}`, `<p>{content}</p>`, or `<div>{stream}</div>` — this always produces ugly unformatted output with visible markdown syntax.
- No exceptions: Even if you think the response will be "simple prose", models routinely add markdown formatting. Always use AI Elements.

⤳ skill: ai-elements — Full component library, decision guidance, and troubleshooting for AI interfaces
`useChat` Transport Options

| Transport | Use Case |
|---|---|
| `DefaultChatTransport` | HTTP POST to API routes (default — sends to `/api/chat`) |
| `DirectChatTransport` | In-process agent communication without HTTP (SSR, testing) |
| `TextStreamChatTransport` | Plain text stream protocol |

Default behavior: `useChat()` with no transport config defaults to `DefaultChatTransport({ api: '/api/chat' })`.
With AI Elements (Recommended)
```tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { Conversation } from "@/components/ai-elements/conversation";
import { Message } from "@/components/ai-elements/message";

function Chat() {
  // No transport needed — defaults to DefaultChatTransport({ api: '/api/chat' })
  const { messages, sendMessage, status } = useChat();
  return (
    <Conversation>
      {messages.map((message) => (
        <Message key={message.id} message={message} />
      ))}
    </Conversation>
  );
}
```

AI Elements handles UIMessage parts (text, tool calls, reasoning, images) automatically. Install with `npx ai-elements`.

⤳ skill: ai-elements — Full component library for AI interfaces
⤳ skill: json-render — Manual rendering patterns for custom UIs
With DirectChatTransport (No API Route Needed)
```tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { DirectChatTransport } from "ai";
import { myAgent } from "@/lib/agent"; // a ToolLoopAgent instance

function Chat() {
  const { messages, sendMessage, status } = useChat({
    transport: new DirectChatTransport({ agent: myAgent }),
  });
  // Same UI as above — no /api/chat route required
}
```

Useful for SSR scenarios, testing without network, and single-process apps.
v6 changes from v5:

- `useChat({ api })` → `useChat({ transport: new DefaultChatTransport({ api }) })`
- `handleSubmit` → `sendMessage({ text })`
- `input` / `handleInputChange` → manage your own `useState`
- `body` / `onResponse` options were removed from `useChat`; use `transport` to configure requests/responses
- `isLoading` → `status === 'streaming' || status === 'submitted'`
- `message.content` → iterate `message.parts` (UIMessage format)
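Since v6 messages expose parts instead of a content string, extracting plain text becomes a filter-and-join. A minimal sketch, with a deliberately simplified part shape (real UIMessage parts carry more fields):

```ts
// Simplified UIMessage part shape — illustration only.
type Part = { type: string; text?: string };

// Concatenate only the text parts, skipping tool and other part types.
function messageText(parts: Part[]): string {
  return parts
    .filter((p) => p.type === "text" && typeof p.text === "string")
    .map((p) => p.text)
    .join("");
}

const parts: Part[] = [
  { type: "text", text: "Hello " },
  { type: "tool-weather" },
  { type: "text", text: "world" },
];
console.log(messageText(parts)); // Hello world
```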
Choose the correct streaming response helper
- `toUIMessageStreamResponse()` is for `useChat` + `DefaultChatTransport` UIMessage-based chat UIs. Use it when you need tool calls, metadata, reasoning, and other rich message parts.
- `toTextStreamResponse()` is for non-browser clients only — CLI tools, server-to-server pipes, or programmatic consumers that process raw text without rendering it in a UI. If the text will be displayed in a browser, use `toUIMessageStreamResponse()` + AI Elements instead.
- Warning: Do not return `toUIMessageStreamResponse()` to a plain `fetch()` client unless that client intentionally parses the AI SDK UI message stream protocol.
- Warning: Do not use `toTextStreamResponse()` + manual `fetch()` stream reading as a way to skip AI Elements. If the output goes to a browser, use `useChat` + `<Message>` or `<MessageResponse>`.
Server-side for useChat (API Route)
```ts
// app/api/chat/route.ts
import { streamText, convertToModelMessages, stepCountIs } from "ai";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // IMPORTANT: convertToModelMessages is async in v6
  const modelMessages = await convertToModelMessages(messages);

  const result = streamText({
    model: "openai/gpt-5.4",
    messages: modelMessages,
    tools: {
      /* your tools */
    },
    // IMPORTANT: use stopWhen with stepCountIs for multi-step tool calling
    // maxSteps was removed in v6 — use this instead
    stopWhen: stepCountIs(5),
  });

  // Use toUIMessageStreamResponse (not toDataStreamResponse) for chat UIs
  return result.toUIMessageStreamResponse();
}
```
Server-side with ToolLoopAgent (Agent API Route)
Define a `ToolLoopAgent` and use `createAgentUIStreamResponse` for the API route:

```ts
// lib/agent.ts
import { ToolLoopAgent, stepCountIs } from "ai";

export const myAgent = new ToolLoopAgent({
  model: "openai/gpt-5.4",
  instructions: "You are a helpful assistant.",
  tools: { /* your tools */ },
  stopWhen: stepCountIs(5),
});
```

```ts
// app/api/chat/route.ts — agent API route
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/lib/agent";

export async function POST(req: Request) {
  const { messages } = await req.json();
  return createAgentUIStreamResponse({ agent: myAgent, uiMessages: messages });
}
```

Or use `DirectChatTransport` on the client to skip the API route entirely.
Server-side for text-only clients (non-browser only)
This pattern is for CLI tools, server-to-server pipes, and programmatic consumers. If the response will be displayed in a browser UI, use `toUIMessageStreamResponse()` + AI Elements instead — even for "simple" streaming text panels.
```ts
// app/api/generate/route.ts — for CLI or server consumers, NOT browser UIs
import { streamText } from "ai";

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();
  const result = streamText({
    model: "openai/gpt-5.4",
    prompt,
  });
  return result.toTextStreamResponse();
}
```
Language Model Middleware
Intercept and transform model calls for RAG, guardrails, logging:
```ts
import { wrapLanguageModel } from "ai";

const wrappedModel = wrapLanguageModel({
  model: "openai/gpt-5.4",
  middleware: {
    transformParams: async ({ params }) => {
      // Inject RAG context, modify system prompt, etc.
      return { ...params, system: params.system + "\n\nContext: ..." };
    },
    wrapGenerate: async ({ doGenerate }) => {
      const result = await doGenerate();
      // Post-process, log, validate guardrails
      return result;
    },
  },
});
```
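The `transformParams` idea reduces to ordinary function wrapping. A dependency-free concept sketch — the local `Params`/`Generate` types below stand in for the SDK's middleware interfaces and are not its actual API:

```typescript
// Wrap a generate function so extra context is injected into params
// before the underlying call — the essence of transformParams middleware.
type Params = { system: string; prompt: string };
type Generate = (p: Params) => Promise<string>;

const withContext =
  (context: string, inner: Generate): Generate =>
  async (params) =>
    inner({ ...params, system: `${params.system}\n\nContext: ${context}` });

// A fake model that echoes its system prompt:
const fakeModel: Generate = async (p) => p.system;

withContext("v6 docs", fakeModel)({ system: "base", prompt: "hi" })
  .then((out) => console.log(out)); // "base\n\nContext: v6 docs"
```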
Provider Routing via AI Gateway
```ts
import { generateText, gateway } from "ai";

const result = await generateText({
  model: gateway("anthropic/claude-sonnet-4.6"),
  prompt: "Hello!",
  providerOptions: {
    gateway: {
      order: ["bedrock", "anthropic"], // Try Bedrock first
      models: ["openai/gpt-5.4"], // Fallback model
      only: ["anthropic", "bedrock"], // Restrict providers
      user: "user-123", // Usage tracking
      tags: ["feature:chat", "env:production"], // Cost attribution
    },
  },
});
```
DevTools
```bash
npx @ai-sdk/devtools
```

Opens http://localhost:4983 — inspect LLM calls, agents, token usage, timing
Key Patterns
- Default to AI Gateway with OIDC — pass `"provider/model"` strings (e.g., `model: "openai/gpt-5.4"`) to route through the gateway automatically. `vercel env pull` provisions OIDC tokens. No manual API keys needed. The `gateway()` wrapper is optional (only needed for `providerOptions.gateway`).
- Set up a Vercel project for AI — `vercel link` → enable AI Gateway at `https://vercel.com/{team}/{project}/settings` → AI Gateway → `vercel env pull` to get OIDC credentials. Never manually create `.env.local` with provider-specific API keys.
- Always use AI Elements for any AI text in a browser — `npx ai-elements` installs production-ready Message, Conversation, and Tool components. Use `<Message>` for chat and `<MessageResponse>` for any other AI-generated text (streaming panels, summaries, reports). AI models always produce markdown — there is no scenario where raw `{text}` rendering is correct. ⤳ skill: ai-elements
- Always stream for user-facing AI — use `streamText` + `useChat`, not `generateText`
- UIMessage chat UIs — `useChat()` defaults to `DefaultChatTransport({ api: '/api/chat' })`. On the server: `convertToModelMessages()` + `toUIMessageStreamResponse()`. For no-API-route setups: `DirectChatTransport` + Agent.
- Text-only clients (non-browser) — `toTextStreamResponse()` is only for CLI tools, server pipes, and programmatic consumers. If the text is displayed in a browser, use `toUIMessageStreamResponse()` + AI Elements
- Use structured output for extracting data — `generateText` with `Output.object()` and Zod schemas
- Use `ToolLoopAgent` for multi-step reasoning — not manual loops. Default `stopWhen` is `stepCountIs(20)`. Use `createAgentUIStreamResponse` for agent API routes.
- Use `DurableAgent` (from Workflow DevKit) for production agents that must survive crashes
- Use `mcp-to-ai-sdk` to generate static tool definitions from MCP servers for security
- Use `needsApproval` for human-in-the-loop — set it on any tool to pause execution until the user approves; supports conditional approval via an async function
- Use `strict: true` per tool — opt-in strict mode ensures providers only generate schema-valid tool calls; set on individual tools, not globally
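Several items above (`inputSchema`, per-tool `strict: true`, async `needsApproval`) meet on a single tool definition. A self-contained shape sketch — `ToolShape`/`defineTool` below are local stand-ins for the SDK's `tool()` helper and a Zod schema, so only the v6 field names named in this document carry over; the real helper lives in the `ai` package:

```typescript
// Local stand-ins so the v6 tool fields can be shown without the "ai" package.
type ToolShape<In, Out> = {
  description: string;
  inputSchema: (value: unknown) => In; // stands in for a Zod schema
  strict?: boolean; // per-tool opt-in: only schema-valid tool calls
  needsApproval?: boolean | ((input: In) => Promise<boolean>); // human-in-the-loop
  execute: (input: In) => Promise<Out>;
};
const defineTool = <In, Out>(t: ToolShape<In, Out>) => t;

const deleteFile = defineTool({
  description: "Delete a file from the workspace",
  inputSchema: (v) => v as { path: string },
  strict: true,
  // Async needsApproval enables conditional approval: only pause for
  // paths outside /tmp.
  needsApproval: async ({ path }) => !path.startsWith("/tmp/"),
  execute: async ({ path }) => ({ deleted: path }),
});

deleteFile.execute({ path: "/tmp/a.txt" }).then((r) => console.log(r.deleted)); // "/tmp/a.txt"
```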
Common Pitfall: Structured Output Property Name
In v6, `generateText` with `Output.object()` returns the parsed result on the `output` property (NOT `object`):

```ts
// CORRECT — v6
const { output } = await generateText({
  model: 'openai/gpt-5.4',
  output: Output.object({ schema: mySchema }),
  prompt: '...',
})
console.log(output) // ✅ parsed object

// WRONG — v5 habit
const { object } = await generateText({ ... }) // ❌ undefined — `object` doesn't exist in v6
```

This is one of the most common v5→v6 migration mistakes. The config key is `output` and the result key is also `output`.
Migration from AI SDK 5
Run `npx @ai-sdk/codemod upgrade` (or `npx @ai-sdk/codemod v6`) to auto-migrate. Preview with `npx @ai-sdk/codemod --dry upgrade`. Key changes:
- `generateObject`/`streamObject` → `generateText`/`streamText` with `Output.object()`
- `parameters` → `inputSchema`
- `result` → `output`
- `maxSteps` → `stopWhen: stepCountIs(N)` (import `stepCountIs` from `ai`)
- `CoreMessage` → `ModelMessage` (use `convertToModelMessages()` — now async)
- `ToolCallOptions` → `ToolExecutionOptions`
- `Experimental_Agent` → `ToolLoopAgent` (concrete class; `Agent` is just an interface)
- `system` → `instructions` (on `ToolLoopAgent`)
- `agent.generateText()` → `agent.generate()`
- `agent.streamText()` → `agent.stream()`
- `experimental_createMCPClient` → `createMCPClient` (stable)
- New: `createAgentUIStreamResponse({ agent, uiMessages })` for agent API routes
- New: `callOptionsSchema` + `prepareCall` for per-call agent configuration
- `useChat({ api })` → `useChat({ transport: new DefaultChatTransport({ api }) })`
- `useChat` `onResponse`/`body` options removed → configure with `transport`
- `handleSubmit`/`input` → `sendMessage({ text })` / manage own state
- `toDataStreamResponse()` → `toUIMessageStreamResponse()` (for chat UIs)
- `createUIMessageStream`: use `stream.writer.write(...)` (not `stream.write(...)`)
- text-only clients / text stream protocol → `toTextStreamResponse()`
- `message.content` → `message.parts` (tool parts use `tool-<toolName>`, not `tool-invocation`)
- UIMessage / ModelMessage types introduced
- `DynamicToolCall.args` is not strongly typed; cast via `unknown` first
- `TypedToolResult.result` → `TypedToolResult.output`
- `ai@^6.0.0` is the umbrella package
- `@ai-sdk/react` must be installed separately at `^3.0.x`
- `@ai-sdk/gateway` (if installed directly) is `^3.x`, not `^1.x`
- New: `needsApproval` on tools (boolean or async function) for human-in-the-loop approval
- New: per-tool opt-in `strict: true` for strict schema validation
- New: `DirectChatTransport` — connect `useChat` to an Agent in-process, no API route needed
- New: `addToolApprovalResponse` on `useChat` for client-side approval UI
- Default `stopWhen` changed from `stepCountIs(1)` to `stepCountIs(20)` for `ToolLoopAgent`
- New: `ToolCallOptions` type renamed to `ToolExecutionOptions`
- New: `Tool.toModelOutput` now receives `({ output })` object, not bare `output`
- New: `isToolUIPart` → `isStaticToolUIPart`; `isToolOrDynamicToolUIPart` → `isToolUIPart`
- New: `getToolName` → `getStaticToolName`; `getToolOrDynamicToolName` → `getToolName`
- New: `@ai-sdk/azure` defaults to Responses API; use `azure.chat()` for Chat Completions
- New: `structuredOutputMode` in `@ai-sdk/anthropic` for native structured outputs (Claude Sonnet 4.5+)
- New: `@ai-sdk/langchain` rewritten — `toBaseMessages()`, `toUIMessageStream()`, `LangSmithDeploymentTransport`
- New: Provider-specific tools — Anthropic (memory, code execution), OpenAI (shell, patch), Google (maps, RAG), xAI (search, code)
- `unknown` finish reason removed → now returned as `other`
- Warning types consolidated into a single `Warning` type exported from `ai`