vercel-ai-sdk
Vercel AI SDK - Build AI-Powered Apps
Stream AI responses, call functions, and build conversational interfaces
When to Use
Use Vercel AI SDK when you need:
- Streaming responses from LLMs with real-time UI updates
- React hooks (useChat, useCompletion) for chat/completion interfaces
- Function calling and tool use for AI agents
- Structured outputs with Zod schema validation
- Multi-provider support (OpenAI, Anthropic, Google, etc.)
- Edge runtime compatibility for fast global responses
Choose alternatives when:
- You are building a non-JavaScript/TypeScript application
- You need a provider's own SDK for specialized features
- You are not streaming (a simple REST API may suffice)
- You are building a complex agent framework (consider LangChain or AutoGPT)
Critical Patterns
Pattern 1: Streaming with Error Handling
typescript
// ✅ Good: Proper error handling and loading states
'use client';
import { useChat } from 'ai/react';
import { toast } from 'sonner'; // or whichever toast library your app uses

export function Chat() {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    isLoading,
    error,
    reload,
  } = useChat({
    api: '/api/chat',
    onError: (error) => {
      console.error('Chat error:', error);
      toast.error('Failed to send message');
    },
    onFinish: (message) => {
      console.log('Message completed:', message);
    },
  });

  return (
    <div>
      <div className="messages">
        {messages.map((message) => (
          <div key={message.id}>
            <strong>{message.role}:</strong> {message.content}
          </div>
        ))}
        {isLoading && <div className="loading">AI is thinking...</div>}
        {error && (
          <div className="error">
            <p>Error: {error.message}</p>
            <button onClick={() => reload()}>Retry</button>
          </div>
        )}
      </div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          disabled={isLoading}
          placeholder="Type a message..."
        />
        <button type="submit" disabled={isLoading || !input.trim()}>
          Send
        </button>
      </form>
    </div>
  );
}

// ❌ Bad: No error handling, no loading states
export function BadChat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map((m) => <div key={m.id}>{m.content}</div>)}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button>Send</button>
      </form>
    </div>
  );
}
Why: Error handling improves UX; loading states provide feedback; retry gives users control.
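The `reload()` helper covers manual retries; for transient network failures you may also want automatic retries with backoff. A generic helper along these lines (an illustrative utility, not part of the AI SDK) can wrap any async call, such as a fetch to your chat endpoint:

```typescript
// Retry an async operation with exponential backoff between attempts.
// Illustrative utility; not part of the Vercel AI SDK.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait baseDelayMs, 2x, 4x, ... before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Tune the attempt count and base delay to your provider's rate limits; backing off exponentially avoids hammering an endpoint that is already struggling.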
Pattern 2: Tool Calling with UI Feedback
typescript
// ✅ Good: Show tool calls in UI, handle execution properly
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
    tools: {
      getWeather: tool({
        description: 'Get current weather for a location',
        parameters: z.object({
          location: z.string().describe('City and state, e.g. San Francisco, CA'),
          unit: z.enum(['celsius', 'fahrenheit']).default('fahrenheit'),
        }),
        execute: async ({ location, unit }) => {
          // fetchWeatherAPI is a placeholder for your own weather-service call
          const weather = await fetchWeatherAPI(location, unit);
          return {
            location,
            temperature: weather.temp,
            conditions: weather.conditions,
            unit,
          };
        },
      }),
    },
    maxSteps: 5,
  });
  return result.toDataStreamResponse();
}

// Client: Display tool calls
'use client';
import { useChat } from 'ai/react';

export function ChatWithTools() {
  const { messages } = useChat({ api: '/api/chat' });
  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <div>{message.content}</div>
          {message.toolInvocations?.map((tool, i) => (
            <div key={i} className="tool-call">
              {tool.state === 'call' && (
                <p>Calling {tool.toolName}...</p>
              )}
              {tool.state === 'result' && (
                <div>
                  <p>Used {tool.toolName}</p>
                  <pre>{JSON.stringify(tool.result, null, 2)}</pre>
                </div>
              )}
            </div>
          ))}
        </div>
      ))}
    </div>
  );
}
Why: Showing tool calls builds trust; users understand the AI's actions; debugging is easier.
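The same `toolInvocations` data can also drive a compact status line above the transcript. A sketch of a pure helper (the `'call'` and `'result'` state names match the render logic above; the SDK's full invocation type also carries a partial state while arguments stream in, omitted here for brevity):

```typescript
type ToolState = 'call' | 'result';

interface ToolStatus {
  state: ToolState;
  toolName: string;
}

// Build a one-line summary such as "running: getWeather | done: searchDocs".
function summarizeTools(invocations: ToolStatus[]): string {
  const pending = invocations.filter((t) => t.state === 'call').map((t) => t.toolName);
  const done = invocations.filter((t) => t.state === 'result').map((t) => t.toolName);
  const parts: string[] = [];
  if (pending.length > 0) parts.push(`running: ${pending.join(', ')}`);
  if (done.length > 0) parts.push(`done: ${done.join(', ')}`);
  return parts.join(' | ');
}
```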
Pattern 3: Structured Outputs with Validation
typescript
// ✅ Good: Use generateObject for structured data
// app/api/extract/route.ts
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const RecipeSchema = z.object({
  name: z.string().describe('Recipe name'),
  ingredients: z.array(
    z.object({
      name: z.string(),
      amount: z.string(),
      unit: z.string().optional(),
    })
  ),
  steps: z.array(z.string()).min(1),
  prepTime: z.number().describe('Prep time in minutes'),
  cookTime: z.number().describe('Cook time in minutes'),
  servings: z.number().positive(),
  difficulty: z.enum(['easy', 'medium', 'hard']),
});

export async function POST(req: Request) {
  const { prompt } = await req.json();
  try {
    const { object } = await generateObject({
      model: openai('gpt-4-turbo'),
      schema: RecipeSchema,
      prompt: `Extract recipe information: ${prompt}`,
    });
    return Response.json({ success: true, data: object });
  } catch (error) {
    return Response.json(
      { success: false, error: 'Failed to extract recipe' },
      { status: 500 }
    );
  }
}
Why: generateObject ensures valid output; the Zod schema provides type safety; parsing errors are reduced.
For complete streaming and hook examples, see references/streaming.md.
Anti-Patterns
❌ Anti-Pattern 1: Not Streaming When Beneficial
Don't do this:
typescript
// ❌ Using generateText instead of streamText
export async function POST(req: Request) {
  const { messages } = await req.json();
  const { text } = await generateText({
    model: openai('gpt-4-turbo'),
    messages,
  });
  return Response.json({ text }); // User waits for the entire response
}
Why it's wrong: Poor UX; long wait times; higher perceived latency.
Do this instead:
typescript
// ✅ Stream for better UX
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });
  return result.toDataStreamResponse();
}
❌ Anti-Pattern 2: Exposing API Keys Client-Side
Don't do this:
typescript
// ❌ Using OpenAI directly from client
'use client';
import OpenAI from 'openai';

export function Chat() {
  const openai = new OpenAI({
    apiKey: process.env.NEXT_PUBLIC_OPENAI_KEY, // EXPOSED!
  });
}
Why it's wrong: API keys exposed in the browser; security risk; quota abuse.
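The exposure is mechanical: Next.js inlines any environment variable prefixed `NEXT_PUBLIC_` into the client bundle at build time, so such a key is public by definition. A small startup guard (an illustrative helper, not part of the SDK) can fail fast on this class of mistake:

```typescript
// Reject public-prefixed names and missing values for server-only secrets.
// Illustrative helper; call it once at server startup, passing process.env.
function requireServerSecret(
  name: string,
  env: Record<string, string | undefined>,
): string {
  if (name.startsWith('NEXT_PUBLIC_')) {
    throw new Error(`${name} is client-exposed by convention; use a server-only name`);
  }
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```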
Do this instead:
typescript
// ✅ Use API routes (server-side)
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });
  return result.toDataStreamResponse();
}

// Client calls your API
'use client';
import { useChat } from 'ai/react';

export function Chat() {
  const { messages } = useChat({ api: '/api/chat' });
  // API key never exposed
}
❌ Anti-Pattern 3: No Token/Cost Limits
Don't do this:
typescript
// ❌ No limits on token usage
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
    // No maxTokens, no checks
  });
  return result.toDataStreamResponse();
}
Why it's wrong: Runaway costs; unpredictable bills.
Do this instead:
typescript
// ✅ Set limits and validate input
export async function POST(req: Request) {
  const { messages } = await req.json();
  if (messages.length > 50) {
    return Response.json({ error: 'Too many messages' }, { status: 400 });
  }
  const totalLength = messages.reduce((sum, msg) => sum + msg.content.length, 0);
  if (totalLength > 10000) {
    return Response.json({ error: 'Messages too long' }, { status: 400 });
  }
  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
    maxTokens: 1000, // Limit response length
  });
  return result.toDataStreamResponse();
}
For more anti-patterns and solutions, see references/best-practices.md.
Quick Reference
React Hooks
typescript
// useChat
const { messages, input, handleInputChange, handleSubmit, isLoading, error, reload } = useChat({
  api: '/api/chat',
  initialMessages: [],
  onFinish: (message) => {},
  onError: (error) => {},
});

// useCompletion
const { completion, input, handleInputChange, handleSubmit, isLoading } = useCompletion({
  api: '/api/completion',
});
Server Functions
typescript
// streamText
const result = streamText({
  model: openai('gpt-4-turbo'),
  messages: [],
  system: 'System message',
  tools: {},
  maxSteps: 5,
  temperature: 0.7,
  maxTokens: 1000,
});
return result.toDataStreamResponse();

// generateObject
const { object } = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({...}),
  prompt: 'Extract data',
});
Learn More
- Streaming & Hooks: references/streaming.md - Complete useChat, useCompletion examples
- Advanced Patterns: references/advanced.md - Multiple providers, abort requests, useAssistant
- Full Examples: references/examples.md - Tool calls UI, structured outputs, error handling
- Best Practices: references/best-practices.md - Security, cost optimization