Vercel AI SDK v6 Implementation Guide
When to Use This Skill
Use this skill when:
- Implementing AI chat interfaces with the useChat hook
- Creating API routes that generate or stream AI responses
- Building agentic applications with ToolLoopAgent
- Adding tool calling / function calling capabilities
- Generating structured output with Output.object(), Output.array(), etc.
- Generating text embeddings for semantic search or RAG
- Migrating from AI SDK v5 to v6
- Integrating Model Context Protocol (MCP) servers
- Implementing middleware for caching, logging, or guardrails
- Building workflow patterns (sequential, parallel, routing, etc.)
- Working with streaming responses or message persistence
Structured Implementation Workflow
<workflow>
<step id="1" name="verify-requirements">
<description>Understand the task requirements</description>
<actions>
- Identify what AI functionality is needed (chat, generation, agents, tools, embeddings)
- Determine if client-side (useChat) or server-side (API route) implementation
- Check if streaming or non-streaming response is required
- Verify model provider (Anthropic, OpenAI, etc.)
- Determine if structured output is needed (Output.object, Output.array, etc.)
</actions>
</step>
<step id="2" name="check-documentation">
<description>Verify current API patterns if uncertain</description>
<actions>
- Use WebFetch to check https://ai-sdk.dev/docs/ if API patterns are unclear
- Confirm model specification format for the provider
- Verify function signatures for complex features
</actions>
</step>
<step id="3" name="implement">
<description>Implement using correct v6 patterns</description>
<actions>
- Use provider function model specification: anthropic('claude-sonnet-4-5')
- For chat: use sendMessage (not append), parts-based messages
- For tools: MUST import and use tool() helper from 'ai', MUST use inputSchema (NOT parameters), MUST use zod
- For structured output: use Output.object(), Output.array(), Output.choice(), Output.json()
- For streaming: use toUIMessageStreamResponse() or toTextStreamResponse()
- For agents: use ToolLoopAgent class with createAgentUIStreamResponse()
- For embeddings: use provider.textEmbeddingModel()
</actions>
</step>
<step id="4" name="verify-types">
<description>Ensure TypeScript types are correct</description>
<actions>
- Check for proper imports from 'ai' package
- Verify message types (UIMessage for useChat)
- Ensure tool parameter types are inferred correctly
- Add explicit types for async functions
</actions>
</step>
<step id="5" name="install-dependencies">
<description>Install any missing dependencies with the CORRECT package manager</description>
<actions>
- **CRITICAL: Detect which package manager the project uses FIRST**
* Check for lockfiles: pnpm-lock.yaml → use pnpm, package-lock.json → use npm, yarn.lock → use yarn, bun.lockb → use bun
* If pnpm-lock.yaml exists, you MUST use pnpm (NOT npm!)
- Check if all imported packages are installed
- If build fails with "Module not found", identify the package name from the error
- Add the package to package.json dependencies
- Install using the CORRECT package manager:
* If pnpm-lock.yaml exists: `pnpm install [package]` or `pnpm add [package]`
* If package-lock.json exists: `npm install [package]`
* If yarn.lock exists: `yarn add [package]`
* If bun.lockb exists: `bun install [package]` or `bun add [package]`
- Re-run build to verify installation succeeded
</actions>
<critical>
**NEVER use the wrong package manager!**
- Using npm when the project uses pnpm creates package-lock.json alongside pnpm-lock.yaml
- This causes dependency version mismatches and breaks the build
- ALWAYS check for existing lockfiles and use the matching package manager
</critical>
</step>
<step id="6" name="verify-build">
<description>Run build and fix ALL errors until it passes</description>
<actions>
- Run: npm run build (or bun run build)
- If build fails, read the FULL error message carefully
- Common fixes:
* "Module not found" → Install the package (go back to step 5)
* Type errors → Fix TypeScript types
* Config errors → Check next.config.js/ts
* Framework errors → Research the error, try different approaches
- Apply fix and re-run build
- REPEAT until build passes
</actions>
<critical>
NEVER stop at "build fails but code is correct"
NEVER blame "environment" or "framework bugs" without debugging
KEEP ITERATING until the build PASSES
If you encounter a framework error, research it, try workarounds, debug it
DO NOT give up until you have a passing build
</critical>
</step>
<step id="7" name="verify-tests">
<description>Run tests and debug ALL failures until they pass</description>
<actions>
- Run: npm run test (or bun run test)
- If tests fail, read the FULL test error message carefully
- Debug the failure:
* Check what the test expects vs what you're providing
* Look at test assertions
* Verify your implementation matches the requirements
* Check for type mismatches or missing properties
- Fix the implementation
- Re-run tests
- REPEAT until all tests pass
</actions>
<critical>
NEVER accept test failures without debugging
NEVER dismiss failures as "test framework bugs" without investigation
Each test failure is telling you something - LISTEN to it
KEEP DEBUGGING until all tests pass
Test failures are NOT acceptable - they mean your implementation is incomplete
</critical>
</step>
<step id="8" name="final-verification">
<description>Verify EVERYTHING passes</description>
<actions>
- Run: npm run build && npm run lint && npm run test
- All three must pass: ✅ Build, ✅ Lint, ✅ Tests
- If any fail, go back and fix them
- Do not consider the task complete until ALL checks pass
</actions>
<critical>
The task is NOT complete until:
- ✅ Build passes
- ✅ Lint passes
- ✅ Tests pass
</critical>
</step>
</workflow>
<critical>
NEVER accept "Module not found" errors as environment issues
YOU must install the required packages with the CORRECT package manager
Common packages needed:
- ai (core AI SDK)
- @ai-sdk/openai (OpenAI provider)
- @ai-sdk/anthropic (Anthropic provider)
- @ai-sdk/mcp (MCP integration)
- @modelcontextprotocol/sdk (MCP client SDK)
- zod (for tool schemas)
"Code is correct" is NOT enough
You must achieve FULL PASSING status
This is what it means to be an autonomous agent
</critical>
⚠️ AUTONOMOUS AGENT MINDSET
You are not just writing code - you are COMPLETING TASKS AUTONOMOUSLY.
This means:
- ✅ Write correct implementation
- ✅ Install any required dependencies
- ✅ Run build and fix ALL errors
- ✅ Run tests and debug ALL failures
- ✅ Iterate until EVERYTHING passes
- ✅ Never make excuses or give up
Common Failure Patterns to AVOID
❌ WRONG: "The code is correct, but the package isn't installed - that's an environment issue"
✅ CORRECT: "Build failed due to missing package - installing it now with npm install [package]"
❌ WRONG: "Tests pass but build fails - not my problem"
✅ CORRECT: "Build is failing - debugging the error and fixing it now"
❌ WRONG: "There's a framework bug, can't fix it"
✅ CORRECT: "Framework error detected - researching the issue, trying workarounds, debugging until I find a solution"
❌ WRONG: "The implementation is complete" (with failing tests)
✅ CORRECT: "Tests are failing - debugging and fixing until they all pass"
Dependency Installation Workflow
When you encounter "Module not found" errors:

1. Detect the package manager FIRST - check for lockfiles:

```bash
ls -la | grep -E "lock"  # Look for: pnpm-lock.yaml, package-lock.json, yarn.lock, bun.lockb
```

2. Identify the package from the import statement:
   - Error: Cannot find module '@ai-sdk/anthropic'
   - Import: import { anthropic } from '@ai-sdk/anthropic'
   - Package needed: @ai-sdk/anthropic

3. Install with the CORRECT package manager:

```bash
# If pnpm-lock.yaml exists (MOST COMMON for Next.js evals):
pnpm install @ai-sdk/anthropic  # or pnpm add @ai-sdk/anthropic

# If package-lock.json exists:
npm install @ai-sdk/anthropic

# If yarn.lock exists:
yarn add @ai-sdk/anthropic

# If bun.lockb exists:
bun install @ai-sdk/anthropic
```

4. Re-run build to verify:

```bash
npm run build  # or pnpm run build, yarn build, bun run build
```

5. Fix any new errors that appear.
⚠️ CRITICAL WARNING:
Using the WRONG package manager (e.g., npm when the project uses pnpm) will:
- Create a second conflicting lockfile
- Install different versions of dependencies
- Cause dependency version mismatches
- Break the build with cryptic errors like "Cannot read properties of null"
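The lockfile-to-package-manager mapping above can be sketched as a small shell helper. This is an illustrative sketch, not part of the SDK; the function name and the catch-all default to npm are assumptions.

```shell
# Sketch: map a lockfile name to the matching "add dependency" command.
pm_for_lockfile() {
  case "$1" in
    pnpm-lock.yaml) echo "pnpm add" ;;
    yarn.lock) echo "yarn add" ;;
    bun.lockb) echo "bun add" ;;
    package-lock.json|*) echo "npm install" ;;  # npm, or fallback when unknown
  esac
}

pm_for_lockfile pnpm-lock.yaml
```

In a real project you would check which lockfile actually exists on disk before choosing the command.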
Build Error Debugging Workflow
构建错误调试工作流
When build fails:
- Read the FULL error message - don't skim it
- Identify the root cause:
- Module not found → Install package
- Type error → Fix types
- Config error → Check config files
- Next.js error → Research, try different approaches
- Apply the fix
- Re-run build
- Repeat until build passes
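As a rough illustration of the triage step above, the first two branches can be expressed as a tiny shell function. The match patterns are assumptions; real build output varies by toolchain.

```shell
# Sketch: crude classification of a build error line into a next action.
classify_error() {
  case "$1" in
    *"Module not found"*) echo "install-package" ;;   # go back to dependency install
    *"Type error"*|*TS[0-9]*) echo "fix-types" ;;     # TypeScript diagnostics like TS2322
    *) echo "investigate" ;;                          # config/framework errors need research
  esac
}

classify_error "Module not found: Can't resolve '@ai-sdk/anthropic'"
```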
Test Failure Debugging Workflow
测试失败调试工作流
When tests fail:
- Read the FULL test error - understand what's expected
- Compare expected vs actual behavior
- Check your implementation against test assertions
- Fix the issue in your code
- Re-run tests
- Repeat until all tests pass
Success Criteria
Task is ONLY complete when:
- ✅ Build passes: npm run build
- ✅ Lint passes: npm run lint
- ✅ Tests pass: npm run test

NEVER stop at "code is correct" - achieve FULL PASSING status!
⚠️ CRITICAL v6 CHANGES: Structured Output
In v6, generateObject and streamObject are DEPRECATED. Use generateText / streamText with Output helpers instead.
❌ WRONG - Deprecated v5 Pattern
```typescript
// DO NOT USE - DEPRECATED in v6
import { generateObject } from "ai";

const result = await generateObject({
  model: anthropic("claude-sonnet-4-5"),
  schema: z.object({
    sentiment: z.enum(["positive", "neutral", "negative"]),
  }),
  prompt: "Analyze sentiment",
});
```
✅ CORRECT - v6 Output Pattern
```typescript
import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const { output } = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  output: Output.object({
    schema: z.object({
      sentiment: z.enum(["positive", "neutral", "negative"]),
      topics: z.array(z.string()),
    }),
  }),
  prompt: "Analyze this feedback...",
});

// Access typed output
console.log(output.sentiment); // 'positive' | 'neutral' | 'negative'
console.log(output.topics); // string[]
```
Output Helper Types
| Helper | Purpose |
|---|---|
| Output.object() | Generate typed object |
| Output.array() | Generate typed array |
| Output.choice() | Generate enum value |
| Output.json() | Unstructured JSON |
⚠️ CRITICAL: Tool Calling API - MUST USE tool() Helper
When implementing tool calling, you MUST use the tool() helper function from the 'ai' package.
❌ WRONG - Plain Object (WILL CAUSE BUILD ERROR)
```typescript
// DO NOT DO THIS - This pattern is INCORRECT
import { z } from 'zod';

tools: {
  myTool: {
    description: 'My tool',
    parameters: z.object({...}), // ❌ WRONG - "parameters" doesn't exist in v6
    execute: async ({...}) => {...},
  }
}
```

This will fail with:

```
Type '{ description: string; parameters: ... }' is not assignable to type '{ inputSchema: FlexibleSchema<any>; ... }'
```
✅ CORRECT - Use tool() Helper (REQUIRED)
```typescript
// ALWAYS DO THIS - This is the ONLY correct pattern
import { tool } from 'ai'; // ⚠️ MUST import tool
import { z } from 'zod';

tools: {
  myTool: tool({ // ⚠️ MUST wrap with tool()
    description: 'My tool',
    inputSchema: z.object({...}), // ⚠️ MUST use "inputSchema" (not "parameters")
    execute: async ({...}) => {...},
  }),
}
```
Tool Calling Checklist
Before implementing any tool, verify:
- [ ] Imported tool from the 'ai' package: import { tool } from 'ai';
- [ ] Wrapped the tool definition with tool({ ... })
- [ ] Used the inputSchema property (NOT parameters)
- [ ] Used a zod schema: z.object({ ... })
- [ ] Defined an async execute callback
- [ ] Added a description string for the tool
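To see why the property name matters, here is a standalone TypeScript sketch of the shape the helper enforces. The types are hand-rolled for illustration (NOT the SDK's real ToolDef or tool() signatures): an object keyed `parameters` simply does not match a shape that requires `inputSchema`, so the compiler rejects it.

```typescript
// Hand-rolled sketch of the tool shape (illustrative, not the SDK's types).
interface ToolDef<I> {
  description: string;
  inputSchema: I; // a zod schema in the real SDK; a plain value here for illustration
  execute: (input: I) => Promise<unknown>;
}

// Identity wrapper, mimicking how tool() pins the definition to the required shape.
function tool<I>(def: ToolDef<I>): ToolDef<I> {
  return def;
}

const myTool = tool({
  description: "demo tool",
  inputSchema: { query: "" },
  execute: async (input) => input,
});
```

Swapping `inputSchema` for `parameters` in the object literal above produces a compile error, which is the same failure mode the checklist guards against.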
⚠️ NEW in v6: ToolLoopAgent for Agentic Applications
Agent Definition
```typescript
import { ToolLoopAgent, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const myAgent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-5"),
  instructions: "You are a helpful assistant that can search and analyze data.",
  tools: {
    getData: tool({
      description: "Fetch data from API",
      inputSchema: z.object({
        query: z.string(),
      }),
      execute: async ({ query }) => {
        // Implement data fetching
        return { result: "data for " + query };
      },
    }),
    analyzeData: tool({
      description: "Analyze fetched data",
      inputSchema: z.object({
        data: z.string(),
      }),
      execute: async ({ data }) => {
        return { analysis: "Analysis of " + data };
      },
    }),
  },
  stopWhen: stepCountIs(20), // Stop after 20 steps max
});

// Non-streaming execution
const { text, toolCalls } = await myAgent.generate({
  prompt: "Find and analyze user data",
});

// Streaming execution
const stream = myAgent.stream({ prompt: "Find and analyze user data" });
for await (const chunk of stream) {
  // Handle streaming chunks
}
```
Agent API Route Integration
```typescript
// app/api/agent/route.ts
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/agents/my-agent";

export async function POST(request: Request) {
  const { messages } = await request.json();
  return createAgentUIStreamResponse({
    agent: myAgent,
    uiMessages: messages,
  });
}
```
Agent Configuration Options
| Parameter | Purpose | Example |
|---|---|---|
| model | AI model to use | anthropic('claude-sonnet-4-5') |
| instructions | System prompt | |
| tools | Available tools | |
| stopWhen | Termination condition | stepCountIs(20) |
| toolChoice | Tool usage mode | |
| output | Structured output schema | |
| prepareStep | Dynamic per-step adjustments | Function returning step config |
| | Runtime options injection | Async function for RAG, etc. |
⚠️ CRITICAL: Common v5 to v6 Breaking Changes
1. useChat Hook Changes
❌ WRONG (v5 pattern):

```typescript
const { messages, input, setInput, append } = useChat();

// Sending message
append({ content: text, role: "user" });
```

✅ CORRECT (v6 pattern):
```typescript
const { messages, sendMessage, status, addToolOutput } = useChat();
const [input, setInput] = useState('');

// Sending message
sendMessage({ text: input });

// New in v6: Handle tool outputs
addToolOutput({ toolCallId: 'xxx', result: { ... } });
```
2. Message Structure
❌ WRONG (v5 simple content):

```typescript
<div>{message.content}</div>
```

✅ CORRECT (v6 parts-based):

```typescript
<div>
  {message.parts.map((part, index) =>
    part.type === 'text' ? <span key={index}>{part.text}</span> : null
  )}
</div>
```
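The parts-based rendering above boils down to narrowing a discriminated union on `type`. A standalone sketch with a hand-rolled part union (the real SDK's UIMessage parts are richer) shows the same logic without JSX:

```typescript
// Minimal part union for illustration - not the SDK's actual types.
type Part =
  | { type: "text"; text: string }
  | { type: "tool-call"; name: string };

function renderText(parts: Part[]): string {
  // Keep only the text parts and concatenate them,
  // as the JSX above does with <span> elements.
  return parts
    .filter((p): p is Extract<Part, { type: "text" }> => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```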
3. Response Methods
❌ WRONG (v5):

```typescript
return result.toDataStreamResponse();
```

✅ CORRECT (v6):

```typescript
return result.toUIMessageStreamResponse();
```
4. Model Specification
```typescript
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";

// Use provider functions (direct provider access)
model: anthropic("claude-sonnet-4-5");
model: anthropic("claude-opus-4-5");
model: anthropic("claude-haiku-4-5");
model: openai("gpt-4o");
model: openai("gpt-4o-mini");
```
5. Vercel AI Gateway
Purpose: Use Vercel AI Gateway for unified model access, rate limiting, caching, and observability across multiple providers.

Import:

```typescript
import { gateway } from "ai";
```

Available Anthropic Models via Gateway:

```typescript
model: gateway("anthropic/claude-sonnet-4-5");
model: gateway("anthropic/claude-haiku-4-5");
model: gateway("anthropic/claude-opus-4-5");
```

When to Use Gateway:
- Production applications requiring rate limiting and caching
- Multi-provider applications needing a unified interface
- Applications requiring observability and analytics
- When you want automatic retries and error handling

When to Use Direct Provider:
- Development/testing environments
- When you need provider-specific features not available via the gateway
- When you want direct control over API calls

Example:

```typescript
import { generateText, gateway } from "ai";

const result = await generateText({
  model: gateway("anthropic/claude-sonnet-4-5"),
  prompt: "Hello, world!",
});
```

Comparison:

```typescript
// Option 1: Direct provider
import { anthropic } from "@ai-sdk/anthropic";
model: anthropic("claude-sonnet-4-5");

// Option 2: Gateway (recommended for production)
import { gateway } from "ai";
model: gateway("anthropic/claude-sonnet-4-5");
```
Core API Reference
1. generateText - Non-Streaming Text Generation
Purpose: Generate text for non-interactive use cases (email drafts, summaries, agents with tools).

Signature:

```typescript
import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Your prompt here',
  system: 'Optional system message',
  tools?: { ... },
  maxSteps?: 5,
  output?: Output.object({ schema: z.object({...}) }),
});
```

Return Value:

```typescript
{
  text: string;           // Generated text output
  output?: T;             // Typed structured output (if Output specified)
  toolCalls: ToolCall[];  // Tool invocations made
  finishReason: string;   // Why generation stopped
  usage: TokenUsage;      // Token consumption
  response: RawResponse;  // Raw provider response
  warnings: Warning[];    // Provider-specific alerts
}
```

Example:

```typescript
// app/api/generate/route.ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function GET() {
  const result = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: "Why is the sky blue?",
  });
  return Response.json({ text: result.text });
}
```
2. streamText - Streaming Text Generation
Purpose: Stream responses for interactive chat applications.

Signature:

```typescript
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = streamText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Your prompt here',
  system: 'Optional system message',
  messages?: ModelMessage[],
  tools?: { ... },
  onChunk?: (chunk) => { ... },
  onStepFinish?: (step) => { ... },
  onFinish?: async (result) => { ... },
  onError?: async (error) => { ... },
});
```

Return Methods:

```typescript
// For chat applications with useChat hook
result.toUIMessageStreamResponse();

// For simple text streaming
result.toTextStreamResponse();
```

Example - Chat API Route:

```typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```
3. useChat Hook - Client-Side Chat Interface
Purpose: Build interactive chat UIs with streaming support.
Signature:
typescript
import { useChat } from '@ai-sdk/react';
const {
messages, // Array of UIMessage with parts-based structure
sendMessage, // Function to send messages (replaces append)
status, // 'submitted' | 'streaming' | 'ready' | 'error'
stop, // Abort current streaming
regenerate, // Reprocess last message
setMessages, // Manually modify history
error, // Error object if request fails
clearError, // Clear error state
addToolOutput, // Submit tool results (NEW in v6)
resumeStream, // Resume interrupted stream (NEW in v6)
} = useChat({
api: '/api/chat',
id?: 'chat-id',
messages?: initialMessages,
onToolCall?: async (toolCall) => { ... },
onFinish?: (message) => { ... },
onError?: (error) => { ... },
sendAutomaticallyWhen?: (messages) => boolean,
resume?: true,
});Complete Example:
typescript
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function ChatPage() {
const { messages, sendMessage, status, addToolOutput } = useChat({
onToolCall: async ({ toolCall }) => {
// Handle client-side tool execution
if (toolCall.name === 'confirm') {
const result = await showConfirmDialog(toolCall.args);
addToolOutput({ toolCallId: toolCall.id, result });
}
},
});
const [input, setInput] = useState('');
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim()) return;
sendMessage({ text: input });
setInput('');
};
return (
<div>
<div>
{messages.map((message) => (
<div key={message.id}>
<strong>{message.role}:</strong>
{message.parts.map((part, index) => {
switch (part.type) {
case 'text':
return <span key={index}>{part.text}</span>;
case 'tool-call':
return <div key={index}>Tool: {part.name}</div>;
default:
return null;
}
})}
</div>
))}
</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type a message..."
disabled={status === 'streaming'}
/>
<button type="submit" disabled={status === 'streaming'}>
Send
</button>
</form>
</div>
);
}
用途: 构建支持流式传输的交互式聊天UI。
签名:
typescript
import { useChat } from '@ai-sdk/react';
const {
messages, // 基于parts结构的UIMessage数组
sendMessage, // 发送消息的函数(替代append)
status, // 'submitted' | 'streaming' | 'ready' | 'error'
stop, // 中止当前流式传输
regenerate, // 重新处理最后一条消息
setMessages, // 手动修改消息历史
error, // 请求失败时的错误对象
clearError, // 清除错误状态
addToolOutput, // 提交工具结果(v6新增)
resumeStream, // 恢复中断的流式传输(v6新增)
} = useChat({
api: '/api/chat',
id?: 'chat-id',
messages?: initialMessages,
onToolCall?: async (toolCall) => { ... },
onFinish?: (message) => { ... },
onError?: (error) => { ... },
sendAutomaticallyWhen?: (messages) => boolean,
resume?: true,
});
完整示例:
typescript
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function ChatPage() {
const { messages, sendMessage, status, addToolOutput } = useChat({
onToolCall: async ({ toolCall }) => {
// 处理客户端工具执行
if (toolCall.name === 'confirm') {
const result = await showConfirmDialog(toolCall.args);
addToolOutput({ toolCallId: toolCall.id, result });
}
},
});
const [input, setInput] = useState('');
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim()) return;
sendMessage({ text: input });
setInput('');
};
return (
<div>
<div>
{messages.map((message) => (
<div key={message.id}>
<strong>{message.role}:</strong>
{message.parts.map((part, index) => {
switch (part.type) {
case 'text':
return <span key={index}>{part.text}</span>;
case 'tool-call':
return <div key={index}>Tool: {part.name}</div>;
default:
return null;
}
})}
</div>
))}
</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type a message..."
disabled={status === 'streaming'}
/>
<button type="submit" disabled={status === 'streaming'}>
Send
</button>
</form>
</div>
);
}
4. Tool Calling / Function Calling
4. 工具调用/函数调用
Purpose: Enable AI models to call functions with structured parameters.
Defining Tools:
typescript
import { tool } from "ai";
import { z } from "zod";
const weatherTool = tool({
description: "Get the weather in a location",
inputSchema: z.object({
location: z.string().describe("The location to get the weather for"),
unit: z.enum(["C", "F"]).describe("Temperature unit"),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
execute: async ({ location, unit }) => {
// Fetch or mock weather data
return {
temperature: 24,
condition: "Sunny",
};
},
});
Using Tools with generateText/streamText:
typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
tools: {
getWeather: tool({
description: "Get the weather for a location",
inputSchema: z.object({
city: z.string().describe("The city to get the weather for"),
unit: z
.enum(["C", "F"])
.describe("The unit to display the temperature in"),
}),
execute: async ({ city, unit }) => {
// API call or mock data
return `It is currently 24°${unit} and Sunny in ${city}!`;
},
}),
},
toolChoice: "auto", // 'auto' | 'required' | 'none' | { type: 'tool', toolName: 'xxx' }
});
return result.toUIMessageStreamResponse();
}
Multi-Step Tool Calling:
typescript
const result = await generateText({
model: anthropic("claude-sonnet-4-5"),
tools: {
weather: weatherTool,
search: searchTool,
},
prompt: "What is the weather in San Francisco and find hotels there?",
stopWhen: stepCountIs(5), // Allow up to 5 tool call steps
});
用途: 让AI模型调用带结构化参数的函数。
定义工具:
typescript
import { tool } from "ai";
import { z } from "zod";
const weatherTool = tool({
description: "Get the weather in a location",
inputSchema: z.object({
location: z.string().describe("The location to get the weather for"),
unit: z.enum(["C", "F"]).describe("Temperature unit"),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
execute: async ({ location, unit }) => {
// 获取或模拟天气数据
return {
temperature: 24,
condition: "Sunny",
};
},
});
在generateText/streamText中使用工具:
typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
tools: {
getWeather: tool({
description: "Get the weather for a location",
inputSchema: z.object({
city: z.string().describe("The city to get the weather for"),
unit: z
.enum(["C", "F"])
.describe("The unit to display the temperature in"),
}),
execute: async ({ city, unit }) => {
// API调用或模拟数据
return `It is currently 24°${unit} and Sunny in ${city}!`;
},
}),
},
toolChoice: "auto", // 'auto' | 'required' | 'none' | { type: 'tool', toolName: 'xxx' }
});
return result.toUIMessageStreamResponse();
}
多步工具调用:
typescript
const result = await generateText({
model: anthropic("claude-sonnet-4-5"),
tools: {
weather: weatherTool,
search: searchTool,
},
prompt: "What is the weather in San Francisco and find hotels there?",
stopWhen: stepCountIs(5), // 允许最多5步工具调用
});
5. Text Embeddings
5. 文本嵌入
Purpose: Convert text into numerical vectors for semantic search, RAG, or similarity.
Signature:
typescript
import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
// Single embedding
const result = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: "Text to embed",
});
// Batch embeddings
const batchResult = await embedMany({
model: openai.textEmbeddingModel("text-embedding-3-small"),
values: ["Text 1", "Text 2", "Text 3"],
});
Return Value:
typescript
{
embedding: number[]; // Numerical array representing the text
usage: { tokens: number }; // Token consumption
response: RawResponse; // Raw provider response
}
Example - Embedding API Route:
typescript
// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { text } = await req.json();
const { embedding, usage } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: text,
});
return Response.json({ embedding, usage });
}
用途: 将文本转换为数值向量,用于语义搜索、RAG或相似度计算。
签名:
typescript
import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
// 单个嵌入
const result = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: "Text to embed",
});
// 批量嵌入
const batchResult = await embedMany({
model: openai.textEmbeddingModel("text-embedding-3-small"),
values: ["Text 1", "Text 2", "Text 3"],
});
返回值:
typescript
{
embedding: number[]; // 代表文本的数值数组
usage: { tokens: number }; // Token消耗
response: RawResponse; // 提供商原始响应
}
示例 - 嵌入API路由:
typescript
// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { text } = await req.json();
const { embedding, usage } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: text,
});
return Response.json({ embedding, usage });
}
6. Middleware
6. 中间件
Purpose: Intercept and modify model behavior for logging, caching, guardrails, RAG, etc.
Built-in Middleware:
typescript
import {
extractReasoningMiddleware,
simulateStreamingMiddleware,
defaultSettingsMiddleware,
wrapLanguageModel,
} from "ai";
// Extract reasoning from models like Claude
const modelWithReasoning = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});
// Apply default settings
const modelWithDefaults = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: defaultSettingsMiddleware({
temperature: 0.7,
maxOutputTokens: 1000,
}),
});
Custom Middleware:
typescript
import { LanguageModelMiddleware, wrapLanguageModel } from "ai";
// Logging middleware
const loggingMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
console.log("Request params:", params);
return params;
},
wrapGenerate: async ({ doGenerate, params }) => {
const result = await doGenerate();
console.log("Response:", result);
return result;
},
};
// Caching middleware
const cache = new Map<string, string>();
const cachingMiddleware: LanguageModelMiddleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params.prompt);
if (cache.has(cacheKey)) {
return { text: cache.get(cacheKey)! };
}
const result = await doGenerate();
cache.set(cacheKey, result.text);
return result;
},
};
// RAG middleware
const ragMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
const relevantDocs = await vectorSearch(params.prompt);
return {
...params,
prompt: `Context: ${relevantDocs}\n\nQuery: ${params.prompt}`,
};
},
};
// Apply multiple middleware
const enhancedModel = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});
用途: 拦截并修改模型行为,用于日志记录、缓存、防护、RAG等场景。
内置中间件:
typescript
import {
extractReasoningMiddleware,
simulateStreamingMiddleware,
defaultSettingsMiddleware,
wrapLanguageModel,
} from "ai";
// 从Claude等模型中提取推理过程
const modelWithReasoning = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});
// 应用默认设置
const modelWithDefaults = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: defaultSettingsMiddleware({
temperature: 0.7,
maxOutputTokens: 1000,
}),
});
自定义中间件:
typescript
import { LanguageModelMiddleware, wrapLanguageModel } from "ai";
// 日志中间件
const loggingMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
console.log("Request params:", params);
return params;
},
wrapGenerate: async ({ doGenerate, params }) => {
const result = await doGenerate();
console.log("Response:", result);
return result;
},
};
// 缓存中间件
const cache = new Map<string, string>();
const cachingMiddleware: LanguageModelMiddleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params.prompt);
if (cache.has(cacheKey)) {
return { text: cache.get(cacheKey)! };
}
const result = await doGenerate();
cache.set(cacheKey, result.text);
return result;
},
};
// RAG中间件
const ragMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
const relevantDocs = await vectorSearch(params.prompt);
return {
...params,
prompt: `Context: ${relevantDocs}\n\nQuery: ${params.prompt}`,
};
},
};
// 应用多个中间件
const enhancedModel = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});
7. Model Context Protocol (MCP) Integration
7. 模型上下文协议(MCP)集成
Purpose: Connect to external MCP servers for dynamic tool access.
Installation:
bash
bun add @ai-sdk/mcp @modelcontextprotocol/sdk
HTTP Transport (Production):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function POST(req: Request) {
const { prompt } = await req.json();
const httpTransport = new StreamableHTTPClientTransport(
new URL("https://mcp-server.example.com/mcp"),
{ headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
);
const mcpClient = await createMCPClient({ transport: httpTransport });
try {
const tools = await mcpClient.tools();
const response = streamText({
model: anthropic("claude-sonnet-4-5"),
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});
return response.toTextStreamResponse();
} catch (error) {
await mcpClient.close();
return new Response("Internal Server Error", { status: 500 });
}
}
Stdio Transport (Development):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp";
const stdioTransport = new Experimental_StdioMCPTransport({
command: "npx",
args: [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/dir",
],
});
const mcpClient = await createMCPClient({ transport: stdioTransport });
Key Points:
- Always close the client in onFinish and onError
- Tools are fetched dynamically with mcpClient.tools()
- Use HTTP/SSE for production, Stdio for development
- Multiple MCP clients can be combined by merging tool collections
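The last point relies on the tool collection being a plain record keyed by tool name, so collections combine with an object spread. A minimal sketch using stand-in records rather than live clients (the tool shapes below are illustrative, not the SDK's exact types):

```typescript
// Stand-ins for the records returned by two mcpClient.tools() calls;
// live clients are omitted, so these shapes are illustrative only.
const toolsFromServerA = { webSearch: { description: "Search the web" } };
const toolsFromServerB = { readFile: { description: "Read a file" } };

// Merge tool collections; later spreads override earlier ones on
// duplicate tool names, so order the clients deliberately.
const tools = { ...toolsFromServerA, ...toolsFromServerB };
```

The merged record is then passed as the `tools` option exactly like a hand-written tool set.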
用途: 连接外部MCP服务器以动态获取工具。
安装:
bash
bun add @ai-sdk/mcp @modelcontextprotocol/sdk
HTTP传输(生产环境):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function POST(req: Request) {
const { prompt } = await req.json();
const httpTransport = new StreamableHTTPClientTransport(
new URL("https://mcp-server.example.com/mcp"),
{ headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
);
const mcpClient = await createMCPClient({ transport: httpTransport });
try {
const tools = await mcpClient.tools();
const response = streamText({
model: anthropic("claude-sonnet-4-5"),
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});
return response.toTextStreamResponse();
} catch (error) {
await mcpClient.close();
return new Response("Internal Server Error", { status: 500 });
}
}
标准输入输出传输(开发环境):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp";
const stdioTransport = new Experimental_StdioMCPTransport({
command: "npx",
args: [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/dir",
],
});
const mcpClient = await createMCPClient({ transport: stdioTransport });
关键点:
- 务必在 onFinish 和 onError 中关闭客户端
- 通过 mcpClient.tools() 动态获取工具
- 生产环境使用HTTP/SSE,开发环境使用标准输入输出
- 可通过合并工具集合来组合多个MCP客户端
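Closing the client in both onFinish and onError (and again in the catch block) can attempt a double close; an idempotent wrapper — a sketch, not an SDK API — makes the repeated calls safe:

```typescript
// Hypothetical helper: wraps an async close() so repeated calls
// (from onFinish, onError, and a catch block) close only once.
function closeOnce(close: () => Promise<void>): () => Promise<void> {
  let closed = false;
  return async () => {
    if (closed) return; // subsequent calls are no-ops
    closed = true;
    await close();
  };
}

// Usage sketch: pass the wrapped closer to every callback.
let closeCount = 0;
const safeClose = closeOnce(async () => {
  closeCount++; // stands in for mcpClient.close()
});
```

Because the guard flag is set before the first await, even overlapping calls from concurrent callbacks close the underlying client once.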
8. Message Utilities
8. 消息工具函数
convertToModelMessages:
Converts UI messages from useChat into ModelMessage objects for AI functions.
typescript
import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
convertToModelMessages:
将 useChat 的UI消息转换为AI函数所需的 ModelMessage 对象。
typescript
import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
Workflow Patterns
工作流模式
1. Sequential (Chain) Pattern
1. 顺序(链式)模式
typescript
async function sequentialWorkflow(input: string) {
// Step 1: Generate initial content
const { text: draft } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Write marketing copy for: ${input}`,
});
// Step 2: Evaluate quality
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
score: z.number().min(1).max(10),
feedback: z.string(),
}),
}),
prompt: `Evaluate this copy: ${draft}`,
});
// Step 3: Improve if needed
if (evaluation.score < 7) {
const { text: improved } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
});
return improved;
}
return draft;
}
typescript
async function sequentialWorkflow(input: string) {
// 步骤1:生成初始内容
const { text: draft } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Write marketing copy for: ${input}`,
});
// 步骤2:评估质量
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
score: z.number().min(1).max(10),
feedback: z.string(),
}),
}),
prompt: `Evaluate this copy: ${draft}`,
});
// 步骤3:若需要则优化
if (evaluation.score < 7) {
const { text: improved } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
});
return improved;
}
return draft;
}
2. Parallel Pattern
2. 并行模式
typescript
async function parallelReview(code: string) {
const [securityReview, performanceReview, maintainabilityReview] =
await Promise.all([
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for security issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for performance issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for maintainability:\n\n${code}`,
}),
]);
return {
security: securityReview.text,
performance: performanceReview.text,
maintainability: maintainabilityReview.text,
};
}
typescript
async function parallelReview(code: string) {
const [securityReview, performanceReview, maintainabilityReview] =
await Promise.all([
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for security issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for performance issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for maintainability:\n\n${code}`,
}),
]);
return {
security: securityReview.text,
performance: performanceReview.text,
maintainability: maintainabilityReview.text,
};
}
3. Routing Pattern
3. 路由模式
typescript
async function routeQuery(query: string) {
// Classify the query
const { output: classification } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.choice({
choices: ["technical", "billing", "general"] as const,
}),
prompt: `Classify this customer query: ${query}`,
});
// Route to appropriate handler
switch (classification) {
case "technical":
return handleTechnicalQuery(query);
case "billing":
return handleBillingQuery(query);
default:
return handleGeneralQuery(query);
}
}
typescript
async function routeQuery(query: string) {
// 分类查询
const { output: classification } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.choice({
choices: ["technical", "billing", "general"] as const,
}),
prompt: `Classify this customer query: ${query}`,
});
// 路由到对应的处理函数
switch (classification) {
case "technical":
return handleTechnicalQuery(query);
case "billing":
return handleBillingQuery(query);
default:
return handleGeneralQuery(query);
}
}
4. Orchestrator-Worker Pattern
4. 编排器-工作者模式
typescript
async function implementFeature(requirement: string) {
// Orchestrator: Break down the task
const { output: plan } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
tasks: z.array(
z.object({
type: z.enum(["frontend", "backend", "database"]),
description: z.string(),
})
),
}),
}),
prompt: `Break down this feature into tasks: ${requirement}`,
});
// Workers: Execute tasks in parallel
const results = await Promise.all(
plan.tasks.map((task) =>
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Implement this ${task.type} task: ${task.description}`,
})
)
);
return results.map((r) => r.text);
}
typescript
async function implementFeature(requirement: string) {
// 编排器:分解任务
const { output: plan } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
tasks: z.array(
z.object({
type: z.enum(["frontend", "backend", "database"]),
description: z.string(),
})
),
}),
}),
prompt: `Break down this feature into tasks: ${requirement}`,
});
// 工作者:并行执行任务
const results = await Promise.all(
plan.tasks.map((task) =>
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Implement this ${task.type} task: ${task.description}`,
})
)
);
return results.map((r) => r.text);
}
5. Evaluator-Optimizer Pattern
5. 评估器-优化器模式
typescript
async function optimizeOutput(input: string, maxIterations = 3) {
let output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: input,
});
for (let i = 0; i < maxIterations; i++) {
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
isGood: z.boolean(),
improvements: z.array(z.string()),
}),
}),
prompt: `Evaluate this output: ${output.text}`,
});
if (evaluation.isGood) break;
output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
});
}
return output.text;
}
typescript
async function optimizeOutput(input: string, maxIterations = 3) {
let output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: input,
});
for (let i = 0; i < maxIterations; i++) {
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
isGood: z.boolean(),
improvements: z.array(z.string()),
}),
}),
prompt: `Evaluate this output: ${output.text}`,
});
if (evaluation.isGood) break;
output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
});
}
return output.text;
}
Message Part Types (v6)
消息部件类型(v6)
| Part Type | Description | Properties |
| --------- | ----------- | ---------- |
| text | Text content | text, isStreaming |
| tool-call | Tool invocation | name, args, state ('input-streaming' \| 'invoking' \| 'output' \| 'output-error') |
| reasoning | Model thinking | text, isStreaming |
| file | File attachment | mediaType, url or data |
| source | RAG source reference | url or documentId, title |
| step | Workflow boundary | Marks step boundaries |
| data | Custom data | Any custom payload |
| 部件类型 | 描述 | 属性 |
| --- | --- | --- |
| text | 文本内容 | text、isStreaming |
| tool-call | 工具调用 | name、args、state('input-streaming' \| 'invoking' \| 'output' \| 'output-error') |
| reasoning | 模型思考过程 | text、isStreaming |
| file | 文件附件 | mediaType、url 或 data |
| source | RAG源引用 | url 或 documentId、title |
| step | 工作流边界 | 标记步骤边界 |
| data | 自定义数据 | 任意自定义负载 |
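The parts array is a discriminated union on type, so a switch statement narrows each variant. The sketch below uses stand-in types whose property names follow the table above — they are assumptions for illustration, not the SDK's exact definitions:

```typescript
// Stand-in union modeled on the part-types table above; the real
// UIMessage part types come from the "ai" package and may differ.
type Part =
  | { type: "text"; text: string; isStreaming?: boolean }
  | { type: "tool-call"; name: string; args: unknown; state: string }
  | { type: "reasoning"; text: string; isStreaming?: boolean };

function renderPart(part: Part): string {
  switch (part.type) {
    case "text":
      return part.text; // narrowed to the text variant
    case "tool-call":
      return `Tool: ${part.name}`; // narrowed to the tool-call variant
    case "reasoning":
      return `Thinking: ${part.text}`; // narrowed to the reasoning variant
  }
}
```

Because every variant is handled, TypeScript verifies the switch is exhaustive; adding a new part type to the union turns any missed case into a compile error.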
TypeScript Best Practices
TypeScript最佳实践
Type Imports
类型导入
typescript
import type {
UIMessage, // Message type from useChat
ModelMessage, // Message type for model functions
ToolCall, // Tool call information
TokenUsage, // Token consumption data
} from "ai";
typescript
import type {
UIMessage, // useChat的消息类型
ModelMessage, // 模型函数的消息类型
ToolCall, // 工具调用信息
TokenUsage, // Token消耗数据
} from "ai";
Agent Type Safety
智能体类型安全
typescript
import type { InferAgentUIMessage } from "ai";
// Type-safe messages from agent
type MyAgentMessage = InferAgentUIMessage<typeof myAgent>;
typescript
import type { InferAgentUIMessage } from "ai";
// 智能体的类型安全消息
type MyAgentMessage = InferAgentUIMessage<typeof myAgent>;
Strongly Typed Tools
强类型工具
typescript
import { tool } from "ai";
import { z } from "zod";
// Tool helper infers execute parameter types
const myTool = tool({
description: "My tool",
inputSchema: z.object({
param1: z.string(),
param2: z.number(),
}),
outputSchema: z.object({
result: z.string(),
}),
execute: async ({ param1, param2 }) => {
// param1 is inferred as string
// param2 is inferred as number
return { result: "success" };
},
});
typescript
import { tool } from "ai";
import { z } from "zod";
// Tool助手会自动推断execute参数类型
const myTool = tool({
description: "My tool",
inputSchema: z.object({
param1: z.string(),
param2: z.number(),
}),
outputSchema: z.object({
result: z.string(),
}),
execute: async ({ param1, param2 }) => {
// param1被推断为string类型
// param2被推断为number类型
return { result: "success" };
},
});
Common Patterns
常见模式
Pattern 1: Simple Chat Application
模式1:简单聊天应用
Client (app/page.tsx):
typescript
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState('');
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
))}
<form onSubmit={(e) => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
}}>
<input value={input} onChange={(e) => setInput(e.target.value)} />
<button disabled={status === 'streaming'}>Send</button>
</form>
</div>
);
}
Server (app/api/chat/route.ts):
typescript
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
system: "You are a helpful assistant.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
客户端(app/page.tsx):
typescript
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState('');
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
))}
<form onSubmit={(e) => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
}}>
<input value={input} onChange={(e) => setInput(e.target.value)} />
<button disabled={status === 'streaming'}>Send</button>
</form>
</div>
);
}
服务端(app/api/chat/route.ts):
typescript
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
system: "You are a helpful assistant.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
Pattern 2: Chat with Structured Output
模式2:带结构化输出的聊天
typescript
import { streamText, convertToModelMessages, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
output: Output.object({
schema: z.object({
response: z.string(),
sentiment: z.enum(["positive", "neutral", "negative"]),
confidence: z.number().min(0).max(1),
}),
}),
});
return result.toUIMessageStreamResponse();
}
typescript
import { streamText, convertToModelMessages, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
output: Output.object({
schema: z.object({
response: z.string(),
sentiment: z.enum(["positive", "neutral", "negative"]),
confidence: z.number().min(0).max(1),
}),
}),
});
return result.toUIMessageStreamResponse();
}
Pattern 3: Agent with Multiple Tools
模式3:带多工具的智能体
typescript
import {
ToolLoopAgent,
tool,
stepCountIs,
createAgentUIStreamResponse,
} from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const researchAgent = new ToolLoopAgent({
model: anthropic("claude-sonnet-4-5"),
instructions:
"You are a research assistant that can search and analyze information.",
tools: {
webSearch: tool({
description: "Search the web for information",
inputSchema: z.object({
query: z.string().describe("Search query"),
}),
execute: async ({ query }) => {
// Implement web search
return { results: ["..."] };
},
}),
analyze: tool({
description: "Analyze collected information",
inputSchema: z.object({
data: z.string().describe("Data to analyze"),
}),
execute: async ({ data }) => {
return { analysis: "..." };
},
}),
summarize: tool({
description: "Summarize findings",
inputSchema: z.object({
findings: z.array(z.string()),
}),
execute: async ({ findings }) => {
return { summary: "..." };
},
}),
},
stopWhen: stepCountIs(10),
});
// API Route
export async function POST(request: Request) {
const { messages } = await request.json();
return createAgentUIStreamResponse({
agent: researchAgent,
uiMessages: messages,
});
}
typescript
import {
ToolLoopAgent,
tool,
stepCountIs,
createAgentUIStreamResponse,
} from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const researchAgent = new ToolLoopAgent({
model: anthropic("claude-sonnet-4-5"),
instructions:
"You are a research assistant that can search and analyze information.",
tools: {
webSearch: tool({
description: "Search the web for information",
inputSchema: z.object({
query: z.string().describe("Search query"),
}),
execute: async ({ query }) => {
// 实现网页搜索
return { results: ["..."] };
},
}),
analyze: tool({
description: "Analyze collected information",
inputSchema: z.object({
data: z.string().describe("Data to analyze"),
}),
execute: async ({ data }) => {
return { analysis: "..." };
},
}),
summarize: tool({
description: "Summarize findings",
inputSchema: z.object({
findings: z.array(z.string()),
}),
execute: async ({ findings }) => {
return { summary: "..." };
},
}),
},
stopWhen: stepCountIs(10),
});
// API路由
export async function POST(request: Request) {
const { messages } = await request.json();
return createAgentUIStreamResponse({
agent: researchAgent,
uiMessages: messages,
});
}
Pattern 4: Semantic Search with Embeddings
模式4:基于嵌入的语义搜索
typescript
// app/api/search/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { query } = await req.json();
// Generate embedding for search query
const { embedding } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: query,
});
// Use embedding for similarity search in vector database
// const results = await vectorDB.search(embedding);
return Response.json({ embedding, results: [] });
}
typescript
// app/api/search/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { query } = await req.json();
// 为搜索查询生成嵌入
const { embedding } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: query,
});
// 使用嵌入在向量数据库中进行相似度搜索
// const results = await vectorDB.search(embedding);
return Response.json({ embedding, results: [] });
}
Common Pitfalls and Solutions
常见陷阱与解决方案
Pitfall 1: Using Deprecated generateObject/streamObject
```typescript
// ❌ WRONG - Deprecated in v6
import { generateObject } from 'ai';

const result = await generateObject({
  schema: z.object({...}),
  prompt: '...',
});

// ✅ CORRECT - Use Output with generateText
import { generateText, Output } from 'ai';

const { output } = await generateText({
  output: Output.object({ schema: z.object({...}) }),
  prompt: '...',
});
```

Pitfall 2: NOT Using tool() Helper for Tools
```typescript
// ❌ WRONG - Plain object (WILL CAUSE BUILD FAILURE)
tools: {
  myTool: {
    description: 'My tool',
    parameters: z.object({...}), // ❌ Wrong property name
    execute: async ({...}) => {...},
  },
}

// ✅ CORRECT - Use tool() helper (REQUIRED)
import { tool } from 'ai';

tools: {
  myTool: tool({
    description: 'My tool',
    inputSchema: z.object({...}), // ⚠️ Use inputSchema, not parameters
    execute: async ({...}) => {...},
  }),
}
```

Pitfall 3: Using v5 useChat API in v6
```typescript
// ❌ WRONG - v5 pattern
const { input, setInput, append } = useChat();
append({ content: "Hello", role: "user" });

// ✅ CORRECT - v6 pattern
const { sendMessage } = useChat();
const [input, setInput] = useState("");
sendMessage({ text: "Hello" });
```

Pitfall 4: Accessing message.content instead of message.parts
```typescript
// ❌ WRONG - v5 pattern
<div>{message.content}</div>

// ✅ CORRECT - v6 parts-based rendering
<div>
  {message.parts.map((part, i) =>
    part.type === 'text' ? <span key={i}>{part.text}</span> : null
  )}
</div>
```
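Beyond rendering, the parts array also matters for persistence and logging, where you often just want a message's plain text. A small helper that flattens the text parts (the `Part` type here is a simplified illustrative stand-in for the v6 part shape) can be handy:

```typescript
// Concatenate the text parts of a v6-style parts array (illustrative helper).
type Part = { type: string; text?: string };

function messageText(parts: Part[]): string {
  return parts
    // Keep only text parts that actually carry a string payload.
    .filter((p): p is Part & { text: string } => p.type === "text" && typeof p.text === "string")
    .map((p) => p.text)
    .join("");
}
```

Non-text parts (tool calls, files) are simply skipped, which is usually what a transcript log wants.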
Pitfall 5: Using Wrong Response Method
```typescript
// ❌ WRONG - v5 method
return result.toDataStreamResponse();

// ✅ CORRECT - v6 method
return result.toUIMessageStreamResponse();
```

Pitfall 6: Forgetting MCP Client Cleanup
```typescript
// ❌ WRONG - no cleanup, the MCP connection leaks
const mcpClient = await createMCPClient({ transport });
const tools = await mcpClient.tools();
const response = streamText({ model, tools, prompt });
return response.toTextStreamResponse();

// ✅ CORRECT - close the client in the stream callbacks
const response = streamText({
  model,
  tools,
  prompt,
  onFinish: async () => {
    await mcpClient.close();
  },
  onError: async () => {
    await mcpClient.close();
  },
});
```
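For non-streaming calls there are no `onFinish`/`onError` callbacks to hook, so a try/finally wrapper is the usual way to guarantee cleanup. A generic sketch (the `withClient` helper is illustrative, not part of the SDK):

```typescript
// Run work against any closeable client, closing it on success or failure (illustrative).
async function withClient<C extends { close(): Promise<void> }, R>(
  client: C,
  fn: (client: C) => Promise<R>
): Promise<R> {
  try {
    return await fn(client);
  } finally {
    // Runs whether fn resolved or threw, so the connection never leaks.
    await client.close();
  }
}
```

An MCP client created with `createMCPClient` could be passed through this wrapper around a `generateText` call.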
Migration Checklist (v5 → v6)
When migrating from v5 to v6, update:
- [ ] Replace `generateObject`/`streamObject` with `generateText`/`streamText` + `Output`
- [ ] Replace `append` with `sendMessage` in useChat
- [ ] Remove `input`, `setInput`, `handleInputChange` from useChat destructuring
- [ ] Add local state management for input: `const [input, setInput] = useState('')`
- [ ] Update message rendering from `message.content` to `message.parts.map(...)`
- [ ] Update sendMessage calls to use the `{ text: input }` structure
- [ ] Replace `toDataStreamResponse()` with `toUIMessageStreamResponse()`
- [ ] Update tool definitions to use the `tool()` helper with `inputSchema`
- [ ] Update model IDs (e.g., `claude-sonnet-4-5`)
- [ ] Consider using `ToolLoopAgent` for agentic applications
- [ ] Update TypeScript types (`UIMessage`, `ModelMessage`)
- [ ] Add `addToolOutput` handling if using client-side tools
- [ ] Consider implementing middleware for logging, caching, or guardrails
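Several of these checklist items are mechanical shape changes. For instance, the v5 `append` payload maps onto the v6 `sendMessage` payload like this (the types and helper are illustrative, useful when batch-migrating stored messages):

```typescript
// Map a v5-style append payload to a v6-style sendMessage payload (illustrative).
type V5AppendPayload = { role: "user"; content: string };
type V6SendPayload = { text: string };

function toSendMessagePayload(msg: V5AppendPayload): V6SendPayload {
  // v6 takes the text directly; the user role is implied by sendMessage.
  return { text: msg.content };
}
```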
Decision Guide
When implementing AI SDK features, ask:
- Is this client-side or server-side?
  - Client: Use the `useChat` hook
  - Server: Use `generateText` or `streamText`
  - Agent: Use `ToolLoopAgent` with `createAgentUIStreamResponse`
- Do I need streaming or non-streaming?
  - Streaming chat: `streamText` + `toUIMessageStreamResponse()`
  - Non-streaming: `generateText`
  - Simple text stream: `streamText` + `toTextStreamResponse()`
- Do I need structured output?
  - Yes: Use `Output.object()`, `Output.array()`, `Output.choice()`, or `Output.json()`
  - Pass via the `output` parameter to `generateText` or `streamText`
- Do I need tool calling?
  - Yes: Define tools with the `tool()` helper and `inputSchema` (zod)
  - Pass the tools object to `generateText`, `streamText`, or `ToolLoopAgent`
- Am I building an agent?
  - Yes: Use the `ToolLoopAgent` class
  - Configure `stopWhen`, `toolChoice`, `prepareStep` as needed
  - Use `createAgentUIStreamResponse` for API routes
- Am I using the correct message format?
  - Client (useChat): Returns `UIMessage[]` with a `parts` property
  - Server: Convert to `ModelMessage[]` with `convertToModelMessages()`
  - Render messages using `message.parts.map(...)`
- Is my model specification correct?
  - Direct provider: `anthropic('claude-sonnet-4-5')`
  - Gateway (production): `gateway('anthropic/claude-sonnet-4-5')`
  - Embeddings: `openai.textEmbeddingModel('text-embedding-3-small')`
- Do I need embeddings?
  - Use `embed` for single values
  - Use `embedMany` for batches
  - Use the `textEmbeddingModel()` method
- Do I need middleware?
  - Logging: Custom middleware with `transformParams`/`wrapGenerate`
  - Caching: Custom middleware with result storage
  - RAG: Custom middleware to inject context
  - Guardrails: Custom middleware to filter output
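One practical note on the `embedMany` branch above: embedding providers cap how many values a single request may carry, so callers commonly pre-chunk their inputs. A minimal sketch (the batch size is illustrative; check your provider's limit):

```typescript
// Split values into fixed-size batches before repeated embedMany calls (illustrative).
function chunk<T>(values: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < values.length; i += size) {
    batches.push(values.slice(i, i + size));
  }
  return batches;
}
```

Each batch would then go through `embedMany` and the resulting embedding arrays be concatenated in order.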
Quick Reference
| Task | Function | Key Parameters |
|---|---|---|
| Generate text | `generateText` | `model`, `prompt`/`messages` |
| Stream text | `streamText` | `model`, `messages`, `tools` |
| Chat UI | `useChat` | returns `messages`, `sendMessage` |
| Build agent | `ToolLoopAgent` | `model`, `instructions`, `tools`, `stopWhen` |
| Tool calling | `tool()` | `description`, `inputSchema`, `execute` |
| Structured output | `Output.object()` | `schema`, passed via `output` |
| Text embedding | `embed` | `model`, `value` |
| Batch embedding | `embedMany` | `model`, `values` |
| Message conversion | `convertToModelMessages()` | `UIMessage[]` |
| MCP integration | `createMCPClient` | `transport` |
| Add middleware | `wrapLanguageModel` | `model`, `middleware` |
| Gateway model | `gateway()` | `'provider/model-id'` |
Additional Resources
When in doubt, check the official documentation:
- Main docs: https://ai-sdk.dev/docs
- Agents: https://ai-sdk.dev/docs/agents
- API reference: https://ai-sdk.dev/docs/reference
- Examples: https://ai-sdk.dev/examples
Remember: AI SDK v6 uses provider function model specification (or `gateway()` for production), parts-based messages, `sendMessage` instead of `append`, `Output` helpers instead of `generateObject`, `toUIMessageStreamResponse` instead of `toDataStreamResponse`, and requires `convertToModelMessages` in API routes.