vercel-ai-sdk


Vercel AI SDK v6 Implementation Guide

Vercel AI SDK v6 实现指南

When to Use This Skill

何时使用此技能

Use this skill when:
  • Implementing AI chat interfaces with the useChat hook
  • Creating API routes that generate or stream AI responses
  • Building agentic applications with ToolLoopAgent
  • Adding tool calling / function calling capabilities
  • Generating structured output with Output.object(), Output.array(), etc.
  • Generating text embeddings for semantic search or RAG
  • Migrating from AI SDK v5 to v6
  • Integrating Model Context Protocol (MCP) servers
  • Implementing middleware for caching, logging, or guardrails
  • Building workflow patterns (sequential, parallel, routing, etc.)
  • Working with streaming responses or message persistence
在以下场景中使用本技能:
  • 使用 useChat 钩子实现AI聊天界面
  • 创建用于生成或流式传输AI响应的API路由
  • 基于 ToolLoopAgent 构建智能体应用
  • 添加工具调用/函数调用功能
  • 借助 Output.object()、Output.array() 等生成结构化输出
  • 生成文本嵌入用于语义搜索或RAG(检索增强生成)
  • 从AI SDK v5迁移至v6
  • 集成模型上下文协议(MCP)服务器
  • 实现用于缓存、日志记录或防护的中间件
  • 构建工作流模式(顺序、并行、路由等)
  • 处理流式响应或消息持久化

Structured Implementation Workflow

结构化实现工作流

<workflow>
<step id="1" name="verify-requirements">
<description>Understand the task requirements</description>
<actions>
- Identify what AI functionality is needed (chat, generation, agents, tools, embeddings)
- Determine if client-side (useChat) or server-side (API route) implementation
- Check if streaming or non-streaming response is required
- Verify model provider (Anthropic, OpenAI, etc.)
- Determine if structured output is needed (Output.object, Output.array, etc.)
</actions>
</step>
<step id="2" name="check-documentation">
<description>Verify current API patterns if uncertain</description>
<actions>
- Use WebFetch to check https://ai-sdk.dev/docs/ if API patterns are unclear
- Confirm model specification format for the provider
- Verify function signatures for complex features
</actions>
</step>
<step id="3" name="implement">
<description>Implement using correct v6 patterns</description>
<actions>
- Use provider function model specification: anthropic('claude-sonnet-4-5')
- For chat: use sendMessage (not append), parts-based messages
- For tools: MUST import and use tool() helper from 'ai', MUST use inputSchema (NOT parameters), MUST use zod
- For structured output: use Output.object(), Output.array(), Output.choice(), Output.json()
- For streaming: use toUIMessageStreamResponse() or toTextStreamResponse()
- For agents: use ToolLoopAgent class with createAgentUIStreamResponse()
- For embeddings: use provider.textEmbeddingModel()
</actions>
</step>
<step id="4" name="verify-types">
<description>Ensure TypeScript types are correct</description>
<actions>
- Check for proper imports from 'ai' package
- Verify message types (UIMessage for useChat)
- Ensure tool parameter types are inferred correctly
- Add explicit types for async functions
</actions>
</step>
<step id="5" name="install-dependencies">
<description>Install any missing dependencies with the CORRECT package manager</description>
<actions>
- **CRITICAL: Detect which package manager the project uses FIRST**
  * Check for lockfiles: pnpm-lock.yaml → use pnpm, package-lock.json → use npm, yarn.lock → use yarn, bun.lockb → use bun
  * If pnpm-lock.yaml exists, you MUST use pnpm (NOT npm!)
- Check if all imported packages are installed
- If build fails with "Module not found", identify the package name from the error
- Add the package to package.json dependencies
- Install using the CORRECT package manager:
  * If pnpm-lock.yaml exists: `pnpm install [package]` or `pnpm add [package]`
  * If package-lock.json exists: `npm install [package]`
  * If yarn.lock exists: `yarn add [package]`
  * If bun.lockb exists: `bun install [package]` or `bun add [package]`
- Re-run build to verify installation succeeded
</actions>
<critical>
**NEVER use the wrong package manager!**
- Using npm when the project uses pnpm creates package-lock.json alongside pnpm-lock.yaml
- This causes dependency version mismatches and breaks the build
- ALWAYS check for existing lockfiles and use the matching package manager
  NEVER accept "Module not found" errors as environment issues
  YOU must install the required packages with the CORRECT package manager

  Common packages needed:
  - ai (core AI SDK)
  - @ai-sdk/openai (OpenAI provider)
  - @ai-sdk/anthropic (Anthropic provider)
  - @ai-sdk/mcp (MCP integration)
  - @modelcontextprotocol/sdk (MCP client SDK)
  - zod (for tool schemas)
</critical>
</step>
<step id="6" name="verify-build">
<description>Run build and fix ALL errors until it passes</description>
<actions>
- Run: npm run build (or bun run build)
- If build fails, read the FULL error message carefully
- Common fixes:
  * "Module not found" → Install the package (go back to step 5)
  * Type errors → Fix TypeScript types
  * Config errors → Check next.config.js/ts
  * Framework errors → Research the error, try different approaches
- Apply fix and re-run build
- REPEAT until build passes
</actions>
<critical>
NEVER stop at "build fails but code is correct"
NEVER blame "environment" or "framework bugs" without debugging
KEEP ITERATING until the build PASSES
If you encounter a framework error, research it, try workarounds, debug it
DO NOT give up until you have a passing build
</critical>
</step>
<step id="7" name="verify-tests">
<description>Run tests and debug ALL failures until they pass</description>
<actions>
- Run: npm run test (or bun run test)
- If tests fail, read the FULL test error message carefully
- Debug the failure:
  * Check what the test expects vs what you're providing
  * Look at test assertions
  * Verify your implementation matches the requirements
  * Check for type mismatches or missing properties
- Fix the implementation
- Re-run tests
- REPEAT until all tests pass
</actions>
<critical>
NEVER accept test failures without debugging
NEVER dismiss failures as "test framework bugs" without investigation
Each test failure is telling you something - LISTEN to it
KEEP DEBUGGING until all tests pass
Test failures are NOT acceptable - they mean your implementation is incomplete
</critical>
</step>
<step id="8" name="final-verification">
<description>Verify EVERYTHING passes</description>
<actions>
- Run: npm run build && npm run lint && npm run test
- All three must pass: ✅ Build, ✅ Lint, ✅ Tests
- If any fail, go back and fix them
- Do not consider the task complete until ALL checks pass
</actions>
<critical>
The task is NOT complete until:
- ✅ Build passes
- ✅ Lint passes
- ✅ Tests pass
  "Code is correct" is NOT enough
  You must achieve FULL PASSING status
  This is what it means to be an autonomous agent
</critical>
</step> </workflow>
<workflow>
<step id="1" name="verify-requirements">
<description>理解任务需求</description>
<actions>
- 确定所需的AI功能(聊天、生成、智能体、工具、嵌入)
- 判断是客户端(useChat)还是服务端(API路由)实现
- 确认是否需要流式或非流式响应
- 验证模型提供商(Anthropic、OpenAI等)
- 确定是否需要结构化输出(Output.object、Output.array等)
</actions>
</step>
<step id="2" name="check-documentation">
<description>若有疑问,验证当前API模式</description>
<actions>
- 若API模式不明确,使用WebFetch查看 https://ai-sdk.dev/docs/
- 确认提供商的模型规范格式
- 验证复杂功能的函数签名
</actions>
</step>
<step id="3" name="implement">
<description>使用正确的v6模式实现</description>
<actions>
- 使用提供商函数指定模型:anthropic('claude-sonnet-4-5')
- 聊天场景:使用sendMessage(而非append),基于parts的消息结构
- 工具场景:必须从'ai'导入并使用tool()助手,必须使用inputSchema(而非parameters),必须使用zod
- 结构化输出场景:使用Output.object()、Output.array()、Output.choice()、Output.json()
- 流式传输场景:使用toUIMessageStreamResponse()或toTextStreamResponse()
- 智能体场景:使用ToolLoopAgent类搭配createAgentUIStreamResponse()
- 嵌入场景:使用provider.textEmbeddingModel()
</actions>
</step>
<step id="4" name="verify-types">
<description>确保TypeScript类型正确</description>
<actions>
- 检查是否从'ai'包正确导入类型
- 验证消息类型(useChat对应的UIMessage)
- 确保工具参数类型被正确推断
- 为异步函数添加显式类型
</actions>
</step>
<step id="5" name="install-dependencies">
<description>使用正确的包管理器安装缺失的依赖</description>
<actions>
- **关键:先检测项目使用的包管理器**
  * 检查锁文件:pnpm-lock.yaml → 使用pnpm,package-lock.json → 使用npm,yarn.lock → 使用yarn,bun.lockb → 使用bun
  * 若存在pnpm-lock.yaml,必须使用pnpm(禁止使用npm!)
- 检查所有导入的包是否已安装
- 若构建失败提示"Module not found",从错误信息中识别包名
- 将包添加至package.json的依赖项
- 使用正确的包管理器安装:
  * 若存在pnpm-lock.yaml:`pnpm install [package]`或`pnpm add [package]`
  * 若存在package-lock.json:`npm install [package]`
  * 若存在yarn.lock:`yarn add [package]`
  * 若存在bun.lockb:`bun install [package]`或`bun add [package]`
- 重新运行构建以验证安装成功
</actions>
<critical>
**绝对禁止使用错误的包管理器!**
- 当项目使用pnpm时使用npm,会在pnpm-lock.yaml旁生成package-lock.json
- 这会导致依赖版本不匹配,破坏构建
- 务必检查现有锁文件并使用匹配的包管理器
  绝不能将"Module not found"错误视为环境问题
  必须使用正确的包管理器安装所需包

  常见所需包:
  - ai(核心AI SDK)
  - @ai-sdk/openai(OpenAI提供商)
  - @ai-sdk/anthropic(Anthropic提供商)
  - @ai-sdk/mcp(MCP集成)
  - @modelcontextprotocol/sdk(MCP客户端SDK)
  - zod(用于工具Schema)
</critical>
</step>
<step id="6" name="verify-build">
<description>运行构建并修复所有错误直至构建通过</description>
<actions>
- 运行:npm run build(或bun run build)
- 若构建失败,仔细阅读完整错误信息
- 常见修复方案:
  * "Module not found" → 安装对应包(返回步骤5)
  * 类型错误 → 修复TypeScript类型
  * 配置错误 → 检查next.config.js/ts
  * 框架错误 → 搜索错误信息,尝试不同解决方法
- 应用修复后重新运行构建
- 重复上述步骤直至构建通过
</actions>
<critical>
绝不能停留在"构建失败但代码正确"的状态
未调试前绝不能归咎于"环境"或"框架bug"
持续迭代直至构建通过
若遇到框架错误,搜索相关信息、尝试变通方案、进行调试
直至构建通过再停止
</critical>
</step>
<step id="7" name="verify-tests">
<description>运行测试并调试所有失败用例直至全部通过</description>
<actions>
- 运行:npm run test(或bun run test)
- 若测试失败,仔细阅读完整测试错误信息
- 调试失败原因:
  * 对比测试预期结果与实际输出
  * 查看测试断言
  * 验证实现是否符合需求
  * 检查类型不匹配或缺失属性的问题
- 修复实现代码
- 重新运行测试
- 重复上述步骤直至所有测试通过
</actions>
<critical>
绝不能接受测试失败而不调试
未调查前绝不能将失败归咎于"测试框架bug"
每个测试失败都在传递信息——请重视
持续调试直至所有测试通过
测试失败是不可接受的——这意味着实现不完整
</critical>
</step>
<step id="8" name="final-verification">
<description>验证所有环节均通过</description>
<actions>
- 运行:npm run build && npm run lint && npm run test
- 以下三项必须全部通过:✅ 构建、✅ 代码检查、✅ 测试
- 若任意环节失败,返回对应步骤修复
- 直至所有检查通过,方可视为任务完成
</actions>
<critical>
任务完成的标准是:
- ✅ 构建通过
- ✅ 代码检查通过
- ✅ 测试通过
  "代码正确"并不足够
  必须实现全环节通过的状态
  这才是自主智能体的职责所在
</critical>
</step>
</workflow>

⚠️ AUTONOMOUS AGENT MINDSET

⚠️ 自主智能体思维模式

You are not just writing code - you are COMPLETING TASKS AUTONOMOUSLY.
This means:
  1. ✅ Write correct implementation
  2. Install any required dependencies
  3. Run build and fix ALL errors
  4. Run tests and debug ALL failures
  5. Iterate until EVERYTHING passes
  6. Never make excuses or give up
你不只是在编写代码——你是在自主完成任务。
这意味着:
  1. ✅ 编写正确的实现代码
  2. 安装所有所需依赖
  3. 运行构建并修复所有错误
  4. 运行测试并调试所有失败用例
  5. 持续迭代直至所有环节通过
  6. 绝不找借口或放弃

Common Failure Patterns to AVOID

需避免的常见失败模式

❌ WRONG: "The code is correct, but the package isn't installed - that's an environment issue"
✅ CORRECT: "Build failed due to missing package - installing it now with npm install [package]"
❌ WRONG: "Tests pass but build fails - not my problem"
✅ CORRECT: "Build is failing - debugging the error and fixing it now"
❌ WRONG: "There's a framework bug, can't fix it"
✅ CORRECT: "Framework error detected - researching the issue, trying workarounds, debugging until I find a solution"
❌ WRONG: "The implementation is complete" (with failing tests)
✅ CORRECT: "Tests are failing - debugging and fixing until they all pass"
❌ 错误做法: "代码是正确的,但包未安装——这是环境问题"
✅ 正确做法: "构建因缺失包失败——现在使用npm install [package]安装"
❌ 错误做法: "测试通过但构建失败——这不是我的问题"
✅ 正确做法: "构建失败——正在调试错误并修复"
❌ 错误做法: "存在框架bug,无法修复"
✅ 正确做法: "检测到框架错误——正在搜索相关信息、尝试变通方案、调试直至找到解决方法"
❌ 错误做法: "实现已完成"(但测试失败)
✅ 正确做法: "测试失败——正在调试并修复直至全部通过"

Dependency Installation Workflow

依赖安装工作流

When you encounter "Module not found" errors:
  1. Detect the package manager FIRST - Check for lockfiles:
    bash
    ls -la | grep -E "lock"
    # Look for: pnpm-lock.yaml, package-lock.json, yarn.lock, bun.lockb
  2. Identify the package from the import statement
    Error: Cannot find module '@ai-sdk/anthropic'
    Import: import { anthropic } from '@ai-sdk/anthropic'
    Package needed: @ai-sdk/anthropic
  3. Install with the CORRECT package manager
    bash
    # If pnpm-lock.yaml exists (MOST COMMON for Next.js evals):
    pnpm install @ai-sdk/anthropic
    # or
    pnpm add @ai-sdk/anthropic
    
    # If package-lock.json exists:
    npm install @ai-sdk/anthropic
    
    # If yarn.lock exists:
    yarn add @ai-sdk/anthropic
    
    # If bun.lockb exists:
    bun install @ai-sdk/anthropic
  4. Re-run build to verify
    bash
    npm run build
    # or pnpm run build, yarn build, bun run build
  5. Fix any new errors that appear
⚠️ CRITICAL WARNING: Using the WRONG package manager (e.g., npm when the project uses pnpm) will:
  • Create a second conflicting lockfile
  • Install different versions of dependencies
  • Cause dependency version mismatches
  • Break the build with cryptic errors like "Cannot read properties of null"
当遇到"Module not found"错误时:
  1. 先检测包管理器——检查锁文件:
    bash
    ls -la | grep -E "lock"
    # 查找:pnpm-lock.yaml, package-lock.json, yarn.lock, bun.lockb
  2. 从导入语句中识别包名
    错误:Cannot find module '@ai-sdk/anthropic'
    导入语句:import { anthropic } from '@ai-sdk/anthropic'
    所需包:@ai-sdk/anthropic
  3. 使用正确的包管理器安装
    bash
    # 若存在pnpm-lock.yaml(Next.js评估中最常见):
    pnpm install @ai-sdk/anthropic
    # 或
    pnpm add @ai-sdk/anthropic
    
    # 若存在package-lock.json:
    npm install @ai-sdk/anthropic
    
    # 若存在yarn.lock:
    yarn add @ai-sdk/anthropic
    
    # 若存在bun.lockb:
    bun install @ai-sdk/anthropic
  4. 重新运行构建以验证
    bash
    npm run build
    # 或pnpm run build, yarn build, bun run build
  5. 修复出现的新错误
⚠️ 关键警告: 使用错误的包管理器(例如项目使用pnpm时使用npm)会:
  • 生成第二个冲突的锁文件
  • 安装不同版本的依赖
  • 导致依赖版本不匹配
  • 引发诸如"Cannot read properties of null"之类的模糊构建错误
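The lockfile-to-package-manager mapping above can be sketched as a pure helper. This is an illustration only, not part of any real tooling: `detectPackageManager` and its conflict check are hypothetical names, written to mirror the table of lockfiles and the "conflicting lockfile" warning.

```typescript
// Hypothetical helper mirroring the lockfile table above; not part of any real CLI.
const LOCKFILE_TO_PM: Record<string, "pnpm" | "npm" | "yarn" | "bun"> = {
  "pnpm-lock.yaml": "pnpm",
  "package-lock.json": "npm",
  "yarn.lock": "yarn",
  "bun.lockb": "bun",
};

// Returns the package manager implied by the lockfiles present in a directory
// listing, or null when no lockfile is found. Multiple lockfiles are exactly
// the conflict scenario warned about above, so that case is an error.
function detectPackageManager(
  files: string[]
): "pnpm" | "npm" | "yarn" | "bun" | null {
  const found = Object.keys(LOCKFILE_TO_PM).filter((f) => files.includes(f));
  if (found.length > 1) {
    throw new Error(`Conflicting lockfiles: ${found.join(", ")}`);
  }
  return found.length === 1 ? LOCKFILE_TO_PM[found[0]] : null;
}
```

Treating the multi-lockfile case as an error, rather than picking a winner, matches the workflow's rule: never install with a second package manager next to an existing lockfile.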

Build Error Debugging Workflow

构建错误调试工作流

When build fails:
  1. Read the FULL error message - don't skim it
  2. Identify the root cause:
    • Module not found → Install package
    • Type error → Fix types
    • Config error → Check config files
    • Next.js error → Research, try different approaches
  3. Apply the fix
  4. Re-run build
  5. Repeat until build passes
当构建失败时:
  1. 阅读完整错误信息——不要略读
  2. 识别根本原因
    • 模块未找到 → 安装对应包
    • 类型错误 → 修复类型
    • 配置错误 → 检查配置文件
    • Next.js错误 → 搜索相关信息,尝试不同解决方法
  3. 应用修复方案
  4. 重新运行构建
  5. 重复上述步骤直至构建通过

Test Failure Debugging Workflow

测试失败调试工作流

When tests fail:
  1. Read the FULL test error - understand what's expected
  2. Compare expected vs actual behavior
  3. Check your implementation against test assertions
  4. Fix the issue in your code
  5. Re-run tests
  6. Repeat until all tests pass
当测试失败时:
  1. 阅读完整测试错误信息——理解预期结果
  2. 对比预期行为与实际行为
  3. 检查实现代码是否符合测试断言
  4. 修复代码中的问题
  5. 重新运行测试
  6. 重复上述步骤直至所有测试通过

Success Criteria

成功标准

Task is ONLY complete when:
  • ✅ Build passes (npm run build succeeds)
  • ✅ Lint passes (npm run lint succeeds)
  • ✅ Tests pass (npm run test succeeds)
NEVER stop at "code is correct" - achieve FULL PASSING status!
仅当满足以下条件时,任务才算完成:
  • ✅ 构建通过(npm run build 成功)
  • ✅ 代码检查通过(npm run lint 成功)
  • ✅ 测试通过(npm run test 成功)
绝不能停留在"代码正确"的状态——要实现全环节通过!

⚠️ CRITICAL v6 CHANGES: Structured Output

⚠️ v6关键变更:结构化输出

In v6, generateObject and streamObject are DEPRECATED.
Use generateText / streamText with Output helpers instead.
在v6中,generateObject 和 streamObject 已被弃用。请使用 generateText / streamText 搭配Output助手替代。

❌ WRONG - Deprecated v5 Pattern

❌ 错误示例 - 已弃用的v5模式

typescript
// DO NOT USE - DEPRECATED in v6
import { generateObject } from "ai";

const result = await generateObject({
  model: anthropic("claude-sonnet-4-5"),
  schema: z.object({
    sentiment: z.enum(["positive", "neutral", "negative"]),
  }),
  prompt: "Analyze sentiment",
});
typescript
// 禁止使用 - v6中已弃用
import { generateObject } from "ai";

const result = await generateObject({
  model: anthropic("claude-sonnet-4-5"),
  schema: z.object({
    sentiment: z.enum(["positive", "neutral", "negative"]),
  }),
  prompt: "Analyze sentiment",
});

✅ CORRECT - v6 Output Pattern

✅ 正确示例 - v6 Output模式

typescript
import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const { output } = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  output: Output.object({
    schema: z.object({
      sentiment: z.enum(["positive", "neutral", "negative"]),
      topics: z.array(z.string()),
    }),
  }),
  prompt: "Analyze this feedback...",
});

// Access typed output
console.log(output.sentiment); // 'positive' | 'neutral' | 'negative'
console.log(output.topics); // string[]
typescript
import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const { output } = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  output: Output.object({
    schema: z.object({
      sentiment: z.enum(["positive", "neutral", "negative"]),
      topics: z.array(z.string()),
    }),
  }),
  prompt: "Analyze this feedback...",
});

// 访问类型化输出
console.log(output.sentiment); // 'positive' | 'neutral' | 'negative'
console.log(output.topics); // string[]

Output Helper Types

Output助手类型

| Helper | Purpose | Example |
| --- | --- | --- |
| Output.object() | Generate typed object | Output.object({ schema: z.object({...}) }) |
| Output.array() | Generate typed array | Output.array({ schema: z.string() }) |
| Output.choice() | Generate enum value | Output.choice({ choices: ['A', 'B', 'C'] }) |
| Output.json() | Unstructured JSON | Output.json() |
| 助手方法 | 用途 | 示例 |
| --- | --- | --- |
| Output.object() | 生成类型化对象 | Output.object({ schema: z.object({...}) }) |
| Output.array() | 生成类型化数组 | Output.array({ schema: z.string() }) |
| Output.choice() | 生成枚举值 | Output.choice({ choices: ['A', 'B', 'C'] }) |
| Output.json() | 生成非结构化JSON | Output.json() |
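The narrowing that Output.choice() performs can be pictured with a plain validation function. This is illustrative only: `pickChoice` is a made-up name, not the SDK's implementation; it just shows how a raw model string is checked against the declared choices and typed as their union.

```typescript
// Illustrative stand-in (hypothetical, not the 'ai' package) for what
// Output.choice() enforces: the model's raw string must be one of the
// declared choices, and the result is narrowed to that union type.
function pickChoice<T extends string>(raw: string, choices: readonly T[]): T {
  if ((choices as readonly string[]).includes(raw)) {
    return raw as T;
  }
  throw new Error(`Expected one of [${choices.join(", ")}], got "${raw}"`);
}

const sentiment = pickChoice("positive", [
  "positive",
  "neutral",
  "negative",
] as const);
// sentiment is typed "positive" | "neutral" | "negative"
```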

⚠️ CRITICAL: Tool Calling API - MUST USE tool() Helper

⚠️ 关键要求:工具调用API - 必须使用tool()助手

When implementing tool calling, you MUST use the tool() helper function from the 'ai' package.
实现工具调用时,必须使用'ai'包中的 tool() 助手函数。

❌ WRONG - Plain Object (WILL CAUSE BUILD ERROR)

❌ 错误示例 - 普通对象(会导致构建错误)

typescript
// DO NOT DO THIS - This pattern is INCORRECT
import { z } from 'zod';

tools: {
  myTool: {
    description: 'My tool',
    parameters: z.object({...}),  // ❌ WRONG - "parameters" doesn't exist in v6
    execute: async ({...}) => {...},
  }
}
This will fail with:
Type '{ description: string; parameters: ... }' is not assignable to type '{ inputSchema: FlexibleSchema<any>; ... }'
typescript
// 禁止这样做 - 此模式不正确
import { z } from 'zod';

tools: {
  myTool: {
    description: 'My tool',
    parameters: z.object({...}),  // ❌ 错误 - v6中不存在"parameters"
    execute: async ({...}) => {...},
  }
}
此代码会失败并提示:
Type '{ description: string; parameters: ... }' is not assignable to type '{ inputSchema: FlexibleSchema<any>; ... }'

✅ CORRECT - Use tool() Helper (REQUIRED)

✅ 正确示例 - 使用tool()助手(必填)

typescript
// ALWAYS DO THIS - This is the ONLY correct pattern
import { tool } from 'ai';  // ⚠️ MUST import tool
import { z } from 'zod';

tools: {
  myTool: tool({  // ⚠️ MUST wrap with tool()
    description: 'My tool',
    inputSchema: z.object({...}),  // ⚠️ MUST use "inputSchema" (not "parameters")
    execute: async ({...}) => {...},
  }),
}
typescript
// 必须这样做 - 这是唯一正确的模式
import { tool } from 'ai';  // ⚠️ 必须导入tool
import { z } from 'zod';

tools: {
  myTool: tool({  // ⚠️ 必须用tool()包裹
    description: 'My tool',
    inputSchema: z.object({...}),  // ⚠️ 必须使用"inputSchema"(而非"parameters")
    execute: async ({...}) => {...},
  }),
}

Tool Calling Checklist

工具调用检查清单

Before implementing any tool, verify:
  • [ ] Imported tool from 'ai' package: import { tool } from 'ai';
  • [ ] Wrapped tool definition with tool({ ... })
  • [ ] Used inputSchema property (NOT parameters)
  • [ ] Used zod schema: z.object({ ... })
  • [ ] Defined execute function with async callback
  • [ ] Added description string for the tool
实现任何工具前,请验证:
  • [ ] 已从'ai'包导入 tool:import { tool } from 'ai';
  • [ ] 已用 tool({ ... }) 包裹工具定义
  • [ ] 使用了 inputSchema 属性(而非 parameters)
  • [ ] 使用了zod Schema:z.object({ ... })
  • [ ] 定义了带异步回调的 execute 函数
  • [ ] 为工具添加了 description 字符串

⚠️ NEW in v6: ToolLoopAgent for Agentic Applications

⚠️ v6新增功能:用于智能体应用的ToolLoopAgent

Agent Definition

智能体定义

typescript
import { ToolLoopAgent, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const myAgent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-5"),
  instructions: "You are a helpful assistant that can search and analyze data.",
  tools: {
    getData: tool({
      description: "Fetch data from API",
      inputSchema: z.object({
        query: z.string(),
      }),
      execute: async ({ query }) => {
        // Implement data fetching
        return { result: "data for " + query };
      },
    }),
    analyzeData: tool({
      description: "Analyze fetched data",
      inputSchema: z.object({
        data: z.string(),
      }),
      execute: async ({ data }) => {
        return { analysis: "Analysis of " + data };
      },
    }),
  },
  stopWhen: stepCountIs(20), // Stop after 20 steps max
});

// Non-streaming execution
const { text, toolCalls } = await myAgent.generate({
  prompt: "Find and analyze user data",
});

// Streaming execution
const stream = myAgent.stream({ prompt: "Find and analyze user data" });
for await (const chunk of stream) {
  // Handle streaming chunks
}
typescript
import { ToolLoopAgent, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const myAgent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-5"),
  instructions: "You are a helpful assistant that can search and analyze data.",
  tools: {
    getData: tool({
      description: "Fetch data from API",
      inputSchema: z.object({
        query: z.string(),
      }),
      execute: async ({ query }) => {
        // 实现数据获取逻辑
        return { result: "data for " + query };
      },
    }),
    analyzeData: tool({
      description: "Analyze fetched data",
      inputSchema: z.object({
        data: z.string(),
      }),
      execute: async ({ data }) => {
        return { analysis: "Analysis of " + data };
      },
    }),
  },
  stopWhen: stepCountIs(20), // 最多执行20步后停止
});

// 非流式执行
const { text, toolCalls } = await myAgent.generate({
  prompt: "Find and analyze user data",
});

// 流式执行
const stream = myAgent.stream({ prompt: "Find and analyze user data" });
for await (const chunk of stream) {
  // 处理流式数据块
}

Agent API Route Integration

智能体API路由集成

typescript
// app/api/agent/route.ts
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/agents/my-agent";

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: myAgent,
    uiMessages: messages,
  });
}
typescript
// app/api/agent/route.ts
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/agents/my-agent";

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: myAgent,
    uiMessages: messages,
  });
}

Agent Configuration Options

智能体配置选项

| Parameter | Purpose | Example |
| --- | --- | --- |
| model | AI model to use | anthropic('claude-sonnet-4-5') |
| instructions | System prompt | 'You are a helpful assistant.' |
| tools | Available tools | { toolName: tool({...}) } |
| stopWhen | Termination condition | stepCountIs(20) |
| toolChoice | Tool usage mode | 'auto', 'required', 'none' |
| output | Structured output schema | Output.object({...}) |
| prepareStep | Dynamic per-step adjustments | Function returning step config |
| prepareCall | Runtime options injection | Async function for RAG, etc. |
| 参数 | 用途 | 示例 |
| --- | --- | --- |
| model | 使用的AI模型 | anthropic('claude-sonnet-4-5') |
| instructions | 系统提示词 | 'You are a helpful assistant.' |
| tools | 可用工具集合 | { toolName: tool({...}) } |
| stopWhen | 终止条件 | stepCountIs(20) |
| toolChoice | 工具使用模式 | 'auto'、'required'、'none' |
| output | 结构化输出Schema | Output.object({...}) |
| prepareStep | 每步的动态调整逻辑 | 返回步骤配置的函数 |
| prepareCall | 运行时选项注入 | 用于RAG等场景的异步函数 |

⚠️ CRITICAL: Common v5 to v6 Breaking Changes

⚠️ 关键变更:v5至v6的常见破坏性变更

1. useChat Hook Changes

1. useChat钩子变更

❌ WRONG (v5 pattern):
typescript
const { messages, input, setInput, append } = useChat();

// Sending message
append({ content: text, role: "user" });
✅ CORRECT (v6 pattern):
typescript
const { messages, sendMessage, status, addToolOutput } = useChat();
const [input, setInput] = useState('');

// Sending message
sendMessage({ text: input });

// New in v6: Handle tool outputs
addToolOutput({ toolCallId: 'xxx', result: { ... } });
❌ 错误示例(v5模式):
typescript
const { messages, input, setInput, append } = useChat();

// 发送消息
append({ content: text, role: "user" });
✅ 正确示例(v6模式):
typescript
const { messages, sendMessage, status, addToolOutput } = useChat();
const [input, setInput] = useState('');

// 发送消息
sendMessage({ text: input });

// v6新增:处理工具输出
addToolOutput({ toolCallId: 'xxx', result: { ... } });

2. Message Structure

2. 消息结构

❌ WRONG (v5 simple content):
typescript
<div>{message.content}</div>
✅ CORRECT (v6 parts-based):
typescript
<div>
  {message.parts.map((part, index) =>
    part.type === 'text' ? <span key={index}>{part.text}</span> : null
  )}
</div>
❌ 错误示例(v5简单内容结构):
typescript
<div>{message.content}</div>
✅ 正确示例(v6基于parts的结构):
typescript
<div>
  {message.parts.map((part, index) =>
    part.type === 'text' ? <span key={index}>{part.text}</span> : null
  )}
</div>
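Because v6 messages are parts-based, extracting plain text from a message is a filter-and-join over its parts array. A minimal sketch, with the caveat that `textFromParts` is a hypothetical helper and real UIMessage parts carry more variants (tool calls, files, reasoning) than the simplified type below:

```typescript
// Simplified part shape for illustration; the SDK's UIMessage parts include
// more variants than just text.
type TextPart = { type: "text"; text: string };
type Part = TextPart | { type: string };

// Collects the text parts of a message into one string, using the same
// predicate (part.type === 'text') as the JSX rendering pattern.
function textFromParts(parts: Part[]): string {
  return parts
    .filter((p): p is TextPart => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```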

3. Response Methods

3. 响应方法

❌ WRONG (v5):
typescript
return result.toDataStreamResponse();
✅ CORRECT (v6):
typescript
return result.toUIMessageStreamResponse();
❌ 错误示例(v5):
typescript
return result.toDataStreamResponse();
✅ 正确示例(v6):
typescript
return result.toUIMessageStreamResponse();

4. Model Specification

4. 模型指定方式

typescript
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";

// Use provider functions (direct provider access)
model: anthropic("claude-sonnet-4-5");
model: anthropic("claude-opus-4-5");
model: anthropic("claude-haiku-4-5");
model: openai("gpt-4o");
model: openai("gpt-4o-mini");
typescript
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";

// 使用提供商函数(直接访问提供商)
model: anthropic("claude-sonnet-4-5");
model: anthropic("claude-opus-4-5");
model: anthropic("claude-haiku-4-5");
model: openai("gpt-4o");
model: openai("gpt-4o-mini");

5. Vercel AI Gateway

5. Vercel AI网关

Purpose: Use Vercel AI Gateway for unified model access, rate limiting, caching, and observability across multiple providers.
Import:
typescript
import { gateway } from "ai";
Available Anthropic Models via Gateway:
typescript
model: gateway("anthropic/claude-sonnet-4-5");
model: gateway("anthropic/claude-haiku-4-5");
model: gateway("anthropic/claude-opus-4-5");
When to Use Gateway:
  • Production applications requiring rate limiting and caching
  • Multi-provider applications needing unified interface
  • Applications requiring observability and analytics
  • When you want automatic retries and error handling
When to Use Direct Provider:
  • Development/testing environments
  • When you need provider-specific features not available via gateway
  • When you want direct control over API calls
Example:
typescript
import { generateText, gateway } from "ai";

const result = await generateText({
  model: gateway("anthropic/claude-sonnet-4-5"),
  prompt: "Hello, world!",
});
Comparison:
typescript
// Option 1: Direct provider
import { anthropic } from "@ai-sdk/anthropic";
model: anthropic("claude-sonnet-4-5");

// Option 2: Gateway (recommended for production)
import { gateway } from "ai";
model: gateway("anthropic/claude-sonnet-4-5");
用途: 使用Vercel AI网关实现跨多个提供商的统一模型访问、速率限制、缓存和可观测性。
导入方式:
typescript
import { gateway } from "ai";
通过网关可用的Anthropic模型:
typescript
model: gateway("anthropic/claude-sonnet-4-5");
model: gateway("anthropic/claude-haiku-4-5");
model: gateway("anthropic/claude-opus-4-5");
何时使用网关:
  • 需要速率限制和缓存的生产应用
  • 需要统一接口的多提供商应用
  • 需要可观测性和分析的应用
  • 需要自动重试和错误处理的场景
何时直接使用提供商:
  • 开发/测试环境
  • 需要网关不支持的提供商特定功能时
  • 需要直接控制API调用时
示例:
typescript
import { generateText, gateway } from "ai";

const result = await generateText({
  model: gateway("anthropic/claude-sonnet-4-5"),
  prompt: "Hello, world!",
});
对比:
typescript
// 选项1:直接使用提供商
import { anthropic } from "@ai-sdk/anthropic";
model: anthropic("claude-sonnet-4-5");

// 选项2:使用网关(推荐生产环境使用)
import { gateway } from "ai";
model: gateway("anthropic/claude-sonnet-4-5");

Core API Reference

核心API参考

1. generateText - Non-Streaming Text Generation

1. generateText - 非流式文本生成

Purpose: Generate text for non-interactive use cases (email drafts, summaries, agents with tools).
Signature:
typescript
import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Your prompt here',
  system: 'Optional system message',
  tools?: { ... },
  maxSteps?: 5,
  output?: Output.object({ schema: z.object({...}) }),
});
Return Value:
typescript
{
  text: string;              // Generated text output
  output?: T;                // Typed structured output (if Output specified)
  toolCalls: ToolCall[];     // Tool invocations made
  finishReason: string;      // Why generation stopped
  usage: TokenUsage;         // Token consumption
  response: RawResponse;     // Raw provider response
  warnings: Warning[];       // Provider-specific alerts
}
Example:
typescript
// app/api/generate/route.ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function GET() {
  const result = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: "Why is the sky blue?",
  });

  return Response.json({ text: result.text });
}
用途: 为非交互式场景生成文本(邮件草稿、摘要、带工具的智能体等)。
签名:
typescript
import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Your prompt here',
  system: 'Optional system message',
  tools?: { ... },
  maxSteps?: 5,
  output?: Output.object({ schema: z.object({...}) }),
});
返回值:
typescript
{
  text: string;              // 生成的文本输出
  output?: T;                // 类型化结构化输出(若指定了Output)
  toolCalls: ToolCall[];     // 已调用的工具
  finishReason: string;      // 生成终止原因
  usage: TokenUsage;         // Token消耗
  response: RawResponse;     // 提供商原始响应
  warnings: Warning[];       // 提供商特定警告
}
示例:
typescript
// app/api/generate/route.ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function GET() {
  const result = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: "Why is the sky blue?",
  });

  return Response.json({ text: result.text });
}

2. streamText - Streaming Text Generation

2. streamText - 流式文本生成

Purpose: Stream responses for interactive chat applications.
Signature:
typescript
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = streamText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Your prompt here',
  system: 'Optional system message',
  messages?: ModelMessage[],
  tools?: { ... },
  onChunk?: (chunk) => { ... },
  onStepFinish?: (step) => { ... },
  onFinish?: async (result) => { ... },
  onError?: async (error) => { ... },
});
Return Methods:
typescript
// For chat applications with useChat hook
result.toUIMessageStreamResponse();

// For simple text streaming
result.toTextStreamResponse();
Example - Chat API Route:
typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
用途: 为交互式聊天应用流式传输响应。
签名:
typescript
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = streamText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Your prompt here',
  system: 'Optional system message',
  messages?: ModelMessage[],
  tools?: { ... },
  onChunk?: (chunk) => { ... },
  onStepFinish?: (step) => { ... },
  onFinish?: async (result) => { ... },
  onError?: async (error) => { ... },
});
返回方法:
typescript
// 搭配useChat钩子的聊天应用
result.toUIMessageStreamResponse();

// 简单文本流式传输
result.toTextStreamResponse();
示例 - 聊天API路由:
typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

3. useChat Hook - Client-Side Chat Interface

3. useChat钩子 - 客户端聊天界面

Purpose: Build interactive chat UIs with streaming support.
Signature:
typescript
import { useChat } from '@ai-sdk/react';

const {
  messages,        // Array of UIMessage with parts-based structure
  sendMessage,     // Function to send messages (replaces append)
  status,          // 'submitted' | 'streaming' | 'ready' | 'error'
  stop,            // Abort current streaming
  regenerate,      // Reprocess last message
  setMessages,     // Manually modify history
  error,           // Error object if request fails
  clearError,      // Clear error state
  addToolOutput,   // Submit tool results (NEW in v6)
  resumeStream,    // Resume interrupted stream (NEW in v6)
} = useChat({
  api: '/api/chat',
  id?: 'chat-id',
  messages?: initialMessages,
  onToolCall?: async (toolCall) => { ... },
  onFinish?: (message) => { ... },
  onError?: (error) => { ... },
  sendAutomaticallyWhen?: (messages) => boolean,
  resume?: true,
});
Complete Example:
typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function ChatPage() {
  const { messages, sendMessage, status, addToolOutput } = useChat({
    onToolCall: async ({ toolCall }) => {
      // Handle client-side tool execution
      if (toolCall.toolName === 'confirm') {
        const output = await showConfirmDialog(toolCall.input);
        addToolOutput({ tool: 'confirm', toolCallId: toolCall.toolCallId, output });
      }
    },
  });
  const [input, setInput] = useState('');

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (!input.trim()) return;

    sendMessage({ text: input });
    setInput('');
  };

  return (
    <div>
      <div>
        {messages.map((message) => (
          <div key={message.id}>
            <strong>{message.role}:</strong>
            {message.parts.map((part, index) => {
              switch (part.type) {
                case 'text':
                  return <span key={index}>{part.text}</span>;
                case 'tool-call':
                  return <div key={index}>Tool: {part.name}</div>;
                default:
                  return null;
              }
            })}
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message..."
          disabled={status === 'streaming'}
        />
        <button type="submit" disabled={status === 'streaming'}>
          Send
        </button>
      </form>
    </div>
  );
}
用途: 构建支持流式传输的交互式聊天UI。
签名:
typescript
import { useChat } from '@ai-sdk/react';

const {
  messages,        // 基于parts结构的UIMessage数组
  sendMessage,     // 发送消息的函数(替代append)
  status,          // 'submitted' | 'streaming' | 'ready' | 'error'
  stop,            // 中止当前流式传输
  regenerate,      // 重新处理最后一条消息
  setMessages,     // 手动修改消息历史
  error,           // 请求失败时的错误对象
  clearError,      // 清除错误状态
  addToolOutput,   // 提交工具结果(v6新增)
  resumeStream,    // 恢复中断的流式传输(v6新增)
} = useChat({
  api: '/api/chat',
  id?: 'chat-id',
  messages?: initialMessages,
  onToolCall?: async (toolCall) => { ... },
  onFinish?: (message) => { ... },
  onError?: (error) => { ... },
  sendAutomaticallyWhen?: (messages) => boolean,
  resume?: true,
});
完整示例:
typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function ChatPage() {
  const { messages, sendMessage, status, addToolOutput } = useChat({
    onToolCall: async ({ toolCall }) => {
      // 处理客户端工具执行
      if (toolCall.toolName === 'confirm') {
        const output = await showConfirmDialog(toolCall.input);
        addToolOutput({ tool: 'confirm', toolCallId: toolCall.toolCallId, output });
      }
    },
  });
  const [input, setInput] = useState('');

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (!input.trim()) return;

    sendMessage({ text: input });
    setInput('');
  };

  return (
    <div>
      <div>
        {messages.map((message) => (
          <div key={message.id}>
            <strong>{message.role}:</strong>
            {message.parts.map((part, index) => {
              switch (part.type) {
                case 'text':
                  return <span key={index}>{part.text}</span>;
                case 'tool-call':
                  return <div key={index}>Tool: {part.name}</div>;
                default:
                  return null;
              }
            })}
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message..."
          disabled={status === 'streaming'}
        />
        <button type="submit" disabled={status === 'streaming'}>
          Send
        </button>
      </form>
    </div>
  );
}

4. Tool Calling / Function Calling

4. 工具调用/函数调用

Purpose: Enable AI models to call functions with structured parameters.
Defining Tools:
typescript
import { tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get the weather in a location",
  inputSchema: z.object({
    location: z.string().describe("The location to get the weather for"),
    unit: z.enum(["C", "F"]).describe("Temperature unit"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    condition: z.string(),
  }),
  execute: async ({ location, unit }) => {
    // Fetch or mock weather data
    return {
      temperature: 24,
      condition: "Sunny",
    };
  },
});
Using Tools with generateText/streamText:
typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages: convertToModelMessages(messages),
    tools: {
      getWeather: tool({
        description: "Get the weather for a location",
        inputSchema: z.object({
          city: z.string().describe("The city to get the weather for"),
          unit: z
            .enum(["C", "F"])
            .describe("The unit to display the temperature in"),
        }),
        execute: async ({ city, unit }) => {
          // API call or mock data
          return `It is currently 24°${unit} and Sunny in ${city}!`;
        },
      }),
    },
    toolChoice: "auto", // 'auto' | 'required' | 'none' | { type: 'tool', toolName: 'xxx' }
  });

  return result.toUIMessageStreamResponse();
}
Multi-Step Tool Calling:
typescript
const result = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  tools: {
    weather: weatherTool,
    search: searchTool,
  },
  prompt: "What is the weather in San Francisco and find hotels there?",
  stopWhen: stepCountIs(5), // Allow up to 5 tool-call steps (stepCountIs is imported from 'ai'; maxSteps was replaced by stopWhen)
});
用途: 让AI模型调用带结构化参数的函数。
定义工具:
typescript
import { tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get the weather in a location",
  inputSchema: z.object({
    location: z.string().describe("The location to get the weather for"),
    unit: z.enum(["C", "F"]).describe("Temperature unit"),
  }),
  outputSchema: z.object({
    temperature: z.number(),
    condition: z.string(),
  }),
  execute: async ({ location, unit }) => {
    // 获取或模拟天气数据
    return {
      temperature: 24,
      condition: "Sunny",
    };
  },
});
在generateText/streamText中使用工具:
typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages: convertToModelMessages(messages),
    tools: {
      getWeather: tool({
        description: "Get the weather for a location",
        inputSchema: z.object({
          city: z.string().describe("The city to get the weather for"),
          unit: z
            .enum(["C", "F"])
            .describe("The unit to display the temperature in"),
        }),
        execute: async ({ city, unit }) => {
          // API调用或模拟数据
          return `It is currently 24°${unit} and Sunny in ${city}!`;
        },
      }),
    },
    toolChoice: "auto", // 'auto' | 'required' | 'none' | { type: 'tool', toolName: 'xxx' }
  });

  return result.toUIMessageStreamResponse();
}
多步工具调用:
typescript
const result = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  tools: {
    weather: weatherTool,
    search: searchTool,
  },
  prompt: "What is the weather in San Francisco and find hotels there?",
  stopWhen: stepCountIs(5), // 允许最多5步工具调用(stepCountIs 从 'ai' 导入;maxSteps 已由 stopWhen 取代)
});

5. Text Embeddings

5. 文本嵌入

Purpose: Convert text into numerical vectors for semantic search, RAG, or similarity.
Signature:
typescript
import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";

// Single embedding
const result = await embed({
  model: openai.textEmbeddingModel("text-embedding-3-small"),
  value: "Text to embed",
});

// Batch embeddings
const batchResult = await embedMany({
  model: openai.textEmbeddingModel("text-embedding-3-small"),
  values: ["Text 1", "Text 2", "Text 3"],
});
Return Value:
typescript
{
  embedding: number[];  // Numerical array representing the text
  usage: { tokens: number };  // Token consumption
  response: RawResponse;  // Raw provider response
}
Example - Embedding API Route:
typescript
// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { text } = await req.json();

  const { embedding, usage } = await embed({
    model: openai.textEmbeddingModel("text-embedding-3-small"),
    value: text,
  });

  return Response.json({ embedding, usage });
}
用途: 将文本转换为数值向量,用于语义搜索、RAG或相似度计算。
签名:
typescript
import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";

// 单个嵌入
const result = await embed({
  model: openai.textEmbeddingModel("text-embedding-3-small"),
  value: "Text to embed",
});

// 批量嵌入
const batchResult = await embedMany({
  model: openai.textEmbeddingModel("text-embedding-3-small"),
  values: ["Text 1", "Text 2", "Text 3"],
});
返回值:
typescript
{
  embedding: number[];  // 代表文本的数值数组
  usage: { tokens: number };  // Token消耗
  response: RawResponse;  // 提供商原始响应
}
示例 - 嵌入API路由:
typescript
// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { text } = await req.json();

  const { embedding, usage } = await embed({
    model: openai.textEmbeddingModel("text-embedding-3-small"),
    value: text,
  });

  return Response.json({ embedding, usage });
}
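Embeddings are typically consumed by ranking candidate documents against a query vector. The SDK exports a cosineSimilarity helper from 'ai'; the sketch below implements the same ranking step in plain TypeScript so the math is visible (the document shapes here are illustrative, not an SDK type):

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank documents (already embedded, e.g. via embedMany) against a query embedding,
// highest similarity first.
function rankBySimilarity(
  queryEmbedding: number[],
  docs: { text: string; embedding: number[] }[],
): { text: string; score: number }[] {
  return docs
    .map((doc) => ({
      text: doc.text,
      score: cosineSimilarity(queryEmbedding, doc.embedding),
    }))
    .sort((a, b) => b.score - a.score);
}
```

In a RAG route, the query embedding would come from embed() and the document embeddings from embedMany(); only the ranking shown here is independent of the provider.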

6. Middleware

6. 中间件

Purpose: Intercept and modify model behavior for logging, caching, guardrails, RAG, etc.
Built-in Middleware:
typescript
import {
  extractReasoningMiddleware,
  simulateStreamingMiddleware,
  defaultSettingsMiddleware,
  wrapLanguageModel,
} from "ai";

// Extract reasoning from models like Claude
const modelWithReasoning = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});

// Apply default settings
const modelWithDefaults = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: defaultSettingsMiddleware({
    temperature: 0.7,
    maxOutputTokens: 1000,
  }),
});
Custom Middleware:
typescript
import { LanguageModelMiddleware, wrapLanguageModel } from "ai";

// Logging middleware
const loggingMiddleware: LanguageModelMiddleware = {
  transformParams: async ({ params }) => {
    console.log("Request params:", params);
    return params;
  },
  wrapGenerate: async ({ doGenerate, params }) => {
    const result = await doGenerate();
    console.log("Response:", result);
    return result;
  },
};

// Caching middleware (cache the full generate result, keyed by the call params;
// doGenerate returns the provider's full result object, not just { text })
const cache = new Map<string, unknown>();
const cachingMiddleware: LanguageModelMiddleware = {
  wrapGenerate: async ({ doGenerate, params }) => {
    const cacheKey = JSON.stringify(params);
    if (cache.has(cacheKey)) {
      return cache.get(cacheKey) as Awaited<ReturnType<typeof doGenerate>>;
    }
    const result = await doGenerate();
    cache.set(cacheKey, result);
    return result;
  },
};

// RAG middleware
// Note: in the middleware API, params.prompt is an array of model messages,
// not a string, so retrieved context is injected here as an extra system
// message. vectorSearch is an application-specific retrieval function.
const ragMiddleware: LanguageModelMiddleware = {
  transformParams: async ({ params }) => {
    const relevantDocs = await vectorSearch(params.prompt);
    return {
      ...params,
      prompt: [
        { role: 'system', content: `Context: ${relevantDocs}` },
        ...params.prompt,
      ],
    };
  },
};

// Apply multiple middleware
const enhancedModel = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});
用途: 拦截并修改模型行为,用于日志记录、缓存、防护、RAG等场景。
内置中间件:
typescript
import {
  extractReasoningMiddleware,
  simulateStreamingMiddleware,
  defaultSettingsMiddleware,
  wrapLanguageModel,
} from "ai";

// 从Claude等模型中提取推理过程
const modelWithReasoning = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});

// 应用默认设置
const modelWithDefaults = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: defaultSettingsMiddleware({
    temperature: 0.7,
    maxOutputTokens: 1000,
  }),
});
自定义中间件:
typescript
import { LanguageModelMiddleware, wrapLanguageModel } from "ai";

// 日志中间件
const loggingMiddleware: LanguageModelMiddleware = {
  transformParams: async ({ params }) => {
    console.log("Request params:", params);
    return params;
  },
  wrapGenerate: async ({ doGenerate, params }) => {
    const result = await doGenerate();
    console.log("Response:", result);
    return result;
  },
};

// 缓存中间件(以调用参数为键,缓存完整的生成结果;
// doGenerate 返回提供商的完整结果对象,而非仅 { text })
const cache = new Map<string, unknown>();
const cachingMiddleware: LanguageModelMiddleware = {
  wrapGenerate: async ({ doGenerate, params }) => {
    const cacheKey = JSON.stringify(params);
    if (cache.has(cacheKey)) {
      return cache.get(cacheKey) as Awaited<ReturnType<typeof doGenerate>>;
    }
    const result = await doGenerate();
    cache.set(cacheKey, result);
    return result;
  },
};

// RAG中间件
// 注意:在中间件API中,params.prompt 是模型消息数组而非字符串,
// 因此这里通过前置一条system消息来注入检索到的上下文。
// vectorSearch 为应用自定义的检索函数。
const ragMiddleware: LanguageModelMiddleware = {
  transformParams: async ({ params }) => {
    const relevantDocs = await vectorSearch(params.prompt);
    return {
      ...params,
      prompt: [
        { role: 'system', content: `Context: ${relevantDocs}` },
        ...params.prompt,
      ],
    };
  },
};

// 应用多个中间件
const enhancedModel = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});

7. Model Context Protocol (MCP) Integration

7. 模型上下文协议(MCP)集成

Purpose: Connect to external MCP servers for dynamic tool access.
Installation:
bash
bun add @ai-sdk/mcp @modelcontextprotocol/sdk
HTTP Transport (Production):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const httpTransport = new StreamableHTTPClientTransport(
    new URL("https://mcp-server.example.com/mcp"),
    { headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
  );

  const mcpClient = await createMCPClient({ transport: httpTransport });

  try {
    const tools = await mcpClient.tools();

    const response = streamText({
      model: anthropic("claude-sonnet-4-5"),
      tools,
      prompt,
      onFinish: async () => {
        await mcpClient.close();
      },
      onError: async () => {
        await mcpClient.close();
      },
    });

    return response.toTextStreamResponse();
  } catch (error) {
    await mcpClient.close();
    return new Response("Internal Server Error", { status: 500 });
  }
}
Stdio Transport (Development):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp";

const stdioTransport = new Experimental_StdioMCPTransport({
  command: "npx",
  args: [
    "-y",
    "@modelcontextprotocol/server-filesystem",
    "/path/to/allowed/dir",
  ],
});

const mcpClient = await createMCPClient({ transport: stdioTransport });
Key Points:
  • Always close the client in onFinish and onError
  • Tools are fetched dynamically with mcpClient.tools()
  • Use HTTP/SSE for production, Stdio for development
  • Multiple MCP clients can be combined by merging tool collections
用途: 连接外部MCP服务器以动态获取工具。
安装:
bash
bun add @ai-sdk/mcp @modelcontextprotocol/sdk
HTTP传输(生产环境):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const httpTransport = new StreamableHTTPClientTransport(
    new URL("https://mcp-server.example.com/mcp"),
    { headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
  );

  const mcpClient = await createMCPClient({ transport: httpTransport });

  try {
    const tools = await mcpClient.tools();

    const response = streamText({
      model: anthropic("claude-sonnet-4-5"),
      tools,
      prompt,
      onFinish: async () => {
        await mcpClient.close();
      },
      onError: async () => {
        await mcpClient.close();
      },
    });

    return response.toTextStreamResponse();
  } catch (error) {
    await mcpClient.close();
    return new Response("Internal Server Error", { status: 500 });
  }
}
标准输入输出传输(开发环境):
typescript
import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp";

const stdioTransport = new Experimental_StdioMCPTransport({
  command: "npx",
  args: [
    "-y",
    "@modelcontextprotocol/server-filesystem",
    "/path/to/allowed/dir",
  ],
});

const mcpClient = await createMCPClient({ transport: stdioTransport });
关键点:
  • 务必在 onFinish 和 onError 中关闭客户端
  • 通过 mcpClient.tools() 动态获取工具
  • 生产环境使用HTTP/SSE,开发环境使用标准输入输出
  • 可通过合并工具集合来组合多个MCP客户端
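The last point can be sketched concretely: tool collections returned by mcpClient.tools() are plain records keyed by tool name, so they merge with object spread. The clientA/clientB names below stand in for clients created with createMCPClient:

```typescript
// Tool sets are records keyed by tool name. When merging, a later set
// overrides an earlier one on name collisions, so spread order matters.
type ToolRecord = Record<string, unknown>;

function mergeToolSets(...toolSets: ToolRecord[]): ToolRecord {
  return toolSets.reduce((merged, tools) => ({ ...merged, ...tools }), {});
}

// Usage sketch (clientA/clientB are hypothetical MCP clients):
// const tools = mergeToolSets(await clientA.tools(), await clientB.tools());
// const result = streamText({ model, tools, prompt });
// Remember to close every client in onFinish/onError, not just one.
```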

8. Message Utilities

8. 消息工具函数

convertToModelMessages: Converts UI messages from useChat into ModelMessage objects for AI functions.
typescript
import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
convertToModelMessages: 将 useChat 的UI消息转换为AI函数所需的 ModelMessage 对象。
typescript
import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

Workflow Patterns

工作流模式

1. Sequential (Chain) Pattern

1. 顺序(链式)模式

typescript
async function sequentialWorkflow(input: string) {
  // Step 1: Generate initial content
  const { text: draft } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: `Write marketing copy for: ${input}`,
  });

  // Step 2: Evaluate quality
  const { output: evaluation } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    output: Output.object({
      schema: z.object({
        score: z.number().min(1).max(10),
        feedback: z.string(),
      }),
    }),
    prompt: `Evaluate this copy: ${draft}`,
  });

  // Step 3: Improve if needed
  if (evaluation.score < 7) {
    const { text: improved } = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
    });
    return improved;
  }

  return draft;
}
typescript
async function sequentialWorkflow(input: string) {
  // 步骤1:生成初始内容
  const { text: draft } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: `Write marketing copy for: ${input}`,
  });

  // 步骤2:评估质量
  const { output: evaluation } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    output: Output.object({
      schema: z.object({
        score: z.number().min(1).max(10),
        feedback: z.string(),
      }),
    }),
    prompt: `Evaluate this copy: ${draft}`,
  });

  // 步骤3:若需要则优化
  if (evaluation.score < 7) {
    const { text: improved } = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
    });
    return improved;
  }

  return draft;
}

2. Parallel Pattern

2. 并行模式

typescript
async function parallelReview(code: string) {
  const [securityReview, performanceReview, maintainabilityReview] =
    await Promise.all([
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Review for security issues:\n\n${code}`,
      }),
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Review for performance issues:\n\n${code}`,
      }),
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Review for maintainability:\n\n${code}`,
      }),
    ]);

  return {
    security: securityReview.text,
    performance: performanceReview.text,
    maintainability: maintainabilityReview.text,
  };
}
typescript
async function parallelReview(code: string) {
  const [securityReview, performanceReview, maintainabilityReview] =
    await Promise.all([
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Review for security issues:\n\n${code}`,
      }),
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Review for performance issues:\n\n${code}`,
      }),
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Review for maintainability:\n\n${code}`,
      }),
    ]);

  return {
    security: securityReview.text,
    performance: performanceReview.text,
    maintainability: maintainabilityReview.text,
  };
}

3. Routing Pattern

3. 路由模式

typescript
async function routeQuery(query: string) {
  // Classify the query
  const { output: classification } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    output: Output.choice({
      choices: ["technical", "billing", "general"] as const,
    }),
    prompt: `Classify this customer query: ${query}`,
  });

  // Route to appropriate handler
  switch (classification) {
    case "technical":
      return handleTechnicalQuery(query);
    case "billing":
      return handleBillingQuery(query);
    default:
      return handleGeneralQuery(query);
  }
}
typescript
async function routeQuery(query: string) {
  // 分类查询
  const { output: classification } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    output: Output.choice({
      choices: ["technical", "billing", "general"] as const,
    }),
    prompt: `Classify this customer query: ${query}`,
  });

  // 路由到对应的处理函数
  switch (classification) {
    case "technical":
      return handleTechnicalQuery(query);
    case "billing":
      return handleBillingQuery(query);
    default:
      return handleGeneralQuery(query);
  }
}

4. Orchestrator-Worker Pattern

4. 编排器-工作者模式

typescript
async function implementFeature(requirement: string) {
  // Orchestrator: Break down the task
  const { output: plan } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    output: Output.object({
      schema: z.object({
        tasks: z.array(
          z.object({
            type: z.enum(["frontend", "backend", "database"]),
            description: z.string(),
          })
        ),
      }),
    }),
    prompt: `Break down this feature into tasks: ${requirement}`,
  });

  // Workers: Execute tasks in parallel
  const results = await Promise.all(
    plan.tasks.map((task) =>
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Implement this ${task.type} task: ${task.description}`,
      })
    )
  );

  return results.map((r) => r.text);
}
typescript
async function implementFeature(requirement: string) {
  // 编排器:分解任务
  const { output: plan } = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    output: Output.object({
      schema: z.object({
        tasks: z.array(
          z.object({
            type: z.enum(["frontend", "backend", "database"]),
            description: z.string(),
          })
        ),
      }),
    }),
    prompt: `Break down this feature into tasks: ${requirement}`,
  });

  // 工作者:并行执行任务
  const results = await Promise.all(
    plan.tasks.map((task) =>
      generateText({
        model: anthropic("claude-sonnet-4-5"),
        prompt: `Implement this ${task.type} task: ${task.description}`,
      })
    )
  );

  return results.map((r) => r.text);
}

5. Evaluator-Optimizer Pattern

5. 评估器-优化器模式

typescript
async function optimizeOutput(input: string, maxIterations = 3) {
  let output = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: input,
  });

  for (let i = 0; i < maxIterations; i++) {
    const { output: evaluation } = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      output: Output.object({
        schema: z.object({
          isGood: z.boolean(),
          improvements: z.array(z.string()),
        }),
      }),
      prompt: `Evaluate this output: ${output.text}`,
    });

    if (evaluation.isGood) break;

    output = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
    });
  }

  return output.text;
}
typescript
async function optimizeOutput(input: string, maxIterations = 3) {
  let output = await generateText({
    model: anthropic("claude-sonnet-4-5"),
    prompt: input,
  });

  for (let i = 0; i < maxIterations; i++) {
    const { output: evaluation } = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      output: Output.object({
        schema: z.object({
          isGood: z.boolean(),
          improvements: z.array(z.string()),
        }),
      }),
      prompt: `Evaluate this output: ${output.text}`,
    });

    if (evaluation.isGood) break;

    output = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
    });
  }

  return output.text;
}

Message Part Types (v6)

消息部件类型(v6)

| Part Type   | Description          | Properties                                                                        |
| ----------- | -------------------- | --------------------------------------------------------------------------------- |
| text        | Text content         | text, isStreaming                                                                  |
| tool-call   | Tool invocation      | name, args, state ('input-streaming' \| 'invoking' \| 'output' \| 'output-error')  |
| reasoning   | Model thinking       | text, isStreaming                                                                  |
| file        | File attachment      | mediaType, url or data                                                             |
| source      | RAG source reference | url or documentId, title                                                           |
| step        | Workflow boundary    | Marks step boundaries                                                              |
| data        | Custom data          | Any custom payload                                                                 |
| 部件类型  | 描述         | 属性                                                                       |
| --------- | ------------ | -------------------------------------------------------------------------- |
| text      | 文本内容     | text、isStreaming                                                           |
| tool-call | 工具调用     | name、args、state('input-streaming'、'invoking'、'output'、'output-error') |
| reasoning | 模型思考过程 | text、isStreaming                                                           |
| file      | 文件附件     | mediaType、url 或 data                                                      |
| source    | RAG源引用    | url 或 documentId、title                                                    |
| step      | 工作流边界   | 标记步骤边界                                                                |
| data      | 自定义数据   | 任意自定义负载                                                              |
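When persisting or displaying messages, it is often useful to flatten a parts-based message into a single string. A minimal sketch, assuming parts shaped as described in the table above (the MessagePart type here is a reduced illustration, not an SDK export):

```typescript
// A reduced view of the part shapes above; only text parts matter here.
type MessagePart =
  | { type: "text"; text: string }
  | { type: string; [key: string]: unknown };

// Concatenate all text parts of a message, ignoring tool calls, files, etc.
function getMessageText(parts: MessagePart[]): string {
  return parts
    .filter(
      (part): part is { type: "text"; text: string } => part.type === "text",
    )
    .map((part) => part.text)
    .join("");
}
```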

TypeScript Best Practices

TypeScript最佳实践

Type Imports

类型导入

typescript
import type {
  UIMessage, // Message type from useChat
  ModelMessage, // Message type for model functions
  ToolCall, // Tool call information
  TokenUsage, // Token consumption data
} from "ai";
typescript
import type {
  UIMessage, // useChat的消息类型
  ModelMessage, // 模型函数的消息类型
  ToolCall, // 工具调用信息
  TokenUsage, // Token消耗数据
} from "ai";

Agent Type Safety

智能体类型安全

typescript
import type { InferAgentUIMessage } from "ai";

// Type-safe messages from agent
type MyAgentMessage = InferAgentUIMessage<typeof myAgent>;
typescript
import type { InferAgentUIMessage } from "ai";

// 智能体的类型安全消息
type MyAgentMessage = InferAgentUIMessage<typeof myAgent>;

Strongly Typed Tools

强类型工具

typescript
import { tool } from "ai";
import { z } from "zod";

// Tool helper infers execute parameter types
const myTool = tool({
  description: "My tool",
  inputSchema: z.object({
    param1: z.string(),
    param2: z.number(),
  }),
  outputSchema: z.object({
    result: z.string(),
  }),
  execute: async ({ param1, param2 }) => {
    // param1 is inferred as string
    // param2 is inferred as number
    return { result: "success" };
  },
});
typescript
import { tool } from "ai";
import { z } from "zod";

// Tool助手会自动推断execute参数类型
const myTool = tool({
  description: "My tool",
  inputSchema: z.object({
    param1: z.string(),
    param2: z.number(),
  }),
  outputSchema: z.object({
    result: z.string(),
  }),
  execute: async ({ param1, param2 }) => {
    // param1被推断为string类型
    // param2被推断为number类型
    return { result: "success" };
  },
});

Common Patterns

常见模式

Pattern 1: Simple Chat Application

模式1:简单聊天应用

Client (
app/page.tsx
):
typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form onSubmit={(e) => {
        e.preventDefault();
        sendMessage({ text: input });
        setInput('');
      }}>
        <input value={input} onChange={(e) => setInput(e.target.value)} />
        <button disabled={status === 'streaming'}>Send</button>
      </form>
    </div>
  );
}
Server (
app/api/chat/route.ts
):
typescript
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
客户端(app/page.tsx):
typescript
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form onSubmit={(e) => {
        e.preventDefault();
        sendMessage({ text: input });
        setInput('');
      }}>
        <input value={input} onChange={(e) => setInput(e.target.value)} />
        <button disabled={status === 'streaming'}>Send</button>
      </form>
    </div>
  );
}
服务端(app/api/chat/route.ts):
typescript
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

Pattern 2: Chat with Structured Output

模式2:带结构化输出的聊天

typescript
import { streamText, convertToModelMessages, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages: convertToModelMessages(messages),
    output: Output.object({
      schema: z.object({
        response: z.string(),
        sentiment: z.enum(["positive", "neutral", "negative"]),
        confidence: z.number().min(0).max(1),
      }),
    }),
  });

  return result.toUIMessageStreamResponse();
}
typescript
import { streamText, convertToModelMessages, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),
    messages: convertToModelMessages(messages),
    output: Output.object({
      schema: z.object({
        response: z.string(),
        sentiment: z.enum(["positive", "neutral", "negative"]),
        confidence: z.number().min(0).max(1),
      }),
    }),
  });

  return result.toUIMessageStreamResponse();
}

Pattern 3: Agent with Multiple Tools

模式3:带多工具的智能体

typescript
import {
  ToolLoopAgent,
  tool,
  stepCountIs,
  createAgentUIStreamResponse,
} from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const researchAgent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-5"),
  instructions:
    "You are a research assistant that can search and analyze information.",
  tools: {
    webSearch: tool({
      description: "Search the web for information",
      inputSchema: z.object({
        query: z.string().describe("Search query"),
      }),
      execute: async ({ query }) => {
        // Implement web search
        return { results: ["..."] };
      },
    }),
    analyze: tool({
      description: "Analyze collected information",
      inputSchema: z.object({
        data: z.string().describe("Data to analyze"),
      }),
      execute: async ({ data }) => {
        return { analysis: "..." };
      },
    }),
    summarize: tool({
      description: "Summarize findings",
      inputSchema: z.object({
        findings: z.array(z.string()),
      }),
      execute: async ({ findings }) => {
        return { summary: "..." };
      },
    }),
  },
  stopWhen: stepCountIs(10),
});

// API Route
export async function POST(request: Request) {
  const { messages } = await request.json();
  return createAgentUIStreamResponse({
    agent: researchAgent,
    uiMessages: messages,
  });
}
typescript
import {
  ToolLoopAgent,
  tool,
  stepCountIs,
  createAgentUIStreamResponse,
} from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const researchAgent = new ToolLoopAgent({
  model: anthropic("claude-sonnet-4-5"),
  instructions:
    "You are a research assistant that can search and analyze information.",
  tools: {
    webSearch: tool({
      description: "Search the web for information",
      inputSchema: z.object({
        query: z.string().describe("Search query"),
      }),
      execute: async ({ query }) => {
        // 实现网页搜索
        return { results: ["..."] };
      },
    }),
    analyze: tool({
      description: "Analyze collected information",
      inputSchema: z.object({
        data: z.string().describe("Data to analyze"),
      }),
      execute: async ({ data }) => {
        return { analysis: "..." };
      },
    }),
    summarize: tool({
      description: "Summarize findings",
      inputSchema: z.object({
        findings: z.array(z.string()),
      }),
      execute: async ({ findings }) => {
        return { summary: "..." };
      },
    }),
  },
  stopWhen: stepCountIs(10),
});

// API路由
export async function POST(request: Request) {
  const { messages } = await request.json();
  return createAgentUIStreamResponse({
    agent: researchAgent,
    uiMessages: messages,
  });
}

Pattern 4: Semantic Search with Embeddings

模式4:基于嵌入的语义搜索

typescript
// app/api/search/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { query } = await req.json();

  // Generate embedding for search query
  const { embedding } = await embed({
    model: openai.textEmbeddingModel("text-embedding-3-small"),
    value: query,
  });

  // Use embedding for similarity search in vector database
  // const results = await vectorDB.search(embedding);

  return Response.json({ embedding, results: [] });
}
typescript
// app/api/search/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { query } = await req.json();

  // 为搜索查询生成嵌入
  const { embedding } = await embed({
    model: openai.textEmbeddingModel("text-embedding-3-small"),
    value: query,
  });

  // 使用嵌入在向量数据库中进行相似度搜索
  // const results = await vectorDB.search(embedding);

  return Response.json({ embedding, results: [] });
}
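Pattern 4 leaves the similarity search to a vector database. For small collections you can rank results in memory instead; the AI SDK exports a cosineSimilarity helper from 'ai' for this, and the math is simple enough to sketch directly (the standalone function below is illustrative, not the SDK implementation):

```typescript
// Illustrative helper, not an SDK export: cosine similarity between two
// embedding vectors (e.g. from embed()/embedMany()). Returns a value in
// [-1, 1]; values near 1 indicate semantically similar texts.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Embedding vectors must have the same length");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; // accumulate dot product
    normA += a[i] * a[i]; // squared magnitude of a
    normB += b[i] * b[i]; // squared magnitude of b
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Embed the query and each document once, then sort documents by cosineSimilarity(queryEmbedding, docEmbedding) in descending order.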

Common Pitfalls and Solutions

常见陷阱与解决方案

Pitfall 1: Using Deprecated generateObject/streamObject

陷阱1:使用已弃用的generateObject/streamObject

typescript
// ❌ WRONG - Deprecated in v6
import { generateObject } from 'ai';
const result = await generateObject({
  schema: z.object({...}),
  prompt: '...',
});

// ✅ CORRECT - Use Output with generateText
import { generateText, Output } from 'ai';
const { output } = await generateText({
  output: Output.object({ schema: z.object({...}) }),
  prompt: '...',
});
typescript
// ❌ 错误 - v6中已弃用
import { generateObject } from 'ai';
const result = await generateObject({
  schema: z.object({...}),
  prompt: '...',
});

// ✅ 正确 - 使用Output搭配generateText
import { generateText, Output } from 'ai';
const { output } = await generateText({
  output: Output.object({ schema: z.object({...}) }),
  prompt: '...',
});

Pitfall 2: NOT Using tool() Helper for Tools

陷阱2:未使用tool()助手定义工具

typescript
// ❌ WRONG - Plain object (WILL CAUSE BUILD FAILURE)
tools: {
  myTool: {
    description: 'My tool',
    parameters: z.object({...}),  // ❌ Wrong property name
    execute: async ({...}) => {...},
  },
}

// ✅ CORRECT - Use tool() helper (REQUIRED)
import { tool } from 'ai';
tools: {
  myTool: tool({
    description: 'My tool',
    inputSchema: z.object({...}),  // ⚠️ Use inputSchema
    execute: async ({...}) => {...},
  }),
}
typescript
// ❌ 错误 - 普通对象(会导致构建失败)
tools: {
  myTool: {
    description: 'My tool',
    parameters: z.object({...}),  // ❌ 属性名称错误
    execute: async ({...}) => {...},
  },
}

// ✅ 正确 - 使用tool()助手(必须)
import { tool } from 'ai';
tools: {
  myTool: tool({
    description: 'My tool',
    inputSchema: z.object({...}),  // ⚠️ 使用inputSchema
    execute: async ({...}) => {...},
  }),
}

Pitfall 3: Using v5 useChat API in v6

陷阱3:在v6中使用v5的useChat API

typescript
// ❌ WRONG - v5 pattern
const { input, setInput, append } = useChat();
append({ content: "Hello", role: "user" });

// ✅ CORRECT - v6 pattern
const { sendMessage } = useChat();
const [input, setInput] = useState("");
sendMessage({ text: "Hello" });
typescript
// ❌ 错误 - v5模式
const { input, setInput, append } = useChat();
append({ content: "Hello", role: "user" });

// ✅ 正确 - v6模式
const { sendMessage } = useChat();
const [input, setInput] = useState("");
sendMessage({ text: "Hello" });

Pitfall 4: Accessing message.content instead of message.parts

陷阱4:访问message.content而非message.parts

typescript
// ❌ WRONG - v5 pattern
<div>{message.content}</div>

// ✅ CORRECT - v6 parts-based
<div>
  {message.parts.map((part, i) =>
    part.type === 'text' ? <span key={i}>{part.text}</span> : null
  )}
</div>
typescript
// ❌ 错误 - v5模式
<div>{message.content}</div>

// ✅ 正确 - v6基于parts的结构
<div>
  {message.parts.map((part, i) =>
    part.type === 'text' ? <span key={i}>{part.text}</span> : null
  )}
</div>
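Because v6 messages carry an array of typed parts rather than a single content string, tasks like logging or full-text search need the text parts joined back together. A minimal sketch (extractText and the loose MessagePart type are hypothetical helpers, not SDK exports):

```typescript
// Hypothetical helper, not an SDK export: join the text parts of a
// parts-based message into one string, skipping tool calls and other
// non-text parts.
type MessagePart = { type: string; text?: string };

function extractText(parts: MessagePart[]): string {
  return parts
    .filter((part) => part.type === "text")
    .map((part) => part.text ?? "")
    .join("");
}
```

In a real app you would pass message.parts from a UIMessage; non-text parts such as tool calls are simply skipped.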

Pitfall 5: Using Wrong Response Method

陷阱5:使用错误的响应方法

typescript
// ❌ WRONG - v5 method
return result.toDataStreamResponse();

// ✅ CORRECT - v6 method
return result.toUIMessageStreamResponse();
typescript
// ❌ 错误 - v5方法
return result.toDataStreamResponse();

// ✅ 正确 - v6方法
return result.toUIMessageStreamResponse();

Pitfall 6: Forgetting MCP Client Cleanup

陷阱6:忘记清理MCP客户端

typescript
// ❌ WRONG - no cleanup
const mcpClient = await createMCPClient({ transport });
const tools = await mcpClient.tools();
const response = streamText({ model, tools, prompt });
return response.toTextStreamResponse();

// ✅ CORRECT - cleanup in callbacks
const response = streamText({
  model,
  tools,
  prompt,
  onFinish: async () => {
    await mcpClient.close();
  },
  onError: async () => {
    await mcpClient.close();
  },
});
typescript
// ❌ 错误 - 未清理
const mcpClient = await createMCPClient({ transport });
const tools = await mcpClient.tools();
const response = streamText({ model, tools, prompt });
return response.toTextStreamResponse();

// ✅ 正确 - 在回调中清理
const response = streamText({
  model,
  tools,
  prompt,
  onFinish: async () => {
    await mcpClient.close();
  },
  onError: async () => {
    await mcpClient.close();
  },
});

Migration Checklist (v5 → v6)

迁移检查清单(v5 → v6)

When migrating from v5 to v6, update:
  • [ ] Replace generateObject/streamObject with generateText/streamText + Output
  • [ ] Replace append with sendMessage in useChat
  • [ ] Remove input, setInput, handleInputChange from useChat destructuring
  • [ ] Add local state management for input: const [input, setInput] = useState('')
  • [ ] Update message rendering from message.content to message.parts.map(...)
  • [ ] Update sendMessage calls to use { text: input } structure
  • [ ] Replace toDataStreamResponse() with toUIMessageStreamResponse()
  • [ ] Update tool definitions to use tool() helper with inputSchema
  • [ ] Update model IDs (e.g., claude-sonnet-4-5)
  • [ ] Consider using ToolLoopAgent for agentic applications
  • [ ] Update TypeScript types (UIMessage, ModelMessage)
  • [ ] Add addToolOutput handling if using client-side tools
  • [ ] Consider implementing middleware for logging, caching, or guardrails
从v5迁移至v6时,需更新以下内容:
  • [ ] 用 generateText/streamText + Output 替代 generateObject/streamObject
  • [ ] 在useChat中用 sendMessage 替代 append
  • [ ] 从useChat的解构中移除 input、setInput 和 handleInputChange
  • [ ] 添加输入的本地状态管理:const [input, setInput] = useState('')
  • [ ] 将消息渲染从 message.content 更新为 message.parts.map(...)
  • [ ] 更新sendMessage调用为 { text: input } 结构
  • [ ] 用 toUIMessageStreamResponse() 替代 toDataStreamResponse()
  • [ ] 更新工具定义为使用 tool() 助手搭配 inputSchema
  • [ ] 更新模型ID(例如 claude-sonnet-4-5)
  • [ ] 考虑为智能体应用使用 ToolLoopAgent
  • [ ] 更新TypeScript类型(UIMessage、ModelMessage)
  • [ ] 若使用客户端工具,添加 addToolOutput 处理逻辑
  • [ ] 考虑实现用于日志记录、缓存或防护的中间件

Decision Guide

决策指南

When implementing AI SDK features, ask:
  1. Is this client-side or server-side?
    • Client: Use useChat hook
    • Server: Use generateText or streamText
    • Agent: Use ToolLoopAgent with createAgentUIStreamResponse
  2. Do I need streaming or non-streaming?
    • Streaming chat: streamText + toUIMessageStreamResponse()
    • Non-streaming: generateText
    • Simple text stream: streamText + toTextStreamResponse()
  3. Do I need structured output?
    • Yes: Use Output.object(), Output.array(), Output.choice(), or Output.json()
    • Pass to generateText or streamText via output parameter
  4. Do I need tool calling?
    • Yes: Define tools with tool() helper and inputSchema (zod)
    • Pass tools object to generateText, streamText, or ToolLoopAgent
  5. Am I building an agent?
    • Yes: Use ToolLoopAgent class
    • Configure stopWhen, toolChoice, prepareStep as needed
    • Use createAgentUIStreamResponse for API routes
  6. Am I using the correct message format?
    • Client (useChat): Returns UIMessage[] with parts property
    • Server: Convert with convertToModelMessages() to ModelMessage[]
    • Render messages using message.parts.map(...)
  7. Is my model specification correct?
    • Direct provider: anthropic('claude-sonnet-4-5')
    • Gateway (production): gateway('anthropic/claude-sonnet-4-5')
    • Embeddings: openai.textEmbeddingModel('text-embedding-3-small')
  8. Do I need embeddings?
    • Use embed for single values
    • Use embedMany for batches
    • Use textEmbeddingModel() method
  9. Do I need middleware?
    • Logging: Custom middleware with transformParams/wrapGenerate
    • Caching: Custom middleware with result storage
    • RAG: Custom middleware to inject context
    • Guardrails: Custom middleware to filter output
实现AI SDK功能时,可按以下步骤决策:
  1. 是客户端还是服务端实现?
    • 客户端:使用 useChat 钩子
    • 服务端:使用 generateText 或 streamText
    • 智能体:使用 ToolLoopAgent 搭配 createAgentUIStreamResponse
  2. 需要流式还是非流式?
    • 流式聊天:streamText + toUIMessageStreamResponse()
    • 非流式:generateText
    • 简单文本流式传输:streamText + toTextStreamResponse()
  3. 是否需要结构化输出?
    • 是:使用 Output.object()、Output.array()、Output.choice() 或 Output.json()
    • 通过 output 参数传递给 generateText 或 streamText
  4. 是否需要工具调用?
    • 是:使用 tool() 助手搭配 inputSchema(zod)定义工具
    • 将工具对象传递给 generateText、streamText 或 ToolLoopAgent
  5. 是否在构建智能体?
    • 是:使用 ToolLoopAgent 类
    • 根据需要配置 stopWhen、toolChoice 和 prepareStep
    • 为API路由使用 createAgentUIStreamResponse
  6. 是否使用了正确的消息格式?
    • 客户端(useChat):返回带 parts 属性的 UIMessage[]
    • 服务端:使用 convertToModelMessages() 转换为 ModelMessage[]
    • 使用 message.parts.map(...) 渲染消息
  7. 模型指定是否正确?
    • 直接提供商:anthropic('claude-sonnet-4-5')
    • 网关(生产环境):gateway('anthropic/claude-sonnet-4-5')
    • 嵌入:openai.textEmbeddingModel('text-embedding-3-small')
  8. 是否需要嵌入?
    • 单个值:使用 embed
    • 批量值:使用 embedMany
    • 使用 textEmbeddingModel() 方法
  9. 是否需要中间件?
    • 日志记录:带 transformParams/wrapGenerate 的自定义中间件
    • 缓存:带结果存储的自定义中间件
    • RAG:用于注入上下文的自定义中间件
    • 防护:用于过滤输出的自定义中间件

Quick Reference

快速参考

| Task | Function | Key Parameters |
| --- | --- | --- |
| Generate text | generateText() | model, prompt, system, tools, output |
| Stream text | streamText() | model, messages, tools, output, onFinish |
| Chat UI | useChat() | api, onToolCall, onFinish, onError |
| Build agent | ToolLoopAgent | model, instructions, tools, stopWhen |
| Tool calling | tool() | description, inputSchema, outputSchema, execute |
| Structured output | Output.object() | schema (zod) |
| Text embedding | embed() | model, value |
| Batch embedding | embedMany() | model, values |
| Message conversion | convertToModelMessages() | messages (UIMessage[]) |
| MCP integration | createMCPClient() | transport |
| Add middleware | wrapLanguageModel() | model, middleware |
| Gateway model | gateway() | "provider/model-name" (e.g., "anthropic/claude-sonnet-4-5") |
| 任务 | 函数 | 关键参数 |
| --- | --- | --- |
| 生成文本 | generateText() | model, prompt, system, tools, output |
| 流式文本 | streamText() | model, messages, tools, output, onFinish |
| 聊天UI | useChat() | api, onToolCall, onFinish, onError |
| 构建智能体 | ToolLoopAgent | model, instructions, tools, stopWhen |
| 工具调用 | tool() | description, inputSchema, outputSchema, execute |
| 结构化输出 | Output.object() | schema(zod) |
| 文本嵌入 | embed() | model, value |
| 批量嵌入 | embedMany() | model, values |
| 消息转换 | convertToModelMessages() | messages(UIMessage[]) |
| MCP集成 | createMCPClient() | transport |
| 添加中间件 | wrapLanguageModel() | model, middleware |
| 网关模型 | gateway() | "provider/model-name"(例如 "anthropic/claude-sonnet-4-5") |

Additional Resources

额外资源

When in doubt, check the official documentation at https://ai-sdk.dev/docs/.
Remember: AI SDK v6 uses provider function model specification (or gateway() for production), parts-based messages, sendMessage instead of append, Output helpers instead of generateObject, toUIMessageStreamResponse instead of toDataStreamResponse, and requires convertToModelMessages in API routes.
有疑问时,请查阅官方文档:https://ai-sdk.dev/docs/。
注意: AI SDK v6使用提供商函数指定模型(生产环境可使用 gateway())、基于parts的消息结构、sendMessage 替代 append、Output助手替代 generateObject、toUIMessageStreamResponse 替代 toDataStreamResponse,且在API路由中需要使用 convertToModelMessages。