Guide for Vercel AI SDK v6 implementation patterns including generateText, streamText, ToolLoopAgent, structured output with Output helpers, useChat hook, tool calling, embeddings, middleware, and MCP integration. Use when implementing AI chat interfaces, streaming responses, agentic applications, tool/function calling, text embeddings, workflow patterns, or working with convertToModelMessages and toUIMessageStreamResponse. Activates for AI SDK integration, useChat hook usage, message streaming, agent development, or tool calling tasks.
npx skill4agent add fluid-tools/claude-skills vercel-ai-sdk

NEVER accept "Module not found" errors as environment issues.
YOU must install the required packages with the CORRECT package manager
Common packages needed:
- ai (core AI SDK)
- @ai-sdk/openai (OpenAI provider)
- @ai-sdk/anthropic (Anthropic provider)
- @ai-sdk/mcp (MCP integration)
- @modelcontextprotocol/sdk (MCP client SDK)
- zod (for tool schemas)
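Choosing the right package manager can be automated by checking which lock file is present. A minimal sketch (`detect_install` is a hypothetical helper name, not part of any tool):

```shell
# Map the lock file in the current directory to an install command.
# Falls back to npm when no lock file is found.
detect_install() {
  if [ -f pnpm-lock.yaml ]; then echo "pnpm add"
  elif [ -f yarn.lock ]; then echo "yarn add"
  elif [ -f bun.lockb ]; then echo "bun add"
  elif [ -f package-lock.json ]; then echo "npm install"
  else echo "npm install"
  fi
}

# Usage: $(detect_install) @ai-sdk/anthropic
```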
"Code is correct" is NOT enough.
You must achieve FULL PASSING status.
This is what it means to be an autonomous agent.

ls -la | grep -E "lock"
# Look for: pnpm-lock.yaml, package-lock.json, yarn.lock, bun.lockb

Error: Cannot find module '@ai-sdk/anthropic'
Import: import { anthropic } from '@ai-sdk/anthropic'
Package needed: @ai-sdk/anthropic

# If pnpm-lock.yaml exists (MOST COMMON for Next.js evals):
pnpm install @ai-sdk/anthropic
# or
pnpm add @ai-sdk/anthropic
# If package-lock.json exists:
npm install @ai-sdk/anthropic
# If yarn.lock exists:
yarn add @ai-sdk/anthropic
# If bun.lockb exists:
bun install @ai-sdk/anthropic

npm run build
# or pnpm run build, yarn build, bun run build

npm run build
npm run lint
npm run test

generateObject and streamObject are DEPRECATED in v6. Use generateText / streamText with the Output helpers instead.

// DO NOT USE - DEPRECATED in v6
import { generateObject } from "ai";
const result = await generateObject({
model: anthropic("claude-sonnet-4-5"),
schema: z.object({
sentiment: z.enum(["positive", "neutral", "negative"]),
}),
prompt: "Analyze sentiment",
});

import { generateText, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const { output } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
sentiment: z.enum(["positive", "neutral", "negative"]),
topics: z.array(z.string()),
}),
}),
prompt: "Analyze this feedback...",
});
// Access typed output
console.log(output.sentiment); // 'positive' | 'neutral' | 'negative'
console.log(output.topics); // string[]

| Helper | Purpose | Example |
|---|---|---|
| Output.object() | Generate typed object | Output.object({ schema }) |
| Output.array() | Generate typed array | Output.array({ schema }) |
| Output.choice() | Generate enum value | Output.choice({ choices }) |
| Output.json() | Unstructured JSON | Output.json() |
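The typed result can be illustrated without the SDK. A hedged sketch of the shape Output.object() produces for the feedback schema above — in real code the type is inferred from the zod schema and the SDK performs the validation; the guard here is only to show what that type means at runtime:

```typescript
// Shape of the Output.object() result in the feedback example (illustrative).
type Sentiment = "positive" | "neutral" | "negative";

interface Feedback {
  sentiment: Sentiment;
  topics: string[];
}

// Hypothetical runtime guard mirroring what the zod schema validates.
function isFeedback(value: unknown): value is Feedback {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  const sentimentOk =
    v.sentiment === "positive" ||
    v.sentiment === "neutral" ||
    v.sentiment === "negative";
  const topicsOk =
    Array.isArray(v.topics) && v.topics.every((t) => typeof t === "string");
  return sentimentOk && topicsOk;
}
```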
// DO NOT DO THIS - This pattern is INCORRECT
import { z } from 'zod';
tools: {
myTool: {
description: 'My tool',
parameters: z.object({...}), // ❌ WRONG - "parameters" doesn't exist in v6
execute: async ({...}) => {...},
}
}

Type '{ description: string; parameters: ... }' is not assignable to type '{ inputSchema: FlexibleSchema<any>; ... }'

// ALWAYS DO THIS - This is the ONLY correct pattern
import { tool } from 'ai'; // ⚠️ MUST import tool
import { z } from 'zod';
tools: {
myTool: tool({ // ⚠️ MUST wrap with tool()
description: 'My tool',
inputSchema: z.object({...}), // ⚠️ MUST use "inputSchema" (not "parameters")
execute: async ({...}) => {...},
}),
}

Requirements for every tool:
- import { tool } from 'ai'
- wrap the definition in tool({ ... })
- use inputSchema (never parameters), defined with z.object({ ... })
- provide description and execute

import { ToolLoopAgent, tool, stepCountIs } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const myAgent = new ToolLoopAgent({
model: anthropic("claude-sonnet-4-5"),
instructions: "You are a helpful assistant that can search and analyze data.",
tools: {
getData: tool({
description: "Fetch data from API",
inputSchema: z.object({
query: z.string(),
}),
execute: async ({ query }) => {
// Implement data fetching
return { result: "data for " + query };
},
}),
analyzeData: tool({
description: "Analyze fetched data",
inputSchema: z.object({
data: z.string(),
}),
execute: async ({ data }) => {
return { analysis: "Analysis of " + data };
},
}),
},
stopWhen: stepCountIs(20), // Stop after 20 steps max
});
// Non-streaming execution
const { text, toolCalls } = await myAgent.generate({
prompt: "Find and analyze user data",
});
// Streaming execution
const stream = myAgent.stream({ prompt: "Find and analyze user data" });
for await (const chunk of stream) {
// Handle streaming chunks
}

// app/api/agent/route.ts
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/agents/my-agent";
export async function POST(request: Request) {
const { messages } = await request.json();
return createAgentUIStreamResponse({
agent: myAgent,
uiMessages: messages,
});
}

| Parameter | Purpose | Example |
|---|---|---|
| model | AI model to use | anthropic("claude-sonnet-4-5") |
| instructions | System prompt | "You are a helpful assistant." |
| tools | Available tools | { myTool: tool({ ... }) } |
| stopWhen | Termination condition | stepCountIs(20) |
| toolChoice | Tool usage mode | 'auto' / 'required' / 'none' |
| output | Structured output schema | Output.object({ schema }) |
| prepareStep | Dynamic per-step adjustments | Function returning step config |
| | Runtime options injection | Async function for RAG, etc. |
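The stopWhen termination condition can be modeled as a predicate over the steps executed so far. A minimal sketch of a stepCountIs-style helper — the real one ships with the `ai` package; the step shape here is an assumption made only to show the idea:

```typescript
// Illustrative re-implementation of a stepCountIs-style stop condition.
// Field names are assumptions, not the ai package's actual types.
interface AgentStep {
  toolCalls: unknown[];
}

type StopCondition = (opts: { steps: AgentStep[] }) => boolean;

const stepCountIs =
  (max: number): StopCondition =>
  ({ steps }) =>
    steps.length >= max;

// A tool loop would evaluate the condition after every step:
function shouldStop(steps: AgentStep[], stop: StopCondition): boolean {
  return stop({ steps });
}
```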
// ❌ v5 (deprecated)
const { messages, input, setInput, append } = useChat();
// Sending message
append({ content: text, role: "user" });

// ✅ v6
const { messages, sendMessage, status, addToolOutput } = useChat();
const [input, setInput] = useState('');
// Sending message
sendMessage({ text: input });
// New in v6: Handle tool outputs
addToolOutput({ toolCallId: 'xxx', result: { ... } });

// ❌ v5
<div>{message.content}</div>

// ✅ v6 (parts-based)
<div>
{message.parts.map((part, index) =>
part.type === 'text' ? <span key={index}>{part.text}</span> : null
)}
</div>

// ❌ v5
return result.toDataStreamResponse();
// ✅ v6
return result.toUIMessageStreamResponse();

import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
// Use provider functions (direct provider access)
model: anthropic("claude-sonnet-4-5");
model: anthropic("claude-opus-4-5");
model: anthropic("claude-haiku-4-5");
model: openai("gpt-4o");
model: openai("gpt-4o-mini");

import { gateway } from "ai";

model: gateway("anthropic/claude-sonnet-4-5");
model: gateway("anthropic/claude-haiku-4-5");
model: gateway("anthropic/claude-opus-4-5");

import { generateText, gateway } from "ai";
const result = await generateText({
model: gateway("anthropic/claude-sonnet-4-5"),
prompt: "Hello, world!",
});

// Option 1: Direct provider
import { anthropic } from "@ai-sdk/anthropic";
model: anthropic("claude-sonnet-4-5");
// Option 2: Gateway (recommended for production)
import { gateway } from "ai";
model: gateway("anthropic/claude-sonnet-4-5");

import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
const result = await generateText({
model: anthropic('claude-sonnet-4-5'),
prompt: 'Your prompt here',
system: 'Optional system message',
tools?: { ... },
maxSteps?: 5,
output?: Output.object({ schema: z.object({...}) }),
});

{
text: string; // Generated text output
output?: T; // Typed structured output (if Output specified)
toolCalls: ToolCall[]; // Tool invocations made
finishReason: string; // Why generation stopped
usage: TokenUsage; // Token consumption
response: RawResponse; // Raw provider response
warnings: Warning[]; // Provider-specific alerts
}

// app/api/generate/route.ts
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function GET() {
const result = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: "Why is the sky blue?",
});
return Response.json({ text: result.text });
}

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
const result = streamText({
model: anthropic('claude-sonnet-4-5'),
prompt: 'Your prompt here',
system: 'Optional system message',
messages?: ModelMessage[],
tools?: { ... },
onChunk?: (chunk) => { ... },
onStepFinish?: (step) => { ... },
onFinish?: async (result) => { ... },
onError?: async (error) => { ... },
});

// For chat applications with useChat hook
result.toUIMessageStreamResponse();
// For simple text streaming
result.toTextStreamResponse();

// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
system: "You are a helpful assistant.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}

import { useChat } from '@ai-sdk/react';
const {
messages, // Array of UIMessage with parts-based structure
sendMessage, // Function to send messages (replaces append)
status, // 'submitted' | 'streaming' | 'ready' | 'error'
stop, // Abort current streaming
regenerate, // Reprocess last message
setMessages, // Manually modify history
error, // Error object if request fails
clearError, // Clear error state
addToolOutput, // Submit tool results (NEW in v6)
resumeStream, // Resume interrupted stream (NEW in v6)
} = useChat({
api: '/api/chat',
id?: 'chat-id',
messages?: initialMessages,
onToolCall?: async (toolCall) => { ... },
onFinish?: (message) => { ... },
onError?: (error) => { ... },
sendAutomaticallyWhen?: (messages) => boolean,
resume?: true,
});

'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function ChatPage() {
const { messages, sendMessage, status, addToolOutput } = useChat({
onToolCall: async ({ toolCall }) => {
// Handle client-side tool execution
if (toolCall.name === 'confirm') {
const result = await showConfirmDialog(toolCall.args);
addToolOutput({ toolCallId: toolCall.id, result });
}
},
});
const [input, setInput] = useState('');
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim()) return;
sendMessage({ text: input });
setInput('');
};
return (
<div>
<div>
{messages.map((message) => (
<div key={message.id}>
<strong>{message.role}:</strong>
{message.parts.map((part, index) => {
switch (part.type) {
case 'text':
return <span key={index}>{part.text}</span>;
case 'tool-call':
return <div key={index}>Tool: {part.name}</div>;
default:
return null;
}
})}
</div>
))}
</div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type a message..."
disabled={status === 'streaming'}
/>
<button type="submit" disabled={status === 'streaming'}>
Send
</button>
</form>
</div>
);
}

import { tool } from "ai";
import { z } from "zod";
const weatherTool = tool({
description: "Get the weather in a location",
inputSchema: z.object({
location: z.string().describe("The location to get the weather for"),
unit: z.enum(["C", "F"]).describe("Temperature unit"),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
execute: async ({ location, unit }) => {
// Fetch or mock weather data
return {
temperature: 24,
condition: "Sunny",
};
},
});

// app/api/chat/route.ts
import { streamText, convertToModelMessages, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
tools: {
getWeather: tool({
description: "Get the weather for a location",
inputSchema: z.object({
city: z.string().describe("The city to get the weather for"),
unit: z
.enum(["C", "F"])
.describe("The unit to display the temperature in"),
}),
execute: async ({ city, unit }) => {
// API call or mock data
return `It is currently 24°${unit} and Sunny in ${city}!`;
},
}),
},
toolChoice: "auto", // 'auto' | 'required' | 'none' | { type: 'tool', toolName: 'xxx' }
});
return result.toUIMessageStreamResponse();
}

const result = await generateText({
model: anthropic("claude-sonnet-4-5"),
tools: {
weather: weatherTool,
search: searchTool,
},
prompt: "What is the weather in San Francisco and find hotels there?",
maxSteps: 5, // Allow up to 5 tool call steps
});

import { embed, embedMany } from "ai";
import { openai } from "@ai-sdk/openai";
// Single embedding
const result = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: "Text to embed",
});
// Batch embeddings
const batchResult = await embedMany({
model: openai.textEmbeddingModel("text-embedding-3-small"),
values: ["Text 1", "Text 2", "Text 3"],
});

{
embedding: number[]; // Numerical array representing the text
usage: { tokens: number }; // Token consumption
response: RawResponse; // Raw provider response
}

// app/api/embed/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { text } = await req.json();
const { embedding, usage } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: text,
});
return Response.json({ embedding, usage });
}

import {
extractReasoningMiddleware,
simulateStreamingMiddleware,
defaultSettingsMiddleware,
wrapLanguageModel,
} from "ai";
// Extract reasoning from models like Claude
const modelWithReasoning = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: extractReasoningMiddleware({ tagName: "thinking" }),
});
// Apply default settings
const modelWithDefaults = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: defaultSettingsMiddleware({
temperature: 0.7,
maxOutputTokens: 1000,
}),
});

import { LanguageModelMiddleware, wrapLanguageModel } from "ai";
// Logging middleware
const loggingMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
console.log("Request params:", params);
return params;
},
wrapGenerate: async ({ doGenerate, params }) => {
const result = await doGenerate();
console.log("Response:", result);
return result;
},
};
// Caching middleware
const cache = new Map<string, string>();
const cachingMiddleware: LanguageModelMiddleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params.prompt);
if (cache.has(cacheKey)) {
return { text: cache.get(cacheKey)! };
}
const result = await doGenerate();
cache.set(cacheKey, result.text);
return result;
},
};
// RAG middleware
const ragMiddleware: LanguageModelMiddleware = {
transformParams: async ({ params }) => {
const relevantDocs = await vectorSearch(params.prompt);
return {
...params,
prompt: `Context: ${relevantDocs}\n\nQuery: ${params.prompt}`,
};
},
};
// Apply multiple middleware
const enhancedModel = wrapLanguageModel({
model: anthropic("claude-sonnet-4-5"),
middleware: [loggingMiddleware, cachingMiddleware, ragMiddleware],
});

bun add @ai-sdk/mcp @modelcontextprotocol/sdk

import { createMCPClient } from "@ai-sdk/mcp";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
export async function POST(req: Request) {
const { prompt } = await req.json();
const httpTransport = new StreamableHTTPClientTransport(
new URL("https://mcp-server.example.com/mcp"),
{ headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` } }
);
const mcpClient = await createMCPClient({ transport: httpTransport });
try {
const tools = await mcpClient.tools();
const response = streamText({
model: anthropic("claude-sonnet-4-5"),
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});
return response.toTextStreamResponse();
} catch (error) {
await mcpClient.close();
return new Response("Internal Server Error", { status: 500 });
}
}

import { createMCPClient } from "@ai-sdk/mcp";
import { Experimental_StdioMCPTransport } from "@ai-sdk/mcp";
const stdioTransport = new Experimental_StdioMCPTransport({
command: "npx",
args: [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/allowed/dir",
],
});
const mcpClient = await createMCPClient({ transport: stdioTransport });

Always close the MCP client in both onFinish and onError callbacks, and load tools with mcpClient.tools(). To pass useChat messages to the model, convert them to ModelMessage format:

import { convertToModelMessages } from "ai";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}

async function sequentialWorkflow(input: string) {
// Step 1: Generate initial content
const { text: draft } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Write marketing copy for: ${input}`,
});
// Step 2: Evaluate quality
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
score: z.number().min(1).max(10),
feedback: z.string(),
}),
}),
prompt: `Evaluate this copy: ${draft}`,
});
// Step 3: Improve if needed
if (evaluation.score < 7) {
const { text: improved } = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve this copy based on feedback:\n\nCopy: ${draft}\n\nFeedback: ${evaluation.feedback}`,
});
return improved;
}
return draft;
}

async function parallelReview(code: string) {
const [securityReview, performanceReview, maintainabilityReview] =
await Promise.all([
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for security issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for performance issues:\n\n${code}`,
}),
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Review for maintainability:\n\n${code}`,
}),
]);
return {
security: securityReview.text,
performance: performanceReview.text,
maintainability: maintainabilityReview.text,
};
}

async function routeQuery(query: string) {
// Classify the query
const { output: classification } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.choice({
choices: ["technical", "billing", "general"] as const,
}),
prompt: `Classify this customer query: ${query}`,
});
// Route to appropriate handler
switch (classification) {
case "technical":
return handleTechnicalQuery(query);
case "billing":
return handleBillingQuery(query);
default:
return handleGeneralQuery(query);
}
}

async function implementFeature(requirement: string) {
// Orchestrator: Break down the task
const { output: plan } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
tasks: z.array(
z.object({
type: z.enum(["frontend", "backend", "database"]),
description: z.string(),
})
),
}),
}),
prompt: `Break down this feature into tasks: ${requirement}`,
});
// Workers: Execute tasks in parallel
const results = await Promise.all(
plan.tasks.map((task) =>
generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Implement this ${task.type} task: ${task.description}`,
})
)
);
return results.map((r) => r.text);
}

async function optimizeOutput(input: string, maxIterations = 3) {
let output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: input,
});
for (let i = 0; i < maxIterations; i++) {
const { output: evaluation } = await generateText({
model: anthropic("claude-sonnet-4-5"),
output: Output.object({
schema: z.object({
isGood: z.boolean(),
improvements: z.array(z.string()),
}),
}),
prompt: `Evaluate this output: ${output.text}`,
});
if (evaluation.isGood) break;
output = await generateText({
model: anthropic("claude-sonnet-4-5"),
prompt: `Improve based on: ${evaluation.improvements.join(", ")}\n\nOriginal: ${output.text}`,
});
}
return output.text;
}

| Part type | Key fields |
|---|---|
| text | text, isStreaming |
| tool-call | name, args, state |
| reasoning | text, isStreaming |
| file | mediaType, url, data |
| source | url, documentId, title |
| step | |
| data | |

import type {
UIMessage, // Message type from useChat
ModelMessage, // Message type for model functions
ToolCall, // Tool call information
TokenUsage, // Token consumption data
} from "ai";

import type { InferAgentUIMessage } from "ai";
// Type-safe messages from agent
type MyAgentMessage = InferAgentUIMessage<typeof myAgent>;

import { tool } from "ai";
import { z } from "zod";
// Tool helper infers execute parameter types
const myTool = tool({
description: "My tool",
inputSchema: z.object({
param1: z.string(),
param2: z.number(),
}),
outputSchema: z.object({
result: z.string(),
}),
execute: async ({ param1, param2 }) => {
// param1 is inferred as string
// param2 is inferred as number
return { result: "success" };
},
});

// app/page.tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
export default function Chat() {
const { messages, sendMessage, status } = useChat();
const [input, setInput] = useState('');
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>
))}
<form onSubmit={(e) => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
}}>
<input value={input} onChange={(e) => setInput(e.target.value)} />
<button disabled={status === 'streaming'}>Send</button>
</form>
</div>
);
}

// app/api/chat/route.ts
import { streamText, convertToModelMessages } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
system: "You are a helpful assistant.",
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}

import { streamText, convertToModelMessages, Output } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
const result = streamText({
model: anthropic("claude-sonnet-4-5"),
messages: convertToModelMessages(messages),
output: Output.object({
schema: z.object({
response: z.string(),
sentiment: z.enum(["positive", "neutral", "negative"]),
confidence: z.number().min(0).max(1),
}),
}),
});
return result.toUIMessageStreamResponse();
}

import {
ToolLoopAgent,
tool,
stepCountIs,
createAgentUIStreamResponse,
} from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
const researchAgent = new ToolLoopAgent({
model: anthropic("claude-sonnet-4-5"),
instructions:
"You are a research assistant that can search and analyze information.",
tools: {
webSearch: tool({
description: "Search the web for information",
inputSchema: z.object({
query: z.string().describe("Search query"),
}),
execute: async ({ query }) => {
// Implement web search
return { results: ["..."] };
},
}),
analyze: tool({
description: "Analyze collected information",
inputSchema: z.object({
data: z.string().describe("Data to analyze"),
}),
execute: async ({ data }) => {
return { analysis: "..." };
},
}),
summarize: tool({
description: "Summarize findings",
inputSchema: z.object({
findings: z.array(z.string()),
}),
execute: async ({ findings }) => {
return { summary: "..." };
},
}),
},
stopWhen: stepCountIs(10),
});
// API Route
export async function POST(request: Request) {
const { messages } = await request.json();
return createAgentUIStreamResponse({
agent: researchAgent,
uiMessages: messages,
});
}

// app/api/search/route.ts
import { embed } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { query } = await req.json();
// Generate embedding for search query
const { embedding } = await embed({
model: openai.textEmbeddingModel("text-embedding-3-small"),
value: query,
});
// Use embedding for similarity search in vector database
// const results = await vectorDB.search(embedding);
return Response.json({ embedding, results: [] });
}

// ❌ WRONG - Deprecated in v6
import { generateObject } from 'ai';
const result = await generateObject({
schema: z.object({...}),
prompt: '...',
});
// ✅ CORRECT - Use Output with generateText
import { generateText, Output } from 'ai';
const { output } = await generateText({
output: Output.object({ schema: z.object({...}) }),
prompt: '...',
});

// ❌ WRONG - Plain object (WILL CAUSE BUILD FAILURE)
tools: {
myTool: {
description: 'My tool',
parameters: z.object({...}), // ❌ Wrong property name
execute: async ({...}) => {...},
},
}
// ✅ CORRECT - Use tool() helper (REQUIRED)
import { tool } from 'ai';
tools: {
myTool: tool({
description: 'My tool',
inputSchema: z.object({...}), // ⚠️ Use inputSchema
execute: async ({...}) => {...},
}),
}

// ❌ WRONG - v5 pattern
const { input, setInput, append } = useChat();
append({ content: "Hello", role: "user" });
// ✅ CORRECT - v6 pattern
const { sendMessage } = useChat();
const [input, setInput] = useState("");
sendMessage({ text: "Hello" });

// ❌ WRONG - v5 pattern
<div>{message.content}</div>
// ✅ CORRECT - v6 parts-based
<div>
{message.parts.map((part, i) =>
part.type === 'text' ? <span key={i}>{part.text}</span> : null
)}
</div>

// ❌ WRONG - v5 method
return result.toDataStreamResponse();
// ✅ CORRECT - v6 method
return result.toUIMessageStreamResponse();

// ❌ WRONG - no cleanup
const mcpClient = await createMCPClient({ transport });
const tools = await mcpClient.tools();
const response = streamText({ model, tools, prompt });
return response.toTextStreamResponse();
// ✅ CORRECT - cleanup in callbacks
const response = streamText({
model,
tools,
prompt,
onFinish: async () => {
await mcpClient.close();
},
onError: async () => {
await mcpClient.close();
},
});

| v5 | v6 |
|---|---|
| generateObject / streamObject | generateText / streamText with Output |
| append | sendMessage |
| input, setInput, handleInputChange from useChat | const [input, setInput] = useState('') |
| message.content | message.parts.map(...) |
| toDataStreamResponse() | toUIMessageStreamResponse() |
| Plain tool object with parameters | tool() with inputSchema |

| Task | Function | Key Parameters |
|---|---|---|
| Generate text | generateText | model, prompt, system |
| Stream text | streamText | model, messages, onFinish |
| Chat UI | useChat | messages, sendMessage, status |
| Build agent | ToolLoopAgent | model, instructions, tools, stopWhen |
| Tool calling | tool() | description, inputSchema, execute |
| Structured output | Output.object() | schema |
| Text embedding | embed | model, value |
| Batch embedding | embedMany | model, values |
| Message conversion | convertToModelMessages | UIMessage[] in, ModelMessage[] out |
| MCP integration | createMCPClient | transport |
| Add middleware | wrapLanguageModel | model, middleware |
| Gateway model | gateway() | 'provider/model-id' |