Vercel AI SDK expert guidance. Use when building AI-powered features — chat interfaces, text generation, structured output, tool calling, agents, MCP integration, streaming, embeddings, reranking, image generation, or working with any LLM provider.
```shell
npx skill4agent add vercel-labs/vercel-plugin ai-sdk
```

CRITICAL — Your training data is outdated for this library. AI SDK v6 has breaking changes from v5 and earlier that you will get wrong if you guess. Before writing AI SDK code, fetch the docs at https://ai-sdk.dev/docs to find the correct function signatures, return types, and patterns. Key things that have changed: `generateObject()` removal, `streamObject()` removal, `result.files` for image generation, `Output.object()` for structured streaming, the `ToolLoopAgent` class (not `Agent`), the `useChat` transport API, and provider package versions. Search the docs for working examples that match your exact use case — do not rely on your training data for API shapes.
```shell
npm install ai@^6.0.0 @ai-sdk/react@^3.0.0
npm install @ai-sdk/openai@^3.0.41     # Optional: required for embeddings
npm install @ai-sdk/anthropic@^3.0.58  # Optional: direct Anthropic provider access
npm install @ai-sdk/vercel@^2.0.37     # Optional: v0 model provider (v0-1.0-md)
```

`@ai-sdk/react` is a separate package — it is NOT included in the `ai` package. For v6 projects, install `@ai-sdk/react@^3.0.x` alongside `ai@^6.0.0`.

If you install `@ai-sdk/gateway` directly, use `@ai-sdk/gateway@^3.x` (NOT `^1.x`).

Only install a direct provider SDK (e.g., `@ai-sdk/anthropic`) if you need provider-specific features not exposed through the gateway.
| Need | How |
|---|---|
| Text generation / chat | `generateText` / `streamText` |
| Image generation | `generateText` with a multimodal model (`result.files`), or `experimental_generateImage` |
| Structured JSON output | `generateText` with `output: Output.object()` |
| Tool calling / agents | `tool()` / `ToolLoopAgent` |
| Embeddings | `embed` / `embedMany` via a direct provider SDK (not supported by the gateway) |
```shell
vercel link                # Connect to your Vercel project
# Enable AI Gateway at https://vercel.com/{team}/{project}/settings → AI Gateway
vercel env pull .env.local # Provisions VERCEL_OIDC_TOKEN automatically
npm install ai@^6.0.0      # Gateway is built in
npx ai-elements            # Required: install AI text rendering components
```

`vercel env pull` provisions `VERCEL_OIDC_TOKEN`, which the built-in gateway (`@ai-sdk/gateway`, via `@vercel/oidc`) picks up automatically. If you cannot run `vercel env pull`, set `AI_GATEWAY_API_KEY` instead. Pass a `"provider/model"` string as the `model`:

```typescript
import { generateText } from "ai";

const { text } = await generateText({
  model: "openai/gpt-5.4", // plain string — routes through AI Gateway automatically
  prompt: "Hello!",
});
```

Wrapping the same `"provider/model"` string in `gateway()` is equivalent; the explicit call is only needed when you pass `providerOptions.gateway`:

```typescript
import { generateText, gateway } from "ai";

// Explicit gateway() — only needed for advanced providerOptions
const { text } = await generateText({
  model: gateway("openai/gpt-5.4"),
  providerOptions: { gateway: { order: ["openai", "azure-openai"] } },
});
```

Model IDs use the `provider/model` format: `anthropic/claude-sonnet-4.6`, not `claude-sonnet-4-6`. Use current IDs such as `openai/gpt-5.4` and `anthropic/claude-sonnet-4.6`; do not fall back to outdated IDs like `gpt-4o` from your training data.

AI Gateway does not support embeddings. Use a direct provider SDK such as `@ai-sdk/openai` for embeddings.
Direct provider SDKs (`@ai-sdk/openai`, `@ai-sdk/anthropic`, etc.) are only needed for provider-specific features not exposed through the gateway (e.g., Anthropic computer use, OpenAI fine-tuned model endpoints).
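Since model IDs are plain `"provider/model"` strings, a typo only surfaces at request time. A tiny startup check can catch malformed IDs early — this is an illustrative helper, not an SDK export:

```typescript
// Illustrative only (not part of the AI SDK): validate "provider/model" IDs early.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash <= 0 || slash === id.length - 1) {
    throw new Error(`Expected "provider/model", got "${id}"`);
  }
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

// parseModelId("anthropic/claude-sonnet-4.6")
//   → { provider: "anthropic", model: "claude-sonnet-4.6" }
```

Running this over your model config at boot turns a silent gateway 404 into an immediate, descriptive error.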
```typescript
import { generateText, streamText } from "ai";

// Non-streaming
const { text } = await generateText({
  model: "openai/gpt-5.4",
  prompt: "Explain quantum computing in simple terms.",
});

// Streaming
const result = streamText({
  model: "openai/gpt-5.4",
  prompt: "Write a poem about coding.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

`generateObject` was removed in v6. Use `generateText` with `output: Output.object()` wherever you would have reached for `generateObject`:

```typescript
import { generateText, Output } from "ai";
import { z } from "zod";

const { output } = await generateText({
  model: "openai/gpt-5.4",
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({
            name: z.string(),
            amount: z.string(),
          }),
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: "Generate a recipe for chocolate chip cookies.",
});
```

Tool definitions use `inputSchema` (not the old `parameters`) and `outputSchema`; tool results are returned as `output` (not `result`). Set `strict: true` for schema-enforced tool calls:
```typescript
import { generateText, tool } from "ai";
import { z } from "zod";

const result = await generateText({
  model: "openai/gpt-5.4",
  tools: {
    weather: tool({
      description: "Get the weather for a location",
      inputSchema: z.object({
        city: z.string().describe("The city name"),
      }),
      outputSchema: z.object({
        temperature: z.number(),
        condition: z.string(),
      }),
      strict: true, // Providers generate only schema-valid tool calls
      execute: async ({ city }) => {
        const data = await fetchWeather(city);
        return { temperature: data.temp, condition: data.condition };
      },
    }),
  },
  prompt: "What is the weather in San Francisco?",
});
```

For tools whose shape is only known at runtime, use `dynamicTool` (input is typed `unknown`):

```typescript
import { dynamicTool } from "ai";

const tools = {
  unknownTool: dynamicTool({
    description: "A tool discovered at runtime",
    execute: async (input) => {
      // Handle dynamically — input is typed `unknown`
      return { result: "done" };
    },
  }),
};
```

`ToolLoopAgent` runs the `generateText` / `streamText` tool loop for you; `stopWhen` defaults to `stepCountIs(20)`. (In v6 the agent class is `ToolLoopAgent`, not `Agent`.)
```typescript
import { ToolLoopAgent, stepCountIs, hasToolCall } from "ai";

const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6",
  tools: { weather, search, calculator, finalAnswer }, // tool() definitions as above
  instructions: "You are a helpful assistant.",
  // Default: stepCountIs(20). Override to stop on a terminal tool or custom logic:
  stopWhen: hasToolCall("finalAnswer"),
  prepareStep: (context) => ({
    // Customize each step — swap models, compress messages, limit tools
    toolChoice: context.steps.length > 5 ? "none" : "auto",
  }),
});

const { text } = await agent.generate({
  prompt:
    "Research the weather in Tokyo and calculate the average temperature this week.",
});
```

MCP tools come from `createMCPClient`, which now lives in the `@ai-sdk/mcp` package:

```typescript
import { generateText } from "ai";
import { createMCPClient } from "@ai-sdk/mcp";

const mcpClient = await createMCPClient({
  transport: {
    type: "sse",
    url: "https://my-mcp-server.com/sse",
  },
});

const tools = await mcpClient.tools();

const result = await generateText({
  model: "openai/gpt-5.4",
  tools,
  prompt: "Use the available tools to help the user.",
});

await mcpClient.close();
```

Tools can require human approval before executing via `needsApproval`; pending calls surface to the client in the `approval-requested` state:
```typescript
import { streamText, tool } from "ai";
import { z } from "zod";

const result = streamText({
  model: "openai/gpt-5.4",
  tools: {
    deleteUser: tool({
      description: "Delete a user account",
      inputSchema: z.object({ userId: z.string() }),
      needsApproval: true, // Always require approval
      execute: async ({ userId }) => {
        await db.users.delete(userId);
        return { deleted: true };
      },
    }),
    processPayment: tool({
      description: "Process a payment",
      inputSchema: z.object({ amount: z.number(), recipient: z.string() }),
      // Conditional: only approve large amounts
      needsApproval: async ({ amount }) => amount > 1000,
      execute: async ({ amount, recipient }) => {
        return await processPayment(amount, recipient);
      },
    }),
  },
  prompt: "Delete user 123",
});
```

Handle approvals client-side with `useChat`:
```tsx
"use client";
import { useChat } from "@ai-sdk/react";

function Chat() {
  const { messages, addToolApprovalResponse } = useChat();
  return messages.map((m) =>
    m.parts?.map((part, i) => {
      // Tool parts in approval-requested state need user action
      if (part.type.startsWith("tool-") && part.approval?.state === "approval-requested") {
        return (
          <div key={i}>
            <p>Tool wants to run: {JSON.stringify(part.args)}</p>
            <button onClick={() => addToolApprovalResponse({ id: part.approval.id, approved: true })}>
              Approve
            </button>
            <button onClick={() => addToolApprovalResponse({ id: part.approval.id, approved: false })}>
              Deny
            </button>
          </div>
        );
      }
      return null;
    }),
  );
}
```

Tool parts move through the states `input-streaming` → `input-available` → `approval-requested` (only when `needsApproval` applies) → `output-available` or `output-error`.
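For rendering, the states above collapse naturally into a few UI buckets. A sketch (illustrative helper, not an SDK export; the state strings are the ones listed above):

```typescript
// Map a v6 tool-part state to a coarse UI bucket (illustrative only).
type ToolPartState =
  | "input-streaming"
  | "input-available"
  | "approval-requested"
  | "output-available"
  | "output-error";

function toolPartBucket(
  state: ToolPartState,
): "pending" | "needs-user" | "done" | "failed" {
  switch (state) {
    case "input-streaming":
    case "input-available":
      return "pending";
    case "approval-requested":
      return "needs-user"; // render Approve/Deny buttons here
    case "output-available":
      return "done";
    case "output-error":
      return "failed";
  }
}
```

Keeping this mapping in one place means the approval UI above only has to branch on four buckets instead of five raw states.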
```typescript
import { embed, embedMany, rerank } from "ai";
import { openai } from "@ai-sdk/openai";
import { cohere } from "@ai-sdk/cohere"; // reranking also needs a direct provider

// Single embedding
const { embedding } = await embed({
  model: openai.embedding("text-embedding-3-small"),
  value: "The quick brown fox",
});

// Batch embeddings
const { embeddings } = await embedMany({
  model: openai.embedding("text-embedding-3-small"),
  values: ["text 1", "text 2", "text 3"],
});

// Rerank search results by relevance
const { results } = await rerank({
  model: cohere.reranker("rerank-v3.5"),
  query: "What is quantum computing?",
  documents: searchResults,
});
```
For image generation from a multimodal LLM, use `google/gemini-3.1-flash-image-preview` — not outdated IDs from training data such as `gemini-2.0-flash-exp-image-generation` or `gemini-2.0-flash-001` (avoid `gemini-2.0-*` and `gemini-2.5-*` defaults generally). Both `generateText` and `streamText` return generated images:

```typescript
import { generateText, streamText } from "ai";

// generateText — images returned in result.files
const result = await generateText({
  model: "google/gemini-3.1-flash-image-preview",
  prompt: "A futuristic cityscape at sunset",
});
const imageFiles = result.files.filter((f) => f.mediaType?.startsWith("image/"));

// Convert to data URL for display
const imageFile = imageFiles[0];
const dataUrl = `data:${imageFile.mediaType};base64,${Buffer.from(imageFile.data).toString("base64")}`;

// streamText — stream text, then access images after completion
const stream = streamText({
  model: "google/gemini-3.1-flash-image-preview",
  prompt: "A futuristic cityscape at sunset",
});
for await (const delta of stream.fullStream) {
  if (delta.type === "text-delta") process.stdout.write(delta.text);
}
const finalResult = await stream;
console.log(`Generated ${finalResult.files.length} image(s)`);
```

For dedicated image-only models (as opposed to multimodal LLMs like `google/gemini-3.1-flash-image-preview`), use `experimental_generateImage`:
```typescript
import { experimental_generateImage as generateImage } from "ai";

const { images } = await generateImage({
  model: "google/imagen-4.0-generate-001",
  prompt: "A futuristic cityscape at sunset",
  aspectRatio: "16:9",
});
```

Other image model options: `google/imagen-4.0-ultra-generate-001`, `bfl/flux-2-pro`, `bfl/flux-kontext-max`, `xai/grok-imagine-image-pro`.
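The data-URL conversion used with `result.files` above can be factored into a small helper (illustrative only; assumes a Node environment where `Buffer` is available):

```typescript
// Build a data: URL from raw image bytes, as returned in result.files.
function toDataUrl(mediaType: string, data: Uint8Array): string {
  return `data:${mediaType};base64,${Buffer.from(data).toString("base64")}`;
}

// toDataUrl("image/png", new Uint8Array([1, 2, 3])) → "data:image/png;base64,AQID"
```

In the browser, swap `Buffer.from(...)` for `btoa` over the byte string, since `Buffer` is Node-only.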
Saving images to disk:

```typescript
import fs from "node:fs";

// From multimodal LLMs (result.files)
for (const [i, file] of imageFiles.entries()) {
  const ext = file.mediaType?.split("/")[1] || "png";
  await fs.promises.writeFile(`output-${i}.${ext}`, file.uint8Array);
}

// From image-only models (result.images)
for (const [i, image] of images.entries()) {
  const buffer = Buffer.from(image.base64, "base64");
  await fs.promises.writeFile(`output-${i}.png`, buffer);
}
```

LLM output is markdown — `**bold**`, `##` headings, `` `code` ``, `---` rules. Always render it through AI Elements: `<Message message={message} />` or `<MessageResponse>{text}</MessageResponse>` from `@/components/ai-elements/message`. Never dump raw model output into `<p>{content}</p>` or `<div>{stream}</div>`.

`useChat` transports:

| Transport | Use case |
|---|---|
| `DefaultChatTransport` | HTTP POST to API routes (default — sends to `/api/chat`) |
| `DirectChatTransport` | In-process agent communication without HTTP (SSR, testing) |
| `TextStreamChatTransport` | Plain text stream protocol |
`useChat()` defaults to `DefaultChatTransport({ api: '/api/chat' })`:

```tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { Conversation } from "@/components/ai-elements/conversation";
import { Message } from "@/components/ai-elements/message";

function Chat() {
  // No transport needed — defaults to DefaultChatTransport({ api: '/api/chat' })
  const { messages, sendMessage, status } = useChat();
  return (
    <Conversation>
      {messages.map((message) => (
        <Message key={message.id} message={message} />
      ))}
    </Conversation>
  );
}
```

Install the components with `npx ai-elements`. To talk to an in-process agent with no API route, use `DirectChatTransport`:
```tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { DirectChatTransport } from "ai";
import { myAgent } from "@/lib/agent"; // a ToolLoopAgent instance

function Chat() {
  const { messages, sendMessage, status } = useChat({
    transport: new DirectChatTransport({ agent: myAgent }),
  });
  // Same UI as above — no /api/chat route required
}
```

`useChat` changes in v6:

- `useChat({ api })` → `useChat({ transport: new DefaultChatTransport({ api }) })`
- `handleSubmit` → `sendMessage({ text })`; `input` / `handleInputChange` are gone — manage input with `useState`
- `body` / `onResponse` were removed from `useChat` — configure the `transport` instead
- `isLoading` → `status === 'streaming' || status === 'submitted'`
- `message.content` → `message.parts`

On the server, return `toUIMessageStreamResponse()` for `useChat` / `DefaultChatTransport` clients and render it with `<MessageResponse>` / `<Message>`; reserve `toTextStreamResponse()` for plain `fetch()` consumers.
```typescript
// app/api/chat/route.ts
import { streamText, convertToModelMessages, stepCountIs } from "ai";
import type { UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  // IMPORTANT: convertToModelMessages is async in v6
  const modelMessages = await convertToModelMessages(messages);

  const result = streamText({
    model: "openai/gpt-5.4",
    messages: modelMessages,
    tools: {
      /* your tools */
    },
    // IMPORTANT: use stopWhen with stepCountIs for multi-step tool calling
    // maxSteps was removed in v6 — use this instead
    stopWhen: stepCountIs(5),
  });

  // Use toUIMessageStreamResponse (not toDataStreamResponse) for chat UIs
  return result.toUIMessageStreamResponse();
}
```

To serve a `ToolLoopAgent` behind an API route, use `createAgentUIStreamResponse`:
```typescript
// lib/agent.ts
import { ToolLoopAgent, stepCountIs } from "ai";

export const myAgent = new ToolLoopAgent({
  model: "openai/gpt-5.4",
  instructions: "You are a helpful assistant.",
  tools: { /* your tools */ },
  stopWhen: stepCountIs(5),
});
```

```typescript
// app/api/chat/route.ts — agent API route
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/lib/agent";

export async function POST(req: Request) {
  const { messages } = await req.json();
  return createAgentUIStreamResponse({ agent: myAgent, uiMessages: messages });
}
```

Plain text streaming (no `useChat`, no `DirectChatTransport`) is for CLI tools, server-to-server pipes, and programmatic consumers. If the response will be displayed in a browser UI, use `toUIMessageStreamResponse()` + AI Elements instead — even for "simple" streaming text panels.
```typescript
// app/api/generate/route.ts — for CLI or server consumers, NOT browser UIs
import { streamText } from "ai";

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();
  const result = streamText({
    model: "openai/gpt-5.4",
    prompt,
  });
  return result.toTextStreamResponse();
}
```

Wrap a model with middleware to inject context or post-process results:
```typescript
import { wrapLanguageModel } from "ai";

const wrappedModel = wrapLanguageModel({
  model: "openai/gpt-5.4",
  middleware: {
    transformParams: async ({ params }) => {
      // Inject RAG context, modify system prompt, etc.
      return { ...params, system: params.system + "\n\nContext: ..." };
    },
    wrapGenerate: async ({ doGenerate }) => {
      const result = await doGenerate();
      // Post-process, log, validate guardrails
      return result;
    },
  },
});
```

Gateway routing options go under `providerOptions.gateway`:
```typescript
import { generateText, gateway } from "ai";

const result = await generateText({
  model: gateway("anthropic/claude-sonnet-4.6"),
  prompt: "Hello!",
  providerOptions: {
    gateway: {
      order: ["bedrock", "anthropic"], // Try Bedrock first
      models: ["openai/gpt-5.4"], // Fallback model
      only: ["anthropic", "bedrock"], // Restrict providers
      user: "user-123", // Usage tracking
      tags: ["feature:chat", "env:production"], // Cost attribution
    },
  },
});
```

Debug with DevTools:

```shell
npx @ai-sdk/devtools
# Opens http://localhost:4983 — inspect LLM calls, agents, token usage, timing
```

Quick checklist:

- Pass `"provider/model"` strings directly (`model: "openai/gpt-5.4"`); authenticate via `vercel env pull`; reach for `gateway()` only when you need `providerOptions.gateway`.
- Setup: `vercel link` → enable the gateway at `https://vercel.com/{team}/{project}/settings` → `vercel env pull .env.local` → `npx ai-elements`.
- Render model output with `<Message>` / `<MessageResponse>{text}`; `streamText` + `useChat` for chat, `generateText` for one-shot calls.
- `useChat()` defaults to `DefaultChatTransport({ api: '/api/chat' })`; on the server, `convertToModelMessages()` then `toUIMessageStreamResponse()`.
- `DirectChatTransport` for in-process agents; `toTextStreamResponse()` only for non-browser consumers, otherwise `toUIMessageStreamResponse()`.
- Structured output: `generateText` + `Output.object()`. Agents: `ToolLoopAgent` with `stopWhen` (default `stepCountIs(20)`), served via `createAgentUIStreamResponse`. MCP: `mcp-to-ai-sdk`. Tool options: `needsApproval`, `strict: true`.

The most common v5 habit to unlearn: `generateText` with `Output.object()` returns `output`, not `object`:
```typescript
// CORRECT — v6
const { output } = await generateText({
  model: 'openai/gpt-5.4',
  output: Output.object({ schema: mySchema }),
  prompt: '...',
})
console.log(output) // ✅ parsed object

// WRONG — v5 habit
const { object } = await generateText({ ... }) // ❌ undefined — `object` doesn't exist in v6
```

Destructure `output`; the structured result is only available as `output` in v6.

Codemods automate most of the migration:

```shell
npx @ai-sdk/codemod upgrade        # run all codemods
npx @ai-sdk/codemod v6             # v6-specific codemods
npx @ai-sdk/codemod --dry upgrade  # preview without writing changes
```

v5 → v6 rename cheat sheet:

| v5 | v6 |
|---|---|
| `generateObject` / `streamObject` | `generateText` / `streamText` with `Output.object()` |
| tool `parameters` | `inputSchema` |
| tool `result` | `output` |
| `maxSteps` | `stopWhen: stepCountIs(N)` (`stepCountIs` from `ai`) |
| `CoreMessage` (from `ai`) | `ModelMessage`, via `convertToModelMessages()` |
| `ToolCallOptions` | `ToolExecutionOptions` |
| `Experimental_Agent` | `ToolLoopAgent` |
| agent `system` option | `instructions` |
| `agent.generateText()` / `agent.streamText()` | `agent.generate()` / `agent.stream()` |
| `experimental_createMCPClient` | `createMCPClient` (in `@ai-sdk/mcp`) |
| `callOptionsSchema` | `prepareCall` |
| `useChat({ api })` | `useChat({ transport: new DefaultChatTransport({ api }) })` |
| `useChat` `body` / `onResponse` | removed — configure the transport |
| `handleSubmit` / `input` | `sendMessage({ text })` |
| `toDataStreamResponse()` | `toUIMessageStreamResponse()` |
| `createUIMessageStream`: `stream.writer.write(...)` | `stream.write(...)` |
| `message.content` | `message.parts` |
| `tool-invocation` part type | `tool-<toolName>` (e.g. `tool-weather`) |
| `TypedToolResult.result` | `TypedToolResult.output` |
| `isToolUIPart` / `getToolName` | `isStaticToolUIPart` / `getStaticToolName` |
| `isToolOrDynamicToolUIPart` / `getToolOrDynamicToolName` | `isToolUIPart` / `getToolName` |

`toTextStreamResponse()` remains available for plain text streams. Package versions: `ai@^6.0.0`, `@ai-sdk/react@^3.0.x`, `@ai-sdk/gateway@^3.x` (not `^1.x`).

New in v6: `needsApproval`, `strict: true`, `DirectChatTransport`, `addToolApprovalResponse` on `useChat`, `stopWhen` defaults (`stepCountIs(1)` for `generateText`, `stepCountIs(20)` for `ToolLoopAgent`), `Tool.toModelOutput({ output })`, `DynamicToolCall.args` typed as `unknown`.

Provider changes: `@ai-sdk/azure` drops `azure.chat()` and keys `providerMetadata` / `providerOptions` under `azure` (previously `openai`); `@ai-sdk/google-vertex` keys under `vertex` (previously `google`); `@ai-sdk/anthropic` adds `structuredOutputMode`; `@ai-sdk/langchain` adds `toBaseMessages()`, `toUIMessageStream()`, and `LangSmithDeploymentTransport`. Warnings from `ai` are now typed, including an `other` variant carrying `unknown` details.