tanstack-ai

Overview


TanStack AI is a modular, provider-agnostic AI SDK with tree-shakeable adapters for OpenAI, Anthropic, Gemini, Ollama, and more. It provides streaming-first text generation, tool calling with approval workflows, structured output with Zod schemas, multimodal content support, and React hooks for chat/completion UIs.
Core: @tanstack/ai
Vanilla Client: @tanstack/ai-client (framework-agnostic)
React: @tanstack/ai-react
Solid: @tanstack/ai-solid
Adapters: @tanstack/ai-openai, @tanstack/ai-anthropic, @tanstack/ai-gemini, @tanstack/ai-ollama
Languages: TypeScript/JavaScript, PHP, Python
Status: Alpha

Installation


bash
npm install @tanstack/ai @tanstack/ai-react

Or for framework-agnostic vanilla client:

bash
npm install @tanstack/ai @tanstack/ai-client

Provider adapters (install only what you need):


bash
npm install @tanstack/ai-openai
npm install @tanstack/ai-anthropic
npm install @tanstack/ai-gemini
npm install @tanstack/ai-ollama

PHP Installation


bash
composer require tanstack/ai tanstack/ai-openai

Python Installation


bash
pip install tanstack-ai tanstack-ai-openai

Core: generate()


typescript
import { generate } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain React hooks in 3 sentences.' },
  ],
})

// Streaming with async iteration
for await (const chunk of result) {
  process.stdout.write(chunk.text)
}

Provider Adapters


typescript
import { openaiText } from '@tanstack/ai-openai/adapters'
import { anthropicText } from '@tanstack/ai-anthropic/adapters'
import { geminiText } from '@tanstack/ai-gemini/adapters'
import { ollamaText } from '@tanstack/ai-ollama/adapters'

// OpenAI
const openai = openaiText({ model: 'gpt-4o' })

// Anthropic
const anthropic = anthropicText({ model: 'claude-sonnet-4-20250514' })

// Google Gemini
const gemini = geminiText({ model: 'gemini-pro' })

// Ollama (local)
const ollama = ollamaText({ model: 'llama3' })

// Runtime adapter switching
const adapter = process.env.AI_PROVIDER === 'anthropic' ? anthropic : openai
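Runtime switching also makes fallback chains possible: try the primary provider and fall back to another on failure. A minimal, provider-agnostic sketch (the attempt callbacks stand in for real generate() calls; this helper is not part of the library):

```typescript
// Try each provider call in order; return the first success, and rethrow
// the last error only if every attempt fails.
async function withFallback<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown = new Error('no attempts provided')
  for (const attempt of attempts) {
    try {
      return await attempt()
    } catch (err) {
      lastError = err
    }
  }
  throw lastError
}
```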

React Hooks


useChat


tsx
import { useChat } from '@tanstack/ai-react'
import { openaiText } from '@tanstack/ai-openai/adapters'

function ChatUI() {
  const { messages, input, setInput, handleSubmit, isLoading } = useChat({
    adapter: openaiText({ model: 'gpt-4o' }),
  })

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>
          <strong>{msg.role}:</strong> {msg.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type a message..."
        />
        <button type="submit" disabled={isLoading}>
          Send
        </button>
      </form>
    </div>
  )
}

useCompletion


tsx
import { useCompletion } from '@tanstack/ai-react'
import { openaiText } from '@tanstack/ai-openai/adapters'

function CompletionUI() {
  const { completion, input, setInput, handleSubmit, isLoading } = useCompletion({
    adapter: openaiText({ model: 'gpt-4o' }),
  })

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <textarea
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Enter prompt..."
        />
        <button type="submit" disabled={isLoading}>Generate</button>
      </form>
      {completion && <div>{completion}</div>}
    </div>
  )
}

Solid.js Hooks


tsx
import { For } from 'solid-js'
import { createChat } from '@tanstack/ai-solid'
import { openaiText } from '@tanstack/ai-openai/adapters'

function ChatUI() {
  const chat = createChat({
    adapter: openaiText({ model: 'gpt-4o' }),
  })

  return (
    <div>
      <For each={chat.messages()}>
        {(msg) => (
          <div>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        )}
      </For>
      <form onSubmit={chat.handleSubmit}>
        <input
          value={chat.input()}
          onInput={(e) => chat.setInput(e.target.value)}
          placeholder="Type a message..."
        />
        <button type="submit" disabled={chat.isLoading()}>
          Send
        </button>
      </form>
    </div>
  )
}

Vanilla Client


For framework-agnostic usage without React or Solid:
typescript
import { createAIClient } from '@tanstack/ai-client'
import { openaiText } from '@tanstack/ai-openai/adapters'

const client = createAIClient({
  adapter: openaiText({ model: 'gpt-4o' }),
})

// Subscribe to state changes
client.subscribe((state) => {
  console.log('Messages:', state.messages)
  console.log('Loading:', state.isLoading)
})

// Send a message
await client.send('Hello, world!')

// Clear conversation
client.clear()

Streaming


Streaming Strategies


typescript
import { generate } from '@tanstack/ai'

// Default: stream chunks as they arrive
const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [...],
  stream: true,
})

for await (const chunk of result) {
  // Process each chunk
  console.log(chunk.text)
}
Available streaming strategies:
  • Batch - Collect all chunks before delivery
  • Punctuation - Stream at sentence boundaries
  • WordBoundary - Stream at word boundaries
  • Composite - Combine multiple strategies
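The library's strategy implementations aside, the idea behind WordBoundary can be sketched as a plain async transform that re-chunks a stream so every emitted piece ends at a word boundary (illustrative only, not TanStack AI's implementation):

```typescript
// Buffer incoming text and emit only up to the last space seen, so each
// emission ends at a word boundary; flush the remainder when the source ends.
async function* wordBoundary(
  source: AsyncIterable<string>,
): AsyncGenerator<string> {
  let buffer = ''
  for await (const chunk of source) {
    buffer += chunk
    const lastSpace = buffer.lastIndexOf(' ')
    if (lastSpace !== -1) {
      yield buffer.slice(0, lastSpace + 1)
      buffer = buffer.slice(lastSpace + 1)
    }
  }
  if (buffer) yield buffer // flush the trailing partial word
}
```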

Server-Sent Events (SSE)


typescript
// Server-side SSE endpoint
import { createReplayStream } from '@tanstack/ai'

export async function handler(req: Request) {
  const stream = createReplayStream({
    adapter: openaiText({ model: 'gpt-4o' }),
    messages: await req.json(),
  })

  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' },
  })
}
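On the client, the text/event-stream body can be consumed with standard web APIs. The SSE wire format itself is line-based and simple to parse; here is a minimal parser for the data fields (a generic sketch, not the library's client):

```typescript
// Minimal SSE frame parser: events are blocks separated by a blank line,
// each block holding "field: value" lines. Extract the `data` payloads,
// joining multi-line data fields with newlines per the SSE format.
function parseSSE(raw: string): string[] {
  const events: string[] = []
  for (const block of raw.split('\n\n')) {
    const data = block
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice(5).trimStart())
      .join('\n')
    if (data) events.push(data)
  }
  return events
}
```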

Structured Output


typescript
import { generate } from '@tanstack/ai'
import { convertZodToJsonSchema } from '@tanstack/ai'
import { z } from 'zod'

const RecipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.object({
    item: z.string(),
    amount: z.string(),
  })),
  steps: z.array(z.string()),
  cookTime: z.number(),
})

const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [{ role: 'user', content: 'Give me a pasta recipe' }],
  schema: convertZodToJsonSchema(RecipeSchema),
})

// result is typed as z.infer<typeof RecipeSchema>
console.log(result.name, result.ingredients)

Tool Calling


Basic Tools


typescript
import { generate } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'
import { z } from 'zod'

const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [{ role: 'user', content: 'What is the weather in NYC?' }],
  tools: {
    getWeather: {
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string(),
        unit: z.enum(['celsius', 'fahrenheit']).optional(),
      }),
      execute: async ({ location, unit }) => {
        const data = await fetchWeather(location, unit)
        return data
      },
    },
  },
})

Tool Calling with Approval Workflows


typescript
import { ToolCallManager } from '@tanstack/ai'
import { z } from 'zod'

const manager = new ToolCallManager({
  tools: {
    deleteUser: {
      description: 'Delete a user account',
      parameters: z.object({ userId: z.string() }),
      requiresApproval: true, // Requires human approval
      execute: async ({ userId }) => {
        await deleteUser(userId)
        return { success: true }
      },
    },
  },
  onApprovalRequired: async (toolCall) => {
    // Present to user for approval
    return await showApprovalDialog(toolCall)
  },
})
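Stripped of the manager, the approval gate reduces to a simple guard: run the approval callback first, and execute only on consent. A generic sketch (the Tool type here is illustrative, not the library's):

```typescript
// Illustrative tool shape: optional approval flag plus an execute function.
type Tool<A, R> = {
  requiresApproval?: boolean
  execute: (args: A) => Promise<R>
}

// Execute a tool, first asking the approval callback when the tool demands
// it; a rejected call never reaches execute().
async function runWithApproval<A, R>(
  tool: Tool<A, R>,
  args: A,
  approve: (args: A) => Promise<boolean>,
): Promise<R> {
  if (tool.requiresApproval && !(await approve(args))) {
    throw new Error('Tool call rejected by user')
  }
  return tool.execute(args)
}
```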

Agentic Loop


typescript
const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [{ role: 'user', content: 'Research and summarize the topic' }],
  tools: { search, summarize, writeReport },
  maxIterations: 10, // Limit agent loop iterations
})

Multimodal Content


typescript
// Images
const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'What is in this image?' },
      { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } },
    ],
  }],
})

// Image generation with DALL-E
import { openaiImage } from '@tanstack/ai-openai/adapters'

const dalleImage = await generate({
  adapter: openaiImage({ model: 'dall-e-3' }),
  messages: [{ role: 'user', content: 'A sunset over mountains' }],
})

// Image generation with Gemini Imagen
import { geminiImage } from '@tanstack/ai-gemini/adapters'

const imagenImage = await generate({
  adapter: geminiImage({ model: 'imagen-3' }),
  messages: [{ role: 'user', content: 'A futuristic cityscape at night' }],
})

Thinking Models (Reasoning Tokens)


Support for models with extended reasoning/thinking capabilities:
typescript
import { generate } from '@tanstack/ai'
import { anthropicText } from '@tanstack/ai-anthropic/adapters'

const result = await generate({
  adapter: anthropicText({ model: 'claude-sonnet-4-20250514' }),
  messages: [{ role: 'user', content: 'Solve this complex math problem step by step...' }],
  thinking: {
    enabled: true,
    budget: 10000, // Max thinking tokens
  },
})

// Access thinking/reasoning output
console.log('Thinking:', result.thinking)
console.log('Response:', result.text)

// Streaming with thinking tokens
for await (const chunk of result) {
  if (chunk.type === 'thinking') {
    console.log('[Thinking]', chunk.text)
  } else {
    process.stdout.write(chunk.text)
  }
}

Message Utilities


typescript
import { generateMessageId, normalizeToUIMessage } from '@tanstack/ai'

// Generate unique message IDs
const id = generateMessageId()

// Normalize provider-specific messages to UI format
const uiMessage = normalizeToUIMessage(providerMessage)

Observability


typescript
const result = await generate({
  adapter: openaiText({ model: 'gpt-4o' }),
  messages: [...],
  onEvent: (event) => {
    // Structured, typed events
    switch (event.type) {
      case 'text':
        console.log('Text chunk:', event.data)
        break
      case 'tool_call':
        console.log('Tool called:', event.name)
        break
      case 'error':
        console.error('Error:', event.error)
        break
    }
  },
})

AI Devtools


TanStack AI includes a dedicated devtools panel for debugging AI workflows:
tsx
import { TanStackDevtools } from '@tanstack/react-devtools'
import { AIDevtoolsPanel } from '@tanstack/ai-react/devtools'

function App() {
  return (
    <TanStackDevtools
      plugins={[
        {
          id: 'ai',
          name: 'AI',
          render: () => <AIDevtoolsPanel />,
        },
      ]}
    />
  )
}
AI Devtools features:
  • Message Inspector - View full conversation history with metadata
  • Token Usage - Track input/output tokens and costs per request
  • Streaming Visualization - Real-time view of streaming chunks
  • Tool Call Debugging - Inspect tool calls, parameters, and results
  • Thinking/Reasoning Viewer - Debug reasoning tokens from thinking models
  • Adapter Switching - Test different providers in development
  • Request/Response Logs - Full HTTP request/response inspection

TanStack Start Integration


typescript
// Shared implementation between AI tools and server functions
import { createServerFn } from '@tanstack/react-start'
import { generate } from '@tanstack/ai'

const aiChat = createServerFn({ method: 'POST' })
  .validator(z.object({ messages: z.array(messageSchema) }))
  .handler(async ({ data }) => {
    const result = await generate({
      adapter: openaiText({ model: 'gpt-4o' }),
      messages: data.messages,
    })
    return result
  })

Partial JSON Parser


For streaming structured output that arrives incrementally:
typescript
import { parsePartialJson } from '@tanstack/ai'

// Parse incomplete JSON during streaming
const partial = parsePartialJson('{"name": "Pasta", "ingredients": [{"item": "flour"')
// Returns: { name: "Pasta", ingredients: [{ item: "flour" }] }
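Conceptually, partial-JSON repair amounts to closing whatever braces and brackets are still open outside of string literals. A naive sketch of the idea (not the library's implementation, which must also repair dangling keys and unterminated strings):

```typescript
// Walk the input, tracking open braces/brackets that occur outside of
// string literals (honoring backslash escapes), then append the matching
// closers innermost-first and parse the completed text.
function completePartialJson(input: string): unknown {
  const closers: string[] = []
  let inString = false
  let escaped = false
  for (const ch of input) {
    if (escaped) { escaped = false; continue }
    if (ch === '\\') { escaped = true; continue }
    if (ch === '"') { inString = !inString; continue }
    if (inString) continue
    if (ch === '{') closers.push('}')
    else if (ch === '[') closers.push(']')
    else if (ch === '}' || ch === ']') closers.pop()
  }
  return JSON.parse(input + closers.reverse().join(''))
}
```

This only handles truncation at a value boundary; a production parser covers many more cases.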

Best Practices


  1. Import only needed adapters - tree-shakeable design minimizes bundle size
  2. Use structured output with Zod schemas for type-safe AI responses
  3. Set maxIterations on agentic loops to prevent runaway execution
  4. Use requiresApproval for destructive tool calls
  5. Handle streaming errors gracefully with try/catch around async iteration
  6. Use server functions for API key security (never expose keys client-side)
  7. Use onEvent for observability and debugging in development
  8. Switch adapters at runtime for A/B testing or fallback strategies
  9. Use partial JSON parsing for progressive UI updates during streaming
  10. Normalize messages when switching between providers
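Practice 5 in action: async iteration can throw mid-stream, so wrap the loop and keep whatever text has already arrived. A self-contained sketch with a simulated stream (the chunk shape mirrors the examples above; the flaky stream is purely illustrative):

```typescript
// A stream that fails partway through, simulating a dropped connection.
async function* flakyStream(): AsyncGenerator<{ text: string }> {
  yield { text: 'Hello, ' }
  yield { text: 'wor' }
  throw new Error('connection reset')
}

// Consume defensively: surface the failure, but keep the partial output so
// the UI can show what was generated before the error.
async function consume(stream: AsyncIterable<{ text: string }>) {
  let text = ''
  let error: Error | null = null
  try {
    for await (const chunk of stream) text += chunk.text
  } catch (err) {
    error = err instanceof Error ? err : new Error(String(err))
  }
  return { text, error }
}
```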

Common Pitfalls


  • Exposing API keys in client-side code (use server functions)
  • Not handling streaming errors (async iteration can throw)
  • Forgetting maxIterations in agentic loops (can run indefinitely)
  • Importing all adapters instead of just the one needed (bundle bloat)
  • Not using structured output for data extraction (unreliable string parsing)
  • Creating new adapter instances on every render (memoize or define at module level)
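For the last pitfall, the simplest fix is defining adapters at module level; when the model name is dynamic, a small cache keeps instances stable across lookups (generic sketch; createAdapter is a hypothetical stand-in for a factory like openaiText):

```typescript
// Hypothetical adapter shape and factory, standing in for a real one.
type Adapter = { model: string }
const createAdapter = (model: string): Adapter => ({ model })

// Cache adapters by model so repeated lookups return the same instance
// instead of constructing a new object on every render.
const adapterCache = new Map<string, Adapter>()
function getAdapter(model: string): Adapter {
  let adapter = adapterCache.get(model)
  if (!adapter) {
    adapter = createAdapter(model)
    adapterCache.set(model, adapter)
  }
  return adapter
}
```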