vercel-ai-sdk

Vercel AI SDK

The Vercel AI SDK is a powerful toolkit for building AI-powered applications in TypeScript and JavaScript. It provides unified APIs for text generation, streaming, chatbot interfaces, tool calling, structured data generation, and multi-provider support. The SDK abstracts away provider differences, enabling seamless integration with OpenAI, Anthropic, Mistral, and other AI models through a consistent interface.

Quick Start


Installation


```bash
npm install ai
```

Basic Patterns


Text Generation
```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: yourModel, // placeholder for any provider model instance
  prompt: 'Write a story about a robot learning to love'
});
```

Streaming

```typescript
import { streamText } from 'ai';

const result = await streamText({
  model: yourModel,
  prompt: 'Explain quantum computing'
});

for await (const textPart of result.textStream) {
  console.log(textPart);
}
```

Building Chatbots

```typescript
import { useChat } from 'ai/react';

const { messages, input, handleInputChange, handleSubmit } = useChat();
```

Tool Calling

```typescript
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: yourModel,
  prompt: 'What is the weather in Paris?', // a prompt (or messages) is required
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return getWeather(city); // getWeather is your own lookup function
      },
    }),
  },
});
```

Core Workflows


Text Generation & Streaming


The SDK provides two primary functions for text generation:
  • generateText: Use for non-interactive generation when you need the complete result
  • streamText: Use for interactive applications requiring real-time streaming

Both functions support identical parameters for prompts, messages, tools, and settings. Key configurations include `maxTokens`, `temperature`, `topP`, and `stopSequences`. Always use `abortSignal` for cancellation in production applications.

Configure settings globally or per-request. Global settings apply across all calls via `defaultSettingsMiddleware`; per-request settings override the global configuration.

Error handling is built in through callbacks (`onError`) and typed error objects (`AIError`, `NoTextGeneratedError`). Enable warning logging with `globalThis.AI_SDK_LOG_WARNINGS = true`.
References: `references/Generating-and-Streaming-Text.md`, `references/Settings.md`, `references/Error-Handling&warnings.md`

Building Chatbots & UI


The `useChat` hook provides complete chatbot functionality with real-time streaming, state management, and error handling.

Core Features
  • Automatic message state management
  • Real-time streaming from server to client
  • Built-in error handling and retry logic
  • Support for tool usage and approvals
  • Message persistence and resumable streams
  • Custom transport configuration

Additional Hooks
  • useCompletion: For text completion interfaces (non-chat)
  • useObject: For streaming structured JSON object generation
  • readUIMessageStream: For terminal UIs and custom stream processing

Generative UI: Build interfaces where LLMs generate React components dynamically. Use tools that return UI components, then render them in the chat interface with proper streaming and state reconciliation.

Message Persistence: Store messages server-side using `generateId()` for unique IDs. Validate messages with `validateUIMessages()` before storage. Implement message loading and saving with automatic UI updates.

Resumable Streams: Enable users to reconnect to ongoing AI generation using Redis storage and custom API endpoints. Stream context persists across disconnections.

Transport Layer: Customize message transmission with `DefaultChatTransport` or custom implementations. Configure headers, body preparation, and streaming behavior.
References: `references/Chatbot.md`, `references/Chatbot-Tool-Usage.md`, `references/Chatbot-Message-Persistence.md`, `references/Chatbot-Resume-Streams.md`, `references/Completion.md`, `references/Generative-User-Interfaces.md`, `references/Object-Generation.md`, `references/Transport.md`

Tool Calling


Tool calling enables LLMs to execute functions and use the results to formulate responses.

Definition: Define tools using the `tool()` function with Zod schemas for parameters:

```typescript
import { tool } from 'ai';
import { z } from 'zod';

const myTool = tool({
  description: 'Tool description',
  parameters: z.object({
    param1: z.string(),
    param2: z.number(),
  }),
  execute: async (params) => {
    // Tool execution logic goes here; return a JSON-serializable
    // result that the model can read back.
    return { received: params };
  },
});
```
Dynamic Tools: Use `dynamicTool()` for tools determined at runtime or with dynamic parameters.

Multi-Step Tool Calling: The model can call tools multiple times in a single generation. Use `maxSteps` or `stepCountIs()` to control execution, and `stopWhen()` to conditionally stop based on tool results.
Tool Approval in Chatbots: Implement server-side tools (executed automatically) and client-side tools (which require user approval). Use `addToolOutput()` to provide results and `addToolApprovalResponse()` for user decisions.

Error Handling: Handle tool execution errors gracefully. The model can retry a failed tool or continue based on the error information.

References: `references/Tool-Calling.md`, `references/Chatbot-Tool-Usage.md`

Structured Data Generation


Generate type-safe structured data using `Output` helpers with Zod schemas.

Output.object: Generate complete JSON objects with schema validation:

```typescript
const { object } = await generateText({
  model: yourModel,
  output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number(),
    }),
  }),
});
```

Output.array: Generate arrays with typed elements.
Output.choice: Generate single values from a defined set of options.
Output.json: Flexible JSON generation when the schema is less strict.

For streaming structured data, use the `useObject` hook with real-time partial updates. The UI receives incrementally updated objects as generation progresses.

Best Practices: Use clear schema descriptions, leverage `.describe()` for field documentation, and test schema robustness with edge cases. Consider provider compatibility when designing complex schemas.
References: `references/Generating-Structured-Data.md`, `references/Object-Generation.md`, `references/Prompt-Engineering.md`

Provider Configuration


Provider Registry: Manage multiple providers with `createProviderRegistry()`. Configure different providers for different use cases and switch between them using model aliases.

Custom Providers: Build custom providers against the Provider V3 specification. Implement the `LanguageModelV3` interface with `doGenerate` and `doStream` methods. Use helper functions like `postJsonToApi` for HTTP requests.

Model Settings: Configure default settings per provider using `defaultSettingsMiddleware`. Apply temperature, max tokens, and other configuration automatically.

Multi-Provider Setups: Use `wrapLanguageModel` to chain providers, add telemetry, or modify parameters. Combine multiple providers for A/B testing or fallback strategies.
References: `references/Provider&Model-Management.md`, `references/Writing-Custom-Provider.md`

Advanced Features


Language Model Middleware: Intercept and modify all language model calls using `wrapLanguageModel()`. Built-in middleware includes `extractReasoningMiddleware` and `defaultSettingsMiddleware`. Create custom middleware for logging, parameter transformation, or response modification.
Model Context Protocol (MCP): Connect to MCP servers providing tools, resources, and prompts. Use `createMCPClient()` with HTTP, SSE, or stdio transports. Access tools via `mcpClient.tools` and resources via `mcpClient.listResources()`.

Telemetry: Enable OpenTelemetry observability with `experimental_telemetry`. Collect spans, attributes, and metrics from `generateText`, `streamText`, and other SDK functions.

Testing: Use `MockLanguageModelV3` and `MockEmbeddingModelV3` for deterministic testing without real LLM calls. Simulate streaming with `simulateReadableStream()`.
Stream Protocols: Understand how data streams from backend to frontend using AI SDK protocols. Use the `toUIMessageStreamResponse` and `toTextStreamResponse` helpers.

Custom Data Streaming: Stream application data alongside AI responses using `createUIMessageStream()` and `writer.write()`. Handle data reconciliation with client-side `onData` callbacks.

References: `references/Language-Model-Middleware.md`, `references/Model-Context-Protocol(MCP).md`, `references/Telemetry.md`, `references/Testing.md`, `references/Stream-Protocols.md`, `references/Streaming-Custom-Data.md`

Media Processing


Embeddings: Generate a single embedding with `embed()` or a batch with `embedMany()`. Calculate similarity with `cosineSimilarity()`. Wrap embedding models with `wrapEmbeddingModel()` for customization.
Reranking: Improve search relevance with `rerank()`. Reorder documents using specialized models from Cohere, Bedrock, or other providers.

Image Generation: Generate images with `generateImage()`. Handle `NoImageGeneratedError` for failed generations. Configure providers such as OpenAI or Vertex.

Speech: Generate speech with `experimental_generateSpeech()`. Multiple speech providers are supported, including OpenAI and LMNT.

Transcription: Transcribe audio to text with `experimental_transcribe()`. Handle `NoTranscriptGeneratedError` for empty results.

References: `references/Embeddings.md`, `references/Reranking.md`, `references/Image-Generation.md`, `references/Speech.md`, `references/Transcription.md`

Key APIs by Category


Core APIs


  • generateText: Generate complete text non-interactively
  • streamText: Stream text for interactive applications
  • smoothStream: Smooth streaming for better UX
  • Output.object: Generate structured JSON objects
  • Output.array: Generate typed arrays
  • Output.choice: Generate from predefined choices
  • Output.json: Flexible JSON generation
  • tool: Define tools with Zod schemas
  • dynamicTool: Define runtime-dynamic tools
  • stopWhen: Conditionally stop tool execution
  • stepCountIs: Control multi-step tool calling

UI Hooks


  • useChat: Build chatbots with streaming and state management
  • useCompletion: Text completion interface
  • useObject: Stream structured JSON objects
  • readUIMessageStream: Process streams for custom UIs
  • DefaultChatTransport: Configure chat message transmission
  • prepareSendMessagesRequest: Prepare chat API requests

Provider & Model Management


  • customProvider: Create custom AI providers
  • createProviderRegistry: Manage multiple providers
  • wrapLanguageModel: Middleware for language models
  • defaultSettingsMiddleware: Apply default model settings
  • transformParams: Transform request parameters

Advanced Features


  • createMCPClient: Connect to MCP servers
  • experimental_telemetry: Enable OpenTelemetry
  • MockLanguageModelV3: Test with mocked language models
  • simulateReadableStream: Test streaming behavior
  • createUIMessageStream: Custom data streaming
  • extractReasoningMiddleware: Extract model reasoning

Media APIs


  • embed: Generate single embedding
  • embedMany: Generate batch embeddings
  • cosineSimilarity: Calculate embedding similarity
  • rerank: Reorder documents by relevance
  • generateImage: Generate images from text
  • experimental_generateSpeech: Text-to-speech generation
  • experimental_transcribe: Audio-to-text transcription

Utility & Configuration


  • maxTokens, temperature, topP: Configure generation parameters
  • abortSignal: Cancel ongoing requests
  • onError: Handle errors globally
  • validateUIMessages: Validate chat messages
  • generateId: Generate unique message IDs
  • toUIMessageStreamResponse: Format UI message streams
  • toTextStreamResponse: Format text streams

When to Reference Original Docs


Consult the detailed documentation in the `references/` directory for:
Core Implementation Details
  • Generating-and-Streaming-Text.md: Deep dive into text generation, streaming transformations, and advanced patterns
  • Settings.md: Complete reference for all configuration options and their effects
  • Error-Handling&warnings.md: Comprehensive error handling strategies and warning types
  • Prompt-Engineering.md: Best practices for prompts, tools, and schema design
Chatbot & UI Development
  • Chatbot.md: Complete chatbot implementation guide with all features
  • Chatbot-Tool-Usage.md: Server-side, client-side, and user approval tool workflows
  • Chatbot-Message-Persistence.md: File-based storage, validation, and ID generation
  • Chatbot-Resume-Streams.md: Redis-backed resumable stream implementation
  • Completion.md: useCompletion hook patterns and examples
  • Generative-User-Interfaces.md: Dynamic component generation from LLM output
  • Object-Generation.md: Real-time structured object streaming
  • Message-Metadata.md: Attach and access custom message metadata
  • Reading-UI-Message-Streams.md: Terminal UI patterns and custom processing
  • Transport.md: Custom transport implementation and configuration
Tools & Structured Data
  • Tool-Calling.md: Complete tool calling guide with patterns and edge cases
  • Generating-Structured-Data.md: All Output helpers and schema design
Provider & Advanced Features
  • Provider&Model-Management.md: Multi-provider setups, registries, and configuration
  • Writing-Custom-Provider.md: Build providers matching V3 specification
  • Language-Model-Middleware.md: Built-in and custom middleware patterns
  • Model-Context-Protocol(MCP).md: MCP client setup and usage
  • Stream-Protocols.md: Backend-to-frontend streaming architecture
  • Streaming-Custom-Data.md: Custom data streaming with reconciliation
  • Telemetry.md: OpenTelemetry integration and observability
  • Testing.md: Mock providers and test helpers
Media Processing
  • Embeddings.md: Embedding generation, batching, and similarity
  • Reranking.md: Document reranking for search relevance
  • Image-Generation.md: Image generation with multiple providers
  • Speech.md: Text-to-speech generation
  • Transcription.md: Audio-to-text transcription
Each reference document contains comprehensive examples, edge case handling, type signatures, and production-ready patterns. Reference them when implementing specific features or encountering complex scenarios beyond basic usage.