# Vercel AI SDK
The Vercel AI SDK is a powerful toolkit for building AI-powered applications in TypeScript and JavaScript. It provides unified APIs for text generation, streaming, chatbot interfaces, tool calling, structured data generation, and multi-provider support. The SDK abstracts away provider differences, enabling seamless integration with OpenAI, Anthropic, Mistral, and other AI models through a consistent interface.
## Quick Start

### Installation

```bash
npm install ai
```

## Basic Patterns
### Text Generation

```typescript
import { generateText } from 'ai';

const { text } = await generateText({
  model: yourModel,
  prompt: 'Write a story about a robot learning to love',
});
```

### Streaming

```typescript
import { streamText } from 'ai';

const result = await streamText({
  model: yourModel,
  prompt: 'Explain quantum computing',
});

for await (const textPart of result.textStream) {
  console.log(textPart);
}
```

### Building Chatbots

```typescript
import { useChat } from 'ai/react';

const { messages, input, handleInputChange, handleSubmit } = useChat();
```

### Tool Calling

```typescript
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: yourModel,
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        city: z.string(),
      }),
      execute: async ({ city }) => {
        return getWeather(city);
      },
    }),
  },
});
```

## Core Workflows
### Text Generation & Streaming
The SDK provides two primary functions for text generation:
- generateText: Use for non-interactive generation when you need the complete result
- streamText: Use for interactive applications requiring real-time streaming
Both functions support identical parameters for prompts, messages, tools, and settings. Key configurations include `maxTokens`, `temperature`, `topP`, and `stopSequences`. Always use `abortSignal` for cancellation in production applications.

Configure settings globally or per-request. Global settings apply across all calls using `defaultSettingsMiddleware`. Per-request settings override global configuration.

Error handling is built-in through callbacks (`onError`) and typed error objects (`AIError`, `NoTextGeneratedError`). Enable warning logging with `globalThis.AI_SDK_LOG_WARNINGS = true`.

References: `references/Generating-and-Streaming-Text.md`, `references/Settings.md`, `references/Error-Handling&warnings.md`
### Building Chatbots & UI
The `useChat` hook provides complete chatbot functionality with real-time streaming, state management, and error handling.

**Core Features**
- Automatic message state management
- Real-time streaming from server to client
- Built-in error handling and retry logic
- Support for tool usage and approvals
- Message persistence and resumable streams
- Custom transport configuration
**Additional Hooks**
- useCompletion: For text completion interfaces (non-chat)
- useObject: For streaming structured JSON object generation
- readUIMessageStream: For terminal UIs and custom stream processing
Generative UI: Build interfaces where LLMs generate React components dynamically. Use tools that return UI components, then render them in the chat interface with proper streaming and state reconciliation.
Message Persistence: Store messages server-side using `generateId()` for unique IDs. Validate messages with `validateUIMessages()` before storage. Implement message loading and saving with automatic UI updates.

Resumable Streams: Enable users to reconnect to ongoing AI generation using Redis storage and custom API endpoints. Stream context persists across disconnections.
Transport Layer: Customize message transmission with `DefaultChatTransport` or custom implementations. Configure headers, body preparation, and streaming behavior.

References: `references/Chatbot.md`, `references/Chatbot-Tool-Usage.md`, `references/Chatbot-Message-Persistence.md`, `references/Chatbot-Resume-Streams.md`, `references/Completion.md`, `references/Generative-User-Interfaces.md`, `references/Object-Generation.md`, `references/Transport.md`
### Tool Calling
Tool calling enables LLMs to execute functions and use the results to formulate responses.
**Definition**
Define tools using the `tool()` function with Zod schemas for parameters:

```typescript
import { tool } from 'ai';
import { z } from 'zod';

const myTool = tool({
  description: 'Tool description',
  parameters: z.object({
    param1: z.string(),
    param2: z.number(),
  }),
  execute: async (params) => {
    // Tool execution logic
    return result;
  },
});
```

Dynamic Tools: Use `dynamicTool()` for tools determined at runtime or with dynamic parameters.

Multi-Step Tool Calling: The model can call tools multiple times in a single generation. Use `maxSteps` or `stepCountIs()` to control execution. Use `stopWhen()` to conditionally stop based on tool results.

Tool Approval in Chatbots: Implement server-side tools (executed automatically) and client-side tools (require user approval). Use `addToolOutput()` to provide results and `addToolApprovalResponse()` for user decisions.

Error Handling: Handle tool execution errors gracefully. The model can retry failed tools or continue based on error information.

References: `references/Tool-Calling.md`, `references/Chatbot-Tool-Usage.md`
### Structured Data Generation
Generate type-safe structured data using `Output` helpers with Zod schemas.

Output.object: Generate complete JSON objects with schema validation

```typescript
const { object } = await generateText({
  model: yourModel,
  output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number(),
    }),
  }),
});
```

Output.array: Generate arrays with typed elements

Output.choice: Generate single values from a defined set of options

Output.json: Flexible JSON generation when schema is less strict

For streaming structured data, use the `useObject` hook with real-time partial updates. The UI receives incrementally updated objects as generation progresses.

Best Practices: Use clear schema descriptions, leverage `.describe()` for field documentation, and test schema robustness with edge cases. Consider provider compatibility when designing complex schemas.

References: `references/Generating-Structured-Data.md`, `references/Object-Generation.md`, `references/Prompt-Engineering.md`
### Provider Configuration
Provider Registry: Manage multiple providers with . Configure different providers for different use cases and switch between them using model aliases.
createProviderRegistry()Custom Providers: Build custom providers using the Provider V3 specification. Implement interface with and methods. Use helper functions like for HTTP requests.
LanguageModelV3doGeneratedoStreampostJsonToApiModel Settings: Configure default settings per-provider using . Apply temperature, max tokens, and other configurations automatically.
defaultSettingsMiddlewareMulti-Provider Setups: Use to chain providers, add telemetry, or modify parameters. Combine multiple providers for A/B testing or fallback strategies.
wrapLanguageModelReferences: ,
references/Provider&Model-Management.mdreferences/Writing-Custom-Provider.md提供商注册表:使用管理多个提供商。为不同的使用场景配置不同的提供商,并使用模型别名在它们之间切换。
createProviderRegistry()自定义提供商:使用Provider V3规范构建自定义提供商。实现接口,包含和方法。使用等辅助函数处理HTTP请求。
LanguageModelV3doGeneratedoStreampostJsonToApi模型设置:使用为每个提供商配置默认设置。自动应用temperature、max tokens等配置。
defaultSettingsMiddleware多提供商设置:使用来链式调用提供商、添加遥测或修改参数。组合多个提供商以实现A/B测试或 fallback策略。
wrapLanguageModel参考文档:、
## Advanced Features
Language Model Middleware: Intercept and modify all language model calls using `wrapLanguageModel()`. Built-in middleware includes `extractReasoningMiddleware` and `defaultSettingsMiddleware`. Create custom middleware for logging, parameter transformation, or response modification.

Model Context Protocol (MCP): Connect to MCP servers providing tools, resources, and prompts. Use `createMCPClient()` with HTTP, SSE, or stdio transports. Access tools via `mcpClient.tools` and resources via `mcpClient.listResources()`.

Telemetry: Enable OpenTelemetry observability with `experimental_telemetry`. Collect spans, attributes, and metrics from `generateText`, `streamText`, and other SDK functions.

Testing: Use `MockLanguageModelV3` and `MockEmbeddingModelV3` for deterministic testing without real LLM calls. Simulate streaming with `simulateReadableStream()`.

Stream Protocols: Understand how data streams from backend to frontend using AI SDK protocols. Use `toUIMessageStreamResponse` and `toTextStreamResponse` helpers.

Custom Data Streaming: Stream application data alongside AI responses using `createUIMessageStream()` and `writer.write()`. Handle data reconciliation with client-side `onData` callbacks.

References: `references/Language-Model-Middleware.md`, `references/Model-Context-Protocol(MCP).md`, `references/Telemetry.md`, `references/Testing.md`, `references/Stream-Protocols.md`, `references/Streaming-Custom-Data.md`
## Media Processing
Embeddings: Generate embeddings with `embed()` or batch with `embedMany()`. Calculate similarity with `cosineSimilarity()`. Wrap embedding models with `wrapEmbeddingModel()` for customization.

Reranking: Improve search relevance with `rerank()`. Reorder documents using specialized models from Cohere, Bedrock, or other providers.

Image Generation: Generate images with `generateImage()`. Handle `NoImageGeneratedError` for failed generations. Configure providers like OpenAI or Vertex.

Speech: Generate speech with `experimental_generateSpeech()`. Support for multiple speech providers including OpenAI and LMNT.

Transcription: Transcribe audio to text with `experimental_transcribe()`. Handle `NoTranscriptGeneratedError` for empty results.

References: `references/Embeddings.md`, `references/Reranking.md`, `references/Image-Generation.md`, `references/Speech.md`, `references/Transcription.md`
## Key APIs by Category
### Core APIs
- generateText: Generate complete text non-interactively
- streamText: Stream text for interactive applications
- smoothStream: Smooth streaming for better UX
- Output.object: Generate structured JSON objects
- Output.array: Generate typed arrays
- Output.choice: Generate from predefined choices
- Output.json: Flexible JSON generation
- tool: Define tools with Zod schemas
- dynamicTool: Define runtime-dynamic tools
- stopWhen: Conditionally stop tool execution
- stepCountIs: Control multi-step tool calling
### UI Hooks
- useChat: Build chatbots with streaming and state management
- useCompletion: Text completion interface
- useObject: Stream structured JSON objects
- readUIMessageStream: Process streams for custom UIs
- DefaultChatTransport: Configure chat message transmission
- prepareSendMessagesRequest: Prepare chat API requests
### Provider & Model Management
- customProvider: Create custom AI providers
- createProviderRegistry: Manage multiple providers
- wrapLanguageModel: Middleware for language models
- defaultSettingsMiddleware: Apply default model settings
- transformParams: Transform request parameters
### Advanced Features
- createMCPClient: Connect to MCP servers
- experimental_telemetry: Enable OpenTelemetry
- MockLanguageModelV3: Test with mocked language models
- simulateReadableStream: Test streaming behavior
- createUIMessageStream: Custom data streaming
- extractReasoningMiddleware: Extract model reasoning
### Media APIs
- embed: Generate single embedding
- embedMany: Generate batch embeddings
- cosineSimilarity: Calculate embedding similarity
- rerank: Reorder documents by relevance
- generateImage: Generate images from text
- experimental_generateSpeech: Text-to-speech generation
- experimental_transcribe: Audio-to-text transcription
### Utility & Configuration
- maxTokens, temperature, topP: Configure generation parameters
- abortSignal: Cancel ongoing requests
- onError: Handle errors globally
- validateUIMessages: Validate chat messages
- generateId: Generate unique message IDs
- toUIMessageStreamResponse: Format UI message streams
- toTextStreamResponse: Format text streams
## When to Reference Original Docs
Consult detailed documentation in the `references/` directory for:

### Core Implementation Details
- Generating-and-Streaming-Text.md: Deep dive into text generation, streaming transformations, and advanced patterns
- Settings.md: Complete reference for all configuration options and their effects
- Error-Handling&warnings.md: Comprehensive error handling strategies and warning types
- Prompt-Engineering.md: Best practices for prompts, tools, and schema design
### Chatbot & UI Development
- Chatbot.md: Complete chatbot implementation guide with all features
- Chatbot-Tool-Usage.md: Server-side, client-side, and user approval tool workflows
- Chatbot-Message-Persistence.md: File-based storage, validation, and ID generation
- Chatbot-Resume-Streams.md: Redis-backed resumable stream implementation
- Completion.md: useCompletion hook patterns and examples
- Generative-User-Interfaces.md: Dynamic component generation from LLM output
- Object-Generation.md: Real-time structured object streaming
- Message-Metadata.md: Attach and access custom message metadata
- Reading-UI-Message-Streams.md: Terminal UI patterns and custom processing
- Transport.md: Custom transport implementation and configuration
### Tools & Structured Data
- Tool-Calling.md: Complete tool calling guide with patterns and edge cases
- Generating-Structured-Data.md: All Output helpers and schema design
### Provider & Advanced Features
- Provider&Model-Management.md: Multi-provider setups, registries, and configuration
- Writing-Custom-Provider.md: Build providers matching V3 specification
- Language-Model-Middleware.md: Built-in and custom middleware patterns
- Model-Context-Protocol(MCP).md: MCP client setup and usage
- Stream-Protocols.md: Backend-to-frontend streaming architecture
- Streaming-Custom-Data.md: Custom data streaming with reconciliation
- Telemetry.md: OpenTelemetry integration and observability
- Testing.md: Mock providers and test helpers
### Media Processing
- Embeddings.md: Embedding generation, batching, and similarity
- Reranking.md: Document reranking for search relevance
- Image-Generation.md: Image generation with multiple providers
- Speech.md: Text-to-speech generation
- Transcription.md: Audio-to-text transcription
Each reference document contains comprehensive examples, edge case handling, type signatures, and production-ready patterns. Reference them when implementing specific features or encountering complex scenarios beyond basic usage.