
OpenRouter TypeScript SDK


A comprehensive TypeScript SDK for interacting with OpenRouter's unified API, providing access to 300+ AI models through a single, type-safe interface. This skill enables AI agents to leverage the callModel pattern for text generation, tool usage, streaming, and multi-turn conversations.


Installation


bash
npm install @openrouter/sdk

Setup


Get your API key from openrouter.ai/settings/keys, then initialize:
typescript
import OpenRouter from '@openrouter/sdk';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});


Authentication


The SDK supports two authentication methods: API keys for server-side applications and OAuth PKCE flow for user-facing applications.

API Key Authentication


The primary authentication method uses API keys from your OpenRouter account.

Obtaining an API Key


  1. Visit openrouter.ai/settings/keys
  2. Create a new API key
  3. Store securely in an environment variable

Environment Setup


bash
export OPENROUTER_API_KEY=sk-or-v1-your-key-here

Client Initialization


typescript
import OpenRouter from '@openrouter/sdk';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});
The client automatically uses this key for all subsequent requests:
typescript
// API key is automatically included
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Hello!'
});

Get Current Key Metadata


Retrieve information about the currently configured API key:
typescript
const keyInfo = await client.apiKeys.getCurrentKeyMetadata();
console.log('Key name:', keyInfo.name);
console.log('Created:', keyInfo.createdAt);

API Key Management


Programmatically manage API keys:
typescript
// List all keys
const keys = await client.apiKeys.list();

// Create a new key
const newKey = await client.apiKeys.create({
  name: 'Production API Key'
});

// Get a specific key by hash
const key = await client.apiKeys.get({
  hash: 'sk-or-v1-...'
});

// Update a key
await client.apiKeys.update({
  hash: 'sk-or-v1-...',
  requestBody: {
    name: 'Updated Key Name'
  }
});

// Delete a key
await client.apiKeys.delete({
  hash: 'sk-or-v1-...'
});

OAuth Authentication (PKCE Flow)


For user-facing applications where users should control their own API keys, OpenRouter supports OAuth with PKCE (Proof Key for Code Exchange). This flow allows users to generate API keys through a browser authorization flow without your application handling their credentials.

createAuthCode


Generate an authorization code and URL to start the OAuth flow:
typescript
const authResponse = await client.oAuth.createAuthCode({
  callbackUrl: 'https://myapp.com/auth/callback'
});

// authResponse contains:
// - authorizationUrl: URL to redirect the user to
// - code: The authorization code for later exchange

console.log('Redirect user to:', authResponse.authorizationUrl);
Parameters:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| callbackUrl | string | Yes | Your application's callback URL after user authorization |
Browser Redirect:
typescript
// In a browser environment
window.location.href = authResponse.authorizationUrl;

// Or in a server-rendered app, return a redirect response
res.redirect(authResponse.authorizationUrl);

exchangeAuthCodeForAPIKey


After the user authorizes your application, they are redirected back to your callback URL with an authorization code. Exchange this code for an API key:
typescript
// In your callback handler
const code = req.query.code;  // From the redirect URL

const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
  code: code
});

// apiKeyResponse contains:
// - key: The user's API key
// - Additional metadata about the key

const userApiKey = apiKeyResponse.key;

// Store securely for this user's future requests
await saveUserApiKey(userId, userApiKey);
Parameters:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| code | string | Yes | The authorization code from the OAuth redirect |

Complete OAuth Flow Example


typescript
import OpenRouter from '@openrouter/sdk';
import express from 'express';

const app = express();
const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY  // Your app's key for OAuth operations
});

// Step 1: Initiate OAuth flow
app.get('/auth/start', async (req, res) => {
  const authResponse = await client.oAuth.createAuthCode({
    callbackUrl: 'https://myapp.com/auth/callback'
  });

  // Store any state needed for the callback
  req.session.oauthState = { /* ... */ };

  // Redirect user to OpenRouter authorization page
  res.redirect(authResponse.authorizationUrl);
});

// Step 2: Handle callback and exchange code
app.get('/auth/callback', async (req, res) => {
  const { code } = req.query;

  if (!code) {
    return res.status(400).send('Authorization code missing');
  }

  try {
    const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
      code: code as string
    });

    // Store the user's API key securely
    await saveUserApiKey(req.session.userId, apiKeyResponse.key);

    res.redirect('/dashboard?auth=success');
  } catch (error) {
    console.error('OAuth exchange failed:', error);
    res.redirect('/auth/error');
  }
});

// Step 3: Use the user's API key for their requests
app.post('/api/chat', async (req, res) => {
  const userApiKey = await getUserApiKey(req.session.userId);

  // Create a client with the user's key
  const userClient = new OpenRouter({
    apiKey: userApiKey
  });

  const result = userClient.callModel({
    model: 'openai/gpt-5-nano',
    input: req.body.message
  });

  const text = await result.getText();
  res.json({ response: text });
});

Security Best Practices


  1. Environment Variables: Store API keys in environment variables, never in code
  2. Key Rotation: Rotate keys periodically using the key management API
  3. Environment Separation: Use different keys for development, staging, and production
  4. OAuth for Users: Use the OAuth PKCE flow for user-facing apps to avoid handling user credentials
  5. Secure Storage: Store user API keys encrypted in your database
  6. Minimal Scope: Create keys with only the permissions needed
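Practice 3, environment separation, can be sketched with a small lookup helper. This is illustrative only: the per-environment variable names and the keyForEnv helper are assumptions, not an SDK convention.

```typescript
// Hypothetical helper: resolve the API key for the current environment.
// The variable names below are assumptions for illustration.
function keyForEnv(
  env: 'development' | 'staging' | 'production',
  vars: Record<string, string | undefined>
): string {
  const name = {
    development: 'OPENROUTER_API_KEY_DEV',
    staging: 'OPENROUTER_API_KEY_STAGING',
    production: 'OPENROUTER_API_KEY_PROD'
  }[env];
  const key = vars[name];
  if (!key) {
    throw new Error(`Missing environment variable ${name}`);
  }
  return key;
}

// In Node: const client = new OpenRouter({ apiKey: keyForEnv('production', process.env) });
```

Failing fast on a missing variable keeps a staging key from silently being used in production.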


Core Concepts: callModel


The callModel function is the primary interface for text generation. It provides a unified, type-safe way to interact with any supported model.

Basic Usage


typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Explain quantum computing in one sentence.',
});

const text = await result.getText();

Key Benefits


  • Type-safe parameters with full IDE autocomplete
  • Auto-generated from OpenAPI specs - automatically updates with new models
  • Multiple consumption patterns - text, streaming, structured data
  • Automatic tool execution with multi-turn support


Input Formats


The SDK accepts flexible input types for the input parameter:

String Input


A simple string becomes a user message:
typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Hello, how are you?'
});

Message Arrays


For multi-turn conversations:
typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: 'The capital of France is Paris.' },
    { role: 'user', content: 'What is its population?' }
  ]
});

Multimodal Content


Including images and text:
typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image_url', image_url: { url: 'https://example.com/image.png' } }
      ]
    }
  ]
});

System Instructions


Use the instructions parameter for system-level guidance:
typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  instructions: 'You are a helpful coding assistant. Be concise.',
  input: 'How do I reverse a string in Python?'
});


Response Methods


The result object provides multiple methods for consuming the response:
| Method | Purpose |
|--------|---------|
| getText() | Get complete text after all tools complete |
| getResponse() | Full response object with token usage |
| getTextStream() | Stream text deltas as they arrive |
| getReasoningStream() | Stream reasoning tokens (for o1/reasoning models) |
| getToolCallsStream() | Stream tool calls as they complete |

getText()


typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a haiku about coding'
});

const text = await result.getText();
console.log(text);

getResponse()


typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Hello!'
});

const response = await result.getResponse();
console.log('Text:', response.text);
console.log('Token usage:', response.usage);

getTextStream()


typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a short story'
});

for await (const delta of result.getTextStream()) {
  process.stdout.write(delta);
}

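getReasoningStream() from the response-methods table is consumed with the same for await pattern as getTextStream(). A self-contained sketch, with a mock async generator standing in for the SDK result object (so no API call is made):

```typescript
// Mock stream standing in for result.getReasoningStream().
async function* mockReasoningStream(): AsyncGenerator<string> {
  yield 'First, ';
  yield 'consider the base case...';
}

// Accumulate deltas exactly as you would from the real stream.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = '';
  for await (const delta of stream) {
    out += delta;
  }
  return out;
}

// With the real SDK this would be:
// for await (const delta of result.getReasoningStream()) { process.stdout.write(delta); }
```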

Tool System


Create strongly-typed tools using Zod schemas for automatic validation and type inference.

Defining Tools


typescript
import { tool } from '@openrouter/sdk';
import { z } from 'zod';

const weatherTool = tool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City name'),
    units: z.enum(['celsius', 'fahrenheit']).optional().default('celsius')
  }),
  outputSchema: z.object({
    temperature: z.number(),
    conditions: z.string(),
    humidity: z.number()
  }),
  execute: async (params) => {
    // Implement weather fetching logic
    return {
      temperature: 22,
      conditions: 'Sunny',
      humidity: 45
    };
  }
});

Using Tools with callModel


typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'What is the weather in Paris?',
  tools: [weatherTool]
});

const text = await result.getText();
// The SDK automatically executes the tool and continues the conversation

Tool Types


Regular Tools


Standard execute functions that return a result:
typescript
const calculatorTool = tool({
  name: 'calculate',
  description: 'Perform mathematical calculations',
  inputSchema: z.object({
    expression: z.string()
  }),
  execute: async ({ expression }) => {
    // Caution: eval runs arbitrary code; use a proper expression parser in production
    return { result: eval(expression) };
  }
});

Generator Tools


Yield progress events using eventSchema:
typescript
const searchTool = tool({
  name: 'web_search',
  description: 'Search the web',
  inputSchema: z.object({ query: z.string() }),
  eventSchema: z.object({
    type: z.literal('progress'),
    message: z.string()
  }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute: async function* ({ query }) {
    yield { type: 'progress', message: 'Searching...' };
    yield { type: 'progress', message: 'Processing results...' };
    return { results: ['Result 1', 'Result 2'] };
  }
});

Manual Tools


Set execute: false to handle tool calls yourself:
typescript
const manualTool = tool({
  name: 'user_confirmation',
  description: 'Request user confirmation',
  inputSchema: z.object({ message: z.string() }),
  execute: false
});
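With execute: false, the model's tool call is surfaced to your code instead of being run by the SDK; you then continue the conversation by appending a tool result message yourself. A minimal sketch of building that message (its shape matches the Tool Result Message structure documented below); the toolResultMessage helper is hypothetical:

```typescript
// Hypothetical helper: wrap your application's answer as a tool result message.
function toolResultMessage(toolCallId: string, result: unknown) {
  return {
    role: 'tool' as const,
    tool_call_id: toolCallId,        // must match the id of the model's tool call
    content: JSON.stringify(result)  // JSON-encoded result payload
  };
}

// After the user confirms in your UI, extend the history and call the model again:
// messages.push(toolResultMessage(call.id, { confirmed: true }));
// client.callModel({ model: 'openai/gpt-5-nano', input: messages, tools: [manualTool] });
```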


Multi-Turn Conversations with Stop Conditions


Control automatic tool execution with stop conditions:
typescript
import { stepCountIs, maxCost, hasToolCall } from '@openrouter/sdk';

const result = client.callModel({
  model: 'openai/gpt-5.2',
  input: 'Research this topic thoroughly',
  tools: [searchTool, analyzeTool],
  stopWhen: [
    stepCountIs(10),      // Stop after 10 turns
    maxCost(1.00),        // Stop if cost exceeds $1.00
    hasToolCall('finish') // Stop when 'finish' tool is called
  ]
});

Available Stop Conditions


| Condition | Description |
|-----------|-------------|
| stepCountIs(n) | Stop after n turns |
| maxCost(amount) | Stop when cost exceeds amount |
| hasToolCall(name) | Stop when specific tool is called |

Custom Stop Conditions


typescript
const customStop = (context) => {
  return context.messages.length > 20;
};

const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Complex task',
  tools: [myTool],
  stopWhen: customStop
});


Dynamic Parameters


Compute parameters based on conversation context:
typescript
const result = client.callModel({
  model: (ctx) => ctx.numberOfTurns > 3 ? 'openai/gpt-4' : 'openai/gpt-4o-mini',
  temperature: (ctx) => ctx.numberOfTurns > 1 ? 0.3 : 0.7,
  input: 'Hello!'
});

Context Object Properties


| Property | Type | Description |
|----------|------|-------------|
| numberOfTurns | number | Current turn count |
| messages | array | All messages so far |
| instructions | string | Current system instructions |
| totalCost | number | Accumulated cost |


nextTurnParams: Context Injection


Tools can modify parameters for subsequent turns, enabling skills and context-aware behavior:
typescript
const skillTool = tool({
  name: 'load_skill',
  description: 'Load a specialized skill',
  inputSchema: z.object({
    skill: z.string().describe('Name of the skill to load')
  }),
  nextTurnParams: {
    instructions: (params, context) => {
      const skillInstructions = loadSkillInstructions(params.skill);
      return `${context.instructions}\n\n${skillInstructions}`;
    }
  },
  execute: async ({ skill }) => {
    return { loaded: skill };
  }
});

Use Cases for nextTurnParams


  • Skill Systems: Dynamically load specialized capabilities
  • Context Accumulation: Build up context over multiple turns
  • Mode Switching: Change model behavior mid-conversation
  • Memory Injection: Add retrieved context to instructions
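Mode switching (the third use case) can be sketched as data plus a pure instructions function. This mirrors the load_skill example above, but the mode names and instruction strings are hypothetical:

```typescript
// Hypothetical instruction snippets per mode.
const modeInstructions: Record<string, string> = {
  concise: 'Answer in one sentence.',
  detailed: 'Explain step by step with examples.'
};

// The nextTurnParams shape mirrors the load_skill example above:
// params are the tool's arguments, context carries the current instructions.
const nextTurnParams = {
  instructions: (
    params: { mode: string },
    context: { instructions: string }
  ): string => `${context.instructions}\n\n${modeInstructions[params.mode] ?? ''}`
};
```

Because the function receives the current instructions, each mode switch layers onto, rather than replaces, the existing system prompt.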


Generation Parameters


Control model behavior with these parameters:
typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a creative story',
  temperature: 0.7,        // Creativity (0-2, default varies by model)
  maxOutputTokens: 1000,   // Maximum tokens to generate
  topP: 0.9,               // Nucleus sampling parameter
  frequencyPenalty: 0.5,   // Reduce repetition
  presencePenalty: 0.5,    // Encourage new topics
  stop: ['\n\n']           // Stop sequences
});


Streaming


All streaming methods support concurrent consumers from a single result object:
typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a detailed explanation'
});

// Consumer 1: Stream text to console
const textPromise = (async () => {
  for await (const delta of result.getTextStream()) {
    process.stdout.write(delta);
  }
})();

// Consumer 2: Get full response simultaneously
const responsePromise = result.getResponse();

// Both run concurrently
const [, response] = await Promise.all([textPromise, responsePromise]);
console.log('\n\nTotal tokens:', response.usage.totalTokens);

Streaming Tool Calls


typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Search for information about TypeScript',
  tools: [searchTool]
});

for await (const toolCall of result.getToolCallsStream()) {
  console.log(`Tool called: ${toolCall.name}`);
  console.log(`Arguments: ${JSON.stringify(toolCall.arguments)}`);
  console.log(`Result: ${JSON.stringify(toolCall.result)}`);
}


Format Conversion


Convert between ecosystem formats for interoperability:

OpenAI Format


typescript
import { fromChatMessages, toChatMessage } from '@openrouter/sdk';

// OpenAI messages → OpenRouter format
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: fromChatMessages(openaiMessages)
});

// Response → OpenAI chat message format
const response = await result.getResponse();
const chatMsg = toChatMessage(response);

Claude Format


typescript
import { fromClaudeMessages, toClaudeMessage } from '@openrouter/sdk';

// Claude messages → OpenRouter format
const result = client.callModel({
  model: 'anthropic/claude-3-opus',
  input: fromClaudeMessages(claudeMessages)
});

// Response → Claude message format
const response = await result.getResponse();
const claudeMsg = toClaudeMessage(response);


Responses API Message Shapes


The SDK uses the OpenResponses format for messages. Understanding these shapes is essential for building robust agents.

Message Roles


Messages contain a role property that determines the message type:

| Role | Description |
|------|-------------|
| user | User-provided input |
| assistant | Model-generated responses |
| system | System instructions |
| developer | Developer-level directives |
| tool | Tool execution results |
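Putting the roles together, a typical tool-using exchange might look like this (the values are illustrative; the individual message shapes are documented in the sections that follow):

```typescript
// Illustrative conversation history covering the message roles above.
const conversation = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'What is the weather in Paris?' },
  {
    role: 'assistant',
    content: null,
    tool_calls: [{
      id: 'call_1',
      type: 'function',
      function: { name: 'get_weather', arguments: '{"location":"Paris"}' }
    }]
  },
  { role: 'tool', tool_call_id: 'call_1', content: '{"temperature":22}' },
  { role: 'assistant', content: 'It is 22°C in Paris right now.' }
];
```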

Text Message


Simple text content from user or assistant:
typescript
interface TextMessage {
  role: 'user' | 'assistant';
  content: string;
}

Multimodal Message (Array Content)


Messages with mixed content types:
typescript
interface MultimodalMessage {
  role: 'user';
  content: Array<
    | { type: 'input_text'; text: string }
    | { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' }
    | {
        type: 'image';
        source: {
          type: 'url' | 'base64';
          url?: string;
          media_type?: string;
          data?: string
        }
      }
  >;
}

Tool Function Call Message


When the model requests a tool execution:
typescript
interface ToolCallMessage {
  role: 'assistant';
  content?: null;
  tool_calls?: Array<{
    id: string;
    type: 'function';
    function: {
      name: string;
      arguments: string;  // JSON-encoded arguments
    };
  }>;
}

Tool Result Message


Result returned after tool execution:
typescript
interface ToolResultMessage {
  role: 'tool';
  tool_call_id: string;
  content: string;  // JSON-encoded result
}

Non-Streaming Response Structure


The complete response object from getResponse():
typescript
interface OpenResponsesNonStreamingResponse {
  output: Array<ResponseMessage>;
  usage?: {
    inputTokens: number;
    outputTokens: number;
    cachedTokens?: number;
  };
  finishReason?: string;
  warnings?: Array<{
    type: string;
    message: string
  }>;
  experimental_providerMetadata?: Record<string, unknown>;
}

Response Message Types


Output messages in the response array:
typescript
// Text/content message
interface ResponseOutputMessage {
  type: 'message';
  role: 'assistant';
  content: string | Array<ContentPart>;
  reasoning?: string;  // For reasoning models (o1, etc.)
}

// Tool result in output
interface FunctionCallOutputMessage {
  type: 'function_call_output';
  call_id: string;
  output: string;
}

Parsed Tool Call


When tool calls are parsed from the response:
typescript
interface ParsedToolCall {
  id: string;
  name: string;
  arguments: unknown;  // Validated against inputSchema
}

Tool Execution Result


After a tool completes execution:
typescript
interface ToolExecutionResult {
  toolCallId: string;
  toolName: string;
  result: unknown;                  // Validated against outputSchema
  preliminaryResults?: unknown[];   // From generator tools
  error?: Error;
}

Step Result (for Stop Conditions)

Available in custom stop condition callbacks:
typescript
interface StepResult {
  stepType: 'initial' | 'continue';
  text: string;
  toolCalls: ParsedToolCall[];
  toolResults: ToolExecutionResult[];
  response: OpenResponsesNonStreamingResponse;
  usage?: {
    inputTokens: number;
    outputTokens: number;
    cachedTokens?: number;
  };
  finishReason?: string;
  warnings?: Array<{ type: string; message: string }>;
  experimental_providerMetadata?: Record<string, unknown>;
}
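As a sketch, a custom stop condition built on this shape might cap cumulative token usage. The callback signature assumed here (an array of completed steps) is an assumption based on the `StepResult` shape above, not a confirmed SDK contract:

```typescript
// Minimal subset of StepResult that the condition inspects (assumed shape).
interface StepUsage {
  usage?: { inputTokens: number; outputTokens: number; cachedTokens?: number };
}

// Hypothetical custom stop condition: stop once cumulative tokens across
// all completed steps exceed a budget.
function tokenBudgetExceeded(maxTokens: number) {
  return (steps: StepUsage[]): boolean => {
    const total = steps.reduce(
      (sum, s) =>
        sum + (s.usage ? s.usage.inputTokens + s.usage.outputTokens : 0),
      0
    );
    return total > maxTokens;
  };
}
```

Combined with the built-ins, this would read `stopWhen: [stepCountIs(20), tokenBudgetExceeded(50000)]`.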

TurnContext

Available to tools and dynamic parameter functions:
typescript
interface TurnContext {
  numberOfTurns: number;                     // Turn count (1-indexed)
  turnRequest?: OpenResponsesRequest;        // Current request being made
  toolCall?: OpenResponsesFunctionToolCall;  // Current tool call (in tool context)
}

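Because `TurnContext` is passed to dynamic parameter functions, a parameter can vary by turn. A minimal sketch, assuming the SDK accepts a function wherever a static value is allowed (the model IDs are illustrative):

```typescript
// Subset of TurnContext this selector needs.
interface TurnContextLike {
  numberOfTurns: number; // 1-indexed, matching TurnContext above
}

// Hypothetical dynamic model selector: use a cheap model for early turns,
// then escalate once the conversation runs long.
function pickModel(ctx: TurnContextLike): string {
  return ctx.numberOfTurns <= 3 ? 'openai/gpt-5-nano' : 'openai/gpt-5';
}
```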

Event Shapes

The SDK provides multiple streaming methods that yield different event types.

Response Stream Events

The getFullResponsesStream() method yields these event types:
typescript
type EnhancedResponseStreamEvent =
  | ResponseCreatedEvent
  | ResponseInProgressEvent
  | OutputTextDeltaEvent
  | OutputTextDoneEvent
  | ReasoningDeltaEvent
  | ReasoningDoneEvent
  | FunctionCallArgumentsDeltaEvent
  | FunctionCallArgumentsDoneEvent
  | ResponseCompletedEvent
  | ToolPreliminaryResultEvent;

Event Type Reference

| Event Type | Description | Payload |
|---|---|---|
| response.created | Response object initialized | { response: ResponseObject } |
| response.in_progress | Generation has started | {} |
| response.output_text.delta | Text chunk received | { delta: string } |
| response.output_text.done | Text generation complete | { text: string } |
| response.reasoning.delta | Reasoning chunk (o1 models) | { delta: string } |
| response.reasoning.done | Reasoning complete | { reasoning: string } |
| response.function_call_arguments.delta | Tool argument chunk | { delta: string } |
| response.function_call_arguments.done | Tool arguments complete | { arguments: string } |
| response.completed | Full response complete | { response: ResponseObject } |
| tool.preliminary_result | Generator tool progress | { toolCallId: string; result: unknown } |

Text Delta Event

typescript
interface OutputTextDeltaEvent {
  type: 'response.output_text.delta';
  delta: string;
}

Reasoning Delta Event

For reasoning models (o1, etc.):
typescript
interface ReasoningDeltaEvent {
  type: 'response.reasoning.delta';
  delta: string;
}

Function Call Arguments Delta Event

typescript
interface FunctionCallArgumentsDeltaEvent {
  type: 'response.function_call_arguments.delta';
  delta: string;
}

Tool Preliminary Result Event

From generator tools that yield progress:
typescript
interface ToolPreliminaryResultEvent {
  type: 'tool.preliminary_result';
  toolCallId: string;
  result: unknown;  // Matches the tool's eventSchema
}
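These events originate from tools whose execute function is an async generator. A sketch of such a body, independent of the SDK (the function and field names are illustrative; in practice each yielded value would be validated against the tool's eventSchema):

```typescript
// Generator-style execute: yield preliminary progress updates, then
// return the final result. Each yield would surface to consumers as a
// tool.preliminary_result event.
async function* processItems(items: string[]) {
  const processed: string[] = [];
  for (const item of items) {
    processed.push(item.toUpperCase()); // stand-in for real per-item work
    yield { progress: processed.length / items.length };
  }
  return { results: processed };
}
```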

Response Completed Event

typescript
interface ResponseCompletedEvent {
  type: 'response.completed';
  response: OpenResponsesNonStreamingResponse;
}

Tool Stream Events

The getToolStream() method yields:
typescript
type ToolStreamEvent =
  | { type: 'delta'; content: string }
  | { type: 'preliminary_result'; toolCallId: string; result: unknown };

Example: Processing Stream Events

typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Analyze this data',
  tools: [analysisTool]
});

for await (const event of result.getFullResponsesStream()) {
  switch (event.type) {
    case 'response.output_text.delta':
      process.stdout.write(event.delta);
      break;

    case 'response.reasoning.delta':
      console.log('[Reasoning]', event.delta);
      break;

    case 'response.function_call_arguments.delta':
      console.log('[Tool Args]', event.delta);
      break;

    case 'tool.preliminary_result':
      console.log(`[Progress: ${event.toolCallId}]`, event.result);
      break;

    case 'response.completed':
      console.log('\n[Complete]', event.response.usage);
      break;
  }
}

Message Stream Events

The getNewMessagesStream() method yields OpenResponses format updates:
typescript
type MessageStreamUpdate =
  | ResponsesOutputMessage        // Text/content updates
  | OpenResponsesFunctionCallOutput;  // Tool results

Example: Tracking New Messages

typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Research this topic',
  tools: [searchTool]
});

const allMessages: MessageStreamUpdate[] = [];

for await (const message of result.getNewMessagesStream()) {
  allMessages.push(message);

  if (message.type === 'message') {
    console.log('Assistant:', message.content);
  } else if (message.type === 'function_call_output') {
    console.log('Tool result:', message.output);
  }
}


API Reference

Client Methods

Beyond callModel, the client provides access to other API endpoints:
typescript
const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

// List available models
const models = await client.models.list();

// Chat completions (alternative to callModel)
const completion = await client.chat.send({
  model: 'openai/gpt-5-nano',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Legacy completions format
const legacyCompletion = await client.completions.generate({
  model: 'openai/gpt-5-nano',
  prompt: 'Once upon a time'
});

// Usage analytics
const activity = await client.analytics.getUserActivity();

// Credit balance
const credits = await client.credits.getCredits();

// API key management
const keys = await client.apiKeys.list();


Error Handling

The SDK provides specific error types with actionable messages:
typescript
try {
  const result = await client.callModel({
    model: 'openai/gpt-5-nano',
    input: 'Hello!'
  });
  const text = await result.getText();
} catch (error) {
  if (error.statusCode === 401) {
    console.error('Invalid API key - check your OPENROUTER_API_KEY');
  } else if (error.statusCode === 402) {
    console.error('Insufficient credits - add credits at openrouter.ai');
  } else if (error.statusCode === 429) {
    console.error('Rate limited - implement backoff retry');
  } else if (error.statusCode === 503) {
    console.error('Model temporarily unavailable - try again or use fallback');
  } else {
    console.error('Unexpected error:', error.message);
  }
}

Error Status Codes

| Code | Meaning | Action |
|---|---|---|
| 400 | Bad request | Check request parameters |
| 401 | Unauthorized | Verify API key |
| 402 | Payment required | Add credits |
| 429 | Rate limited | Implement exponential backoff |
| 500 | Server error | Retry with backoff |
| 503 | Service unavailable | Try alternative model |

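For the 503 case, a small fallback helper can try models in order. A minimal sketch: the call function is injected so it works with any client method, and the error shape follows the statusCode field used in the examples above (the helper name is illustrative):

```typescript
// Try each model in turn; on a 503 move to the next, otherwise rethrow.
async function withModelFallback<T>(
  models: string[],
  call: (model: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await call(model);
    } catch (error) {
      if ((error as { statusCode?: number }).statusCode === 503) {
        lastError = error; // model unavailable, try the next one
        continue;
      }
      throw error; // non-retryable errors propagate immediately
    }
  }
  throw lastError; // every model was unavailable
}
```

For example: `withModelFallback([primaryModel, backupModel], (model) => client.callModel({ model, input }).getText())`.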

Complete Example: Agent with Tools

typescript
import OpenRouter, { tool, stepCountIs, hasToolCall } from '@openrouter/sdk';
import { z } from 'zod';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

// Define tools
const searchTool = tool({
  name: 'web_search',
  description: 'Search the web for information',
  inputSchema: z.object({
    query: z.string().describe('Search query')
  }),
  outputSchema: z.object({
    results: z.array(z.object({
      title: z.string(),
      snippet: z.string(),
      url: z.string()
    }))
  }),
  execute: async ({ query }) => {
    // Implement actual search
    return {
      results: [
        { title: 'Example', snippet: 'Example result', url: 'https://example.com' }
      ]
    };
  }
});

const finishTool = tool({
  name: 'finish',
  description: 'Complete the task with final answer',
  inputSchema: z.object({
    answer: z.string().describe('The final answer')
  }),
  execute: async ({ answer }) => ({ answer })
});

// Run agent
async function runAgent(task: string) {
  const result = client.callModel({
    model: 'openai/gpt-5-nano',
    instructions: 'You are a helpful research assistant. Use web_search to find information, then use finish to provide your final answer.',
    input: task,
    tools: [searchTool, finishTool],
    stopWhen: [
      stepCountIs(10),
      hasToolCall('finish')
    ]
  });

  // Stream progress
  for await (const toolCall of result.getToolCallsStream()) {
    console.log(`[${toolCall.name}] ${JSON.stringify(toolCall.arguments)}`);
  }

  return await result.getText();
}

// Usage
const answer = await runAgent('What are the latest developments in quantum computing?');
console.log('Final answer:', answer);


Best Practices

1. Prefer callModel Over Direct API Calls

The callModel pattern provides automatic tool execution, type safety, and multi-turn handling.

2. Use Zod for Tool Schemas

Zod provides runtime validation and excellent TypeScript inference:
typescript
import { z } from 'zod';

const schema = z.object({
  name: z.string().min(1),
  age: z.number().int().positive()
});

3. Implement Stop Conditions

Always set reasonable limits to prevent runaway costs:
typescript
stopWhen: [stepCountIs(20), maxCost(5.00)]

4. Handle Errors Gracefully

Implement retry logic for transient failures:
typescript
async function callWithRetry(params, maxRetries = 3) {
  const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await client.callModel(params).getText();
    } catch (error) {
      if (error.statusCode === 429 || error.statusCode >= 500) {
        await sleep(Math.pow(2, i) * 1000);  // exponential backoff: 1s, 2s, 4s
        continue;
      }
      throw error;  // non-transient errors are not retried
    }
  }
  throw new Error('Retries exhausted');
}

5. Use Streaming for Long Responses

Streaming provides better UX and allows early termination:
typescript
for await (const delta of result.getTextStream()) {
  // Process incrementally
}

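Breaking out of the for await loop is how early termination works: exiting the loop triggers the iterator's cleanup, which releases the underlying stream. A self-contained sketch with the stream type abstracted (the helper name is illustrative):

```typescript
// Accumulate streamed text, stopping as soon as enough has arrived.
async function takeUntilLength(
  stream: AsyncIterable<string>,
  maxChars: number
): Promise<string> {
  let text = '';
  for await (const delta of stream) {
    text += delta;
    if (text.length >= maxChars) break; // early termination closes the stream
  }
  return text;
}
```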

Additional Resources


SDK Status: Beta - Report issues on GitHub
