# OpenRouter Models

Discover, search, and compare the 300+ AI models available on OpenRouter. Query live data including pricing, context lengths, per-provider latency and uptime, throughput, supported modalities, and supported parameters.
## Prerequisites

The `OPENROUTER_API_KEY` environment variable is optional for most scripts. It is only required for `get-endpoints.ts` (provider performance data). Get a key at https://openrouter.ai/keys

## First-Time Setup

```bash
cd <skill-path>/scripts && npm install
```

## Decision Tree
Pick the right script based on what the user is asking:
| User wants to... | Script | Example |
|---|---|---|
| See all available models | `list-models.ts` | "What models does OpenRouter have?" |
| Find recently added models | `list-models.ts --sort newest` | "What are the newest models?" |
| Find cheapest models | `list-models.ts --sort price` | "What's the cheapest model?" |
| Find highest throughput models | `list-models.ts --sort throughput` | "Which models have the most output capacity?" |
| Find models in a category | `list-models.ts --category <category>` | "Best programming models?" |
| Search by name | `search-models.ts` | "Do they have Claude?" |
| Resolve an informal model name | `resolve-model.ts` | "Use the nano banana 2.0 model" |
| Find image-capable models | `search-models.ts --modality image` | "Which models accept images?" |
| Compare specific models | `compare-models.ts` | "Compare Claude vs GPT-4o" |
| Compare by throughput | `compare-models.ts --sort throughput` | "Which has higher throughput, Claude or GPT-4o?" |
| Check provider performance | `get-endpoints.ts` | "Which provider is fastest for Claude?" |
| Find fastest provider | `get-endpoints.ts --sort throughput` | "Fastest provider for Claude Sonnet?" |
| Find lowest-latency provider | `get-endpoints.ts --sort latency` | "Lowest latency provider for GPT-4o?" |
| Check model availability | `get-endpoints.ts` | "Is Claude Sonnet 4 up right now?" |
## Resolve Model

Resolve an informal or vague model name to an exact OpenRouter model ID using fuzzy matching:

```bash
cd <skill-path>/scripts && npx tsx resolve-model.ts "claude sonnet"
cd <skill-path>/scripts && npx tsx resolve-model.ts "gpt 4o mini"
cd <skill-path>/scripts && npx tsx resolve-model.ts "llama 3.1"
```

Results include a `confidence` level and a `score`:

| Confidence | Score | Action |
|---|---|---|
| `high` | | Use the model directly — the match is unambiguous |
| `medium` | | Confirm with the user before proceeding |
| `low` | | Suggest the matches and ask the user to clarify |

Two-step workflow: First resolve the informal name with `resolve-model.ts`, then feed the resolved `id` into other scripts (`compare-models.ts`, `get-endpoints.ts`, etc.).
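To make the confidence/score behavior concrete, here is a toy resolver in TypeScript. This is not the actual `resolve-model.ts` implementation: the token-overlap scoring, the sample model list, and the confidence thresholds are all invented for illustration.

```typescript
// Toy fuzzy resolver: maps an informal name to the best-matching model ID.
// Illustrative only — not the real resolve-model.ts logic or thresholds.
const MODEL_IDS = [
  "anthropic/claude-sonnet-4",
  "openai/gpt-4o-mini",
  "meta-llama/llama-3.1-70b-instruct",
];

// Split on anything that is not a letter, digit, or dot ("gpt 4o" -> ["gpt","4o"]).
function tokenize(s: string): string[] {
  return s.toLowerCase().split(/[^a-z0-9.]+/).filter(Boolean);
}

// Score = fraction of query tokens found somewhere in the candidate ID.
function score(query: string, id: string): number {
  const q = tokenize(query);
  const hay = tokenize(id);
  const hits = q.filter((t) => hay.some((h) => h.includes(t))).length;
  return q.length ? hits / q.length : 0;
}

function resolve(query: string): { id: string; score: number; confidence: string } {
  const ranked = MODEL_IDS
    .map((id) => ({ id, score: score(query, id) }))
    .sort((a, b) => b.score - a.score);
  const best = ranked[0];
  // Hypothetical thresholds, chosen only to show the three-level shape.
  const confidence = best.score >= 0.9 ? "high" : best.score >= 0.6 ? "medium" : "low";
  return { ...best, confidence };
}
```

With this sketch, `resolve("claude sonnet")` maps to `anthropic/claude-sonnet-4` with `high` confidence, matching the two-step workflow above: resolve first, then pass the `id` on.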
## List Models

```bash
cd <skill-path>/scripts && npx tsx list-models.ts
```

### Filter by Category

Server-side category filtering:

```bash
cd <skill-path>/scripts && npx tsx list-models.ts --category programming
```

Categories: `programming`, `roleplay`, `marketing`, `marketing/seo`, `technology`, `science`, `translation`, `legal`, `finance`, `health`, `trivia`, `academia`

### Sort Results
```bash
cd <skill-path>/scripts && npx tsx list-models.ts --sort newest      # Recently added first
cd <skill-path>/scripts && npx tsx list-models.ts --sort price       # Cheapest first
cd <skill-path>/scripts && npx tsx list-models.ts --sort context     # Largest context first
cd <skill-path>/scripts && npx tsx list-models.ts --sort throughput  # Most output tokens first
```

Models with upcoming `expiration_date` values trigger a stderr warning.

## Search Models
```bash
cd <skill-path>/scripts && npx tsx search-models.ts "claude"
cd <skill-path>/scripts && npx tsx search-models.ts --modality image
cd <skill-path>/scripts && npx tsx search-models.ts "gpt" --modality text
```

Modalities: `text`, `image`, `audio`, `file`

## Compare Models
Compare two or more models side-by-side with pricing in per-million-tokens format. Uses exact ID matching — `openai/gpt-4o` matches only that model, not variants like `gpt-4o-mini`.

```bash
cd <skill-path>/scripts && npx tsx compare-models.ts "anthropic/claude-sonnet-4" "openai/gpt-4o"
cd <skill-path>/scripts && npx tsx compare-models.ts "anthropic/claude-sonnet-4" "openai/gpt-4o" "google/gemini-2.5-pro" --sort price
```

Sort options: `price` (cheapest first), `context` (largest first), `speed`/`throughput` (most output tokens first)

## Provider Performance (Endpoints)
Get per-provider latency, uptime, and throughput for any model:

```bash
cd <skill-path>/scripts && npx tsx get-endpoints.ts "anthropic/claude-sonnet-4"
cd <skill-path>/scripts && npx tsx get-endpoints.ts "anthropic/claude-sonnet-4" --sort throughput
cd <skill-path>/scripts && npx tsx get-endpoints.ts "openai/gpt-4o" --sort latency
```

Sort options: `throughput` (fastest tokens/sec first), `latency` (lowest p50 ms first), `uptime` (most reliable first), `price` (cheapest first)

Returns for each provider:

- Latency (p50/p75/p90/p99 in ms) — median to worst-case response times
- Throughput (p50/p75/p90/p99 tokens/sec) — generation speed
- Uptime — percentage over the last 30 minutes
- Status — `operational` or `degraded`
- Provider-specific pricing — some providers offer discounts
- Supported parameters — varies by provider (some don't support all features)
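Picking the "fastest" and "most reliable" provider from this data is a simple reduction. A minimal TypeScript sketch, using the field names from the `get-endpoints.ts` output format documented later in this file:

```typescript
// Rank per-provider endpoint stats. Field names follow the get-endpoints.ts
// JSON output shape; only the fields used here are typed.
interface EndpointStats {
  provider: string;
  uptime_30m: string; // e.g. "99.95%"
  latency_30m_ms: { p50: number };
}

// Fastest = lowest median (p50) latency.
function fastest(endpoints: EndpointStats[]): EndpointStats {
  return endpoints.reduce((best, e) =>
    e.latency_30m_ms.p50 < best.latency_30m_ms.p50 ? e : best
  );
}

// Most reliable = highest uptime over the last 30 minutes.
// parseFloat reads the leading number out of strings like "99.95%".
function mostReliable(endpoints: EndpointStats[]): EndpointStats {
  return endpoints.reduce((best, e) =>
    parseFloat(e.uptime_30m) > parseFloat(best.uptime_30m) ? e : best
  );
}
```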
## API Response Shapes

Query parameters (all optional):

| Parameter | Example | Effect |
|---|---|---|
| `category` | `?category=programming` | Server-side category filter |
| `supported_parameters` | `?supported_parameters=tools` | Only models supporting this parameter |

Tips for working with the response:

- To check if a model supports a feature, use `model.supported_parameters` (e.g. `.includes("tools")`), or filter server-side with `?supported_parameters=tools`.
- To check modalities, use `model.architecture.input_modalities` / `model.architecture.output_modalities`.
- Pricing values are per-token in USD as strings — multiply by 1,000,000 for per-million-token pricing.
- `knowledge_cutoff` and `expiration_date` are date strings or null.
- `links.details` points to the per-provider endpoints API for that model. `GET /api/v1/models/{author}/{slug}/endpoints` returns `{ data: { id, name, endpoints: Endpoint[] } }`.
- Endpoint `status`: `0` = operational, non-zero = degraded.
- Endpoint `latency_last_30m` / `throughput_last_30m`: percentile objects with `p50`, `p75`, `p90`, `p99`.
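The tips above translate into a few one-line helpers. A minimal sketch, where `ModelLike` is a hypothetical partial type covering only the fields used here (not the full API schema):

```typescript
// Helpers over the raw models response. ModelLike is a partial shape for
// illustration; see the OpenAPI spec for the authoritative field list.
interface ModelLike {
  supported_parameters: string[];
  architecture: { input_modalities: string[]; output_modalities: string[] };
  pricing: { prompt: string; completion: string }; // per-token USD, as strings
}

// Convert a per-token USD string into a per-million-token display value.
function perMillion(perTokenUsd: string): string {
  return `$${(Number(perTokenUsd) * 1_000_000).toFixed(2)}`;
}

const supportsTools = (m: ModelLike): boolean =>
  m.supported_parameters.includes("tools");

const acceptsImages = (m: ModelLike): boolean =>
  m.architecture.input_modalities.includes("image");
```

For example, a `pricing.prompt` of `"0.000003"` renders as `$3.00` per million input tokens.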
## Script Output Formats

The scripts below reformat the raw API data. When calling the API directly (e.g. via `fetch`), refer to the OpenAPI spec for field names.

### list-models.ts / search-models.ts
A subset of the raw API fields — the scripts run `formatModel()`, which drops `canonical_slug`, `hugging_face_id`, `default_parameters`, `knowledge_cutoff`, and `links`. If you need those fields, call the API directly.

### compare-models.ts

```json
{
  "id": "anthropic/claude-sonnet-4",
  "name": "Anthropic: Claude Sonnet 4",
  "context_length": 1000000,
  "max_completion_tokens": 64000,
  "per_request_limits": null,
  "pricing_per_million_tokens": {
    "prompt": "$3.00",
    "completion": "$15.00",
    "cached_input": "$0.30"
  },
  "modalities": { "input": ["text", "image"], "output": ["text"] },
  "supported_parameters": ["max_tokens", "temperature", "..."],
  "is_moderated": false
}
```
### get-endpoints.ts

```json
{
  "model_id": "anthropic/claude-sonnet-4",
  "model_name": "Anthropic: Claude Sonnet 4",
  "total_providers": 5,
  "endpoints": [
    {
      "provider": "Anthropic",
      "tag": "anthropic",
      "status": "operational",
      "uptime_30m": "100.00%",
      "latency_30m_ms": { "p50": 800, "p75": 1200, "p90": 2000, "p99": 5000 },
      "throughput_30m_tokens_per_sec": { "p50": 45, "p75": 55, "p90": 65, "p99": 90 },
      "context_length": 1000000,
      "max_completion_tokens": 64000,
      "pricing_per_million_tokens": { "prompt": "$3.00", "completion": "$15.00", "cached_input": "$0.30" },
      "supports_implicit_caching": true,
      "supported_parameters": ["max_tokens", "temperature", "tools", "..."]
    }
  ]
}
```
## Key Fields
| Field | Meaning |
|---|---|
| `pricing` | Cost per token in USD. Multiply by 1,000,000 for per-million-token pricing |
| `context_length` | Max total tokens (input + output) |
| `max_completion_tokens` | Max output tokens from the best provider |
| `is_moderated` | Whether content moderation is applied |
| `per_request_limits` | Per-request token limits (when non-null) |
| `supported_parameters` | API parameters the model accepts (e.g., `tools`) |
| `created` | Unix timestamp — use for sorting by recency |
| `expiration_date` | Non-null means the model is being deprecated |
| `latency_last_30m` | Median response latency over last 30 min |
| `throughput_last_30m` | Median generation speed over last 30 min |
| `uptime_last_30m` | Provider availability percentage over last 30 min |
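Two of these fields drive common workflows: `created` for recency sorting and `expiration_date` for deprecation checks. A short sketch over sample data (the type is a hypothetical partial shape):

```typescript
// Recency sorting and deprecation flagging using `created` (Unix timestamp)
// and `expiration_date` (date string or null). ModelMeta is a partial shape.
interface ModelMeta {
  id: string;
  created: number;
  expiration_date: string | null;
}

const newestFirst = (models: ModelMeta[]): ModelMeta[] =>
  [...models].sort((a, b) => b.created - a.created);

const deprecated = (models: ModelMeta[]): ModelMeta[] =>
  models.filter((m) => m.expiration_date !== null);
```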
## Presenting Results

- When a user mentions a model by informal name, use `resolve-model.ts` first, then feed the resolved `id` into other scripts
- Convert pricing to per-million-tokens format for readability
- When comparing, use a markdown table with models as columns
- For provider endpoints, highlight the fastest (lowest p50 latency) and most reliable (highest uptime) providers
- Call out notable supported parameters: `tools`, `structured_outputs`, `reasoning`, `web_search_options`
- Note cache pricing when available — it can cut input costs 90%+
- Flag models with `expiration_date` as deprecated
- When a model has multiple providers at different prices, mention the cheapest option