openrouter-trending-models


OpenRouter Trending Models Skill


Overview


This skill provides access to current trending programming models from OpenRouter's public rankings. It executes a Bun script that fetches, parses, and structures data about the top 9 most-used AI models for programming tasks.
What you get:
  • Model IDs and names (e.g., `x-ai/grok-code-fast-1`)
  • Token usage statistics (last week's trends)
  • Context window sizes (input capacity)
  • Pricing information (per token and per 1M tokens)
  • Summary statistics (top provider, price ranges, averages)
Data Source:
Update Frequency: Weekly (OpenRouter updates rankings every week)


When to Use This Skill


Use this skill when you need to:
  1. Select models for multi-model review
    • Plan reviewer needs current trending models
    • User asks "which models should I use for review?"
    • Updating model recommendations in agent workflows
  2. Research AI coding trends
    • Developer wants to know most popular coding models
    • Comparing model capabilities (context, pricing, usage)
    • Identifying "best value" models for specific tasks
  3. Update plugin documentation
    • Refreshing model lists in README files
    • Keeping agent prompts current with trending models
    • Documentation maintenance workflows
  4. Cost optimization
    • Finding cheapest models with sufficient context
    • Comparing pricing across trending models
    • Budget planning for AI-assisted development
  5. Model recommendations
    • User asks "what's the best model for X?"
    • Providing data-driven suggestions vs hardcoded lists
    • Offering alternatives based on requirements


Quick Start


Running the Script


**Basic Usage:**
```bash
bun run scripts/get-trending-models.ts
```

**Output to File:**
```bash
bun run scripts/get-trending-models.ts > trending-models.json
```

**Pretty Print:**
```bash
bun run scripts/get-trending-models.ts | jq '.'
```

**Help:**
```bash
bun run scripts/get-trending-models.ts --help
```

Expected Output


The script outputs structured JSON to stdout:

```json
{
  "metadata": {
    "fetchedAt": "2025-11-14T10:30:00.000Z",
    "weekEnding": "2025-11-10",
    "category": "programming",
    "view": "trending"
  },
  "models": [
    {
      "rank": 1,
      "id": "x-ai/grok-code-fast-1",
      "name": "Grok Code Fast",
      "tokenUsage": 908664328688,
      "contextLength": 131072,
      "maxCompletionTokens": 32768,
      "pricing": {
        "prompt": 0.0000005,
        "completion": 0.000001,
        "promptPer1M": 0.5,
        "completionPer1M": 1.0
      }
    }
    // ... 8 more models
  ],
  "summary": {
    "totalTokens": 4500000000000,
    "topProvider": "x-ai",
    "averageContextLength": 98304,
    "priceRange": {
      "min": 0.5,
      "max": 15.0,
      "unit": "USD per 1M tokens"
    }
  }
}
```

Execution Time


Typical execution: 2-5 seconds
  • Fetch rankings: ~1 second
  • Fetch model details: ~1-2 seconds (parallel requests)
  • Parse and format: <1 second


Output Format


Metadata Object


```typescript
{
  fetchedAt: string;        // ISO 8601 timestamp of when data was fetched
  weekEnding: string;       // YYYY-MM-DD format, end of ranking week
  category: "programming";  // Fixed category
  view: "trending";         // Fixed view type
}
```

Models Array (9 items)


Each model contains:

```typescript
{
  rank: number;             // 1-9, position in trending list
  id: string;               // OpenRouter model ID (e.g., "x-ai/grok-code-fast-1")
  name: string;             // Human-readable name (e.g., "Grok Code Fast")
  tokenUsage: number;       // Total tokens used last week
  contextLength: number;    // Maximum input tokens
  maxCompletionTokens: number; // Maximum output tokens
  pricing: {
    prompt: number;         // Per-token input cost (USD)
    completion: number;     // Per-token output cost (USD)
    promptPer1M: number;    // Input cost per 1M tokens (USD)
    completionPer1M: number; // Output cost per 1M tokens (USD)
  }
}
```
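As a sanity check on these fields, the per-1M prices are just the per-token prices scaled by 10^6. A minimal sketch of that relationship and a per-request cost estimate; `estimateCostUSD` is an illustrative helper, not part of the script:

```typescript
interface Pricing {
  prompt: number;          // per-token input cost (USD)
  completion: number;      // per-token output cost (USD)
  promptPer1M: number;     // equals prompt * 1_000_000
  completionPer1M: number; // equals completion * 1_000_000
}

// Estimate the cost of a single request from raw token counts.
function estimateCostUSD(p: Pricing, promptTokens: number, completionTokens: number): number {
  return promptTokens * p.prompt + completionTokens * p.completion;
}

const grok: Pricing = {
  prompt: 0.0000005,
  completion: 0.000001,
  promptPer1M: 0.5,
  completionPer1M: 1.0,
};

// 100K input tokens + 5K output tokens ≈ $0.055
console.log(estimateCostUSD(grok, 100_000, 5_000));
```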

Summary Object


```typescript
{
  totalTokens: number;      // Sum of token usage across top 9 models
  topProvider: string;      // Most represented provider (e.g., "x-ai")
  averageContextLength: number; // Average context window size
  priceRange: {
    min: number;            // Lowest prompt price per 1M tokens
    max: number;            // Highest prompt price per 1M tokens
    unit: "USD per 1M tokens";
  }
}
```
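The summary can be recomputed from the models array. A sketch of the aggregation, assuming the provider is the segment of the model ID before the slash (as in `x-ai/grok-code-fast-1`); `summarize` is illustrative, not the script's actual code:

```typescript
interface Model {
  id: string;
  tokenUsage: number;
  contextLength: number;
  pricing: { promptPer1M: number };
}

function summarize(models: Model[]) {
  const totalTokens = models.reduce((sum, m) => sum + m.tokenUsage, 0);

  // Count models per provider; the provider is the ID prefix before "/".
  const counts = new Map<string, number>();
  for (const m of models) {
    const provider = m.id.split("/")[0];
    counts.set(provider, (counts.get(provider) ?? 0) + 1);
  }
  const topProvider = [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];

  const averageContextLength =
    models.reduce((sum, m) => sum + m.contextLength, 0) / models.length;

  const prices = models.map((m) => m.pricing.promptPer1M);
  return {
    totalTokens,
    topProvider,
    averageContextLength,
    priceRange: { min: Math.min(...prices), max: Math.max(...prices), unit: "USD per 1M tokens" },
  };
}
```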


Integration Examples


Example 1: Dynamic Model Selection in Agent


**Scenario:** Plan reviewer needs current trending models for multi-model review

```markdown
# In plan-reviewer agent workflow

STEP 1: Fetch trending models
  • Execute: Bash("bun run scripts/get-trending-models.ts > /tmp/trending-models.json")
  • Read: /tmp/trending-models.json

STEP 2: Parse and present to user
  • Extract top 3-5 models from models array
  • Display with context and pricing info
  • Let user select preferred model(s)

STEP 3: Use selected model for review
  • Pass model ID to Claudish proxy
```

**Implementation:**
```typescript
// Agent reads output
const data = JSON.parse(bashOutput);

// Extract top 5 models
const topModels = data.models.slice(0, 5);

// Present to user
const modelList = topModels.map((m, i) =>
  `${i + 1}. **${m.name}** (\`${m.id}\`)
   - Context: ${m.contextLength.toLocaleString()} tokens
   - Pricing: $${m.pricing.promptPer1M}/1M input
   - Usage: ${(m.tokenUsage / 1e9).toFixed(1)}B tokens last week`
).join('\n\n');

// Ask user to select
const userChoice = await AskUserQuestion(`Select model for review:\n\n${modelList}`);
```

Example 2: Find Best Value Models


**Scenario:** User wants high-context models at lowest cost

```bash
# Fetch models and filter with jq
bun run scripts/get-trending-models.ts | jq '
  .models
  | map(select(.contextLength > 100000))
  | sort_by(.pricing.promptPer1M)
  | .[:3]
  | .[]
  | { name, id, contextLength, price: .pricing.promptPer1M }
'
```

**Output:**
```json
{
  "name": "Gemini 2.5 Flash",
  "id": "google/gemini-2.5-flash",
  "contextLength": 1000000,
  "price": 0.075
}
{
  "name": "Grok Code Fast",
  "id": "x-ai/grok-code-fast-1",
  "contextLength": 131072,
  "price": 0.5
}
```

Example 3: Update Plugin Documentation


**Scenario:** Automated weekly update of README model recommendations

```bash
# Fetch models
bun run scripts/get-trending-models.ts > trending.json

# Extract top 5 model names and IDs
jq -r '.models[:5] | .[] | "- `\(.id)` - \(.name) (\(.contextLength / 1024 | floor)K context, $\(.pricing.promptPer1M)/1M)"' trending.json
```

**Output (ready for README):**

```markdown
- `x-ai/grok-code-fast-1` - Grok Code Fast (128K context, $0.5/1M)
- `anthropic/claude-4.5-sonnet-20250929` - Claude 4.5 Sonnet (200K context, $3.0/1M)
- `google/gemini-2.5-flash` - Gemini 2.5 Flash (976K context, $0.075/1M)
```

Example 4: Check for New Trending Models


**Scenario:** Identify when new models enter top 9

```bash
# Save current trending models
bun run scripts/get-trending-models.ts | jq '.models | map(.id)' > current.json

# Compare with previous week (saved as previous.json)
diff <(jq -r '.[]' previous.json | sort) <(jq -r '.[]' current.json | sort)
```

Output shows new entries (`>`) and removed entries (`<`).
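The same comparison can be done in TypeScript instead of `diff`, given the two saved ID arrays; `diffModels` is a hypothetical helper:

```typescript
// Compare two saved ID arrays (e.g. the contents of previous.json and current.json).
function diffModels(previous: string[], current: string[]) {
  const prev = new Set(previous);
  const curr = new Set(current);
  return {
    added: current.filter((id) => !prev.has(id)),    // new entries this week
    removed: previous.filter((id) => !curr.has(id)), // dropped out of the top 9
  };
}

const { added, removed } = diffModels(
  ["x-ai/grok-code-fast-1", "google/gemini-2.5-flash"],
  ["x-ai/grok-code-fast-1", "anthropic/claude-4.5-sonnet-20250929"],
);
console.log(added);   // ["anthropic/claude-4.5-sonnet-20250929"]
console.log(removed); // ["google/gemini-2.5-flash"]
```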


---


Troubleshooting


Issue: Script Fails to Fetch Rankings


**Error Message:**
```
✗ Error: Failed to fetch rankings: fetch failed
```

**Possible Causes:**
  1. No internet connection
  2. OpenRouter site is down
  3. Firewall blocking openrouter.ai
  4. URL structure changed

**Solutions:**

1. **Test connectivity:**
```bash
curl -I https://openrouter.ai/rankings
# Should return HTTP 200
```



2. **Check URL in browser:**
   - Visit https://openrouter.ai/rankings
   - Verify page loads and shows programming rankings
   - If URL redirects, update the RANKINGS_URL constant in the script

3. **Check firewall/proxy:**
```bash
# Test from command line
curl "https://openrouter.ai/rankings?category=programming&view=trending"
# Should return HTML with embedded JSON
```


4. **Use fallback data:**
   - Keep last successful output as fallback
   - Use cached trending-models.json if < 14 days old


Issue: Parse Error (Invalid RSC Format)


**Error Message:**
```
✗ Error: Failed to extract JSON from RSC format
```

**Cause:** OpenRouter changed their page structure

**Solutions:**

1. **Inspect raw HTML:**
```bash
curl "https://openrouter.ai/rankings?category=programming&view=trending&_rsc=2nz0s" | head -200
```
2. **Look for the data pattern:**
   - Search for `"data":[{` in the output
   - Check whether the line starts with a different prefix (not `1b:`)
   - Verify the JSON structure matches the expected format
3. **Update the regex in the script:**
   - Edit `scripts/get-trending-models.ts`
   - Modify the regex in the `fetchRankings()` function
   - Test with the new pattern
4. **Report the issue:**
   - File an issue in the plugin repository
   - Include a raw HTML sample (first 500 chars)
   - Specify when the error started occurring
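For orientation while debugging, here is an illustrative sketch of the kind of extraction involved; the real prefix, regex, and payload shape depend on OpenRouter's current page structure and on `fetchRankings()` in the script:

```typescript
// Illustrative only: pull an embedded JSON array out of an RSC-style line
// such as `1b:{"data":[{...}]}`. The actual script's regex may differ.
function extractData(rscText: string): unknown[] | null {
  const match = rscText.match(/"data":(\[.*?\])\s*}/s);
  if (!match) return null;
  try {
    return JSON.parse(match[1]);
  } catch {
    return null; // matched span was not valid JSON
  }
}

const sample = '1b:{"data":[{"id":"x-ai/grok-code-fast-1"}]}';
console.log(extractData(sample)); // → [ { id: "x-ai/grok-code-fast-1" } ]
```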

Issue: Model Details Not Found


**Warning Message:**
```
Warning: Model x-ai/grok-code-fast-1 not found in API, using defaults
```

**Cause:** Model ID in rankings doesn't match the API

**Impact:** The model will have 0 values for context/pricing

**Solutions:**

1. **Verify the model exists in the API:**
```bash
curl "https://openrouter.ai/api/v1/models" | jq '.data[] | select(.id == "x-ai/grok-code-fast-1")'
```
2. **Check for ID mismatches:**
   - Rankings may use a different ID format
   - The API might list the model under a different name
   - The model may be new and not yet in the API
3. **Manual correction:**
   - Edit the output JSON file
   - Add correct details from the OpenRouter website
   - Note the discrepancy for future fixes

Issue: Stale Data Warning


**Symptom:** Models seem outdated compared to the OpenRouter site

**Check data age:**
```bash
jq '.metadata.fetchedAt' trending-models.json
# Compare with current date
```


**Solutions:**

1. **Re-run the script:**
```bash
bun run scripts/get-trending-models.ts > trending-models.json
```
2. **Set up a weekly refresh:**
   - Add to cron: `0 0 * * 1 cd /path/to/repo && bun run scripts/get-trending-models.ts > skills/openrouter-trending-models/trending-models.json`
   - Or use GitHub Actions (see Automation section)
3. **Add a staleness check in agents:**
```typescript
const data = JSON.parse(readFile("trending-models.json"));
const fetchedDate = new Date(data.metadata.fetchedAt);
const daysSinceUpdate = (Date.now() - fetchedDate.getTime()) / (1000 * 60 * 60 * 24);

if (daysSinceUpdate > 7) {
  console.warn("Data is over 7 days old, consider refreshing");
}
```

Best Practices


Data Freshness


**Recommended Update Schedule:**
  • Weekly: Ideal (matches OpenRouter update cycle)
  • Bi-weekly: Acceptable for stable periods
  • Monthly: Minimum for production use

**Staleness Guidelines:**
  • 0-7 days: Fresh (green)
  • 8-14 days: Slightly stale (yellow)
  • 15-30 days: Stale (orange)
  • 30+ days: Very stale (red)
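The staleness bands above can be applied mechanically to `metadata.fetchedAt`; a small sketch, where `classify` is an illustrative helper:

```typescript
type Freshness = "fresh" | "slightly stale" | "stale" | "very stale";

// Classify data age per the guidelines: 0-7 fresh, 8-14 slightly stale,
// 15-30 stale, 30+ very stale.
function classify(fetchedAt: string, now: Date = new Date()): Freshness {
  const days = (now.getTime() - new Date(fetchedAt).getTime()) / 86_400_000;
  if (days <= 7) return "fresh";
  if (days <= 14) return "slightly stale";
  if (days <= 30) return "stale";
  return "very stale";
}

// 4 days old → "fresh"
console.log(classify("2025-11-10T00:00:00Z", new Date("2025-11-14T00:00:00Z")));
```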

Caching Strategy


**When to cache:**
  • Multiple agents need the same data
  • Frequent model selection workflows
  • Avoiding rate limits

**How to cache:**
  1. Run the script once: `bun run scripts/get-trending-models.ts > trending-models.json`
  2. Commit to the repository (under `skills/openrouter-trending-models/`)
  3. Agents read from the file instead of re-running the script
  4. Refresh weekly via manual run or automation

**Cache invalidation:**

```bash
# Check if cache is stale (> 7 days)
if [ -n "$(find trending-models.json -mtime +7)" ]; then
  echo "Cache is stale, refreshing..."
  bun run scripts/get-trending-models.ts > trending-models.json
fi
```

Error Handling in Agents


**Graceful degradation pattern:**
```markdown
1. Try to fetch fresh data
   - Run: bun run scripts/get-trending-models.ts
   - If succeeds: Use fresh data
   - If fails: Continue to step 2

2. Try cached data
   - Check if trending-models.json exists
   - Check if < 14 days old
   - If valid: Use cached data
   - If not: Continue to step 3

3. Fallback to hardcoded models
   - Use known good models from agent prompt
   - Warn user data may be outdated
   - Suggest manual refresh
```
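The three-step pattern above can be sketched in TypeScript; `runScript`, `readCache`, and `HARDCODED_MODELS` are hypothetical stand-ins for the Bash/Read tool calls and the agent prompt's model list:

```typescript
type ModelData = { metadata: { fetchedAt: string }; models: { id: string }[] };

// Hypothetical known-good fallback list (stand-in for the agent prompt's models).
const HARDCODED_MODELS: ModelData = {
  metadata: { fetchedAt: "1970-01-01T00:00:00Z" },
  models: [{ id: "x-ai/grok-code-fast-1" }],
};

function getModels(
  runScript: () => ModelData | null, // step 1: fresh fetch (null on failure)
  readCache: () => ModelData | null, // step 2: cached file (null if missing)
): { data: ModelData; source: "fresh" | "cache" | "fallback" } {
  const fresh = runScript();
  if (fresh) return { data: fresh, source: "fresh" };

  const cached = readCache();
  if (cached) {
    const ageDays = (Date.now() - new Date(cached.metadata.fetchedAt).getTime()) / 86_400_000;
    if (ageDays < 14) return { data: cached, source: "cache" };
  }

  // Step 3: hardcoded list; the caller should warn the user it may be outdated.
  return { data: HARDCODED_MODELS, source: "fallback" };
}
```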

Integration Patterns


**Pattern 1: On-Demand (Fresh Data)**
```bash
# Run before each use
bun run scripts/get-trending-models.ts > /tmp/models.json
# Read from /tmp/models.json
```


**Pattern 2: Cached (Fast Access)**
```bash
# Check cache age first
CACHE_FILE="skills/openrouter-trending-models/trending-models.json"
if [ ! -f "$CACHE_FILE" ] || [ -n "$(find "$CACHE_FILE" -mtime +7)" ]; then
  bun run scripts/get-trending-models.ts > "$CACHE_FILE"
fi
# Read from cache
```


**Pattern 3: Background Refresh (Non-Blocking)**
```bash
# Start refresh in background (don't wait)
bun run scripts/get-trending-models.ts > trending-models.json &
# Continue with workflow
# Use cached data if available
# Fresh data will be ready for next run
```


---


Changelog


v1.0.0 (2025-11-14)


  • Initial release
  • Fetch top 9 trending programming models from OpenRouter
  • Parse RSC streaming format
  • Include context length, pricing, and token usage
  • Zero dependencies (Bun built-in APIs only)
  • Comprehensive error handling
  • Summary statistics (total tokens, top provider, price range)


Future Enhancements


Planned Features


  • Category selection (programming, creative, analysis, etc.)
  • Historical trend tracking (compare week-over-week)
  • Provider filtering (focus on specific providers)
  • Cost calculator (estimate workflow costs)

Research Ideas


  • Correlate rankings with model performance benchmarks
  • Identify "best value" models (performance/price ratio)
  • Predict upcoming trending models
  • Multi-category analysis

**Skill Version:** 1.0.0
**Last Updated:** November 14, 2025
**Maintenance:** Weekly refresh recommended
**Dependencies:** Bun runtime, internet connection