Fetch trending programming models from OpenRouter rankings. Use when selecting models for multi-model review, updating model recommendations, or researching current AI coding trends. Provides model IDs, context windows, pricing, and usage statistics from the most recent week.
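The script emits plain JSON, so consumers can model it with a few types. A minimal sketch — field names follow the output format documented below; the type names (`TrendingData`, `TrendingModel`) and the `parseTrending` helper are illustrative, not part of the skill:

```typescript
// Illustrative types for the skill's JSON output; names are not part of the skill.
interface Pricing {
  prompt: number;          // per-token input cost (USD)
  completion: number;      // per-token output cost (USD)
  promptPer1M: number;     // input cost per 1M tokens (USD)
  completionPer1M: number; // output cost per 1M tokens (USD)
}

interface TrendingModel {
  rank: number;
  id: string;
  name: string;
  tokenUsage: number;
  contextLength: number;
  maxCompletionTokens: number;
  pricing: Pricing;
}

interface TrendingData {
  metadata: { fetchedAt: string; weekEnding: string; category: "programming"; view: "trending" };
  models: TrendingModel[];
  summary: { totalTokens: number; topProvider: string; averageContextLength: number };
}

// Parse script output and sanity-check that the per-1M prices agree with the per-token prices.
function parseTrending(json: string): TrendingData {
  const data = JSON.parse(json) as TrendingData;
  for (const m of data.models) {
    if (Math.abs(m.pricing.prompt * 1e6 - m.pricing.promptPer1M) > 1e-9) {
      throw new Error(`Inconsistent pricing for ${m.id}`);
    }
  }
  return data;
}
```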
Installation:

npx skill4agent add madappgang/claude-code openrouter-trending-models

Usage:

bun run scripts/get-trending-models.ts
bun run scripts/get-trending-models.ts > trending-models.json
bun run scripts/get-trending-models.ts | jq '.'
bun run scripts/get-trending-models.ts --help

Example output:

{
"metadata": {
"fetchedAt": "2025-11-14T10:30:00.000Z",
"weekEnding": "2025-11-10",
"category": "programming",
"view": "trending"
},
"models": [
{
"rank": 1,
"id": "x-ai/grok-code-fast-1",
"name": "Grok Code Fast",
"tokenUsage": 908664328688,
"contextLength": 131072,
"maxCompletionTokens": 32768,
"pricing": {
"prompt": 0.0000005,
"completion": 0.000001,
"promptPer1M": 0.5,
"completionPer1M": 1.0
}
}
// ... 8 more models
],
"summary": {
"totalTokens": 4500000000000,
"topProvider": "x-ai",
"averageContextLength": 98304,
"priceRange": {
"min": 0.5,
"max": 15.0,
"unit": "USD per 1M tokens"
}
}
}

Metadata fields:

{
fetchedAt: string; // ISO 8601 timestamp of when data was fetched
weekEnding: string; // YYYY-MM-DD format, end of ranking week
category: "programming"; // Fixed category
view: "trending"; // Fixed view type
}

Model entry fields:

{
rank: number; // 1-9, position in trending list
id: string; // OpenRouter model ID (e.g., "x-ai/grok-code-fast-1")
name: string; // Human-readable name (e.g., "Grok Code Fast")
tokenUsage: number; // Total tokens used last week
contextLength: number; // Maximum input tokens
maxCompletionTokens: number; // Maximum output tokens
pricing: {
prompt: number; // Per-token input cost (USD)
completion: number; // Per-token output cost (USD)
promptPer1M: number; // Input cost per 1M tokens (USD)
completionPer1M: number; // Output cost per 1M tokens (USD)
}
}

Summary fields:

{
totalTokens: number; // Sum of token usage across top 9 models
topProvider: string; // Most represented provider (e.g., "x-ai")
averageContextLength: number; // Average context window size
priceRange: {
min: number; // Lowest prompt price per 1M tokens
max: number; // Highest prompt price per 1M tokens
unit: "USD per 1M tokens";
}
}

# In plan-reviewer agent workflow
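When presenting models for selection (step 2 below), it can help to show an estimated per-review cost alongside context and pricing. A hedged sketch using the `pricing` fields documented above — the token counts are illustrative; measure your actual prompt sizes:

```typescript
// Estimate the USD cost of one review call from the skill's per-1M pricing fields.
function estimateReviewCost(
  pricing: { promptPer1M: number; completionPer1M: number },
  inputTokens: number,
  outputTokens: number,
): number {
  return (inputTokens / 1e6) * pricing.promptPer1M + (outputTokens / 1e6) * pricing.completionPer1M;
}

// e.g. Grok Code Fast at $0.5/1M input and $1.0/1M output:
// a 50k-token plan review producing 4k tokens costs about $0.029.
```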
STEP 1: Fetch trending models
- Execute: Bash("bun run scripts/get-trending-models.ts > /tmp/trending-models.json")
- Read: /tmp/trending-models.json
STEP 2: Parse and present to user
- Extract top 3-5 models from models array
- Display with context and pricing info
- Let user select preferred model(s)
STEP 3: Use selected model for review
- Pass model ID to Claudish proxy

// Agent reads output
const data = JSON.parse(bashOutput);
// Extract top 5 models
const topModels = data.models.slice(0, 5);
// Present to user
const modelList = topModels.map((m, i) =>
`${i + 1}. **${m.name}** (\`${m.id}\`)
- Context: ${m.contextLength.toLocaleString()} tokens
- Pricing: $${m.pricing.promptPer1M}/1M input
- Usage: ${(m.tokenUsage / 1e9).toFixed(1)}B tokens last week`
).join('\n\n');
// Ask user to select
const userChoice = await AskUserQuestion(`Select model for review:\n\n${modelList}`);

# Fetch models and filter with jq
bun run scripts/get-trending-models.ts | jq '
.models
| map(select(.contextLength > 100000))
| sort_by(.pricing.promptPer1M)
| .[:3]
| .[] | {
name,
id,
contextLength,
price: .pricing.promptPer1M
}
'

# Output:

{
"name": "Gemini 2.5 Flash",
"id": "google/gemini-2.5-flash",
"contextLength": 1000000,
"price": 0.075
}
{
"name": "Grok Code Fast",
"id": "x-ai/grok-code-fast-1",
"contextLength": 131072,
"price": 0.5
}

# Fetch models
bun run scripts/get-trending-models.ts > trending.json
# Extract top 5 model names and IDs
jq -r '.models[:5] | .[] | "- `\(.id)` - \(.name) (\(.contextLength / 1024 | floor)K context, $\(.pricing.promptPer1M)/1M)"' trending.json
# Output (ready for README):
# - `x-ai/grok-code-fast-1` - Grok Code Fast (128K context, $0.5/1M)
# - `anthropic/claude-4.5-sonnet-20250929` - Claude 4.5 Sonnet (200K context, $3.0/1M)
# - `google/gemini-2.5-flash` - Gemini 2.5 Flash (976K context, $0.075/1M)

# Save current trending models
bun run scripts/get-trending-models.ts | jq '.models | map(.id)' > current.json
# Compare with previous week (saved as previous.json)
diff <(jq -r '.[]' previous.json | sort) <(jq -r '.[]' current.json | sort)
# Output shows new entries (>) and removed entries (<)

Troubleshooting

✗ Error: Failed to fetch rankings: fetch failed

# Check that openrouter.ai is reachable
curl -I https://openrouter.ai/rankings
# Should return HTTP 200

# Test from command line
curl "https://openrouter.ai/rankings?category=programming&view=trending&_rsc=2nz0s"
# Should return HTML with embedded JSON

✗ Error: Failed to extract JSON from RSC format

# Inspect the raw RSC response; the script looks for markers such as "data":[{ and 1b:
curl "https://openrouter.ai/rankings?category=programming&view=trending&_rsc=2nz0s" | head -200

# If the markers have changed, update fetchRankings() in scripts/get-trending-models.ts

Warning: Model x-ai/grok-code-fast-1 not found in API, using defaults

# Verify the model ID exists in the models API
curl "https://openrouter.ai/api/v1/models" | jq '.data[] | select(.id == "x-ai/grok-code-fast-1")'

# Check when the cached data was fetched
jq '.metadata.fetchedAt' trending-models.json
# Compare with current date

# Manual refresh
bun run scripts/get-trending-models.ts > trending-models.json

# Weekly refresh via cron (Mondays at midnight)
0 0 * * 1 cd /path/to/repo && bun run scripts/get-trending-models.ts > skills/openrouter-trending-models/trending-models.json

// Staleness check before use
const data = JSON.parse(readFile("trending-models.json"));
const fetchedDate = new Date(data.metadata.fetchedAt);
const daysSinceUpdate = (Date.now() - fetchedDate.getTime()) / (1000 * 60 * 60 * 24);
if (daysSinceUpdate > 7) {
console.warn("Data is over 7 days old, consider refreshing");
}

# Refresh the cache kept in skills/openrouter-trending-models/
bun run scripts/get-trending-models.ts > trending-models.json

# Check if cache is stale (> 7 days)
if [ ! -f trending-models.json ] || [ -n "$(find trending-models.json -mtime +7)" ]; then
echo "Cache is stale, refreshing..."
bun run scripts/get-trending-models.ts > trending-models.json
fi

Fallback strategy:

1. Try to fetch fresh data
- Run: bun run scripts/get-trending-models.ts
- If succeeds: Use fresh data
- If fails: Continue to step 2
2. Try cached data
- Check if trending-models.json exists
- Check if < 14 days old
- If valid: Use cached data
- If not: Continue to step 3
3. Fallback to hardcoded models
- Use known good models from agent prompt
- Warn user data may be outdated
- Suggest manual refresh

# Run before each use
bun run scripts/get-trending-models.ts > /tmp/models.json
# Read from /tmp/models.json

# Check cache age first
CACHE_FILE="skills/openrouter-trending-models/trending-models.json"
if [ ! -f "$CACHE_FILE" ] || [ -n "$(find "$CACHE_FILE" -mtime +7)" ]; then
bun run scripts/get-trending-models.ts > "$CACHE_FILE"
fi
# Read from cache

# Start refresh in background (don't wait)
bun run scripts/get-trending-models.ts > trending-models.json &
# Continue with workflow
# Use cached data if available
# Fresh data will be ready for next run
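The three-tier fallback strategy above can be sketched as a small shell function. The paths and the 14-day cache cutoff follow this document; `FALLBACK_MODELS` stands in for the hardcoded known-good models mentioned in the agent prompt and is illustrative:

```shell
# Tier 1: fresh fetch; Tier 2: cache under 14 days old; Tier 3: hardcoded fallback.
CACHE_FILE="trending-models.json"
FALLBACK_MODELS='{"models":[{"id":"x-ai/grok-code-fast-1","name":"Grok Code Fast"}]}'

get_models() {
  if bun run scripts/get-trending-models.ts > "$CACHE_FILE.tmp" 2>/dev/null; then
    mv "$CACHE_FILE.tmp" "$CACHE_FILE"         # tier 1: fresh data
    cat "$CACHE_FILE"
  elif [ -f "$CACHE_FILE" ] && [ -z "$(find "$CACHE_FILE" -mtime +14)" ]; then
    cat "$CACHE_FILE"                          # tier 2: cache is valid
  else
    echo "warning: using hardcoded fallback models; refresh manually" >&2
    echo "$FALLBACK_MODELS"                    # tier 3: known-good defaults
  fi
}

get_models
```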