aibijia-price-comparison
Aibijia Price Comparison Platform
Skill by ara.so — Daily 2026 Skills collection.
Aibijia is a multi-platform price scraping and comparison website for AI tokens (ChatGPT Plus CDKs, API keys, etc.). It aggregates prices from various resellers/agents across platforms, helping users find the cheapest reliable source and avoid scams.
What This Project Does
- Scrapes token/CDK prices from multiple card-selling platforms (卡网)
- Compares prices across vendors for the same type of AI account (e.g., ChatGPT Plus, GPT Pro)
- Aggregates vendor reliability info via community submissions
- Exposes price differences between resellers sourcing from the same upstream
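The reseller price spread can be computed directly from scraped listings. A minimal sketch, assuming each listing is a dict carrying hypothetical `token_type` and `price_cny` fields:

```python
from collections import defaultdict


def markup_spread(listings: list[dict]) -> dict[str, float]:
    """Gap between the cheapest and priciest vendor for each token type."""
    by_type: dict[str, list[float]] = defaultdict(list)
    for listing in listings:
        by_type[listing["token_type"]].append(listing["price_cny"])
    # A large spread for one token type means resellers sourcing the same
    # upstream are applying very different markups.
    return {t: max(p) - min(p) for t, p in by_type.items()}
```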
Project Structure
Since the repo is primarily a community/data project with a web frontend, the core components are:
```
AIbijia/
├── assets/     # Static assets (banner, images)
├── data/       # Price data / scraped results (JSON/CSV)
├── scrapers/   # Platform price scrapers
├── frontend/   # Website UI (aibijia.org)
└── SKILL.md
```

Installation & Setup
Clone the Repository
```bash
git clone https://github.com/ka-pi-ba-la/AIbijia.git
cd AIbijia
```

Install Dependencies
If the scrapers are Python-based:
```bash
pip install -r requirements.txt
```

If Node.js-based:

```bash
npm install
```

or

```bash
pnpm install
```

---

Core Concepts
Token Types Tracked
| Token Type | Example Price Range | Notes |
|---|---|---|
| ChatGPT Plus CDK | ¥30–¥60 | Same upstream, different markup |
| GPT Pro (shared) | ~¥20/person | Split among multiple users |
| API Keys (various models) | Varies | Per-token pricing |
| Claude / Gemini | Varies | Scraped from resellers |
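The per-person figure for shared accounts is just the account price divided across seats. A trivial helper; the ¥200 / 10-seat numbers in the usage line are illustrative, not from the site:

```python
def per_person_price(total_cny: float, seats: int) -> float:
    """Per-seat cost of a shared account, rounded to fen (0.01 CNY)."""
    return round(total_cny / seats, 2)
```

For example, a hypothetical ¥200/month Pro account split 10 ways works out to `per_person_price(200.0, 10)`, i.e. ¥20 per person, matching the table's rough figure.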
Price Scraping Pattern
```python
import requests
from bs4 import BeautifulSoup
from datetime import datetime


class TokenPriceScraper:
    """
    Base scraper for AI token price platforms.
    Each platform subclasses this with custom parsing.
    """

    def __init__(self, platform_name: str, base_url: str):
        self.platform_name = platform_name
        self.base_url = base_url
        self.session = requests.Session()
        self.session.headers.update({
            "User-Agent": "Mozilla/5.0 (compatible; Aibijia/1.0)"
        })

    def fetch_page(self, url: str) -> BeautifulSoup:
        resp = self.session.get(url, timeout=10)
        resp.raise_for_status()
        return BeautifulSoup(resp.text, "html.parser")

    def parse_prices(self, soup: BeautifulSoup) -> list[dict]:
        """Override in subclass to extract price data."""
        raise NotImplementedError

    def scrape(self) -> list[dict]:
        soup = self.fetch_page(self.base_url)
        prices = self.parse_prices(soup)
        # Annotate with metadata
        for item in prices:
            item["platform"] = self.platform_name
            item["scraped_at"] = datetime.utcnow().isoformat()
        return prices


class KawangScraper(TokenPriceScraper):
    """Example scraper for a 卡网 (card-selling platform)."""

    def parse_prices(self, soup: BeautifulSoup) -> list[dict]:
        results = []
        # Adapt selectors to the target platform's HTML structure
        for card in soup.select(".product-card"):
            name = card.select_one(".product-name")
            price = card.select_one(".product-price")
            stock = card.select_one(".product-stock")
            if name and price:
                results.append({
                    "name": name.get_text(strip=True),
                    "price_cny": float(
                        price.get_text(strip=True)
                        .replace("¥", "")
                        .replace(",", "")
                    ),
                    "in_stock": bool(stock and "有货" in stock.get_text()),
                })
        return results
```
Aggregating Prices Across Platforms
```python
import asyncio
from dataclasses import dataclass


@dataclass
class PriceListing:
    token_type: str
    platform: str
    price_cny: float
    in_stock: bool
    url: str
    scraped_at: str


async def aggregate_all_platforms(platforms: list[TokenPriceScraper]) -> list[PriceListing]:
    """Run all scrapers concurrently and merge results."""
    results = []

    async def run_scraper(scraper):
        loop = asyncio.get_event_loop()
        # Run the sync scraper in the default thread pool
        return await loop.run_in_executor(None, scraper.scrape)

    tasks = [run_scraper(p) for p in platforms]
    all_data = await asyncio.gather(*tasks, return_exceptions=True)
    for platform_data in all_data:
        if isinstance(platform_data, Exception):
            print(f"Scraper error: {platform_data}")
            continue
        results.extend(platform_data)
    return results


def find_cheapest(listings: list[PriceListing], token_type: str) -> list[PriceListing]:
    """Filter in-stock listings for a token type and sort by price."""
    filtered = [
        l for l in listings
        if token_type.lower() in l.token_type.lower() and l.in_stock
    ]
    return sorted(filtered, key=lambda x: x.price_cny)
```
Usage
```python
async def main():
    platforms = [
        KawangScraper("platform_a", "https://example-card-site-a.com/chatgpt"),
        KawangScraper("platform_b", "https://example-card-site-b.com/chatgpt"),
    ]
    all_listings = await aggregate_all_platforms(platforms)
    cheapest = find_cheapest(all_listings, "ChatGPT Plus")
    print("Cheapest ChatGPT Plus CDKs:")
    for listing in cheapest[:5]:
        print(f"  ¥{listing.price_cny} — {listing.platform}")

asyncio.run(main())
```
---

Vendor Submission API
The site exposes a submission endpoint for community-sourced vendors:
```python
import requests
import os

AIBIJIA_API = "https://aibijia.org/api"  # hypothetical endpoint


def submit_vendor(vendor_info: dict) -> dict:
    """
    Submit a new vendor/price source for review.

    vendor_info keys:
      - name: str        Vendor/platform name
      - url: str         Purchase URL
      - token_type: str  e.g. "ChatGPT Plus CDK"
      - price_cny: float Current price in RMB
      - notes: str       Optional reliability notes
    """
    resp = requests.post(
        f"{AIBIJIA_API}/submit",
        json=vendor_info,
        headers={
            "Content-Type": "application/json",
            # Use an env var if auth is required:
            "Authorization": f"Bearer {os.environ.get('AIBIJIA_API_KEY', '')}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```
Example usage
```python
result = submit_vendor({
    "name": "某卡网",
    "url": "https://example-card-site.com/gpt-plus",
    "token_type": "ChatGPT Plus CDK",
    "price_cny": 32.0,
    "notes": "24h售后,支持补货",
})
print(result)
```
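It may help to sanity-check the payload locally before POSTing. A hedged sketch, not part of the site's actual API; `REQUIRED_KEYS` mirrors the docstring's key list:

```python
REQUIRED_KEYS = {"name", "url", "token_type", "price_cny"}


def validate_vendor_info(vendor_info: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks well-formed."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - vendor_info.keys())]
    if "url" in vendor_info and not str(vendor_info["url"]).startswith("https://"):
        problems.append("url should use https")
    price = vendor_info.get("price_cny")
    if price is not None and not isinstance(price, (int, float)):
        problems.append("price_cny should be numeric")
    return problems
```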
---

Data Storage Pattern
```python
import json
from pathlib import Path
from datetime import datetime

DATA_DIR = Path("./data")


def save_price_snapshot(listings: list[dict], token_type: str):
    """Save a timestamped price snapshot to data/."""
    DATA_DIR.mkdir(exist_ok=True)
    date_str = datetime.utcnow().strftime("%Y-%m-%d")
    filename = DATA_DIR / f"{token_type.replace(' ', '_')}_{date_str}.json"
    snapshot = {
        "token_type": token_type,
        "captured_at": datetime.utcnow().isoformat(),
        "count": len(listings),
        "listings": listings,
    }
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(snapshot, f, ensure_ascii=False, indent=2)
    print(f"Saved {len(listings)} listings to {filename}")


def load_latest_snapshot(token_type: str) -> dict | None:
    """Load the most recent snapshot for a token type."""
    pattern = f"{token_type.replace(' ', '_')}_*.json"
    files = sorted(DATA_DIR.glob(pattern), reverse=True)
    if not files:
        return None
    with open(files[0], encoding="utf-8") as f:
        return json.load(f)
```
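Once snapshots accumulate, any two can be diffed to track price movement over time. A small sketch assuming the snapshot layout produced by `save_price_snapshot`:

```python
def price_change(old: dict, new: dict) -> dict[str, float]:
    """Per-platform price delta between two snapshots (positive = price went up)."""
    old_prices = {l["platform"]: l["price_cny"] for l in old["listings"]}
    return {
        l["platform"]: round(l["price_cny"] - old_prices[l["platform"]], 2)
        for l in new["listings"]
        if l["platform"] in old_prices
    }
```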
Community Reporting (Avoid Scams)
Post scam reports as GitHub Issues or submit to the repo:
```markdown
避雷报告模板 (Scam Report Template)

Platform (平台名称): xxx卡网
Purchase date (购买时间): 2026-04-28
Item (商品): ChatGPT Plus CDK
Price (价格): ¥35
Problem (问题): CDK already invalid; after-sales support unreachable
Evidence (证据): [screenshot]
Recommendation (建议): avoid this vendor
```
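Reports in this colon-separated format are easy to machine-read. A small helper; the field names come from the template above, not from an official schema:

```python
def parse_report(text: str) -> dict[str, str]:
    """Split a scam report into key/value fields (handles ASCII and full-width colons)."""
    fields = {}
    for line in text.replace("：", ":").strip().splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    return fields
```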
---

Configuration
```python
# config.py — Aibijia scraper configuration
import os

CONFIG = {
    # Scraping behavior
    "request_timeout": int(os.environ.get("SCRAPE_TIMEOUT", "10")),
    "rate_limit_seconds": float(os.environ.get("SCRAPE_RATE_LIMIT", "2.0")),
    "max_retries": int(os.environ.get("SCRAPE_MAX_RETRIES", "3")),
    # Proxy (optional, for bot-detection avoidance)
    "proxy": os.environ.get("HTTP_PROXY", None),
    # Data output
    "data_dir": os.environ.get("DATA_DIR", "./data"),
    # Notifications (Telegram)
    "telegram_bot_token": os.environ.get("TELEGRAM_BOT_TOKEN"),
    "telegram_channel_id": os.environ.get("TELEGRAM_CHANNEL_ID"),
    # Price alert threshold (alert if price drops below X CNY)
    "alert_price_threshold": float(os.environ.get("ALERT_PRICE_CNY", "30.0")),
}
```
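The `rate_limit_seconds` setting implies a limiter sitting between requests. A minimal stdlib sketch of how such a limiter might look, not code from the repo:

```python
import time


class RateLimiter:
    """Enforce a minimum interval between consecutive requests."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        # Sleep only for whatever remains of the interval since the last call
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

A scraper loop would call `limiter.wait()` before each `session.get(...)`, with `min_interval=CONFIG["rate_limit_seconds"]`.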
Environment Variables
```bash
# .env (never commit this file)
SCRAPE_TIMEOUT=15
SCRAPE_RATE_LIMIT=3.0
HTTP_PROXY=http://proxy.example.com:8080
DATA_DIR=./data
TELEGRAM_BOT_TOKEN=your_bot_token_here
TELEGRAM_CHANNEL_ID=@ai_bi_jia_notice
ALERT_PRICE_CNY=28.0
```
---

Telegram Price Alert Bot
```python
import os
import asyncio

from telegram import Bot


async def send_price_alert(listings: list[dict], threshold: float):
    """Send a Telegram alert when ChatGPT Plus CDKs drop below the threshold price."""
    bot = Bot(token=os.environ["TELEGRAM_BOT_TOKEN"])
    channel = os.environ["TELEGRAM_CHANNEL_ID"]
    cheap = [l for l in listings if l["price_cny"] <= threshold and l["in_stock"]]
    if not cheap:
        return
    lines = [f"🔥 低价预警!ChatGPT Plus CDK ≤ ¥{threshold}\n"]
    for l in cheap[:5]:
        lines.append(f"• ¥{l['price_cny']} — {l['platform']}")
    await bot.send_message(
        chat_id=channel,
        text="\n".join(lines),
        disable_web_page_preview=True,
    )

# all_listings comes from the aggregation step above
asyncio.run(send_price_alert(all_listings, threshold=30.0))
```
Common Patterns
Daily Cron Job (GitHub Actions)
```yaml
# .github/workflows/scrape.yml
name: Daily Price Scrape

on:
  schedule:
    - cron: "0 2 * * *"  # 2 AM UTC daily
  workflow_dispatch:

jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python scrapers/run_all.py
        env:
          TELEGRAM_BOT_TOKEN: ${{ secrets.TELEGRAM_BOT_TOKEN }}
          TELEGRAM_CHANNEL_ID: ${{ secrets.TELEGRAM_CHANNEL_ID }}
      - uses: actions/upload-artifact@v4
        with:
          name: price-data
          path: data/
```
---

Troubleshooting
| Problem | Cause | Fix |
|---|---|---|
| Scraper returns empty results | Target site changed its HTML structure | Update the CSS selectors in `scrapers/` |
| 403 / blocked requests | Bot detection on target platform | Route requests through a proxy via `HTTP_PROXY` |
| Prices stale | Cron not running | Check GitHub Actions logs; trigger the workflow manually via `workflow_dispatch` |
| Telegram alerts not sending | Wrong token/channel | Verify `TELEGRAM_BOT_TOKEN` and `TELEGRAM_CHANNEL_ID` |
| CDK already used / invalid | Upstream fraud | Report in repo issues with evidence; avoid that vendor |
Anti-Bot Countermeasures
```python
import time
import random


def polite_get(session, url: str, min_delay=1.5, max_delay=4.0) -> str:
    """Add a random delay between requests to avoid rate limiting."""
    time.sleep(random.uniform(min_delay, max_delay))
    resp = session.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text
```
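The `max_retries` setting pairs naturally with a retry wrapper around calls like `polite_get`. A hedged, exception-agnostic sketch; `with_retries` is illustrative, not a repo function:

```python
import time


def with_retries(fn, max_retries: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

Usage might look like `html = with_retries(lambda: polite_get(session, url), max_retries=3)`.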
Contributing Price Sources
- Fork the repo
- Add your vendor/source to `data/sources.json`
- Open a PR with evidence of reliability (screenshots, purchase history)
- Community reviews and merges
```json
// data/sources.json entry format
{
  "id": "vendor_slug",
  "name": "平台名称",
  "url": "https://example-card-site.com",
  "token_types": ["ChatGPT Plus CDK", "Claude API"],
  "verified": false,
  "submitted_by": "github_username",
  "notes": "24h售后,微信群支持"
}
```
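A loader for entries in this format might look like the sketch below. It assumes `data/sources.json` holds a JSON array of such entries; the repo's actual layout may differ:

```python
import json


def unverified_sources(path: str = "data/sources.json") -> list[str]:
    """IDs of community-submitted sources still awaiting verification."""
    with open(path, encoding="utf-8") as f:
        sources = json.load(f)
    # "verified" defaults to False for entries that omit the field
    return [s["id"] for s in sources if not s.get("verified")]
```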