project-builder


Project Build


Three phases, always in order: DESIGN → BUILD → DEBUG.
Skill references (in `references/`, read on demand, not upfront):
  • build-patterns.md — step-by-step build recipes for tasks, dashboards, and scripts
  • debug-handbook.md — layer-by-layer diagnosis protocol, common issues
  • dashboard-examples.md — code templates for Chart.js, ApexCharts, D3.js, SSE, responsive layouts, dark mode, accessibility (read when building dashboards)
Platform references (shared, in `config/context/references/`):
  • preview-guide.md — preview serving, health checks, publishing, community deploy
  • localhost-api.md — scripts can call the agent via `/chat/stream` (deciding when to think, what context to pass, which model) and push messages via `/push`
  • sc-proxy.md — transparent proxy, API pricing & rate limits


Phase 1: DESIGN


Translate vague requests into concrete specs. If intent is ambiguous, ask ONE question.
Architecture decision tree:
Periodic alerts/reports?  → Scheduled Task
Live visual interface?    → Preview Server (dashboard)
One-time analysis?        → Inline (no build needed)
Reusable tool?            → Script in workspace
For medium+ projects, present to user BEFORE writing code:
  1. Data flow — sources → processing → output
  2. Architecture choice and why
  3. Cost estimate — (cost/run) × frequency × 30 = monthly
  4. Known limitations
API cost & rate limits: All external API calls go through sc-proxy, which bills per request and enforces rate limits. Before designing, read `config/context/references/sc-proxy.md` for the pricing table and limits.
  • Estimate cost: credits_per_request × requests_per_run × runs_per_day × 30
  • Respect rate limits: e.g. CoinGecko allows 60 req/min — a task polling 10 coins every minute is fine; 100 coins is not
  • Prefer batch endpoints over N single calls (e.g. `coin_price` with multiple ids instead of N separate calls)
  • Pure script tasks (no API): ~0 credits/run
  • LLM-assisted tasks: 0.01-0.05 credits/run (use the cheapest model that works)
  • Dashboard auto-refresh costs credits — default to manual refresh unless the user asks otherwise
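The cost formula above can be worked through numerically; the figures here are hypothetical placeholders, not real sc-proxy prices:

```python
# Hypothetical figures -- read sc-proxy.md for the real pricing table.
credits_per_request = 0.002   # assumed cost of one proxied API call
requests_per_run = 3          # e.g. one batched price call + two chart calls
runs_per_day = 24             # hourly scheduled task

monthly_credits = credits_per_request * requests_per_run * runs_per_day * 30
print(f"~{monthly_credits:.1f} credits/month")  # → ~4.3 credits/month
```

Present this number to the user in the design proposal before writing any code.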
Data reliability: Native tools > proxied APIs > direct requests > web scraping > LLM numbers (never). Iron rule: Scripts fetch data. LLMs analyze text. Final output = script variables + LLM prose.
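The iron rule can be illustrated with a minimal sketch; `btc_price`, `change_24h`, and `llm_summary` are hypothetical names, and the prose stands in for text returned by an LLM call:

```python
# Script owns the numbers: fetched or computed by code, never by the LLM.
btc_price = 97234.50   # assume fetched by a script (e.g. via a coingecko call)
change_24h = -2.3

# LLM owns the words: prose only, no figures of its own.
llm_summary = "Momentum cooled overnight as traders took profits."  # hypothetical LLM output

# Final output = script variables + LLM prose.
report = f"BTC ${btc_price:,.2f} ({change_24h:+.1f}% 24h). {llm_summary}"
print(report)
```

Every number the user sees comes from a script variable; the LLM text is only ever interpolated around them.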
Task scripts can import skill functions directly:

```python
from core.skill_tools import coingecko, coinglass  # auto-discovers skills/*/exports.py
prices = coingecko.coin_price(coin_ids=["bitcoin"], timestamps=["now"])
```

Tool names = SKILL.md frontmatter `tools:` list. See build-patterns.md § Using Skill Functions.


Phase 2: BUILD


Every piece follows this cycle:
Build one small piece → Run it → Verify output → ✅ Next piece / ❌ Fix first
| Built | Verify how | Pass |
| --- | --- | --- |
| Data fetcher | Run, print raw response | Non-empty, recent, plausible |
| API endpoint | `curl localhost:{port}/api/...` | Correct JSON |
| HTML page | `preview_serve` + `preview_check` | `ok = true` |
| Task script | `python3 tasks/{id}/run.py` | Numbers match source |
| LLM analysis | Numbers from script vars, not LLM text | Template pattern used |
Verification layering:
  • Critical (must pass before preview/activate): data correctness, core logic, no crashes
  • Informational (can fix after delivery): styling, edge case messages, minor UX polish
Anti-patterns:
  • ❌ "Done!" without running anything
  • ❌ Writing 200+ lines then testing for the first time
  • ❌ "It should work"
→ Detailed patterns: read `references/build-patterns.md`
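The build → run → verify cycle for a data fetcher can be sketched as a tiny check on its raw output; the field names (`ts`, `price`) and thresholds are assumptions for illustration:

```python
import time

def verify_fetch(payload: dict, max_age_s: int = 300) -> None:
    """Verify a fetcher's raw output: non-empty, recent, plausible."""
    assert payload, "empty response"
    assert time.time() - payload["ts"] < max_age_s, "stale data"
    assert 0 < payload["price"] < 10_000_000, "implausible price"

# In practice the dict would be the real fetcher's response; mocked here.
verify_fetch({"ts": time.time(), "price": 97234.5})
print("fetcher verified")
```

Run a check like this immediately after building each piece — before moving on to the next one.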

Code Practices


  • `read_file` before `edit_file` — understand what's there
  • Prefer `edit_file` over `write_file` for modifications
  • Check `ls` before `write_file` — avoid duplicating existing files
  • Large files (>300 lines): split into multiple files, or skeleton-first + bash inject
  • Env vars: `os.environ["KEY"]`; persist installs to `setup.sh`

Platform Rules


  • Agent tools are tool calls only — not importable in scripts
  • Preview paths must be relative (`./path`, not `/path`)
  • Fullstack = one port (backend serves API + static files)
  • Cron times are UTC — convert from the user's timezone
  • Preview serving & publishing → read platform reference `config/context/references/preview-guide.md`
  • localhost APIs → read `config/context/references/localhost-api.md`
    • Task scripts decide WHEN to invoke the agent, WHAT data/context to pass, WHICH model to use
    • Pattern: script fetches data → evaluates if noteworthy → calls LLM only when needed → prints result
  • LLM in scripts — two options (details in `references/build-patterns.md`):
    • OpenRouter (via sc-proxy): lightweight, for summarize/translate/format text. Direct API call, no agent overhead.
    • localhost `/chat/stream`: full agent with tools. Use only when the LLM needs tool access.
  • Data template rule: the script owns the numbers, the LLM owns the words. Final output assembles data from script variables plus analysis from the LLM. Never let LLM output be the sole source of numbers the user sees.
  • API costs & rate limits → read platform reference `config/context/references/sc-proxy.md`
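Since cron times are UTC, the timezone conversion can be sketched with the standard library; the user timezone and schedule below are hypothetical:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# User wants a daily report at 09:00 New York time (winter date, so EST applies).
local = datetime(2024, 1, 15, 9, 0, tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(ZoneInfo("UTC"))
print(f"cron: {utc.minute} {utc.hour} * * *")  # → cron: 0 14 * * *
```

Note that DST shifts the offset (the same 09:00 New York becomes 13:00 UTC in summer), so a fixed cron line drifts by an hour across the year — worth flagging to the user during DESIGN.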


Phase 3: DEBUG


CHECK LOGS → REPRODUCE → ISOLATE → DIAGNOSE → FIX → VERIFY → REGRESS
  • CHECK LOGS first — task logs, preview diagnostics, stderr. If logs reveal a clear cause, skip to FIX.
  • REPRODUCE only when logs are insufficient — see the failure yourself
  • ISOLATE which layer is broken (data? logic? LLM? output? frontend? backend?)
  • FIX the root cause, then VERIFY with the same repro steps. Don't just fix — fix and confirm.
Three-Strike Rule: Same approach fails twice → STOP → rethink → explain to user → different approach.
→ Full debug procedures: read `references/debug-handbook.md`


Quick Checklists


Kickoff: ☐ Clarified intent ☐ Proposed architecture ☐ Estimated cost ☐ User confirmed
Build: ☐ Each component tested ☐ Numbers match source ☐ Errors handled ☐ Preview healthy (web)
Debug: ☐ Logs checked ☐ Reproduced (or skipped — logs sufficient) ☐ Isolated layer ☐ Root cause found ☐ Fix verified ☐ Regressions checked