project-builder

Project Build
Three phases, always in order: DESIGN → BUILD → DEBUG.
Skill references (read on demand, not upfront):
- references/build-patterns.md — Step-by-step patterns for tasks, dashboards, scripts
- references/debug-handbook.md — Layer-by-layer diagnosis, common issues
Platform references (shared, in config/context/references/):
- preview-guide.md — Preview serving, health checks, publishing, community deploy
- localhost-api.md — Scripts can call the agent via /chat/stream (decide when to think, what context to pass, which model) and push messages via /push
- sc-proxy.md — Transparent proxy, API pricing & rate limits
Skill references (in references/):
- build-patterns.md — Detailed build recipes per project type
- debug-handbook.md — Systematic diagnosis protocol
- dashboard-examples.md — Code templates for Chart.js, ApexCharts, D3.js, SSE, responsive layouts, dark mode, accessibility (read when building dashboards)
Phase 1: DESIGN
Translate vague requests into concrete specs. If intent is ambiguous, ask ONE question.
Architecture decision tree:
- Periodic alerts/reports? → Scheduled Task
- Live visual interface? → Preview Server (dashboard)
- One-time analysis? → Inline (no build needed)
- Reusable tool? → Script in workspace

For medium+ projects, present to user BEFORE writing code:
- Data flow — sources → processing → output
- Architecture choice and why
- Cost estimate — (cost/run) × frequency × 30 = monthly
- Known limitations
API cost & rate limits:
All external API calls go through sc-proxy, which bills per request and enforces rate limits.
Before designing, read config/context/references/sc-proxy.md for the pricing table and limits.
- Estimate cost: credits_per_request × requests_per_run × runs_per_day × 30
- Respect rate limits: e.g. CoinGecko 60 req/min — a task polling 10 coins every minute is fine; 100 coins is not
- Prefer batch endpoints over N single calls (e.g. coin_price with multiple ids vs N separate calls)
- Pure script tasks (no API): ~0 credits/run
- LLM-assisted tasks: 0.01-0.05 credits/run (use cheapest model that works)
- Dashboard auto-refresh costs credits — default to manual refresh unless user asks otherwise
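The estimate formula can be sanity-checked with a few assumed numbers (the rates below are illustrative placeholders, not real sc-proxy prices — always read the pricing table first):

```python
# Hedged sketch: monthly credit estimate for a scheduled task.
# All numbers are illustrative assumptions, not real sc-proxy prices.
credits_per_request = 1   # assumed per-request price from the pricing table
requests_per_run = 10     # e.g. 10 API calls per run (check batching first)
runs_per_day = 24         # hourly task

monthly_credits = credits_per_request * requests_per_run * runs_per_day * 30
print(monthly_credits)  # 7200
```

If the result looks high, batching (one call with multiple ids instead of N calls) is usually the first lever to pull.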
Data reliability: Native tools > proxied APIs > direct requests > web scraping > LLM numbers (never).
Iron rule: Scripts fetch data. LLMs analyze text. Final output = script variables + LLM prose.
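The iron rule can be sketched as follows; fetch_price() and llm_summarize() are hypothetical placeholders, not real platform functions:

```python
# Hedged sketch of "scripts fetch data, LLMs analyze text".
# fetch_price() and llm_summarize() are hypothetical placeholders.

def fetch_price() -> float:
    # A real task would call a proxied API here; fixed value for illustration.
    return 64250.0

def llm_summarize(context: str) -> str:
    # Placeholder for an LLM call; returns prose only, never displayed numbers.
    return "Momentum looks stable over the last 24 hours."

price = fetch_price()                               # number from script variable
analysis = llm_summarize(f"BTC price is {price}")   # prose from the LLM
report = f"BTC: ${price:,.0f}\n{analysis}"          # final output = both, assembled
print(report)
```

The user-visible number always comes from the script variable, so even a hallucinating model cannot corrupt it.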
Task scripts can import skill functions directly:

```python
from core.skill_tools import coingecko, coinglass  # auto-discovers skills/*/exports.py
prices = coingecko.coin_price(coin_ids=["bitcoin"], timestamps=["now"])
```

Tool names = SKILL.md frontmatter tools: list. See build-patterns.md § Using Skill Functions.
Phase 2: BUILD
Every piece follows this cycle:
Build one small piece → Run it → Verify output → ✅ Next piece / ❌ Fix first

| Built | Verify how | Pass |
|---|---|---|
| Data fetcher | Run, print raw response | Non-empty, recent, plausible |
| API endpoint | | Correct JSON |
| HTML page | | |
| Task script | | Numbers match source |
| LLM analysis | Numbers from script vars, not LLM text | Template pattern used |
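For the data-fetcher row, a minimal verification run might look like this; the response shape and thresholds are assumptions for illustration:

```python
import time

# Hedged sketch: verify a fetcher's raw output before building on it.
# 'response' stands in for whatever the real fetcher returns.
response = {"price": 64250.0, "timestamp": time.time() - 30}

assert response, "empty response"                               # non-empty
assert time.time() - response["timestamp"] < 300, "stale data"  # recent (< 5 min)
assert 1_000 < response["price"] < 1_000_000, "implausible"     # plausible magnitude
print("fetcher output OK")
```

Thirty seconds of printing and eyeballing raw responses here saves an hour of debugging downstream layers later.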
Verification layering:
- Critical (must pass before preview/activate): data correctness, core logic, no crashes
- Informational (can fix after delivery): styling, edge case messages, minor UX polish
Anti-patterns:
- ❌ "Done!" without running anything
- ❌ Writing 200+ lines then testing for the first time
- ❌ "It should work"
→ Detailed patterns: read references/build-patterns.md
Code Practices
- read_file before edit_file — understand what's there
- edit_file > write_file for modifications
- Check ls before write_file — avoid duplicating existing files
- Large files (>300 lines): split into multiple files, or skeleton-first + bash inject
- Env vars: os.environ["KEY"], persist installs to setup.sh
Platform Rules
- Agent tools are tool calls only — not importable in scripts
- Preview paths must be relative (./path, not /path)
- Fullstack = one port (backend serves API + static files)
- Cron times are UTC — convert from user timezone
- Preview serving & publishing → read platform reference config/context/references/preview-guide.md
- localhost APIs → read config/context/references/localhost-api.md
- Task scripts decide WHEN to invoke the agent, WHAT data/context to pass, WHICH model to use
- Pattern: script fetches data → evaluates if noteworthy → calls LLM only when needed → prints result
- LLM in scripts — two options (details in references/build-patterns.md):
  - OpenRouter (via sc-proxy): lightweight, for summarize/translate/format text. Direct API call, no agent overhead.
  - localhost /chat/stream: full agent with tools. Use only when LLM needs tool access.
- Data template rule: Script owns the numbers, LLM owns the words. Final output assembles data from script variables + analysis from LLM. Never let LLM output be the sole source of numbers the user sees.
- API costs & rate limits → read platform reference config/context/references/sc-proxy.md
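The fetch → evaluate → LLM-only-when-needed pattern can be sketched as below; fetch_change() and call_llm() are hypothetical placeholders, and the 5% threshold is an assumed example:

```python
# Hedged sketch of the gating pattern: fetch -> noteworthy? -> LLM -> print.
# fetch_change() and call_llm() are hypothetical placeholders.

def fetch_change() -> float:
    return 6.2  # pretend this is a 24h % change fetched by the script

def call_llm(prompt: str) -> str:
    return "Sharp move, likely driven by headline flow."  # placeholder prose

change = fetch_change()
if abs(change) >= 5.0:                                 # script decides noteworthiness
    analysis = call_llm(f"Explain a {change}% move")   # LLM invoked only now
    print(f"Change: {change}%\n{analysis}")            # numbers from script, words from LLM
else:
    print(f"Change: {change}% (no alert)")             # zero LLM credits spent
```

On quiet runs the script exits without any LLM call, which is what keeps per-run credit cost near zero.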
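The cron-times-are-UTC rule can be sketched with a timezone conversion; the 9:00 Asia/Shanghai schedule here is an assumed example:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hedged sketch: convert a user-local daily time to the UTC hour a cron entry needs.
# Asia/Shanghai has no DST; for DST zones the offset shifts twice a year,
# so a fixed cron hour is only an approximation there.
local = datetime(2025, 6, 1, 9, 0, tzinfo=ZoneInfo("Asia/Shanghai"))
utc_hour = local.astimezone(ZoneInfo("UTC")).hour

print(f"0 {utc_hour} * * *")  # 09:00 UTC+8 -> "0 1 * * *"
```

Doing the conversion in code rather than mentally avoids the classic off-by-eight bug in scheduled tasks.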
Phase 3: DEBUG
CHECK LOGS → REPRODUCE → ISOLATE → DIAGNOSE → FIX → VERIFY → REGRESS

- CHECK LOGS first — task logs, preview diagnostics, stderr. If logs reveal a clear cause, skip to FIX.
- REPRODUCE only when logs are insufficient — see the failure yourself
- ISOLATE which layer is broken (data? logic? LLM? output? frontend? backend?)
- FIX the root cause, then VERIFY with the same repro steps. Don't just fix — fix and confirm.
Three-Strike Rule: Same approach fails twice → STOP → rethink → explain to user → different approach.
→ Full debug procedures: read references/debug-handbook.md

Quick Checklists
Kickoff: ☐ Clarified intent ☐ Proposed architecture ☐ Estimated cost ☐ User confirmed
Build: ☐ Each component tested ☐ Numbers match source ☐ Errors handled ☐ Preview healthy (web)
Debug: ☐ Logs checked ☐ Reproduced (or skipped — logs sufficient) ☐ Isolated layer ☐ Root cause found ☐ Fix verified ☐ Regressions checked