ship-gate
Ship Gate
Pre-production audit that scans a codebase and reports pass/fail/manual
across 8 categories before anything ships.
Intercept Behavior
When the user says "push to production", "deploy", "ship it", "go live",
or similar deploy-intent phrases, do NOT proceed with deployment. Instead:
- Ask: "Have you run the ship gate? Want me to scan now?"
- If yes, run the full audit below.
- If the user says they already ran it, ask when. If more than 24 hours ago or if code changed since, recommend re-running.
How It Works
Step 1: Detect Stack
Run these checks in order to identify the project stack:
Framework detection:
package.json exists -> Node.js project
"next" in dependencies -> Next.js
"react" in dependencies -> React (if not Next.js)
"vue" in dependencies -> Vue
"svelte" in dependencies -> Svelte
"astro" in dependencies -> Astro
"express" in dependencies -> Express
"fastify" in dependencies -> Fastify
"hono" in dependencies -> Hono
requirements.txt or pyproject.toml -> Python project
"django" present -> Django
"flask" present -> Flask
"fastapi" present -> FastAPI
go.mod exists -> Go project
Cargo.toml exists -> Rust project
Database detection:
"@supabase/supabase-js" in package.json -> Supabase
supabase/ directory exists -> Supabase
"prisma" in dependencies -> Prisma (check schema for DB type)
"mongoose" in dependencies -> MongoDB
"pg" or "postgres" in dependencies -> PostgreSQL
firebase.json or .firebaserc exists -> Firebase
Deploy target detection:
vercel.json or .vercel/ exists -> Vercel
netlify.toml exists -> Netlify
Dockerfile exists -> Docker/VPS
fly.toml exists -> Fly.io
railway.json exists -> Railway
.platform/applications.yaml -> Platform.sh
Auth detection:
"@clerk" in dependencies -> Clerk
"next-auth" in dependencies -> NextAuth
"@supabase/auth-helpers" in deps -> Supabase Auth
"firebase/auth" in imports -> Firebase Auth
AI/LLM detection:
"openai" in dependencies -> OpenAI
"@anthropic-ai/sdk" in dependencies -> Claude API
"@google/generative-ai" in deps -> Gemini
Report the detected stack before proceeding. This determines which checks
are relevant. Checks tagged with a specific stack in references/checks.md
are skipped if that stack is not detected.
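A minimal sketch of the detection step, covering only a handful of the rules above. The function name and the rule subset are illustrative assumptions.

```python
import json
from pathlib import Path

def detect_stack(root: str) -> list[str]:
    """Detect a few stack markers in the project root (illustrative subset)."""
    root_path = Path(root)
    stack = []
    pkg = root_path / "package.json"
    if pkg.exists():
        stack.append("Node.js")
        data = json.loads(pkg.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        for name, label in [("next", "Next.js"), ("react", "React"),
                            ("vue", "Vue"), ("express", "Express")]:
            if name in deps:
                stack.append(label)
        # React counts only if the project is not Next.js.
        if "Next.js" in stack and "React" in stack:
            stack.remove("React")
    if (root_path / "go.mod").exists():
        stack.append("Go")
    if (root_path / "Cargo.toml").exists():
        stack.append("Rust")
    return stack
```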
Run categories in this order: SEC, DB, CODE, DEP, AI, DEPLOY, FE, OBS.
Security and database first because they produce the most critical findings.
For each category, run every auto-scannable check from references/checks.md
using the patterns in references/patterns.md.
Report progress after each category completes:
[1/8] Security: 3 FAIL, 12 PASS, 3 SKIP
[2/8] Database: 1 FAIL, 5 PASS, 6 SKIP
...
Report results as:
- PASS: check passed
- FAIL: issue found (with file path and line number)
- SKIP: not applicable to this stack
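The per-category progress line can be produced from a simple result record. The `CheckResult` shape here is an assumption for illustration, not a defined interface of the skill.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CheckResult:
    check_id: str     # e.g. "SEC-01"
    status: str       # "PASS", "FAIL", or "SKIP"
    detail: str = ""  # file path and line number for FAIL findings

def progress_line(index: int, total: int, category: str,
                  results: list[CheckResult]) -> str:
    """Format the '[n/8] Category: x FAIL, y PASS, z SKIP' progress line."""
    counts = Counter(r.status for r in results)
    return (f"[{index}/{total}] {category}: {counts['FAIL']} FAIL, "
            f"{counts['PASS']} PASS, {counts['SKIP']} SKIP")
```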
Step 3: Manual Confirmation
For checks that cannot be automated (backup restore tested, rollback plan
exists, staging test passed), present them as a checklist and ask the user
to confirm each one.
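The checklist step can be sketched as below. Except for DEPLOY-06, which appears in the sample report, the check IDs here are hypothetical placeholders.

```python
# Manual checks presented to the user; IDs other than DEPLOY-06 are invented.
MANUAL_CHECKS = {
    "DEPLOY-06": "Staging test passed",
    "DB-M1": "Backup restore tested",    # hypothetical ID
    "DEPLOY-M1": "Rollback plan exists", # hypothetical ID
}

def confirm_manual(answers: dict[str, bool]) -> list[str]:
    """Turn user confirmations into report lines; unanswered stays MANUAL."""
    return [f"{'PASS' if answers.get(cid) else 'MANUAL'} [{cid}] {desc}"
            for cid, desc in MANUAL_CHECKS.items()]
```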
Step 4: Verdict
Classify results into three severities:
- CRITICAL: must fix before shipping (secrets exposed, no auth on routes, no HTTPS, SQL injection vectors, no RLS on Supabase tables)
- HIGH: should fix before shipping (no error boundaries, no rate limiting, console.logs in production, no pagination)
- ADVISORY: recommended but not blocking (no OG tags, no custom 404, no analytics, no SBOM)
Final output:
SHIP GATE REPORT
================
Stack: Next.js + Supabase + Vercel
Scan time: 12s
CRITICAL (3 items, must fix)
FAIL [SEC-01] API key found in src/lib/api.ts:14
FAIL [DB-07] RLS not enabled on "profiles" table
FAIL [SEC-05] No CSRF protection on /api/checkout
HIGH (5 items, should fix)
FAIL [CODE-01] 12 console.log statements in production code
FAIL [CODE-03] Empty catch block in src/utils/auth.ts:45
FAIL [DEP-04] 3 critical npm audit vulnerabilities
FAIL [DEPLOY-05] No rollback plan documented
MANUAL [DEPLOY-06] Staging test not confirmed
ADVISORY (4 items, recommended)
FAIL [FE-01] Missing OG meta tags
FAIL [FE-03] No custom 404 page
PASS [OBS-01] Error monitoring configured
SKIP [AI-01] No AI/LLM usage detected
VERDICT: DO NOT SHIP (3 critical issues)
Fix critical items and re-run.
If zero critical items remain, the verdict is: CLEAR TO SHIP.
If only high items remain, the verdict is: SHIP WITH CAUTION (acknowledge risks).
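The verdict rule reduces to a small function over the counts of remaining critical and high items. A sketch, not the skill's defined API:

```python
def verdict(critical: int, high: int) -> str:
    """Map remaining issue counts to the ship-gate verdict string."""
    if critical > 0:
        return f"DO NOT SHIP ({critical} critical issues)"
    if high > 0:
        return "SHIP WITH CAUTION (acknowledge risks)"
    return "CLEAR TO SHIP"
```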
Categories
Eight categories, each with a code prefix. Full check details in
references/checks.md.
| Prefix | Category | Auto | Manual | Tool |
|---|---|---|---|---|
| SEC | Security | 15 | 3 | 0 |
| DB | Database | 7 | 5 | 0 |
| DEPLOY | Deployment | 3 | 8 | 0 |
| CODE | Code Quality | 11 | 0 | 1 |
| AI | AI/LLM Security | 5 | 3 | 0 |
| DEP | Dependencies | 5 | 0 | 1 |
| FE | Frontend Quality | 7 | 3 | 0 |
| OBS | Observability | 2 | 5 | 0 |
Scope
This skill audits. It does not fix. When it finds issues, it reports
them with file locations and remediation guidance. The user or another
skill (systematic-debugging, backend-patterns, shadcn-stack) handles
the fix.
This skill does not:
- Set up CI/CD pipelines
- Provision infrastructure
- Configure monitoring tools
- Run after deployment (it is pre-deploy only)
Integration Points
- karpathy-coder: run ship-gate after karpathy-check passes (simplicity first, then production readiness)
- adversarial-reviewer: deep security review for items ship-gate flags as critical
- security-pen-testing: penetration testing methodology for SEC-category findings
- code-reviewer: general code quality review complements ship-gate's automated checks