mcp-supabase
Supabase MCP Skill
Interact with Supabase database via MCP tools, execute queries, writes, migrations, and diagnostics.
Scope
Applies to:
- Need to perform "database actions" on Supabase: query/statistics/export, write (after confirmation), migration (DDL), type generation, query logs/advisors
Does not apply to:
- Need to complete "integration implementation" in Next.js project (env/client code/minimal data access layer/project structure)
→ Use `workflow-ship-faster` (Step 6: Supabase integration) for project-side setup; this skill only handles DB-side actions and gates
Called by:
- `workflow-ship-faster` uses this skill as its DB operation foundation; it handles project-side integration while this skill handles DB-side actions and security gates
Postgres Best Practices (Bundled)
Ship Faster vendors Supabase's Postgres best practices inside the `supabase` skill (install the `supabase` skill alongside this one if you want these references available locally):
- Full guide: `supabase/references/postgres-best-practices/AGENTS.md`
- Individual rules: `supabase/references/postgres-best-practices/rules/*.md`
Consult it when:
- Writing/reviewing/optimizing SQL queries
- Designing indexes, schema changes, or RLS policies
- Diagnosing performance, locking, or connection issues
When proposing changes, cite the relevant rule file path (for example: `supabase/references/postgres-best-practices/rules/query-missing-indexes.md`) and keep changes minimal.
File-based Pipeline (Pass Paths Only)
When integrating database operations into multi-step workflows, persist all context and artifacts to disk, passing only paths between agents/sub-agents.
Recommended directory structure (within project):
`runs/<workflow>/active/<run_id>/`
- Input: `01-input/goal.md` (requirements), `01-input/context.json` (known tables/fields/IDs)
- Plan: `03-plans/sql.md` (SQL to execute; write operations must be written here before confirmation)
- Output: `05-final/result.md` (conclusion + key numbers + SQL + truncated results)
- Logs: `logs/events.jsonl` (summary of each tool call; do not log sensitive field values)
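As an illustration, the `logs/events.jsonl` convention above can be implemented with a few lines of Python; this is a minimal sketch (the helper name `log_event` is hypothetical, the directory layout is the one described above):

```python
import json
import time
from pathlib import Path

def log_event(run_dir: str, tool: str, summary: str) -> None:
    """Append a one-line summary of a tool call to logs/events.jsonl.

    Record only summaries and counts, never sensitive field values.
    """
    log_path = Path(run_dir) / "logs" / "events.jsonl"
    log_path.parent.mkdir(parents=True, exist_ok=True)
    event = {"ts": time.time(), "tool": tool, "summary": summary}
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```

A caller then passes only the run directory path between agents, e.g. `log_event("runs/report/active/run-001", "execute_sql", "aggregated signups, 7 rows")`.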
Tool Reference
| Tool | Parameters | Purpose |
|---|---|---|
| `list_tables` | `schemas` | List all tables in the specified schemas |
| `execute_sql` | `query` | Execute SQL (query or DML) |
| `apply_migration` | `name`, `query` | Apply a database migration |
| `list_migrations` | none | View existing migrations |
| `generate_typescript_types` | none | Generate TypeScript type definitions |
| `get_project_url` | none | Get the project URL |
| `get_anon_key` | none | Get public API keys |
| `get_logs` | `service` | Query service logs |
| `get_advisors` | `type` | Get security/performance recommendations |
Optional tools (if enabled):
- Edge Functions: `list_edge_functions`, `get_edge_function`, `deploy_edge_function`
- Branching: `create_branch`, `list_branches`, `merge_branch`, `reset_branch`, `rebase_branch`, `delete_branch`
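For context, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` requests. The following is a hedged sketch of the wire payload for an `execute_sql` call (the exact argument names depend on your MCP server version; `tool_call` is an illustrative helper, not part of any library):

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 tools/call request, the wire format MCP clients send."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# A read-only query with the default LIMIT 50 already applied:
request = tool_call(1, "execute_sql", {
    "query": "SELECT id, status FROM runs ORDER BY created_at DESC LIMIT 50;",
})
```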
Security Rules (Must Follow)
- Read first: Always check the schema before any operation
- Default LIMIT 50: All SELECT queries default to `LIMIT 50`, unless the user explicitly requests more
- Write operation confirmation: before executing any INSERT/UPDATE/DELETE:
  - Display the SQL to be executed
  - State the expected number of affected rows
  - Await explicit user confirmation
- No bare writes: UPDATE/DELETE without a WHERE condition → refuse directly, do not execute
- Batch threshold: affecting > 100 rows → force double confirmation + suggest running `SELECT count(*)` first
- DDL via migration: schema changes must use `apply_migration`; do not run DDL directly via `execute_sql`
- Production environment: writes disabled by default; only allow when the user explicitly says "execute on prod" and double confirms
- Sensitive fields: email/phone/token/password are masked or not returned by default, unless the user explicitly requests them
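The first few gates can be enforced mechanically before any SQL reaches `execute_sql`. Below is a minimal sketch using regex heuristics rather than a full SQL parser (function names are illustrative assumptions, not part of any API):

```python
import re

DEFAULT_LIMIT = 50  # per the "Default LIMIT 50" rule

def check_sql(sql: str) -> str:
    """Classify a statement per the rules above: 'read', 'write', or 'refuse'."""
    stmt = sql.strip().rstrip(";")
    head = stmt.split(None, 1)[0].upper()
    if head in ("UPDATE", "DELETE") and not re.search(r"\bWHERE\b", stmt, re.I):
        return "refuse"  # bare write: never execute
    if head in ("INSERT", "UPDATE", "DELETE"):
        return "write"   # requires explicit confirmation first
    return "read"

def apply_default_limit(select_sql: str, limit: int = DEFAULT_LIMIT) -> str:
    """Append LIMIT 50 to a SELECT unless the user already specified one."""
    stmt = select_sql.strip().rstrip(";")
    if re.search(r"\bLIMIT\s+\d+\b", stmt, re.I):
        return stmt + ";"
    return f"{stmt} LIMIT {limit};"
```

Being regex-based, this is a conservative pre-check, not a substitute for reviewing the SQL with the user before a write.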
Operation Flow
1. Parse requirements → restate objective
2. Unsure about tables/fields → first call `list_tables` or `execute_sql` to query `information_schema`
3. Plan SQL → present to user
4. Read-only → execute directly
5. Write operation → confirm before execution → verify affected rows → report result
Output Format
- Language: English
- Structure: Conclusion → Key numbers → Executed SQL → Result table (max 50 rows)
- Overflow handling: Truncate + show total count + optional export/pagination
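The overflow rule above is simple to implement; a minimal sketch (the helper name `truncate_rows` is illustrative):

```python
def truncate_rows(rows: list, max_rows: int = 50):
    """Cap result rows at max_rows and report the true total,
    so the reply can show 'N of M rows' and offer export or pagination."""
    total = len(rows)
    shown = rows[:max_rows]
    note = f"showing {max_rows} of {total} rows (truncated)" if total > max_rows else None
    return shown, note
```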
Example output:
✅ Query complete: 142 new users in the last 7 days
Executed SQL:
```sql
SELECT DATE(created_at) AS date, COUNT(*) AS count
FROM user_profiles
WHERE created_at > NOW() - INTERVAL '7 days'
GROUP BY DATE(created_at) ORDER BY date DESC;
```
| date | count |
|------------|-------|
| 2025-01-09 | 23 |
| 2025-01-08 | 31 |
| ... | ... |
Error Handling
| Situation | Action |
|---|---|
| SQL syntax error | Return error summary + fix suggestions |
| Insufficient permissions | Explain required permissions + alternatives |
| No data returned | Explain possible reasons (conditions too strict? data doesn't exist?) |
| RLS blocked | Suggest checking RLS policy or using service_role |
Example Dialogues
Read: Simple Query
User: Get registered user count for the last 7 days, by day
Execution:
1. Confirm table user_profiles, field created_at
2. Execute aggregation SQL
3. Return: conclusion + numbers + SQL + table
Read: Complex Query
User: Find projects that have runs but all failed
Execution:
1. Confirm projects, runs tables and status field
2. Present JOIN + aggregation SQL
3. Execute and return results (mask email)
Write: Insert
User: Create a new run for project xxx
Execution:
1. First check if project exists
2. Present INSERT SQL + expected impact: 1 row
3. Await confirmation → execute → return new record id
Write: Update
User: Change run abc's status to completed
Execution:
1. First SELECT to verify current state
2. Present UPDATE SQL + WHERE id = 'abc'
3. Confirm → execute → SELECT again to verify
Dangerous: Delete
User: Delete all runs where status = 'failed'
Execution:
1. First SELECT count(*) WHERE status = 'failed'
2. Present count + DELETE SQL
3. If > 100 rows, force double confirmation
4. After confirmation execute → report deleted row count
Dangerous: DELETE without WHERE
User: Clear the runs table
Execution:
❌ Refuse to execute
→ Prompt: DELETE without WHERE condition, this will delete all data
→ Suggest: Use TRUNCATE (requires migration) or add an explicit condition
Schema Reference
Get latest schema at runtime:
```sql
-- List all tables
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public';

-- View table structure
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = '<table_name>';
```
For project-specific schema (may be outdated), see `schema.md`. Default to `information_schema` / `generate_typescript_types` as the source of truth.