mcp-supabase
Execute database operations via Supabase MCP (query/write/migration/logs/type generation). Triggers: query/statistics/export/insert/update/delete/fix/backfill/migrate/logs/alerts/type generation. Does not trigger for: pure architecture discussion or code planning. Write operations require confirmation; UPDATE/DELETE without WHERE is refused.
Source: heyvhuang/ship-faster
NPX Install:
`npx skill4agent add heyvhuang/ship-faster mcp-supabase`
Supabase MCP Skill
Interact with Supabase database via MCP tools, execute queries, writes, migrations, and diagnostics.
Scope
Applies to:
- Need to perform "database actions" on Supabase: query/statistics/export, write (after confirmation), migration (DDL), type generation, query logs/advisors
Does not apply to:
- Need to complete "integration implementation" in a Next.js project (env/client code/minimal data access layer/project structure)
→ Use workflow-ship-faster (Step 6: Supabase integration) for project-side setup; this skill only handles DB-side actions and gates
Called by:
- workflow-ship-faster uses this skill as its DB operation foundation: workflow-ship-faster handles project-side integration, while this skill handles DB-side actions and security gates
Postgres Best Practices (Bundled)
Ship Faster vendors Supabase's Postgres best practices inside the skill (install alongside this skill if you want these references available locally):
- Full guide: supabase/references/postgres-best-practices/AGENTS.md
- Individual rules: supabase/references/postgres-best-practices/rules/*.md
Consult it when:
- Writing/reviewing/optimizing SQL queries
- Designing indexes, schema changes, or RLS policies
- Diagnosing performance, locking, or connection issues
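As an illustration of the kind of check these rules describe, a query like the following surfaces tables that are scanned sequentially far more often than via an index; the thresholds here are illustrative assumptions, not values taken from the bundled rules:

```sql
-- Index candidates: tables with many sequential scans and few index scans
SELECT relname, seq_scan, idx_scan, seq_tup_read
FROM pg_stat_user_tables
WHERE seq_scan > 1000
  AND seq_scan > 10 * COALESCE(idx_scan, 0)
ORDER BY seq_tup_read DESC
LIMIT 20;
```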
When proposing changes, cite the relevant rule file path (for example: supabase/references/postgres-best-practices/rules/query-missing-indexes.md) and keep changes minimal.

File-based Pipeline (Pass Paths Only)
When integrating database operations into multi-step workflows, persist all context and artifacts to disk, passing only paths between agents/sub-agents.
Recommended directory structure (within project): runs/<workflow>/active/<run_id>/
- Input: 01-input/goal.md (requirements), 01-input/context.json (known tables/fields/IDs)
- Plan: 03-plans/sql.md (SQL to execute; write operations must be written here before confirmation)
- Output: 05-final/result.md (conclusion + key numbers + SQL + truncated results)
- Logs: logs/events.jsonl (summary of each tool call; do not log sensitive field values)
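A minimal sketch of what 03-plans/sql.md might contain before a write is confirmed; the table, column names, and row count are hypothetical:

```sql
-- Plan: mark stale pending runs as expired
-- Verify expected affected rows first, then present both statements for confirmation
SELECT count(*) FROM runs
WHERE status = 'pending' AND created_at < NOW() - INTERVAL '30 days';

UPDATE runs SET status = 'expired'
WHERE status = 'pending' AND created_at < NOW() - INTERVAL '30 days';
```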
Tool Reference
| Tool | Parameters | Purpose |
|---|---|---|
| list_tables | schemas | List all tables in specified schema |
| execute_sql | query | Execute SQL (query or DML) |
| apply_migration | name, query | Apply database migration |
| list_migrations | — | View existing migrations |
| generate_typescript_types | — | Generate TypeScript type definitions |
| get_project_url | — | Get project URL |
| get_anon_key | — | Get public API keys |
| get_logs | service | Query service logs |
| get_advisors | type | Get security/performance recommendations |
Optional tools (if enabled):
- Edge Functions: list_edge_functions, get_edge_function, deploy_edge_function
- Branching: create_branch, list_branches, merge_branch, reset_branch, rebase_branch, delete_branch
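As a sketch of the DDL path, a schema change passed to apply_migration (rather than execute_sql) might look like the following; the table, foreign key, and RLS statement are hypothetical examples:

```sql
-- Migration: create runs table with row level security enabled
CREATE TABLE IF NOT EXISTS runs (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  project_id uuid NOT NULL REFERENCES projects(id),
  status text NOT NULL DEFAULT 'pending',
  created_at timestamptz NOT NULL DEFAULT now()
);
ALTER TABLE runs ENABLE ROW LEVEL SECURITY;
```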
Security Rules (Must Follow)
- Read first: Always check schema before any operation
- Default LIMIT 50: All SELECT queries default to LIMIT 50, unless user explicitly requests more
- Write operation confirmation: before executing INSERT/UPDATE/DELETE, you must:
- Display the SQL to be executed
- State expected number of affected rows
- Await explicit user confirmation
- No bare writes: UPDATE/DELETE without WHERE condition → refuse directly, do not execute
- Batch threshold: Affecting > 100 rows → force double confirmation + suggest SELECT count(*) first
- DDL via migration: Schema changes must use apply_migration; do not run DDL directly via execute_sql
- Sensitive fields: email/phone/token/password are masked or not returned by default, unless user explicitly requests
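Taken together, these rules shape even a simple read. A minimal sketch, assuming a hypothetical user_profiles table with a sensitive email column:

```sql
-- Read-first, LIMIT 50 by default, sensitive field masked
SELECT id,
       left(email, 3) || '***' AS email_masked,
       created_at
FROM user_profiles
ORDER BY created_at DESC
LIMIT 50;
```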
Operation Flow
1. Parse requirements → restate objective
2. Unsure about tables/fields → first list_tables or execute_sql to query information_schema
3. Plan SQL → present to user
4. Read-only → execute directly
5. Write operation → confirm before execution → verify affected rows → report result

Output Format
- Language: English
- Structure: Conclusion → Key numbers → Executed SQL → Result table (max 50 rows)
- Overflow handling: Truncate + show total count + optional export/pagination
Example output:
✅ Query complete: 142 new users in the last 7 days
Executed SQL:
SELECT DATE(created_at) as date, COUNT(*) as count
FROM user_profiles
WHERE created_at > NOW() - INTERVAL '7 days'
GROUP BY DATE(created_at) ORDER BY date DESC;
| date | count |
|------------|-------|
| 2025-01-09 | 23 |
| 2025-01-08 | 31 |
| ... | ... |

Error Handling
| Situation | Action |
|---|---|
| SQL syntax error | Return error summary + fix suggestions |
| Insufficient permissions | Explain required permissions + alternatives |
| No data returned | Explain possible reasons (conditions too strict? data doesn't exist?) |
| RLS blocked | Suggest checking RLS policy or using service_role |
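For the RLS-blocked case, a query against the standard pg_policies view helps identify which policy is filtering rows; the table name here is a hypothetical example:

```sql
-- Inspect RLS policies on the affected table
SELECT policyname, cmd, roles, qual
FROM pg_policies
WHERE schemaname = 'public' AND tablename = 'runs';
```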
Example Dialogues
Read: Simple Query
User: Get registered user count for the last 7 days, by day
Execution:
1. Confirm table user_profiles, field created_at
2. Execute aggregation SQL
3. Return: conclusion + numbers + SQL + table

Read: Complex Query
User: Find projects that have runs but all failed
Execution:
1. Confirm projects, runs tables and status field
2. Present JOIN + aggregation SQL
3. Execute and return results (mask email)

Write: Insert
User: Create a new run for project xxx
Execution:
1. First check if project exists
2. Present INSERT SQL + expected impact: 1 row
3. Await confirmation → execute → return new record id

Write: Update
User: Change run abc's status to completed
Execution:
1. First SELECT to verify current state
2. Present UPDATE SQL + WHERE id = 'abc'
3. Confirm → execute → SELECT again to verify

Dangerous: Delete
User: Delete all runs where status = 'failed'
Execution:
1. First SELECT count(*) WHERE status = 'failed'
2. Present count + DELETE SQL
3. If > 100 rows, force double confirmation
4. After confirmation execute → report deleted row count

Dangerous: DELETE without WHERE
User: Clear the runs table
Execution:
❌ Refuse to execute
→ Prompt: DELETE without WHERE condition, this will delete all data
→ Suggest: Use TRUNCATE (requires migration) or add explicit condition

Schema Reference
Get latest schema at runtime:
```sql
-- List all tables
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public';

-- View table structure
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = '<table_name>';
```

For project-specific schema (may be outdated), see schema.md. Default to information_schema / generate_typescript_types as the source of truth.