sql-planner

Claim Conditions

This skill is the entry point for any data question expressed in natural language.
I claim when:
  • User asks a question about data in natural language (English or Spanish): "how many", "count", "list", "show", "verify", "check", "find", "cuantos", "traeme", "muéstrame"
  • User mentions a database table or business entity (users, subscriptions, sessions, etc.)
  • User asks about data in ANY environment (production, staging, dev, local)
I do NOT claim when:
  • User provides a complete SQL query ready to execute
  • User asks to create a connector (use /sql-planner:new-connector instead)

Input

$ARGUMENTS = natural language describing what data the user wants.

Step 1 — Gather Context

Run these tasks in parallel where possible:
Phase A (parallel):

1a. Detect DB Connection & Engine

Auto-detect the local database connection from the project:
  1. docker-compose.yml / compose.yml:
    • mysql / mariadb image → docker compose exec <service> mysql -u <user> -p<pass> <dbname> --table -e "{sql}"
    • postgres image → docker compose exec <service> psql -U <user> -d <dbname> -c "{sql}"
  2. Django DATABASES setting → build CLI command from ENGINE/HOST/NAME/USER/PASSWORD
  3. Rails config/database.yml → extract dev config
  4. db.sqlite3 exists → sqlite3 db.sqlite3 "{sql}"
  5. Can't detect → ask the user
From the detected engine, infer the SQL dialect (MySQL, PostgreSQL, SQLite).

1b. Discover Connector

Search for project-level connectors:
  1. Grep for ## SQL Connector within .claude/skills/**/SKILL.md in the current project directory
  2. If found → read the matching file(s) for engine, environments, domain knowledge
  3. If the user asks for a remote environment and NO connector exists → inform them and suggest: /sql-planner:new-connector
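The connector scan amounts to a recursive glob plus a substring match. A minimal sketch, assuming a `find_connectors` helper (the name is illustrative):

```python
# Sketch of step 1b: scan project-level SKILL.md files for the
# "## SQL Connector" marker. find_connectors is an illustrative name.
from pathlib import Path

def find_connectors(project_root: str) -> list[Path]:
    """Return SKILL.md files under .claude/skills/ that declare a connector."""
    marker = "## SQL Connector"
    hits = []
    for skill_file in Path(project_root).glob(".claude/skills/**/SKILL.md"):
        if marker in skill_file.read_text(encoding="utf-8"):
            hits.append(skill_file)
    return hits
```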

1c. Read Schema Cache

If .claude/sql-planner/schema.tsv exists, read it and extract the # count:<N> metadata from the first line. Otherwise, note that the schema needs to be generated.
Phase B (after 1a completes):

1d. Validate or Generate Schema

Using the DB connection from 1a and the cache state from 1c:
  1. If cache exists → execute a quick column count against the DB and compare with N. If it differs → regenerate; if it matches → use the cache.
  2. If cache does NOT exist → generate from INFORMATION_SCHEMA / .schema and save with metadata count.
Schema generation commands:
MySQL:
```sql
SELECT TABLE_NAME, GROUP_CONCAT(CONCAT(COLUMN_NAME, ' ', COLUMN_TYPE, IF(COLUMN_KEY='PRI',' PK',''), IF(COLUMN_KEY='MUL',' FK',''), IF(COLUMN_KEY='UNI',' UQ','')) ORDER BY ORDINAL_POSITION SEPARATOR ', ') FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = DATABASE() GROUP BY TABLE_NAME ORDER BY TABLE_NAME;
```
PostgreSQL:
```sql
SELECT table_name, string_agg(column_name || ' ' || data_type, ', ' ORDER BY ordinal_position) FROM information_schema.columns WHERE table_schema = 'public' GROUP BY table_name ORDER BY table_name;
```
SQLite:
```
.schema
```
Autonomy principle: resolve everything possible without asking. But if something is genuinely ambiguous or cannot be inferred with certainty, ask the user — it's better to confirm than to assume wrong.

Step 2 — Generate SQL

  • Translate natural language → SQL using schema + domain knowledge from connector (if exists) + correct dialect
  • LIMIT 25 by default unless the user explicitly asks for all rows
  • Use readable column aliases for cryptic column names
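The default-LIMIT rule can be sketched as a small post-processing step on the generated query. This is an illustration, not the skill's actual code; `apply_default_limit` and its regex are assumptions.

```python
# Sketch of the step 2 LIMIT rule: append "LIMIT 25" to a generated query
# unless it already has a LIMIT or the user asked for all rows.
import re

def apply_default_limit(sql: str, user_wants_all: bool = False,
                        default: int = 25) -> str:
    """Return the query with a trailing LIMIT clause applied by default."""
    stripped = sql.rstrip().rstrip(";")
    if user_wants_all or re.search(r"\bLIMIT\s+\d+\s*$", stripped, re.IGNORECASE):
        return stripped + ";"
    return f"{stripped} LIMIT {default};"
```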

Step 3 — Confirm & Execute

Apply these rules based on the query type and target environment:
  • READ-ONLY + local → show the SQL and execute directly, no confirmation needed
  • READ-ONLY + remote → use AskUserQuestion to confirm the target environment before executing
  • Write operation (INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE, CREATE) → show the generated SQL only. Do NOT execute. Explain that this skill does not execute write operations — the user can copy and run the query themselves.
When executing read queries against a remote environment, use the connection command from the connector's ## Environments section, replacing {sql} with the generated query.
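The decision table above can be sketched as a simple classifier plus a dispatch. This is a minimal illustration under the assumption that the leading keyword determines the query type; the function names are invented for the sketch.

```python
# Sketch of the step 3 rules: classify a query as read-only or write,
# then pick the action. Function names are illustrative assumptions.
import re

WRITE_KEYWORDS = ("INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE", "CREATE")

def is_read_only(sql: str) -> bool:
    """Treat the query as a write if its first keyword is a write keyword."""
    first_word = re.split(r"\s+", sql.strip(), maxsplit=1)[0].upper()
    return first_word not in WRITE_KEYWORDS

def decide_action(sql: str, remote: bool) -> str:
    if not is_read_only(sql):
        return "show-only"  # writes are shown, never executed
    return "confirm-then-execute" if remote else "execute"
```

A real implementation would need more care (CTEs, comments, multi-statement input); the sketch only captures the three-way rule.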

Step 4 — Display Results

Format and show the output to the user.

Safety Rules

  • Read-only execution. This skill only executes SELECT queries.
  • Write queries (INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE, CREATE) are generated and shown but never executed.
  • Never generate DROP DATABASE
  • Always show the SQL before executing
  • Remote environments: confirm target before executing

Schema Maintenance

  • If a table or column referenced in the prompt is not found in the schema → regenerate automatically
  • After regeneration, save the updated schema to .claude/sql-planner/schema.tsv with the new # count:<N> metadata line
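The regeneration trigger reduces to a set-membership check between the tables the prompt references and the tables in the cached schema. A minimal sketch; `needs_regeneration` is an illustrative name:

```python
# Sketch of the schema-maintenance rule: regenerate when the prompt
# references a table the cached schema does not know about.

def needs_regeneration(referenced_tables: set[str], schema_tables: set[str]) -> bool:
    """True if any referenced table is absent from the cached schema."""
    return not referenced_tables <= schema_tables
```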