
Amazon Aurora DSQL Skill


Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
Key capabilities:
  • Direct query execution via MCP tools
  • Schema management with DSQL constraints
  • Migration support and safe schema evolution
  • Multi-tenant isolation patterns
  • IAM-based authentication


Reference Files


Load these files as needed for detailed guidance:

development-guide.md


When: ALWAYS load before implementing schema changes or database operations
Contains: DDL rules, connection patterns, transaction limits, security best practices

MCP:


mcp-setup.md


When: ALWAYS load for guidance on using or updating the DSQL MCP server
Contains: Instructions for setting up the DSQL MCP server with two configuration options, as sampled in .mcp.json
  1. Documentation-Tools Only
  2. Database Operations (requires a cluster endpoint)

mcp-tools.md


When: Load when you need detailed MCP tool syntax and examples
Contains: Tool parameters, detailed examples, usage patterns

language.md


When: MUST load when making language-specific implementation choices
Contains: Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust

dsql-examples.md


When: Load when looking for specific implementation examples
Contains: Code examples, repository patterns, multi-tenant implementations

troubleshooting.md


When: Load when debugging errors or unexpected behavior
Contains: Common pitfalls, error messages, solutions

onboarding.md


When: User explicitly requests to "Get started with DSQL" or similar phrase
Contains: Interactive step-by-step guide for new users

ddl-migrations.md


When: MUST load when trying to perform DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT operations
Contains: Table recreation patterns, batched migration for large tables, data validation


MCP Tools Available


The aurora-dsql MCP server provides these tools:
Database Operations:
  1. readonly_query - Execute SELECT queries (returns list of dicts)
  2. transact - Execute DDL/DML statements in transaction (takes list of SQL statements)
  3. get_schema - Get table structure for a specific table
Documentation & Knowledge:
  4. dsql_search_documentation - Search Aurora DSQL documentation
  5. dsql_read_documentation - Read specific documentation pages
  6. dsql_recommend - Get DSQL best practice recommendations
Note: There is no list_tables tool. Use readonly_query with information_schema.
See mcp-setup.md for detailed setup instructions. See mcp-tools.md for detailed usage and examples.
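Since there is no list_tables tool, table discovery goes through readonly_query. A minimal sketch, assuming readonly_query is exposed to your code as a callable that takes a SQL string and returns a list of dicts (the actual invocation depends on your MCP client):

```python
# Hypothetical wrapper: list tables via readonly_query, since no
# list_tables tool exists on the aurora-dsql MCP server.
LIST_TABLES_SQL = """
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name
"""

def list_tables(readonly_query):
    """Return table names using the readonly_query tool (list of dicts)."""
    rows = readonly_query(LIST_TABLES_SQL)
    return [row["table_name"] for row in rows]
```

Changing the table_schema filter lets the same query inspect other schemas.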


CLI Scripts Available


Bash scripts for cluster management and direct psql connections. All scripts are located in scripts/.
Cluster Management:
  • create-cluster.sh - Create new DSQL cluster with optional tags
  • delete-cluster.sh - Delete cluster with confirmation prompt
  • list-clusters.sh - List all clusters in a region
  • cluster-info.sh - Get detailed cluster information
Database Connection:
  • psql-connect.sh - Connect to DSQL using psql with automatic IAM auth token generation
Quick example:
```bash
./scripts/create-cluster.sh --region us-east-1
export CLUSTER=abc123def456
./scripts/psql-connect.sh
```
See scripts/README.md for detailed usage.


Quick Start


1. List tables and explore schema


Use readonly_query with information_schema to list tables
Use get_schema to understand table structure

2. Query data


Use readonly_query for SELECT queries
Always include tenant_id in WHERE clause for multi-tenant apps
Validate inputs carefully (no parameterized queries available)

3. Execute schema changes


Use transact tool with list of SQL statements
Follow one-DDL-per-transaction rule
Always use CREATE INDEX ASYNC in a separate transaction


Common Workflows


Workflow 1: Create Multi-Tenant Schema


Goal: Create a new table with proper tenant isolation
Steps:
  1. Create main table with tenant_id column using transact
  2. Create async index on tenant_id in separate transact call
  3. Create composite indexes for common query patterns (separate transact calls)
  4. Verify schema with get_schema
Critical rules:
  • Include tenant_id in all tables
  • Use CREATE INDEX ASYNC (never synchronous)
  • Each DDL in its own transact call:
    transact(["CREATE TABLE ..."])
  • Store arrays/JSON as TEXT
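The four steps above can be sketched as a short sequence of tool calls. This is a hypothetical wiring, assuming transact and get_schema are available as callables and using an illustrative notes table:

```python
# Hypothetical Workflow 1 sketch. Each DDL statement gets its own
# transact call, per the one-DDL-per-transaction rule.
DDL_STEPS = [
    # 1. Main table: tenant_id leads the primary key for tenant isolation.
    """CREATE TABLE notes (
        tenant_id TEXT NOT NULL,
        note_id   TEXT NOT NULL,
        body      TEXT,
        tags      TEXT,  -- arrays/JSON stored as TEXT in DSQL
        PRIMARY KEY (tenant_id, note_id)
    )""",
    # 2. Async index on tenant_id (separate transaction).
    "CREATE INDEX ASYNC idx_notes_tenant ON notes (tenant_id)",
    # 3. Composite index for a common query pattern (separate transaction).
    "CREATE INDEX ASYNC idx_notes_tenant_tags ON notes (tenant_id, tags)",
]

def create_multi_tenant_schema(transact, get_schema):
    for ddl in DDL_STEPS:
        transact([ddl])          # one DDL per transact call
    return get_schema("notes")   # 4. verify the resulting structure
```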

Workflow 2: Safe Data Migration


Goal: Add a new column with defaults safely
Steps:
  1. Add column using transact:
    transact(["ALTER TABLE ... ADD COLUMN ..."])
  2. Populate existing rows with UPDATE in separate transact calls (batched under 3,000 rows)
  3. Verify migration with readonly_query using COUNT
  4. Create async index for new column using transact if needed
Critical rules:
  • Add column first, populate later
  • Never add DEFAULT in ALTER TABLE
  • Batch updates under 3,000 rows in separate transact calls
  • Each ALTER TABLE in its own transaction
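Step 2's batched backfill can be sketched as a statement generator. This assumes rows are addressable by a primary key fetched beforehand with readonly_query, and that the key values are already validated (the MCP tools take raw SQL, so nothing here is parameterized):

```python
# Hypothetical backfill sketch: one UPDATE per batch, each batch sized
# well under the 3,000-row transaction limit.
BATCH_LIMIT = 3000

def backfill_statements(table, column, value_sql, ids, batch_size=1000):
    """Yield one UPDATE statement per batch of primary keys."""
    assert batch_size <= BATCH_LIMIT
    for i in range(0, len(ids), batch_size):
        batch = ids[i:i + batch_size]
        id_list = ", ".join(f"'{x}'" for x in batch)  # ids assumed pre-validated
        yield (f"UPDATE {table} SET {column} = {value_sql} "
               f"WHERE id IN ({id_list})")
```

Each yielded statement would then go to transact in its own call, followed by a readonly_query COUNT to verify the migration.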

Workflow 3: Application-Layer Referential Integrity


Goal: Safely insert/delete records with parent-child relationships
Steps for INSERT:
  1. Validate parent exists with readonly_query
  2. Throw error if parent not found
  3. Insert child record using transact with parent reference
Steps for DELETE:
  1. Check for dependent records with readonly_query (COUNT)
  2. Return error if dependents exist
  3. Delete record using transact if safe
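Both flows can be sketched with two guard functions. readonly_query and transact stand in for the MCP tools; table and column names are illustrative, and IDs are assumed pre-validated since the SQL is built as raw strings:

```python
# Hypothetical application-layer referential-integrity sketch.
def insert_child(readonly_query, transact, parent_id, child_id):
    # 1-2. Validate the parent exists before inserting.
    rows = readonly_query(
        f"SELECT 1 FROM parents WHERE id = '{parent_id}'")
    if not rows:
        raise ValueError(f"parent {parent_id} not found")
    # 3. Safe to insert the child with its parent reference.
    transact([f"INSERT INTO children (id, parent_id) "
              f"VALUES ('{child_id}', '{parent_id}')"])

def delete_parent(readonly_query, transact, parent_id):
    # 1-2. Refuse to delete while dependents remain.
    rows = readonly_query(
        f"SELECT COUNT(*) AS n FROM children WHERE parent_id = '{parent_id}'")
    if rows[0]["n"] > 0:
        raise ValueError("dependent children exist; delete them first")
    # 3. Safe to delete.
    transact([f"DELETE FROM parents WHERE id = '{parent_id}'"])
```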

Workflow 4: Query with Tenant Isolation


Goal: Retrieve data scoped to a specific tenant
Steps:
  1. Always include tenant_id in WHERE clause
  2. Validate and sanitize tenant_id input (no parameterized queries available!)
  3. Use readonly_query with validated tenant_id
  4. Never allow cross-tenant data access
Critical rules:
  • Validate ALL inputs before building SQL (SQL injection risk!)
  • ALL queries include WHERE tenant_id = 'validated-value'
  • Reject cross-tenant access at application layer
  • Use allowlists or regex validation for tenant IDs
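A minimal validation sketch for the rules above. The allowed-character pattern is an assumption; tighten it, or swap in an allowlist of known tenants, to match your actual tenant ID format:

```python
import re

# Hypothetical input-validation sketch: since the MCP tools take raw SQL
# without parameters, tenant_id must be validated before string-building.
TENANT_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def tenant_query(tenant_id):
    """Return a tenant-scoped SELECT, rejecting anything non-matching."""
    if not TENANT_ID_RE.fullmatch(tenant_id):
        raise ValueError("invalid tenant_id")
    return f"SELECT * FROM orders WHERE tenant_id = '{tenant_id}'"
```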

Workflow 5: Table Recreation DDL Migration


Goal: Perform DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT using the table recreation pattern.
MUST load ddl-migrations.md for detailed guidance.
Steps:
  1. MUST validate table exists and get row count with readonly_query
  2. MUST get current schema with get_schema
  3. MUST create new table with desired structure using transact
  4. MUST migrate data (batched in 500-1,000 row chunks for tables > 3,000 rows)
  5. MUST verify row counts match before proceeding
  6. MUST swap tables: drop original, rename new
  7. MUST recreate indexes using CREATE INDEX ASYNC
Rules:
  • MUST use batching for tables exceeding 3,000 rows
  • PREFER batches of 500-1,000 rows for optimal throughput
  • MUST validate data compatibility before type changes (abort if incompatible)
  • MUST NOT drop original table until new table is verified
  • MUST recreate all indexes after table swap using ASYNC
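Steps 4-5 (the batched copy and the row-count check) can be sketched as below, assuming the new table already exists, keys were fetched with readonly_query, and rows have an id primary key; all names are illustrative:

```python
# Hypothetical table-recreation helpers for the batched-copy steps.
def copy_statements(src, dst, columns, keys, chunk=1000):
    """Yield INSERT...SELECT statements in 500-1,000 row chunks."""
    assert 500 <= chunk <= 1000
    cols = ", ".join(columns)
    for i in range(0, len(keys), chunk):
        key_list = ", ".join(f"'{k}'" for k in keys[i:i + chunk])  # pre-validated
        yield (f"INSERT INTO {dst} ({cols}) "
               f"SELECT {cols} FROM {src} WHERE id IN ({key_list})")

def counts_match(readonly_query, src, dst):
    """Step 5: verify row counts before the table swap."""
    count = lambda t: readonly_query(f"SELECT COUNT(*) AS n FROM {t}")[0]["n"]
    return count(src) == count(dst)
```

Only after counts_match returns True would the swap (drop original, rename new) and the ASYNC index rebuild proceed.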


Best Practices


  • SHOULD read guidelines first - Check development-guide.md before making schema changes
  • SHOULD use preferred language patterns - Check language.md
  • SHOULD execute queries directly - PREFER MCP tools for ad-hoc queries
  • REQUIRED: Follow DDL guidelines - Refer to DDL Rules
  • SHALL regularly generate fresh tokens - Refer to Connection Limits
  • ALWAYS use ASYNC indexes - CREATE INDEX ASYNC is mandatory
  • MUST serialize arrays/JSON as TEXT - Store arrays/JSON as TEXT (comma-separated, JSON.stringify)
  • ALWAYS batch under 3,000 rows - maintain transaction limits
  • REQUIRED: Use parameterized queries in application code - Prevent SQL injection with $1, $2 placeholders (the MCP tools take raw SQL, so validate inputs there instead)
  • MUST follow correct application-layer patterns - when multi-tenant isolation or application referential integrity is required; refer to Application Layer Patterns
  • REQUIRED: Use DELETE for truncation - DELETE is the only supported operation for truncation
  • SHOULD test any migrations - Verify DDL on dev clusters before production
  • Plan for horizontal scale - DSQL is designed to optimize for massive scale without latency drops; refer to Horizontal Scaling
  • SHOULD use connection pooling in production applications - Refer to Connection Pooling
  • SHOULD debug with the troubleshooting guide - Always refer to the resources and guidelines in troubleshooting.md
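The serialize-arrays/JSON-as-TEXT rule can be sketched as a pair of helpers: comma-separated for simple string lists, JSON for structured payloads (names illustrative):

```python
import json

# Hypothetical serialization sketch for DSQL TEXT columns:
# serialize on write, parse on read.
def tags_to_text(tags):
    return ",".join(tags)            # simple comma-separated list

def payload_to_text(payload):
    return json.dumps(payload)       # structured data as JSON in TEXT

def text_to_payload(text):
    return json.loads(text)
```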


Additional Resources
