# Atlas Best Practices
Atlas is a language-independent tool for managing database schemas using declarative or versioned workflows.
## Two Workflows

**Declarative (Terraform-like):** Atlas compares the current and desired state and generates migrations automatically.

```bash
atlas schema apply --url "postgres://..." --to "file://schema.hcl" --dev-url "docker://postgres/15"
```

**Versioned:** Atlas generates migration files from schema changes, stored in version control.

```bash
atlas migrate diff add_users --dir "file://migrations" --to "file://schema.sql" --dev-url "docker://postgres/15"
atlas migrate apply --dir "file://migrations" --url "postgres://..."
```
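After `migrate diff`, the migration directory holds timestamped SQL files plus an `atlas.sum` integrity file that `migrate validate` checks. The layout might look like this (file names illustrative):

```
migrations/
├── 20240101120000_add_users.sql
└── atlas.sum
```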
## Dev Database
Atlas requires a dev database for schema validation, diffing, and linting. Use the docker driver for ephemeral containers:

```bash
# PostgreSQL
--dev-url "docker://postgres/15/dev?search_path=public"

# MySQL
--dev-url "docker://mysql/8/dev"

# SQLite
--dev-url "sqlite://dev?mode=memory"
```
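The dev URL combines with the commands above; for example, diffing a live database against a desired schema file through an ephemeral PostgreSQL container (the connection string and file name here are placeholders):

```bash
atlas schema diff \
  --from "postgres://app:secret@localhost:5432/app?sslmode=disable" \
  --to "file://schema.pg.hcl" \
  --dev-url "docker://postgres/15/dev?search_path=public"
```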
## Schema-as-Code
### HCL Schema (Recommended)

Use database-specific file extensions for editor support: `.pg.hcl` (PostgreSQL), `.my.hcl` (MySQL), `.lt.hcl` (SQLite).

```hcl
schema "public" {
  comment = "Application schema"
}
table "users" {
  schema = schema.public
  column "id" {
    type = bigint
  }
  column "email" {
    type = varchar(255)
    null = false
  }
  column "created_at" {
    type    = timestamptz
    default = sql("now()")
  }
  primary_key {
    columns = [column.id]
  }
  index "idx_users_email" {
    columns = [column.email]
    unique  = true
  }
}
table "orders" {
  schema = schema.public
  column "id" {
    type = bigint
  }
  column "user_id" {
    type = bigint
    null = false
  }
  column "total" {
    type = numeric
    null = false
  }
  foreign_key "fk_user" {
    columns     = [column.user_id]
    ref_columns = [table.users.column.id]
    on_delete   = CASCADE
  }
  check "positive_total" {
    expr = "total > 0"
  }
}
```

### SQL Schema
Use standard SQL DDL files:

```sql
CREATE TABLE "users" (
  "id" bigint PRIMARY KEY,
  "email" varchar(255) NOT NULL UNIQUE,
  "created_at" timestamptz DEFAULT now()
);
```

## Project Configuration
Create `atlas.hcl` for environment configuration:

```hcl
variable "db_url" {
  type = string
}
env "local" {
  src = "file://schema.pg.hcl"
  url = var.db_url
  dev = "docker://postgres/15/dev?search_path=public"
  migration {
    dir = "file://migrations"
  }
  format {
    migrate {
      diff = "{{ sql . \" \" }}"
    }
  }
}
env "prod" {
  src = "file://schema.pg.hcl"
  url = var.db_url
  migration {
    dir = "atlas://myapp" # Atlas Registry
  }
}
```

Run with an environment:

```bash
atlas schema apply --env local --var "db_url=postgres://..."
```

## Migration Linting
Atlas analyzes migrations for safety. Configure the rules in `atlas.hcl`:

```hcl
lint {
  destructive {
    error = true # Fail on DROP TABLE/COLUMN
  }
  data_depend {
    error = true # Fail on data-dependent changes
  }
  naming {
    match   = "^[a-z_]+$"
    message = "must be lowercase with underscores"
    index {
      match   = "^idx_"
      message = "indexes must start with idx_"
    }
  }
  # PostgreSQL: require CONCURRENTLY for indexes (Pro)
  concurrent_index {
    error = true
  }
}
```

Key analyzers:

- DS: Destructive changes (DROP SCHEMA/TABLE/COLUMN)
- MF: Data-dependent changes (ADD UNIQUE, NOT NULL)
- BC: Backward-incompatible changes (rename table/column)
- PG (Pro): Concurrent index, blocking DDL
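For instance, a migration containing the statements below would be flagged by both the DS and MF analyzers (a hypothetical migration file for illustration):

```sql
-- DS: dropping a column is destructive
ALTER TABLE users DROP COLUMN legacy_flag;

-- MF: a new unique index can fail if existing rows contain duplicates
CREATE UNIQUE INDEX idx_users_phone ON users (phone);
```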
Lint migrations:

```bash
atlas migrate lint --env local --latest 1
```

Suppress specific checks in migration files:

```sql
-- atlas:nolint destructive
DROP TABLE old_users;
```
## Schema Testing
Write tests in `.test.hcl` files:

```hcl
test "schema" "user_constraints" {
  parallel = true
  exec {
    sql = "INSERT INTO users (id, email) VALUES (1, 'test@example.com')"
  }
  # Test unique constraint
  catch {
    sql   = "INSERT INTO users (id, email) VALUES (2, 'test@example.com')"
    error = "duplicate key"
  }
  assert {
    sql           = "SELECT COUNT(*) = 1 FROM users"
    error_message = "expected exactly one user"
  }
  cleanup {
    sql = "DELETE FROM users"
  }
}
```

### Table-driven tests
```hcl
test "schema" "email_validation" {
  for_each = [
    {input: "valid@test.com", valid: true},
    {input: "invalid", valid: false},
  ]
  exec {
    sql    = "SELECT validate_email('${each.value.input}')"
    output = each.value.valid ? "t" : "f"
  }
}
```

Run tests:

```bash
atlas schema test --env local schema.test.hcl
```

## Transaction Modes
Control transaction behavior per file with directives:

```sql
-- atlas:txmode none
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
```

Modes: `file` (default, one transaction per file), `all` (one transaction for all files), `none` (no transaction).

## Pre-Execution Checks (Pro)
Block dangerous operations in `atlas.hcl` (requires Atlas Pro):

```hcl
env "prod" {
  check "migrate_apply" {
    deny "too_many_files" {
      condition = length(self.planned_migration.files) > 3
      message   = "Cannot apply more than 3 migrations at once"
    }
  }
}
```

## Common Commands
```bash
# Generate migration from schema diff
atlas migrate diff migration_name --env local

# Apply pending migrations
atlas migrate apply --env local

# Validate migration directory integrity
atlas migrate validate --env local

# View migration status
atlas migrate status --env local

# Push to Atlas Registry
atlas migrate push myapp --env local

# Declarative apply (no migration files)
atlas schema apply --env local --auto-approve

# Inspect current database schema
atlas schema inspect --url "postgres://..." --format "{{ sql . }}"

# Compare schemas
atlas schema diff --from "postgres://..." --to "file://schema.hcl"
```

## CI/CD Integration
GitHub Actions setup:

```yaml
- uses: ariga/setup-atlas@v0
  with:
    cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
- name: Lint migrations
  run: atlas migrate lint --env ci --git-base origin/main
```
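The `--git-base` comparison requires full git history; a complete workflow wrapping the steps above might look like this (workflow name, trigger, and paths are illustrative assumptions):

```yaml
name: atlas-ci
on:
  pull_request:
    paths: ["migrations/**"]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history so origin/main is resolvable
      - uses: ariga/setup-atlas@v0
        with:
          cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN }}
      - name: Lint migrations
        run: atlas migrate lint --env ci --git-base origin/main
```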
## Baseline for Existing Databases
When adopting Atlas on an existing database:

```bash
# Create a baseline migration reflecting the current schema
atlas migrate diff baseline --env local --to "file://schema.hcl"

# Mark the baseline as applied (skip execution)
atlas migrate apply --env prod --baseline "20240101000000"
```

## ORM Integration
Atlas supports loading schemas from ORMs via external providers:

```hcl
data "external_schema" "gorm" {
  program = [
    "go", "run", "-mod=mod",
    "ariga.io/atlas-provider-gorm",
    "load", "--path", "./models",
    "--dialect", "postgres",
  ]
}
env "local" {
  src = data.external_schema.gorm.url
}
```

Supported ORMs: GORM, Sequelize, TypeORM, Django, SQLAlchemy, Prisma, and more.

## Instructions
- Always use a dev database for `migrate diff` and `schema apply`; it validates schemas safely.
- Enable strict linting in CI to catch destructive and data-dependent changes early.
- Use versioned migrations for production; the declarative workflow suits development and testing.
- Test schemas with `.test.hcl` files; validate constraints, triggers, and functions.
- Push migrations to the Atlas Registry for deployment; avoid copying files manually.
- Use `-- atlas:txmode none` for PostgreSQL concurrent index operations.
- Configure naming conventions in lint rules; consistency prevents errors.