Cloudflare D1 Database

Status: Production Ready ✅ Last Updated: 2025-10-21 Dependencies: cloudflare-worker-base (for Worker setup) Latest Versions: wrangler@4.43.0, @cloudflare/workers-types@4.20251014.0


Quick Start (5 Minutes)


1. Create D1 Database


```bash
# Create a new D1 database
npx wrangler d1 create my-database

# Output includes database_id - save this!
# ✅ Successfully created DB 'my-database'
#
# [[d1_databases]]
# binding = "DB"
# database_name = "my-database"
# database_id = "<UUID>"
```

2. Configure Bindings


Add to your `wrangler.jsonc`:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2025-10-11",
  "d1_databases": [
    {
      "binding": "DB",                    // Available as env.DB in your Worker
      "database_name": "my-database",      // Name from wrangler d1 create
      "database_id": "<UUID>",             // ID from wrangler d1 create
      "preview_database_id": "local-db"    // For local development
    }
  ]
}
```

CRITICAL:
  • `binding` is how you access the database in code (`env.DB`)
  • `database_id` is the production database UUID
  • `preview_database_id` is for local dev (can be any string)
  • Never commit real `database_id` values to public repos - use environment variables or secrets

3. Create Your First Migration


```bash
# Create migration file
npx wrangler d1 migrations create my-database create_users_table

# This creates: migrations/0001_create_users_table.sql
```

Edit the migration file:

```sql
-- migrations/0001_create_users_table.sql
DROP TABLE IF EXISTS users;
CREATE TABLE IF NOT EXISTS users (
  user_id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  username TEXT NOT NULL,
  created_at INTEGER NOT NULL,
  updated_at INTEGER
);

-- Create index for common queries
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Optimize database
PRAGMA optimize;
```


4. Apply Migration


```bash
# Apply locally first (for testing)
npx wrangler d1 migrations apply my-database --local

# Apply to production when ready
npx wrangler d1 migrations apply my-database --remote
```

5. Query from Your Worker


```typescript
// src/index.ts
import { Hono } from 'hono';

type Bindings = {
  DB: D1Database;
};

const app = new Hono<{ Bindings: Bindings }>();

app.get('/api/users/:email', async (c) => {
  const email = c.req.param('email');

  try {
    // ALWAYS use prepared statements with bind()
    const result = await c.env.DB.prepare(
      'SELECT * FROM users WHERE email = ?'
    )
    .bind(email)
    .first();

    if (!result) {
      return c.json({ error: 'User not found' }, 404);
    }

    return c.json(result);
  } catch (error: any) {
    console.error('D1 Error:', error.message);
    return c.json({ error: 'Database error' }, 500);
  }
});

export default app;
```


D1 Migrations System


Migration Workflow


```bash
# 1. Create migration
npx wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>

# 2. List unapplied migrations
npx wrangler d1 migrations list <DATABASE_NAME> --local
npx wrangler d1 migrations list <DATABASE_NAME> --remote

# 3. Apply migrations
npx wrangler d1 migrations apply <DATABASE_NAME> --local    # Test locally
npx wrangler d1 migrations apply <DATABASE_NAME> --remote   # Deploy to production
```

Migration File Naming


Migrations are automatically versioned:
migrations/
├── 0000_initial_schema.sql
├── 0001_add_users_table.sql
├── 0002_add_posts_table.sql
└── 0003_add_indexes.sql
Rules:
  • Files are executed in sequential order
  • Each migration runs once (tracked in the `d1_migrations` table)
  • Failed migrations roll back (transactional)
  • You can't modify or delete applied migrations
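The sequential numbering convention can be sketched as a small pure function; `nextMigrationName` is a hypothetical illustration of how the prefixes advance, not part of wrangler's API.

```typescript
// Hypothetical helper: find the highest existing four-digit prefix
// and produce the next migration filename, zero-padded.
function nextMigrationName(existing: string[], name: string): string {
  const next = existing.length === 0
    ? 0
    : Math.max(...existing.map(f => parseInt(f.slice(0, 4), 10))) + 1;
  return `${String(next).padStart(4, "0")}_${name}.sql`;
}
```

For example, with the four files listed above already applied, the next migration created would start with `0004_`.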

Custom Migration Configuration


```jsonc
{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "my-database",
      "database_id": "<UUID>",
      "migrations_dir": "db/migrations",        // Custom directory (default: migrations/)
      "migrations_table": "schema_migrations"   // Custom tracking table (default: d1_migrations)
    }
  ]
}
```

Migration Best Practices


✅ Always Do:


```sql
-- Use IF NOT EXISTS to make migrations idempotent
CREATE TABLE IF NOT EXISTS users (...);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Run PRAGMA optimize after schema changes
PRAGMA optimize;

-- Write data migrations as plain statements; D1 already runs each
-- migration inside a transaction, so no explicit BEGIN/COMMIT is needed
UPDATE users SET updated_at = unixepoch() WHERE updated_at IS NULL;
```

❌ Never Do:


```sql
-- DON'T include BEGIN TRANSACTION at start (D1 handles this)
BEGIN TRANSACTION;  -- ❌ Remove this

-- DON'T use MySQL/PostgreSQL syntax
ALTER TABLE users MODIFY COLUMN email VARCHAR(255);  -- ❌ Not SQLite

-- DON'T create tables without IF NOT EXISTS
CREATE TABLE users (...);  -- ❌ Fails if table exists
```

Handling Foreign Keys in Migrations


```sql
-- Temporarily disable foreign key checks during schema changes
PRAGMA defer_foreign_keys = true;

-- Make schema changes that would violate foreign keys
ALTER TABLE posts DROP COLUMN author_id;
ALTER TABLE posts ADD COLUMN user_id INTEGER REFERENCES users(user_id);

-- Foreign keys re-enabled automatically at end of migration
```


D1 Workers API


Type Definitions


```typescript
// Add to env.d.ts or worker-configuration.d.ts
interface Env {
  DB: D1Database;
  // ... other bindings
}

// For Hono
type Bindings = {
  DB: D1Database;
};

const app = new Hono<{ Bindings: Bindings }>();
```

prepare() - Prepared Statements (PRIMARY METHOD)


Always use prepared statements for queries with user input.

```typescript
// Basic prepared statement
const stmt = env.DB.prepare('SELECT * FROM users WHERE user_id = ?');
const bound = stmt.bind(userId);
const result = await bound.first();

// Chained (most common pattern)
const user = await env.DB.prepare('SELECT * FROM users WHERE email = ?')
  .bind(email)
  .first();
```
Why use prepare():
  • ✅ Prevents SQL injection
  • ✅ Can be reused with different parameters
  • ✅ Better performance (query plan caching)
  • ✅ Type-safe with TypeScript
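To make the injection risk concrete, here is a sketch that only builds SQL text and never executes it; `naiveQuery` is a deliberately bad, hypothetical example of interpolation, not anything in the D1 API.

```typescript
// Naive interpolation: user input becomes part of the SQL text itself
function naiveQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

const hostile = "x' OR '1'='1";
const injected = naiveQuery(hostile);
// injected is now: SELECT * FROM users WHERE email = 'x' OR '1'='1'
// which matches every row. With prepare('... WHERE email = ?').bind(hostile),
// the input stays a single string value and cannot alter the query structure.
```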

Query Result Methods


.all() - Get All Rows


```typescript
const { results, meta } = await env.DB.prepare(
  'SELECT * FROM users WHERE created_at > ?'
)
.bind(timestamp)
.all();

console.log(results);  // Array of rows
console.log(meta);     // { duration, rows_read, rows_written }
```

.first() - Get First Row


```typescript
// Returns first row or null
const user = await env.DB.prepare('SELECT * FROM users WHERE email = ?')
  .bind('user@example.com')
  .first();

if (!user) {
  return c.json({ error: 'Not found' }, 404);
}
```

.first(column) - Get Single Column Value


```typescript
// Returns the value of a specific column from first row
const count = await env.DB.prepare('SELECT COUNT(*) as total FROM users')
  .first('total');

console.log(count);  // 42 (just the number, not an object)
```

.run() - Execute Without Results


```typescript
// For INSERT, UPDATE, DELETE
const { success, meta } = await env.DB.prepare(
  'INSERT INTO users (email, username, created_at) VALUES (?, ?, ?)'
)
.bind(email, username, Date.now())
.run();

console.log(meta);  // { duration, rows_read, rows_written, last_row_id }
```

batch() - Execute Multiple Queries


CRITICAL FOR PERFORMANCE: Use batch() to reduce latency.

```typescript
// Prepare multiple statements
const stmt1 = env.DB.prepare('SELECT * FROM users WHERE user_id = ?').bind(1);
const stmt2 = env.DB.prepare('SELECT * FROM users WHERE user_id = ?').bind(2);
const stmt3 = env.DB.prepare('SELECT * FROM posts WHERE user_id = ?').bind(1);

// Execute all in one round trip
const results = await env.DB.batch([stmt1, stmt2, stmt3]);

console.log(results[0].results);  // Users query 1
console.log(results[1].results);  // Users query 2
console.log(results[2].results);  // Posts query
```
Batch Behavior:
  • Executes sequentially (in order) within a single implicit transaction
  • If any statement fails, the entire batch is rolled back
  • Much faster than individual queries (single network round trip)
Batch Use Cases:

```typescript
// ✅ Insert multiple rows efficiently
const inserts = users.map(user =>
  env.DB.prepare('INSERT INTO users (email, username) VALUES (?, ?)')
    .bind(user.email, user.username)
);
await env.DB.batch(inserts);

// ✅ Fetch related data in a single round trip
const [user, posts, comments] = await env.DB.batch([
  env.DB.prepare('SELECT * FROM users WHERE user_id = ?').bind(userId),
  env.DB.prepare('SELECT * FROM posts WHERE user_id = ?').bind(userId),
  env.DB.prepare('SELECT * FROM comments WHERE user_id = ?').bind(userId)
]);
```

exec() - Execute Raw SQL (AVOID IN PRODUCTION)


```typescript
// Only for migrations, maintenance, and one-off tasks
const result = await env.DB.exec(`
  SELECT * FROM users;
  SELECT * FROM posts;
`);

console.log(result);  // { count: 2, duration: 5 }
```
NEVER use exec() for:
  • ❌ Queries with user input (SQL injection risk)
  • ❌ Production queries (poor performance)
  • ❌ Queries that need results (exec doesn't return data)
ONLY use exec() for:
  • ✅ Running migration SQL files locally
  • ✅ One-off maintenance tasks
  • ✅ Database initialization scripts


Query Patterns


Basic CRUD Operations


Create (INSERT)


```typescript
// Single insert
const { meta } = await env.DB.prepare(
  'INSERT INTO users (email, username, created_at) VALUES (?, ?, ?)'
)
.bind(email, username, Date.now())
.run();

const newUserId = meta.last_row_id;

// Bulk insert with batch()
const users = [
  { email: 'user1@example.com', username: 'user1' },
  { email: 'user2@example.com', username: 'user2' }
];

const inserts = users.map(u =>
  env.DB.prepare('INSERT INTO users (email, username, created_at) VALUES (?, ?, ?)')
    .bind(u.email, u.username, Date.now())
);

await env.DB.batch(inserts);
```

Read (SELECT)


```typescript
// Single row
const user = await env.DB.prepare('SELECT * FROM users WHERE user_id = ?')
  .bind(userId)
  .first();

// Multiple rows
const { results } = await env.DB.prepare(
  'SELECT * FROM users WHERE created_at > ? ORDER BY created_at DESC LIMIT ?'
)
.bind(timestamp, 10)
.all();

// Count
const count = await env.DB.prepare('SELECT COUNT(*) as total FROM users')
  .first('total');

// Exists check
const exists = await env.DB.prepare('SELECT 1 FROM users WHERE email = ? LIMIT 1')
  .bind(email)
  .first();

if (exists) {
  // Email already registered
}
```

Update (UPDATE)


```typescript
const { meta } = await env.DB.prepare(
  'UPDATE users SET username = ?, updated_at = ? WHERE user_id = ?'
)
.bind(newUsername, Date.now(), userId)
.run();

const rowsAffected = meta.rows_written;

if (rowsAffected === 0) {
  // User not found
}
```

Delete (DELETE)


```typescript
const { meta } = await env.DB.prepare('DELETE FROM users WHERE user_id = ?')
  .bind(userId)
  .run();

const rowsDeleted = meta.rows_written;
```

Advanced Queries


Pagination


```typescript
app.get('/api/users', async (c) => {
  const page = parseInt(c.req.query('page') || '1');
  const limit = parseInt(c.req.query('limit') || '20');
  const offset = (page - 1) * limit;

  const [countResult, usersResult] = await c.env.DB.batch([
    c.env.DB.prepare('SELECT COUNT(*) as total FROM users'),
    c.env.DB.prepare('SELECT * FROM users ORDER BY created_at DESC LIMIT ? OFFSET ?')
      .bind(limit, offset)
  ]);

  const total = countResult.results[0].total as number;
  const users = usersResult.results;

  return c.json({
    users,
    pagination: {
      page,
      limit,
      total,
      pages: Math.ceil(total / limit)
    }
  });
});
```
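The offset/pages arithmetic in the handler can be factored into a pure helper to make the math explicit; `paginate` is a hypothetical name, not part of any API.

```typescript
// Pure pagination math: offset feeds the SQL query, pages feeds the response
function paginate(total: number, page: number, limit: number) {
  return {
    page,
    limit,
    total,
    offset: (page - 1) * limit,
    pages: Math.ceil(total / limit),
  };
}
```

For example, `paginate(42, 2, 20)` yields an offset of 20 and 3 pages.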

Joins


```typescript
const { results } = await env.DB.prepare(`
  SELECT
    posts.*,
    users.username as author_name,
    users.email as author_email
  FROM posts
  INNER JOIN users ON posts.user_id = users.user_id
  WHERE posts.published = ?
  ORDER BY posts.created_at DESC
  LIMIT ?
`)
.bind(1, 10)
.all();
```

Transactions (Batch Pattern)


D1 doesn't support interactive multi-statement transactions from Workers; batch() is the supported alternative, executing its statements sequentially inside a single implicit transaction:

```typescript
// Transfer credits between users (pseudo-transaction)
await env.DB.batch([
  env.DB.prepare('UPDATE users SET credits = credits - ? WHERE user_id = ?')
    .bind(amount, fromUserId),
  env.DB.prepare('UPDATE users SET credits = credits + ? WHERE user_id = ?')
    .bind(amount, toUserId),
  env.DB.prepare('INSERT INTO transactions (from_user, to_user, amount) VALUES (?, ?, ?)')
    .bind(fromUserId, toUserId, amount)
]);
```

Note: If any statement fails, the entire batch rolls back, which gives the transfer transaction-like atomicity.


Error Handling


Error Types


```typescript
try {
  const result = await env.DB.prepare('SELECT * FROM users WHERE user_id = ?')
    .bind(userId)
    .first();
} catch (error: any) {
  // D1 errors have a message property
  const errorMessage = error.message;

  if (errorMessage.includes('D1_ERROR')) {
    // D1-specific error
  } else if (errorMessage.includes('D1_EXEC_ERROR')) {
    // SQL syntax error
  } else if (errorMessage.includes('D1_TYPE_ERROR')) {
    // Type mismatch (e.g., undefined instead of null)
  } else if (errorMessage.includes('D1_COLUMN_NOTFOUND')) {
    // Column doesn't exist
  }

  console.error('Database error:', errorMessage);
  return c.json({ error: 'Database operation failed' }, 500);
}
```

Common Errors and Fixes


"Statement too long"


```typescript
// ❌ DON'T: Single massive INSERT
await env.DB.exec(`
  INSERT INTO users (email) VALUES
    ('user1@example.com'),
    ('user2@example.com'),
    ... // 1000 more rows
`);

// ✅ DO: Break into batches
const batchSize = 100;
for (let i = 0; i < users.length; i += batchSize) {
  const batch = users.slice(i, i + batchSize);
  const inserts = batch.map(u =>
    env.DB.prepare('INSERT INTO users (email) VALUES (?)').bind(u.email)
  );
  await env.DB.batch(inserts);
}
```
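The slicing loop can be factored into a reusable helper (a hypothetical `chunk`, shown here as a sketch) so any oversized write is split before calling batch():

```typescript
// Split an array into fixed-size slices; the last slice may be shorter
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch:
// for (const part of chunk(users, 100)) {
//   await env.DB.batch(part.map(u =>
//     env.DB.prepare('INSERT INTO users (email) VALUES (?)').bind(u.email)));
// }
```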

"Too many requests queued"


```typescript
// ❌ DON'T: Fire off many individual queries
for (const user of users) {
  await env.DB.prepare('INSERT INTO users (email) VALUES (?)').bind(user.email).run();
}

// ✅ DO: Use batch()
const inserts = users.map(u =>
  env.DB.prepare('INSERT INTO users (email) VALUES (?)').bind(u.email)
);
await env.DB.batch(inserts);
```

"D1_TYPE_ERROR" (undefined vs null)


```typescript
// ❌ DON'T: Use undefined
await env.DB.prepare('INSERT INTO users (email, bio) VALUES (?, ?)')
  .bind(email, undefined);  // ❌ D1 doesn't support undefined

// ✅ DO: Use null for optional values
await env.DB.prepare('INSERT INTO users (email, bio) VALUES (?, ?)')
  .bind(email, bio || null);
```
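One caveat with `bio || null`: it also nulls out legitimate falsy values such as `''` or `0`. A nullish-coalescing helper (a hypothetical `toSqlParam`, sketched here) converts only `undefined`:

```typescript
// Convert undefined to null for bind(); leave every other value intact,
// including falsy-but-valid values like '' and 0
function toSqlParam<T>(value: T | null | undefined): T | null {
  return value ?? null;
}

// Usage sketch: .bind(email, toSqlParam(bio))
```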

Retry Logic


```typescript
async function queryWithRetry<T>(
  queryFn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await queryFn();
    } catch (error: any) {
      const message = error.message;

      // Retry on transient errors
      const isRetryable =
        message.includes('Network connection lost') ||
        message.includes('storage caused object to be reset') ||
        message.includes('reset because its code was updated');

      if (!isRetryable || attempt === maxRetries - 1) {
        throw error;
      }

      // Exponential backoff
      const delay = Math.min(1000 * Math.pow(2, attempt), 5000);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }

  throw new Error('Retry logic failed');
}

// Usage
const user = await queryWithRetry(() =>
  env.DB.prepare('SELECT * FROM users WHERE user_id = ?')
    .bind(userId)
    .first()
);
```


Performance Optimization


Indexes


Indexes dramatically improve query performance for filtered columns.

When to Create Indexes


```sql
-- ✅ Index columns used in WHERE clauses
CREATE INDEX idx_users_email ON users(email);

-- ✅ Index foreign keys
CREATE INDEX idx_posts_user_id ON posts(user_id);

-- ✅ Index columns used for sorting
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);

-- ✅ Multi-column indexes for complex queries
CREATE INDEX idx_posts_user_published ON posts(user_id, published);
```

Test Index Usage


```sql
-- Check if index is being used
EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'user@example.com';

-- Should see: SEARCH users USING INDEX idx_users_email
```

Partial Indexes


```sql
-- Index only non-deleted records
CREATE INDEX idx_users_active ON users(email) WHERE deleted = 0;

-- Index only published posts
CREATE INDEX idx_posts_published ON posts(created_at DESC) WHERE published = 1;
```

PRAGMA optimize


Run after creating indexes or making schema changes:

```sql
-- In your migration file
CREATE INDEX idx_users_email ON users(email);
PRAGMA optimize;
```

Or from your Worker:

```typescript
await env.DB.exec('PRAGMA optimize');
```

Query Optimization Tips


typescript
// ✅ Use specific columns instead of SELECT *
const users = await env.DB.prepare(
  'SELECT user_id, email, username FROM users'
).all();

// ✅ Use LIMIT to prevent scanning entire table
const latest = await env.DB.prepare(
  'SELECT * FROM posts ORDER BY created_at DESC LIMIT 10'
).all();

// ✅ Use indexes for WHERE conditions
// Create index first: CREATE INDEX idx_users_email ON users(email)
const user = await env.DB.prepare('SELECT * FROM users WHERE email = ?')
  .bind(email)
  .first();

// ❌ Avoid functions in WHERE (can't use indexes)
// Bad: WHERE LOWER(email) = 'user@example.com'
// Good: WHERE email = 'user@example.com' (store email lowercase)
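The LIMIT tip pairs well with keyset (cursor) pagination, which seeks on an indexed column instead of using OFFSET, so later pages stay as fast as the first. A minimal sketch as a pure helper; the `pageQuery` name and the `posts` columns are illustrative, not a D1 API:

```typescript
// Build a keyset-paginated query: seek past the last seen created_at
// instead of OFFSET, so the index on created_at does the work.
function pageQuery(
  cursor: number | null,
  pageSize = 10
): { sql: string; params: number[] } {
  if (cursor === null) {
    // First page: no cursor yet
    return {
      sql: 'SELECT post_id, title, created_at FROM posts ORDER BY created_at DESC LIMIT ?',
      params: [pageSize],
    };
  }
  // Subsequent pages: seek strictly past the cursor
  return {
    sql: 'SELECT post_id, title, created_at FROM posts WHERE created_at < ? ORDER BY created_at DESC LIMIT ?',
    params: [cursor, pageSize],
  };
}

// Hypothetical Worker usage:
//   const { sql, params } = pageQuery(lastSeenCreatedAt);
//   const page = await env.DB.prepare(sql).bind(...params).all();
```

The caller feeds the `created_at` of the last row back in as the next cursor.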


Local Development

Local vs Remote Databases

bash
# Create local database (automatic on first --local command)
npx wrangler d1 migrations apply my-database --local

# Query local database
npx wrangler d1 execute my-database --local --command "SELECT * FROM users"

# Query remote database
npx wrangler d1 execute my-database --remote --command "SELECT * FROM users"

Local Database Location

Local D1 databases are stored in:
.wrangler/state/v3/d1/miniflare-D1DatabaseObject/<database_id>.sqlite

Seeding Local Database

bash
# Create seed file
cat > seed.sql << 'EOF'
INSERT INTO users (email, username, created_at) VALUES
  ('alice@example.com', 'alice', 1698000000),
  ('bob@example.com', 'bob', 1698000060);
EOF

# Apply seed
npx wrangler d1 execute my-database --local --file=seed.sql
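If seed data originates in code, a small generator can emit the same SQL while doubling single quotes per SQL string rules. The `seedUsers` helper is illustrative; string-built SQL is only reasonable for offline seed files like this, never for runtime queries, where prepared statements with `.bind()` remain the rule:

```typescript
// Generate a seed INSERT statement, escaping single quotes by doubling them.
// Offline seed-file use only; runtime queries must use prepared statements.
function seedUsers(
  rows: { email: string; username: string; createdAt: number }[]
): string {
  const esc = (s: string) => s.replace(/'/g, "''");
  const values = rows
    .map((r) => `('${esc(r.email)}', '${esc(r.username)}', ${r.createdAt})`)
    .join(',\n  ');
  return `INSERT INTO users (email, username, created_at) VALUES\n  ${values};`;
}
```

Write the result to `seed.sql` and apply it with the same `wrangler d1 execute --file` command shown above.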
---

Drizzle ORM (Optional)

While D1 works great with raw SQL, some developers prefer ORMs. Drizzle ORM supports D1:
bash
npm install drizzle-orm
npm install -D drizzle-kit
Note: Drizzle adds complexity and another layer to learn. For most D1 use cases, raw SQL with wrangler is simpler and more direct. Only consider Drizzle if you:
  • Prefer TypeScript schema definitions over SQL
  • Want auto-complete for queries
  • Are building a very large application with complex schemas

Best Practices Summary

✅ Always Do:

  1. Use prepared statements with `.bind()` for user input
  2. Use `.batch()` for multiple queries (reduces latency)
  3. Create indexes on frequently queried columns
  4. Run `PRAGMA optimize` after schema changes
  5. Use `IF NOT EXISTS` in migrations for idempotency
  6. Test migrations locally before applying to production
  7. Handle errors gracefully with try/catch
  8. Use `null` instead of `undefined` for optional values
  9. Validate input before binding to queries
  10. Check `meta.rows_written` after UPDATE/DELETE

❌ Never Do:

  1. Never use `.exec()` with user input (SQL injection risk)
  2. Never hardcode `database_id` in public repos
  3. Never use `undefined` in bind parameters (causes D1_TYPE_ERROR)
  4. Never fire individual queries in loops (use batch instead)
  5. Never forget `LIMIT` on potentially large result sets
  6. Never use `SELECT *` in production (specify columns)
  7. Never include `BEGIN TRANSACTION` in migration files
  8. Never modify applied migrations (create new ones)
  9. Never skip error handling on database operations
  10. Never assume queries succeed (always check results)
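The `undefined` rule is easy to enforce mechanically with a tiny sanitizer run over parameters before they reach `.bind()`. A sketch; the `toBindable` helper is mine, not a D1 API:

```typescript
// Types D1 accepts as bound parameters.
type Bindable = string | number | boolean | null | ArrayBuffer;

// Map undefined to null so optional values never reach .bind() as
// undefined, which D1 rejects with D1_TYPE_ERROR.
function toBindable(values: (Bindable | undefined)[]): Bindable[] {
  return values.map((v) => (v === undefined ? null : v));
}

// Hypothetical Worker usage:
//   await env.DB.prepare('INSERT INTO users (email, bio) VALUES (?, ?)')
//     .bind(...toBindable([email, maybeBio]))
//     .run();
```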

Known Issues Prevented

| Issue | Description | How to Avoid |
|---|---|---|
| Statement too long | Large INSERT statements exceed D1 limits | Break into batches of 100-250 rows |
| Transaction conflicts | `BEGIN TRANSACTION` in migration files | Remove BEGIN/COMMIT (D1 handles this) |
| Foreign key violations | Schema changes break foreign key constraints | Use `PRAGMA defer_foreign_keys = true` |
| Rate limiting / queue overload | Too many individual queries | Use `batch()` instead of loops |
| Memory limit exceeded | Query loads too much data into memory | Add LIMIT, paginate results, shard queries |
| Type mismatch errors | Using `undefined` instead of `null` | Always use `null` for optional values |
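The "Statement too long" mitigation can be sketched as a chunking helper that splits rows into batches and builds one parameterized multi-row INSERT per batch, ready to hand to `.batch()`. The `chunkedInserts` name and the 50-row default are my assumptions (with two columns per row, 50 rows keeps the bound-parameter count conservative):

```typescript
// Split rows into chunks and build one parameterized multi-row INSERT
// per chunk, keeping each statement small and its parameter count low.
function chunkedInserts(
  rows: [string, string][], // [email, username] pairs
  chunkSize = 50
): { sql: string; params: string[] }[] {
  const out: { sql: string; params: string[] }[] = [];
  for (let i = 0; i < rows.length; i += chunkSize) {
    const chunk = rows.slice(i, i + chunkSize);
    const placeholders = chunk.map(() => '(?, ?)').join(', ');
    out.push({
      sql: `INSERT INTO users (email, username) VALUES ${placeholders}`,
      params: chunk.flat(),
    });
  }
  return out;
}

// Hypothetical Worker usage with batch():
//   const stmts = chunkedInserts(rows).map(({ sql, params }) =>
//     env.DB.prepare(sql).bind(...params)
//   );
//   await env.DB.batch(stmts);
```

Sending the chunks through a single `batch()` call also avoids the "too many individual queries" row above.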

Wrangler Commands Reference

bash
# Database management
wrangler d1 create <DATABASE_NAME>
wrangler d1 list
wrangler d1 delete <DATABASE_NAME>
wrangler d1 info <DATABASE_NAME>

# Migrations
wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>
wrangler d1 migrations list <DATABASE_NAME> --local|--remote
wrangler d1 migrations apply <DATABASE_NAME> --local|--remote

# Execute queries
wrangler d1 execute <DATABASE_NAME> --local|--remote --command "SELECT * FROM users"
wrangler d1 execute <DATABASE_NAME> --local|--remote --file=./query.sql

# Time Travel (view historical data)
wrangler d1 time-travel info <DATABASE_NAME> --timestamp "2025-10-20"
wrangler d1 time-travel restore <DATABASE_NAME> --timestamp "2025-10-20"

---

Official Documentation