Cloudflare D1 Database

Status: Production Ready ✅
Last Updated: 2026-01-20
Dependencies: cloudflare-worker-base (for Worker setup)
Latest Versions: wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0
Recent Updates (2025):
  • Nov 2025: Jurisdiction support (data localization compliance), remote bindings GA (wrangler@4.37.0+), automatic resource provisioning
  • Sept 2025: Automatic read-only query retries (up to 2 attempts), remote bindings public beta
  • July 2025: Storage limits increased (250GB → 1TB), alpha backup access removed, REST API 50-500ms faster
  • May 2025: HTTP API permissions security fix (D1:Edit required for writes)
  • April 2025: Read replication public beta (read-only replicas across regions)
  • Feb 2025: PRAGMA optimize support, read-only access permission bug fix
  • Jan 2025: Free tier limits enforcement (Feb 10 start), Worker API 40-60% faster queries


Quick Start (5 Minutes)

1. Create D1 Database

```bash
# Create a new D1 database
npx wrangler d1 create my-database
```

Output includes `database_id` - save this!

```
✅ Successfully created DB 'my-database'

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "<UUID>"
```

2. Configure Bindings

Add to your `wrangler.jsonc`:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2025-10-11",
  "d1_databases": [
    {
      "binding": "DB",                    // Available as env.DB in your Worker
      "database_name": "my-database",      // Name from wrangler d1 create
      "database_id": "<UUID>",             // ID from wrangler d1 create
      "preview_database_id": "local-db"    // For local development
    }
  ]
}
```

**CRITICAL:**
  • `binding` is how you access the database in code (`env.DB`)
  • `database_id` is the production database UUID
  • `preview_database_id` is for local dev (can be any string)
  • Never commit real `database_id` values to public repos - use environment variables or secrets

3. Create Your First Migration

```bash
# Create migration file
npx wrangler d1 migrations create my-database create_users_table

# This creates: migrations/0001_create_users_table.sql
```

Edit the migration file:

```sql
-- migrations/0001_create_users_table.sql
DROP TABLE IF EXISTS users;
CREATE TABLE IF NOT EXISTS users (
  user_id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  username TEXT NOT NULL,
  created_at INTEGER NOT NULL,
  updated_at INTEGER
);

-- Create index for common queries
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Optimize database
PRAGMA optimize;
```

4. Apply Migration

```bash
# Apply locally first (for testing)
npx wrangler d1 migrations apply my-database --local

# Apply to production when ready
npx wrangler d1 migrations apply my-database --remote
```

5. Query from Your Worker

```typescript
// src/index.ts
import { Hono } from 'hono';

type Bindings = {
  DB: D1Database;
};

const app = new Hono<{ Bindings: Bindings }>();

app.get('/api/users/:email', async (c) => {
  const email = c.req.param('email');

  try {
    // ALWAYS use prepared statements with bind()
    const result = await c.env.DB.prepare(
      'SELECT * FROM users WHERE email = ?'
    )
    .bind(email)
    .first();

    if (!result) {
      return c.json({ error: 'User not found' }, 404);
    }

    return c.json(result);
  } catch (error: any) {
    console.error('D1 Error:', error.message);
    return c.json({ error: 'Database error' }, 500);
  }
});

export default app;
```
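Binding user input is safe from SQL injection, but validating it first still avoids wasted queries and garbage rows. A minimal sketch — the `isValidEmail` helper and its rules are illustrative assumptions, not part of D1 or Hono:

```typescript
// Hypothetical email validator to run before binding route params to a query.
// Deliberately simple rules: non-empty local part, '@', domain containing a dot.
function isValidEmail(email: string): boolean {
  if (email.length === 0 || email.length > 254) return false; // RFC 5321 length cap
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// In the route above, validate before touching the database:
// if (!isValidEmail(email)) return c.json({ error: 'Invalid email' }, 400);
```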


D1 Migrations System

Migration Workflow

```bash
# 1. Create migration
npx wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>

# 2. List unapplied migrations
npx wrangler d1 migrations list <DATABASE_NAME> --local
npx wrangler d1 migrations list <DATABASE_NAME> --remote

# 3. Apply migrations
npx wrangler d1 migrations apply <DATABASE_NAME> --local   # Test locally
npx wrangler d1 migrations apply <DATABASE_NAME> --remote  # Deploy to production
```

Migration File Naming

Migrations are automatically versioned:

```
migrations/
├── 0000_initial_schema.sql
├── 0001_add_users_table.sql
├── 0002_add_posts_table.sql
└── 0003_add_indexes.sql
```

**Rules:**
  • Files are executed in sequential order
  • Each migration runs once (tracked in the `d1_migrations` table)
  • Failed migrations roll back (transactional)
  • Applied migrations can't be modified or deleted
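The zero-padded sequential prefixes can be computed rather than typed by hand. A small sketch of that numbering scheme — this helper is purely illustrative; `wrangler d1 migrations create` already does this for you:

```typescript
// Compute the next migration filename from the existing ones, mimicking
// wrangler's zero-padded sequential scheme (0000_, 0001_, ...).
function nextMigrationName(existing: string[], label: string): string {
  const next = existing
    .map(f => parseInt(f.slice(0, 4), 10))   // leading 4-digit sequence number
    .filter(n => !Number.isNaN(n))
    .reduce((max, n) => Math.max(max, n + 1), 0);
  return `${String(next).padStart(4, '0')}_${label}.sql`;
}
```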

Custom Migration Configuration

```jsonc
{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "my-database",
      "database_id": "<UUID>",
      "migrations_dir": "db/migrations",        // Custom directory (default: migrations/)
      "migrations_table": "schema_migrations"   // Custom tracking table (default: d1_migrations)
    }
  ]
}
```

Migration Best Practices

✅ Always Do:

```sql
-- Use IF NOT EXISTS to make migrations idempotent
CREATE TABLE IF NOT EXISTS users (...);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Run PRAGMA optimize after schema changes
PRAGMA optimize;

-- Use UPPERCASE BEGIN/END in triggers (lowercase fails remotely)
CREATE TRIGGER update_timestamp
AFTER UPDATE ON users
FOR EACH ROW
BEGIN
  UPDATE users SET updated_at = unixepoch() WHERE user_id = NEW.user_id;
END;

-- Use transactions for data migrations
BEGIN TRANSACTION;
UPDATE users SET updated_at = unixepoch() WHERE updated_at IS NULL;
COMMIT;
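One pitfall when mixing the trigger above with Worker-side inserts: SQLite's `unixepoch()` returns seconds, while JavaScript's `Date.now()` returns milliseconds, so rows written from each side would disagree by a factor of 1000. A hedged sketch of keeping both sides in seconds — the helper name is an illustrative assumption:

```typescript
// Store timestamps in whole seconds to match SQLite's unixepoch().
function nowEpochSeconds(): number {
  return Math.floor(Date.now() / 1000);
}

// Worker side:  .bind(email, username, nowEpochSeconds())
// Trigger side: unixepoch() already returns seconds, so the columns stay comparable.
```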

❌ Never Do:

```sql
-- DON'T include BEGIN TRANSACTION at start of migration file (D1 handles this)
BEGIN TRANSACTION;  -- ❌ Remove this

-- DON'T use lowercase begin/end in triggers (works locally, FAILS remotely)
CREATE TRIGGER my_trigger
AFTER INSERT ON table
begin  -- ❌ Use BEGIN (uppercase)
  UPDATE ...;
end;   -- ❌ Use END (uppercase)

-- DON'T use MySQL/PostgreSQL syntax
ALTER TABLE users MODIFY COLUMN email VARCHAR(255);  -- ❌ Not SQLite

-- DON'T create tables without IF NOT EXISTS
CREATE TABLE users (...);  -- ❌ Fails if table exists
```

Handling Foreign Keys in Migrations

```sql
-- Temporarily disable foreign key checks during schema changes
PRAGMA defer_foreign_keys = true;

-- Make schema changes that would violate foreign keys
ALTER TABLE posts DROP COLUMN author_id;
ALTER TABLE posts ADD COLUMN user_id INTEGER REFERENCES users(user_id);

-- Foreign keys re-enabled automatically at end of migration
```

D1 Workers API

**Type Definitions:**

```typescript
interface Env { DB: D1Database; }
type Bindings = { DB: D1Database; };
const app = new Hono<{ Bindings: Bindings }>();
```

**prepare() - PRIMARY METHOD (always use for user input):**

```typescript
const user = await env.DB.prepare('SELECT * FROM users WHERE email = ?')
  .bind(email).first();
```

Why: Prevents SQL injection, reusable, better performance, type-safe

**Query Result Methods:**
  • `.all()` → `{ results, meta }` - Get all rows
  • `.first()` → row object or null - Get first row
  • `.first('column')` → value - Get single column value (e.g., COUNT)
  • `.run()` → `{ success, meta }` - Execute INSERT/UPDATE/DELETE (no results)

**batch() - CRITICAL FOR PERFORMANCE:**

```typescript
const results = await env.DB.batch([
  env.DB.prepare('SELECT * FROM users WHERE user_id = ?').bind(1),
  env.DB.prepare('SELECT * FROM posts WHERE user_id = ?').bind(1)
]);
```

  • Executes sequentially, single network round trip
  • If one statement fails, the remaining statements don't execute
  • Use for: bulk inserts, fetching related data

**exec() - AVOID IN PRODUCTION:**

```typescript
await env.DB.exec('SELECT * FROM users;'); // Only for migrations/maintenance
```

  • ❌ Never use with user input (SQL injection risk)
  • ✅ Only use for: migration files, one-off tasks
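Rows come back from `.first()` and `.all()` as loosely typed objects; a thin mapper keeps a renamed column from silently propagating `undefined` through the app. A sketch — the `User` shape is an assumption based on the schema earlier in this doc:

```typescript
interface User {
  user_id: number;
  email: string;
  username: string;
  created_at: number;
}

// Map a raw D1 row to a typed User, failing loudly on missing columns
// instead of letting undefined leak into the rest of the app.
function mapUserRow(row: Record<string, unknown>): User {
  for (const col of ['user_id', 'email', 'username', 'created_at']) {
    if (row[col] === undefined) throw new Error(`Missing column: ${col}`);
  }
  return {
    user_id: Number(row.user_id),
    email: String(row.email),
    username: String(row.username),
    created_at: Number(row.created_at),
  };
}
```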


Query Patterns

Basic CRUD Operations

```typescript
// CREATE
const insert = await env.DB.prepare(
  'INSERT INTO users (email, username, created_at) VALUES (?, ?, ?)'
).bind(email, username, Date.now()).run();
const newUserId = insert.meta.last_row_id;

// READ (single)
const user = await env.DB.prepare('SELECT * FROM users WHERE user_id = ?')
  .bind(userId).first();

// READ (multiple)
const { results } = await env.DB.prepare('SELECT * FROM users LIMIT ?')
  .bind(10).all();

// UPDATE
const update = await env.DB.prepare('UPDATE users SET username = ? WHERE user_id = ?')
  .bind(newUsername, userId).run();
const rowsAffected = update.meta.rows_written;

// DELETE
await env.DB.prepare('DELETE FROM users WHERE user_id = ?').bind(userId).run();

// COUNT
const count = await env.DB.prepare('SELECT COUNT(*) as total FROM users').first('total');

// EXISTS check
const exists = await env.DB.prepare('SELECT 1 FROM users WHERE email = ? LIMIT 1')
  .bind(email).first();
```

Pagination Pattern

```typescript
const page = parseInt(c.req.query('page') || '1');
const limit = 20;
const offset = (page - 1) * limit;

const [countResult, usersResult] = await c.env.DB.batch([
  c.env.DB.prepare('SELECT COUNT(*) as total FROM users'),
  c.env.DB.prepare('SELECT * FROM users ORDER BY created_at DESC LIMIT ? OFFSET ?')
    .bind(limit, offset)
]);

return c.json({
  users: usersResult.results,
  pagination: { page, limit, total: countResult.results[0].total }
});
```
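OFFSET pagination re-scans every skipped row on each page; for large tables, keyset (cursor) pagination stays fast because each page seeks directly to the last-seen row. A sketch under the users schema above — the `${created_at}:${user_id}` cursor format and the helpers are illustrative assumptions:

```typescript
// Encode/decode a cursor from the last row of the previous page.
function encodeCursor(createdAt: number, userId: number): string {
  return `${createdAt}:${userId}`;
}

function decodeCursor(cursor: string): { createdAt: number; userId: number } | null {
  const m = cursor.match(/^(\d+):(\d+)$/);
  return m ? { createdAt: Number(m[1]), userId: Number(m[2]) } : null;
}

// Query sketch: rows strictly "after" the cursor in (created_at DESC, user_id DESC)
// order, using SQLite row-value comparison:
//   SELECT * FROM users
//   WHERE (created_at, user_id) < (?, ?)
//   ORDER BY created_at DESC, user_id DESC
//   LIMIT 20
```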

Batch Pattern (Pseudo-Transactions)

```typescript
// D1 doesn't support multi-statement transactions, but batch() provides sequential execution
await env.DB.batch([
  env.DB.prepare('UPDATE users SET credits = credits - ? WHERE user_id = ?').bind(amount, fromUserId),
  env.DB.prepare('UPDATE users SET credits = credits + ? WHERE user_id = ?').bind(amount, toUserId),
  env.DB.prepare('INSERT INTO transactions (from_user, to_user, amount) VALUES (?, ?, ?)').bind(fromUserId, toUserId, amount)
]);
// If any statement fails, the batch stops (transaction-like behavior)
```


Error Handling

**Common Error Types:**
  • `D1_ERROR` - General D1 error (often transient)
  • `D1_EXEC_ERROR` - SQL syntax error or limitations
  • `D1_TYPE_ERROR` - Type mismatch (undefined instead of null)
  • `D1_COLUMN_NOTFOUND` - Column doesn't exist

**Common Errors and Fixes:**

| Error | Cause | Solution |
| --- | --- | --- |
| Statement too long | Large INSERT with 1000+ rows | Break into batches of 100-250 using `batch()` |
| Network connection lost | Transient failure or large import | Implement retry logic (see below) or break into smaller chunks |
| Too many requests queued | Individual queries in a loop | Use `batch()` instead of a loop |
| `D1_TYPE_ERROR` | Using `undefined` in bind | Use `null` for optional values: `.bind(email, bio \|\| null)` |
| Transaction conflicts | BEGIN TRANSACTION in migration | Remove BEGIN/COMMIT (D1 handles this automatically) |
| Foreign key violations | Schema changes break constraints | Use `PRAGMA defer_foreign_keys = true` |
| `D1_EXEC_ERROR: incomplete input` | Multi-line SQL in `D1Database.exec()` | Use prepared statements or external .sql files (Issue #9133) |
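The `D1_TYPE_ERROR` case above is easy to hit when optional fields pass through a handler unchanged; a tiny normalizer makes every bind argument D1-safe. A sketch — this helper is illustrative, not a D1 API:

```typescript
// D1 rejects undefined bind values; coerce them to null before binding.
function toBindable(values: unknown[]): unknown[] {
  return values.map(v => v === undefined ? null : v);
}

// Usage sketch:
// stmt.bind(...toBindable([email, bio, avatarUrl]))
```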

Transient Errors Are Expected Behavior

CRITICAL: D1 queries fail transiently with errors like "Network connection lost", "storage operation exceeded timeout", or "isolate exceeded its memory limit". Cloudflare documentation states "a handful of errors every several hours is not unexpected" and recommends implementing retry logic. (D1 FAQ)

**Common Transient Errors:**
  • D1_ERROR: Network connection lost
  • D1 DB storage operation exceeded timeout which caused object to be reset
  • Internal error while starting up D1 DB storage caused object to be reset
  • D1 DB's isolate exceeded its memory limit and was reset

**Retry Pattern (Recommended):**

```typescript
async function queryWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 100
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error: any) {
      const isTransient = error.message?.includes('Network connection lost') ||
                         error.message?.includes('exceeded timeout') ||
                         error.message?.includes('exceeded its memory limit');

      if (!isTransient || i === maxRetries - 1) throw error;

      // Exponential backoff
      await new Promise(r => setTimeout(r, baseDelay * Math.pow(2, i)));
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const user = await queryWithRetry(() =>
  env.DB.prepare('SELECT * FROM users WHERE email = ?').bind(email).first()
);
```

**Automatic Retries (Sept 2025):** D1 automatically retries read-only queries (SELECT, EXPLAIN, WITH) up to 2 times on retryable errors. Check `meta.total_attempts` in the response for the retry count. Write queries should still implement custom retry logic.


Performance Optimization

**Index Best Practices:**
  • ✅ Index columns in WHERE clauses: `CREATE INDEX idx_users_email ON users(email)`
  • ✅ Index foreign keys: `CREATE INDEX idx_posts_user_id ON posts(user_id)`
  • ✅ Index columns used for sorting: `CREATE INDEX idx_posts_created_at ON posts(created_at DESC)`
  • ✅ Multi-column indexes: `CREATE INDEX idx_posts_user_published ON posts(user_id, published)`
  • ✅ Partial indexes: `CREATE INDEX idx_users_active ON users(email) WHERE deleted = 0`
  • ✅ Test with: `EXPLAIN QUERY PLAN SELECT ...`

**PRAGMA optimize (Feb 2025):**

```sql
CREATE INDEX idx_users_email ON users(email);
PRAGMA optimize;  -- Run after schema changes
```

**Query Optimization:**
  • ✅ Select specific columns (not `SELECT *`)
  • ✅ Always include LIMIT on large result sets
  • ✅ Use indexes for WHERE conditions
  • ❌ Avoid functions in WHERE (they prevent index use): `WHERE LOWER(email)` → store lowercase instead

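The "store lowercase instead" advice is easiest to enforce at write time, so equality lookups stay index-friendly. A sketch — the helper name is an assumption:

```typescript
// Normalize emails once, at write time, so equality lookups can use the
// idx_users_email index instead of an unindexable WHERE LOWER(email) = ?.
function normalizeEmail(email: string): string {
  return email.trim().toLowerCase();
}
```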

Local Development

**Local vs Remote (Nov 2025 - Remote Bindings GA):**

```bash
# Local database (automatic creation)
npx wrangler d1 migrations apply my-database --local
npx wrangler d1 execute my-database --local --command "SELECT * FROM users"

# Remote database
npx wrangler d1 execute my-database --remote --command "SELECT * FROM users"
```

Remote bindings (wrangler@4.37.0+) connect a local Worker to the deployed D1 database. Add to `wrangler.jsonc`:

```jsonc
{ "binding": "DB", "remote": true }
```

Remote Bindings Connection Timeout

**Known Issue:** When using remote D1 bindings (`{ "remote": true }`), the connection times out after exactly 1 hour of inactivity. (GitHub Issue #10801)

Error:

```
D1_ERROR: Failed to parse body as JSON, got: error code: 1031
```

Workaround:

```typescript
// Keep connection alive with a periodic query (optional)
setInterval(async () => {
  try {
    await env.DB.prepare('SELECT 1').first();
  } catch (e) {
    console.log('Connection keepalive failed:', e);
  }
}, 30 * 60 * 1000); // Every 30 minutes
```

Or simply restart your dev server if queries fail after 1 hour of inactivity.

Multi-Worker Development (Service Bindings)

When running multiple Workers with service bindings in a single `wrangler dev` process, the auxiliary worker cannot access its D1 binding because both workers share the same persistence path. (GitHub Issue #11121)

**Solution:** Use the `--persist-to` flag to point all workers at the same persistence store:

```bash
# Apply worker2 migrations to worker1's persistence path
cd worker2
npx wrangler d1 migrations apply DB --local --persist-to=../worker1/.wrangler/state

# Now both workers can access D1
cd ../worker1
npx wrangler dev  # Both workers share the same D1 data
```

**Local Database Location:**
`.wrangler/state/v3/d1/miniflare-D1DatabaseObject/<database_id>.sqlite`

**Seed Local Database:**

```bash
npx wrangler d1 execute my-database --local --file=seed.sql
```


Scaling & Limitations

10 GB Database Size Limit - Sharding Pattern

D1 has a hard 10 GB per-database limit, but Cloudflare supports up to 50,000 databases per Worker. Use sharding to scale beyond 10 GB. (DEV.to Article)

**Hash-based sharding example (10 databases = 100 GB capacity):**

```jsonc
// wrangler.jsonc - Define 10 database shards
{
  "d1_databases": [
    { "binding": "DB_SHARD_0", "database_id": "..." },
    { "binding": "DB_SHARD_1", "database_id": "..." },
    { "binding": "DB_SHARD_2", "database_id": "..." },
    // ... up to DB_SHARD_9
  ]
}
```

```typescript
// Hash user ID to shard number
function getShardId(userId: string): number {
  const hash = Array.from(userId).reduce((acc, char) =>
    ((acc << 5) - acc) + char.charCodeAt(0), 0
  );
  return Math.abs(hash) % 10; // 10 shards
}

// Get correct shard for user
function getUserDb(env: Env, userId: string): D1Database {
  const shardId = getShardId(userId);
  return env[`DB_SHARD_${shardId}`];
}

// Query user's data from the correct shard
const db = getUserDb(env, userId);
const user = await db.prepare('SELECT * FROM users WHERE user_id = ?')
  .bind(userId).first();
```

**Alternative:** Tenant-based sharding (one database per customer/tenant)
Alternative: Tenant-based sharding (one database per customer/tenant)
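Before committing to a shard count, it is worth sanity-checking that the hash is deterministic and in range — rehashing the same user onto two different shards would split their data. A small check reusing the same hash function (the sample IDs are arbitrary):

```typescript
// Same hash as in the sharding example above.
function getShardId(userId: string): number {
  const hash = Array.from(userId).reduce((acc, char) =>
    ((acc << 5) - acc) + char.charCodeAt(0), 0
  );
  return Math.abs(hash) % 10; // 10 shards
}

// Deterministic: the same ID always lands on the same shard.
const ids = ['user-1', 'user-2', 'a1b2c3', ''];
const stable = ids.every(id => getShardId(id) === getShardId(id));

// In range: every shard ID falls in [0, 10).
const inRange = ids.every(id => getShardId(id) >= 0 && getShardId(id) < 10);
```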

2 MB Row Size Limit - Hybrid D1 + R2 Pattern

D1 has a 2 MB row size limit. For large content (HTML, JSON, images), use R2 for storage and D1 for metadata. (DEV.to Article)

Error:

```
database row size exceeded maximum allowed size
```

**Solution - Hybrid storage pattern:**

```typescript
// 1. Store large content in R2
const contentKey = `pages/${crypto.randomUUID()}.html`;
await env.R2_BUCKET.put(contentKey, largeHtmlContent);

// 2. Store metadata in D1
await env.DB.prepare(`
  INSERT INTO pages (url, r2_key, size, created_at)
  VALUES (?, ?, ?, ?)
`).bind(url, contentKey, largeHtmlContent.length, Date.now()).run();

// 3. Retrieve content
const page = await env.DB.prepare('SELECT * FROM pages WHERE url = ?')
  .bind(url).first();

if (page) {
  const content = await env.R2_BUCKET.get(page.r2_key);
  const html = content ? await content.text() : null; // R2 get() returns null for missing keys
}
```
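Whether a payload needs the R2 path can be decided up front by byte size. A sketch — the constant mirrors the 2 MB limit above, and the 80% headroom factor is an illustrative assumption, not a documented threshold:

```typescript
const D1_ROW_LIMIT_BYTES = 2 * 1024 * 1024; // 2 MB row size limit

// Decide inline-in-D1 vs offload-to-R2 by UTF-8 byte length, not string length
// (multi-byte characters make .length undercount bytes).
function shouldOffloadToR2(content: string, headroom = 0.8): boolean {
  const bytes = new TextEncoder().encode(content).length;
  return bytes > D1_ROW_LIMIT_BYTES * headroom;
}
```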

Database Portability - PostgreSQL Migration Considerations

If you plan to migrate from D1 (SQLite) to Hyperdrive (PostgreSQL) later, use consistent lowercase naming. PostgreSQL folds unquoted identifiers to lowercase and treats quoted ones case-sensitively, while SQLite identifiers are case-insensitive. (Mats' Blog)

```sql
-- Use lowercase for portability
CREATE TABLE users (user_id INTEGER, email TEXT);
CREATE INDEX idx_users_email ON users(email);

-- NOT: CREATE TABLE Users (UserId INTEGER, Email TEXT);
```

FTS5 Full-Text Search

**Case Sensitivity:** Always use lowercase "fts5" when creating virtual tables. Uppercase may cause "not authorized" errors. (Cloudflare Community)

```sql
-- Correct
CREATE VIRTUAL TABLE search_index USING fts5(
  title,
  content,
  tokenize = 'porter unicode61'
);

-- Query the index
SELECT * FROM search_index WHERE search_index MATCH 'query terms';
```

**Export Limitation:** Databases with FTS5 virtual tables cannot be exported using `wrangler d1 export`. Drop virtual tables before export, then recreate them after import. (GitHub Issue #9519)

Large Import/Export Operations

Network Timeout on Large Imports: Files with 5000+ INSERT statements may fail with a "Network connection lost" error. (GitHub Issue #11958)
Solutions:
  1. Break large files into smaller chunks (<5000 statements per file)
  2. Use the `batch()` API from a Worker instead of the wrangler CLI
  3. Import to local first, then use Time Travel to restore to remote
  4. Reduce individual statement size (100-250 rows per INSERT)
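Option 2 can be sketched as follows. The `users` table and column list are illustrative; 50 rows per statement is a conservative choice intended to keep each statement within D1's per-query bound-parameter limits at two parameters per row:

```typescript
// Split rows into fixed-size groups.
export function chunk<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}

// Build one multi-row INSERT per group, then send them all in a
// single batch() call (one round trip instead of one per statement).
export async function importUsers(env: { DB: any }, rows: { id: number; email: string }[]) {
  const stmts = chunk(rows, 50).map(group =>
    env.DB
      .prepare(`INSERT INTO users (id, email) VALUES ${group.map(() => '(?, ?)').join(', ')}`)
      .bind(...group.flatMap(r => [r.id, r.email]))
  );
  await env.DB.batch(stmts);
}
```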
Windows-Specific Issue: On Windows 11, large SQL files exported from D1 may fail to re-import with "HashIndex detected hash table inconsistency". (GitHub Issue #11708)
Workaround: Delete the `.wrangler` directory before executing:

```bash
rm -rf .wrangler
npx wrangler d1 execute db-name --file=database.sql
```


Best Practices Summary

✅ Always Do:

  1. Use prepared statements with `.bind()` for user input
  2. Use `.batch()` for multiple queries (reduces latency)
  3. Create indexes on frequently queried columns
  4. Run `PRAGMA optimize` after schema changes
  5. Use `IF NOT EXISTS` in migrations for idempotency
  6. Test migrations locally before applying to production
  7. Handle errors gracefully with try/catch
  8. Use `null` instead of `undefined` for optional values
  9. Validate input before binding to queries
  10. Check `meta.rows_written` after UPDATE/DELETE
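Several of these practices compose naturally in one handler. A minimal sketch combining items 1, 7, 8, and 10; the `users` table, its columns, and the `updateEmail` helper are illustrative:

```typescript
// Practice 8: D1 rejects `undefined` bind parameters (D1_TYPE_ERROR);
// map optional values to null before binding.
export function toBindable(value: unknown): unknown {
  return value === undefined ? null : value;
}

export async function updateEmail(env: { DB: any }, userId: number, email?: string) {
  try {
    // Practice 1: prepared statement with bound parameters.
    const result = await env.DB
      .prepare('UPDATE users SET email = ? WHERE user_id = ?')
      .bind(toBindable(email), userId)
      .run();
    // Practice 10: confirm the UPDATE actually wrote a row.
    if (result.meta.rows_written === 0) {
      throw new Error(`No user with id ${userId}`);
    }
    return result;
  } catch (err) {
    // Practice 7: surface database errors instead of assuming success.
    console.error('updateEmail failed:', err);
    throw err;
  }
}
```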

❌ Never Do:

  1. Never use `.exec()` with user input (SQL injection risk)
  2. Never hardcode `database_id` in public repos
  3. Never use `undefined` in bind parameters (causes D1_TYPE_ERROR)
  4. Never fire individual queries in loops (use batch instead)
  5. Never forget `LIMIT` on potentially large result sets
  6. Never use `SELECT *` in production (specify columns)
  7. Never include `BEGIN TRANSACTION` in migration files
  8. Never modify applied migrations (create new ones)
  9. Never skip error handling on database operations
  10. Never assume queries succeed (always check results)


Known Issues Prevented

This skill prevents 14 documented D1 errors:
| Issue # | Error/Issue | Description | How to Avoid | Source |
|---|---|---|---|---|
| #1 | Statement too long | Large INSERT statements exceed D1 limits | Break into batches of 100-250 rows using `batch()` | Existing |
| #2 | Transaction conflicts | `BEGIN TRANSACTION` in migration files | Remove BEGIN/COMMIT (D1 handles automatically) | Existing |
| #3 | Foreign key violations | Schema changes break foreign key constraints | Use `PRAGMA defer_foreign_keys = true` in migrations | Existing |
| #4 | Rate limiting / queue overload | Too many individual queries | Use `batch()` instead of loops | Existing |
| #5 | Memory limit exceeded | Query loads too much data into memory | Add LIMIT, paginate results, shard queries | Existing |
| #6 | Type mismatch errors | Using `undefined` instead of `null` in bind() | Always use `null` for optional values | Existing |
| #7 | Lowercase BEGIN in triggers | Triggers with lowercase `begin/end` fail remotely | Use uppercase `BEGIN/END` keywords (Issue #10998) | TIER 1 |
| #8 | Remote bindings timeout | Connection times out after 1 hour of inactivity | Restart dev server or implement keepalive pattern (Issue #10801) | TIER 1 |
| #9 | Service bindings D1 access | Auxiliary worker can't access D1 in multi-worker dev | Use `--persist-to` flag to share persistence path (Issue #11121) | TIER 1 |
| #10 | Transient network errors | Random "Network connection lost" failures | Implement exponential backoff retry logic (D1 FAQ) | TIER 1 |
| #11 | FTS5 breaks export | Databases with FTS5 virtual tables can't export | Drop virtual tables before export, recreate after import (Issue #9519) | TIER 1 |
| #12 | Multi-line SQL in exec() | D1Database.exec() fails on multi-line SQL | Use prepared statements or external .sql files (Issue #9133) | TIER 1 |
| #13 | 10 GB database limit | Single database limited to 10 GB | Implement sharding across multiple databases (Community) | TIER 2 |
| #14 | 2 MB row size limit | Rows exceeding 2 MB fail | Use hybrid D1 + R2 storage pattern (Community) | TIER 2 |
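The mitigation for issue #10 can be sketched as a small retry wrapper. The delay schedule and the `withRetries` helper are illustrative, not a D1 API:

```typescript
// Exponential backoff schedule: 100ms, 200ms, 400ms, ...
export function backoffDelayMs(attempt: number, baseMs = 100): number {
  return baseMs * 2 ** attempt;
}

// Retry a query on failure, waiting longer between each attempt.
// Intended for transient "Network connection lost" errors; a real
// implementation might inspect the error before retrying.
export async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastError;
}

// Usage: withRetries(() => env.DB.prepare('SELECT 1').first());
```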


Wrangler Commands Reference


Database management

```bash
wrangler d1 create <DATABASE_NAME>
wrangler d1 list
wrangler d1 delete <DATABASE_NAME>
wrangler d1 info <DATABASE_NAME>
```

Migrations

```bash
wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>
wrangler d1 migrations list <DATABASE_NAME> --local|--remote
wrangler d1 migrations apply <DATABASE_NAME> --local|--remote
```

Execute queries

```bash
wrangler d1 execute <DATABASE_NAME> --local|--remote --command "SELECT * FROM users"
wrangler d1 execute <DATABASE_NAME> --local|--remote --file=./query.sql
```

Time Travel (view historical data)

```bash
wrangler d1 time-travel info <DATABASE_NAME> --timestamp "2025-10-20"
wrangler d1 time-travel restore <DATABASE_NAME> --timestamp "2025-10-20"
```

---

Official Documentation


Ready to build with D1! 🚀

Last verified: 2026-01-20 | Skill version: 3.0.0 | Changes: Added 8 new known issues from community research (TIER 1-2 findings): trigger case sensitivity, remote binding timeouts, multi-worker dev patterns, transient error handling, FTS5 limitations, sharding patterns, hybrid D1+R2 storage, and database portability considerations.
