Cloudflare D1 Database
Status: Production Ready ✅
Last Updated: 2026-01-20
Dependencies: cloudflare-worker-base (for Worker setup)
Latest Versions: wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0
Recent Updates (2025):
- Nov 2025: Jurisdiction support (data localization compliance), remote bindings GA (wrangler@4.37.0+), automatic resource provisioning
- Sept 2025: Automatic read-only query retries (up to 2 attempts), remote bindings public beta
- July 2025: Storage limits increased (250GB → 1TB), alpha backup access removed, REST API 50-500ms faster
- May 2025: HTTP API permissions security fix (D1:Edit required for writes)
- April 2025: Read replication public beta (read-only replicas across regions)
- Feb 2025: PRAGMA optimize support, read-only access permission bug fix
- Jan 2025: Free tier limits enforcement (Feb 10 start), Worker API 40-60% faster queries
Quick Start (5 Minutes)
1. Create D1 Database
```bash
# Create a new D1 database
npx wrangler d1 create my-database
```

Output includes the `database_id` - save this!

```
✅ Successfully created DB 'my-database'

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "<UUID>"
```

2. Configure Bindings
Add to your `wrangler.jsonc`:

```jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2025-10-11",
  "d1_databases": [
    {
      "binding": "DB",                    // Available as env.DB in your Worker
      "database_name": "my-database",     // Name from wrangler d1 create
      "database_id": "<UUID>",            // ID from wrangler d1 create
      "preview_database_id": "local-db"   // For local development
    }
  ]
}
```

CRITICAL:
- `binding` is how you access the database in code (`env.DB`)
- `database_id` is the production database UUID
- `preview_database_id` is for local dev (can be any string)
- Never commit real `database_id` values to public repos - use environment variables or secrets
3. Create Your First Migration
```bash
# Create migration file
npx wrangler d1 migrations create my-database create_users_table

# This creates: migrations/0001_create_users_table.sql
```

Edit the migration file:

```sql
-- migrations/0001_create_users_table.sql
DROP TABLE IF EXISTS users;
CREATE TABLE IF NOT EXISTS users (
  user_id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  username TEXT NOT NULL,
  created_at INTEGER NOT NULL,
  updated_at INTEGER
);

-- Create index for common queries
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Optimize database
PRAGMA optimize;
```
4. Apply Migration
```bash
# Apply locally first (for testing)
npx wrangler d1 migrations apply my-database --local

# Apply to production when ready
npx wrangler d1 migrations apply my-database --remote
```

5. Query from Your Worker
```typescript
// src/index.ts
import { Hono } from 'hono';

type Bindings = {
  DB: D1Database;
};

const app = new Hono<{ Bindings: Bindings }>();

app.get('/api/users/:email', async (c) => {
  const email = c.req.param('email');
  try {
    // ALWAYS use prepared statements with bind()
    const result = await c.env.DB.prepare(
      'SELECT * FROM users WHERE email = ?'
    )
      .bind(email)
      .first();
    if (!result) {
      return c.json({ error: 'User not found' }, 404);
    }
    return c.json(result);
  } catch (error: any) {
    console.error('D1 Error:', error.message);
    return c.json({ error: 'Database error' }, 500);
  }
});

export default app;
```
D1 Migrations System
Migration Workflow
```bash
# 1. Create migration
npx wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>

# 2. List unapplied migrations
npx wrangler d1 migrations list <DATABASE_NAME> --local
npx wrangler d1 migrations list <DATABASE_NAME> --remote

# 3. Apply migrations
npx wrangler d1 migrations apply <DATABASE_NAME> --local   # Test locally
npx wrangler d1 migrations apply <DATABASE_NAME> --remote  # Deploy to production
```

Migration File Naming
Migrations are automatically versioned:

```
migrations/
├── 0000_initial_schema.sql
├── 0001_add_users_table.sql
├── 0002_add_posts_table.sql
└── 0003_add_indexes.sql
```

Rules:
- Files are executed in sequential order
- Each migration runs once (tracked in the `d1_migrations` table)
- Failed migrations roll back (transactional)
- Can't modify or delete applied migrations
Custom Migration Configuration
```jsonc
{
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "my-database",
      "database_id": "<UUID>",
      "migrations_dir": "db/migrations",        // Custom directory (default: migrations/)
      "migrations_table": "schema_migrations"   // Custom tracking table (default: d1_migrations)
    }
  ]
}
```
Migration Best Practices
✅ Always Do:
```sql
-- Use IF NOT EXISTS to make migrations idempotent
CREATE TABLE IF NOT EXISTS users (...);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- Run PRAGMA optimize after schema changes
PRAGMA optimize;

-- Use UPPERCASE BEGIN/END in triggers (lowercase fails remotely)
CREATE TRIGGER update_timestamp
AFTER UPDATE ON users
FOR EACH ROW
BEGIN
  UPDATE users SET updated_at = unixepoch() WHERE user_id = NEW.user_id;
END;

-- Use transactions for data migrations
BEGIN TRANSACTION;
UPDATE users SET updated_at = unixepoch() WHERE updated_at IS NULL;
COMMIT;
```
❌ Never Do:
```sql
-- DON'T include BEGIN TRANSACTION at start of migration file (D1 handles this)
BEGIN TRANSACTION; -- ❌ Remove this

-- DON'T use lowercase begin/end in triggers (works locally, FAILS remotely)
CREATE TRIGGER my_trigger
AFTER INSERT ON table
begin -- ❌ Use BEGIN (uppercase)
UPDATE ...;
end; -- ❌ Use END (uppercase)

-- DON'T use MySQL/PostgreSQL syntax
ALTER TABLE users MODIFY COLUMN email VARCHAR(255); -- ❌ Not SQLite

-- DON'T create tables without IF NOT EXISTS
CREATE TABLE users (...); -- ❌ Fails if table exists
```
Handling Foreign Keys in Migrations
```sql
-- Temporarily disable foreign key checks during schema changes
PRAGMA defer_foreign_keys = true;

-- Make schema changes that would violate foreign keys
ALTER TABLE posts DROP COLUMN author_id;
ALTER TABLE posts ADD COLUMN user_id INTEGER REFERENCES users(user_id);

-- Foreign keys are re-enabled automatically at the end of the migration
```
D1 Workers API
Type Definitions:

```typescript
interface Env { DB: D1Database; }
type Bindings = { DB: D1Database; };
const app = new Hono<{ Bindings: Bindings }>();
```

prepare() - PRIMARY METHOD (always use for user input):

```typescript
const user = await env.DB.prepare('SELECT * FROM users WHERE email = ?')
  .bind(email).first();
```

Why: Prevents SQL injection, reusable, better performance, type-safe

Query Result Methods:
- `.all()` → `{ results, meta }` - Get all rows
- `.first()` → row object or `null` - Get first row
- `.first('column')` → value - Get a single column value (e.g., COUNT)
- `.run()` → `{ success, meta }` - Execute INSERT/UPDATE/DELETE (no result rows)

batch() - CRITICAL FOR PERFORMANCE:

```typescript
const results = await env.DB.batch([
  env.DB.prepare('SELECT * FROM users WHERE user_id = ?').bind(1),
  env.DB.prepare('SELECT * FROM posts WHERE user_id = ?').bind(1)
]);
```

- Executes sequentially, single network round trip
- If one fails, remaining statements don't execute
- Use for: bulk inserts, fetching related data

exec() - AVOID IN PRODUCTION:

```typescript
await env.DB.exec('SELECT * FROM users;'); // Only for migrations/maintenance
```

- ❌ Never use with user input (SQL injection risk)
- ✅ Only use for: migration files, one-off tasks
Query Patterns
Basic CRUD Operations
```typescript
// CREATE
const { meta } = await env.DB.prepare(
  'INSERT INTO users (email, username, created_at) VALUES (?, ?, ?)'
).bind(email, username, Date.now()).run();
const newUserId = meta.last_row_id;

// READ (single)
const user = await env.DB.prepare('SELECT * FROM users WHERE user_id = ?')
  .bind(userId).first();

// READ (multiple)
const { results } = await env.DB.prepare('SELECT * FROM users LIMIT ?')
  .bind(10).all();

// UPDATE
const updateResult = await env.DB.prepare('UPDATE users SET username = ? WHERE user_id = ?')
  .bind(newUsername, userId).run();
const rowsAffected = updateResult.meta.rows_written;

// DELETE
await env.DB.prepare('DELETE FROM users WHERE user_id = ?').bind(userId).run();

// COUNT
const count = await env.DB.prepare('SELECT COUNT(*) as total FROM users').first('total');

// EXISTS check
const exists = await env.DB.prepare('SELECT 1 FROM users WHERE email = ? LIMIT 1')
  .bind(email).first();
```
Pagination Pattern
```typescript
const page = parseInt(c.req.query('page') || '1');
const limit = 20;
const offset = (page - 1) * limit;

const [countResult, usersResult] = await c.env.DB.batch([
  c.env.DB.prepare('SELECT COUNT(*) as total FROM users'),
  c.env.DB.prepare('SELECT * FROM users ORDER BY created_at DESC LIMIT ? OFFSET ?')
    .bind(limit, offset)
]);

return c.json({
  users: usersResult.results,
  pagination: { page, limit, total: countResult.results[0].total }
});
```
Batch Pattern (Pseudo-Transactions)
```typescript
// D1 doesn't support multi-statement transactions, but batch() provides sequential execution
await env.DB.batch([
  env.DB.prepare('UPDATE users SET credits = credits - ? WHERE user_id = ?').bind(amount, fromUserId),
  env.DB.prepare('UPDATE users SET credits = credits + ? WHERE user_id = ?').bind(amount, toUserId),
  env.DB.prepare('INSERT INTO transactions (from_user, to_user, amount) VALUES (?, ?, ?)').bind(fromUserId, toUserId, amount)
]);
// If any statement fails, the batch stops (transaction-like behavior)
```
Error Handling
Common Error Types:
- `D1_ERROR` - General D1 error (often transient)
- `D1_EXEC_ERROR` - SQL syntax error or limitations
- `D1_TYPE_ERROR` - Type mismatch (`undefined` instead of `null`)
- `D1_COLUMN_NOTFOUND` - Column doesn't exist

Common Errors and Fixes:
| Error | Cause | Solution |
|---|---|---|
| Statement too long | Large INSERT with 1000+ rows | Break into batches of 100-250 rows using `batch()` |
| Network connection lost | Transient failure or large import | Implement retry logic (see below) or break into smaller chunks |
| Too many requests queued | Individual queries in a loop | Use `batch()` instead |
| D1_TYPE_ERROR | Using `undefined` in `bind()` | Use `null` for optional values |
| Transaction conflicts | BEGIN TRANSACTION in migration | Remove BEGIN/COMMIT (D1 handles automatically) |
| Foreign key violations | Schema changes break constraints | Use `PRAGMA defer_foreign_keys = true` |
| D1_EXEC_ERROR: incomplete input | Multi-line SQL in `D1Database.exec()` | Use prepared statements or external .sql files (Issue #9133) |
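The "Statement too long" fix above can be sketched as a small chunking helper. This is an illustrative sketch, not part of the D1 API: the `chunk` and `bulkInsertUsers` names and the 100-row batch size are assumptions, and the minimal `Stmt`/`Db` types stand in for the real `@cloudflare/workers-types` interfaces.

```typescript
// Minimal structural types so the sketch stands alone
// (the real D1PreparedStatement/D1Database come from @cloudflare/workers-types)
type Stmt = { bind: (...values: unknown[]) => Stmt };
type Db = { prepare: (sql: string) => Stmt; batch: (stmts: Stmt[]) => Promise<unknown> };

// Split rows into fixed-size chunks so each INSERT stays well under D1's statement limits
function chunk<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    out.push(rows.slice(i, i + size));
  }
  return out;
}

// One multi-row INSERT per chunk, all sent in a single batch() round trip
async function bulkInsertUsers(db: Db, users: { email: string; username: string }[]) {
  const statements = chunk(users, 100).map((group) => {
    const placeholders = group.map(() => '(?, ?, ?)').join(', ');
    const values = group.flatMap((u) => [u.email, u.username, Date.now()]);
    return db
      .prepare(`INSERT INTO users (email, username, created_at) VALUES ${placeholders}`)
      .bind(...values);
  });
  await db.batch(statements);
}
```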
Transient Errors Are Expected Behavior
CRITICAL: D1 queries fail transiently with errors like "Network connection lost", "storage operation exceeded timeout", or "isolate exceeded its memory limit". Cloudflare documentation states "a handful of errors every several hours is not unexpected" and recommends implementing retry logic. (D1 FAQ)
Common Transient Errors:
- `D1_ERROR: Network connection lost`
- `D1 DB storage operation exceeded timeout which caused object to be reset`
- `Internal error while starting up D1 DB storage caused object to be reset`
- `D1 DB's isolate exceeded its memory limit and was reset`
Retry Pattern (Recommended):

```typescript
async function queryWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 100
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error: any) {
      const isTransient = error.message?.includes('Network connection lost') ||
        error.message?.includes('exceeded timeout') ||
        error.message?.includes('exceeded its memory limit');
      if (!isTransient || i === maxRetries - 1) throw error;
      // Exponential backoff
      await new Promise(r => setTimeout(r, baseDelay * Math.pow(2, i)));
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const user = await queryWithRetry(() =>
  env.DB.prepare('SELECT * FROM users WHERE email = ?').bind(email).first()
);
```

Automatic Retries (Sept 2025):
D1 automatically retries read-only queries (SELECT, EXPLAIN, WITH) up to 2 times on retryable errors. Check `meta.total_attempts` in the response for the retry count. Write queries should still implement custom retry logic.
Performance Optimization
Index Best Practices:
- ✅ Index columns in WHERE clauses: `CREATE INDEX idx_users_email ON users(email)`
- ✅ Index foreign keys: `CREATE INDEX idx_posts_user_id ON posts(user_id)`
- ✅ Index columns used for sorting: `CREATE INDEX idx_posts_created_at ON posts(created_at DESC)`
- ✅ Multi-column indexes: `CREATE INDEX idx_posts_user_published ON posts(user_id, published)`
- ✅ Partial indexes: `CREATE INDEX idx_users_active ON users(email) WHERE deleted = 0`
- ✅ Test with: `EXPLAIN QUERY PLAN SELECT ...`

PRAGMA optimize (Feb 2025):

```sql
CREATE INDEX idx_users_email ON users(email);
PRAGMA optimize; -- Run after schema changes
```

Query Optimization:
- ✅ Select specific columns (not `SELECT *`)
- ✅ Always include LIMIT on large result sets
- ✅ Use indexes for WHERE conditions
- ❌ Avoid functions in WHERE (can't use indexes): `WHERE LOWER(email) = ?` → store lowercase instead
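One way to follow the "avoid functions in WHERE" rule is to normalize values once at write time, so equality lookups hit the index directly. A minimal sketch; the `normalizeEmail` helper is an assumption, not a D1 feature:

```typescript
// Normalize at write time so equality lookups hit the index on users(email)
// instead of forcing WHERE LOWER(email) = ? (which cannot use the index)
function normalizeEmail(email: string): string {
  return email.trim().toLowerCase();
}

// Usage sketch:
//   await env.DB.prepare('INSERT INTO users (email, username, created_at) VALUES (?, ?, ?)')
//     .bind(normalizeEmail(rawEmail), username, Date.now()).run();
//   await env.DB.prepare('SELECT * FROM users WHERE email = ?')
//     .bind(normalizeEmail(lookupEmail)).first();
```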
Local Development
Local vs Remote (Nov 2025 - Remote Bindings GA):

```bash
# Local database (automatic creation)
npx wrangler d1 migrations apply my-database --local
npx wrangler d1 execute my-database --local --command "SELECT * FROM users"

# Remote database
npx wrangler d1 execute my-database --remote --command "SELECT * FROM users"

# Remote bindings (wrangler@4.37.0+) - connect local Worker to deployed D1
# Add to wrangler.jsonc: { "binding": "DB", "remote": true }
```
Remote Bindings Connection Timeout
Known Issue: When using remote D1 bindings (`{ "remote": true }`), the connection times out after exactly 1 hour of inactivity. (GitHub Issue #10801)

Error:

```
D1_ERROR: Failed to parse body as JSON, got: error code: 1031
```

Workaround:

```typescript
// Keep connection alive with a periodic query (optional)
setInterval(async () => {
  try {
    await env.DB.prepare('SELECT 1').first();
  } catch (e) {
    console.log('Connection keepalive failed:', e);
  }
}, 30 * 60 * 1000); // Every 30 minutes
```

Or simply restart your dev server if queries fail after 1 hour of inactivity.
Multi-Worker Development (Service Bindings)
When running multiple Workers with service bindings in a single `wrangler dev` process, the auxiliary worker cannot access its D1 binding because both workers share the same persistence path. (GitHub Issue #11121)

Solution: Use the `--persist-to` flag to point all workers to the same persistence store:

```bash
# Apply worker2 migrations to worker1's persistence path
cd worker2
npx wrangler d1 migrations apply DB --local --persist-to=../worker1/.wrangler/state

# Now both workers can access D1
cd ../worker1
npx wrangler dev # Both workers share the same D1 data
```

**Local Database Location:**
`.wrangler/state/v3/d1/miniflare-D1DatabaseObject/<database_id>.sqlite`

**Seed Local Database:**

```bash
npx wrangler d1 execute my-database --local --file=seed.sql
```

Scaling & Limitations
10 GB Database Size Limit - Sharding Pattern
D1 has a hard 10 GB per database limit, but Cloudflare supports up to 50,000 databases per Worker. Use sharding to scale beyond 10 GB. (DEV.to Article)
Hash-based sharding example (10 databases = 100 GB capacity):

```typescript
// Hash user ID to shard number
function getShardId(userId: string): number {
  const hash = Array.from(userId).reduce((acc, char) =>
    ((acc << 5) - acc) + char.charCodeAt(0), 0
  );
  return Math.abs(hash) % 10; // 10 shards
}

// wrangler.jsonc - Define 10 database shards:
//   "d1_databases": [
//     { "binding": "DB_SHARD_0", "database_id": "..." },
//     { "binding": "DB_SHARD_1", "database_id": "..." },
//     // ... up to DB_SHARD_9
//   ]

// Get the correct shard for a user
function getUserDb(env: Env, userId: string): D1Database {
  const shardId = getShardId(userId);
  return env[`DB_SHARD_${shardId}` as keyof Env];
}

// Query the user's data from the correct shard
const db = getUserDb(env, userId);
const user = await db.prepare('SELECT * FROM users WHERE user_id = ?')
  .bind(userId).first();
```

Alternative: Tenant-based sharding (one database per customer/tenant)
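The tenant-based alternative can be sketched as a lookup from tenant to binding name. The registry and binding names below are hypothetical, assuming one `d1_databases` entry per tenant in wrangler.jsonc:

```typescript
// Hypothetical tenant registry: tenant slug -> D1 binding name
const TENANT_BINDINGS: Record<string, string> = {
  acme: 'DB_TENANT_ACME',
  globex: 'DB_TENANT_GLOBEX',
};

// Resolve the binding name for a tenant; fail loudly for unknown tenants
function getTenantBinding(tenant: string): string {
  const binding = TENANT_BINDINGS[tenant];
  if (!binding) throw new Error(`Unknown tenant: ${tenant}`);
  return binding;
}

// Usage sketch (env typing is an assumption):
//   const db = env[getTenantBinding(tenant) as keyof Env] as D1Database;
```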
2 MB Row Size Limit - Hybrid D1 + R2 Pattern
D1 has a 2 MB row size limit. For large content (HTML, JSON, images), use R2 for storage and D1 for metadata. (DEV.to Article)
Error:

```
database row size exceeded maximum allowed size
```

Solution - Hybrid storage pattern:

```typescript
// 1. Store large content in R2
const contentKey = `pages/${crypto.randomUUID()}.html`;
await env.R2_BUCKET.put(contentKey, largeHtmlContent);

// 2. Store metadata in D1
await env.DB.prepare(`
  INSERT INTO pages (url, r2_key, size, created_at)
  VALUES (?, ?, ?, ?)
`).bind(url, contentKey, largeHtmlContent.length, Date.now()).run();

// 3. Retrieve content
const page = await env.DB.prepare('SELECT * FROM pages WHERE url = ?')
  .bind(url).first();
if (page) {
  const content = await env.R2_BUCKET.get(page.r2_key);
  const html = await content.text();
}
```
Database Portability - PostgreSQL Migration Considerations
If you plan to migrate from D1 (SQLite) to Hyperdrive (PostgreSQL) later, use consistent lowercase naming. PostgreSQL is case-sensitive for table and column names, while SQLite is not. (Mats' Blog)
```sql
-- Use lowercase for portability
CREATE TABLE users (user_id INTEGER, email TEXT);
CREATE INDEX idx_users_email ON users(email);
-- NOT: CREATE TABLE Users (UserId INTEGER, Email TEXT);
```
FTS5 Full-Text Search
Case Sensitivity: Always use lowercase "fts5" when creating virtual tables. Uppercase may cause "not authorized" errors. (Cloudflare Community)
```sql
-- Correct
CREATE VIRTUAL TABLE search_index USING fts5(
  title,
  content,
  tokenize = 'porter unicode61'
);

-- Query the index
SELECT * FROM search_index WHERE search_index MATCH 'query terms';
```

Export Limitation: Databases with FTS5 virtual tables cannot be exported using `wrangler d1 export`. Drop virtual tables before export, then recreate after import. (GitHub Issue #9519)
Large Import/Export Operations
Network Timeout on Large Imports: Files with 5000+ INSERT statements may fail with "Network connection lost" error. (GitHub Issue #11958)
Solutions:
- Break large files into smaller chunks (<5000 statements per file)
- Use the `batch()` API from a Worker instead of the wrangler CLI
- Import to local first, then use Time Travel to restore to remote
- Reduce individual statement size (100-250 rows per INSERT)

Windows-Specific Issue: On Windows 11, large SQL files exported from D1 may fail to re-import with "HashIndex detected hash table inconsistency". (GitHub Issue #11708)

Workaround: Delete the `.wrangler` directory before executing:

```bash
rm -rf .wrangler
npx wrangler d1 execute db-name --file=database.sql
```
Best Practices Summary
✅ Always Do:
- Use prepared statements with for user input
.bind() - Use for multiple queries (reduces latency)
.batch() - Create indexes on frequently queried columns
- Run after schema changes
PRAGMA optimize - Use in migrations for idempotency
IF NOT EXISTS - Test migrations locally before applying to production
- Handle errors gracefully with try/catch
- Use instead of
nullfor optional valuesundefined - Validate input before binding to queries
- Check after UPDATE/DELETE
meta.rows_written
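The first two rules can be sketched together in one Worker helper. This is a hedged sketch: the `DB` binding name matches the Quick Start, but the table and column names and the `getUserWithActivity` helper are illustrative, and the local interfaces merely mirror the shapes provided by `@cloudflare/workers-types`.

```typescript
// Minimal structural types so the sketch compiles without
// @cloudflare/workers-types; the real Workers runtime supplies these.
interface D1Result { results: unknown[] }
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  first(): Promise<unknown>;
}
interface D1Database {
  prepare(sql: string): D1PreparedStatement;
  batch(statements: D1PreparedStatement[]): Promise<D1Result[]>;
}
interface Env { DB: D1Database }

// Fetch one user plus related rows: user input only ever reaches SQL
// through .bind(), never string concatenation.
async function getUserWithActivity(env: Env, userId: string) {
  const user = await env.DB
    .prepare("SELECT id, name, email FROM users WHERE id = ?")
    .bind(userId)
    .first();

  // batch() sends several prepared statements in a single round trip.
  const [posts, comments] = await env.DB.batch([
    env.DB.prepare("SELECT id, title FROM posts WHERE author_id = ? LIMIT 20").bind(userId),
    env.DB.prepare("SELECT id, body FROM comments WHERE author_id = ? LIMIT 20").bind(userId),
  ]);

  return { user, posts: posts.results, comments: comments.results };
}
```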
❌ Never Do:
- Never use `.exec()` with user input (SQL injection risk)
- Never hardcode `database_id` in public repos
- Never use `undefined` in bind parameters (causes D1_TYPE_ERROR)
- Never fire individual queries in loops (use batch instead)
- Never forget `LIMIT` on potentially large result sets
- Never use `SELECT *` in production (specify columns)
- Never include `BEGIN TRANSACTION` in migration files
- Never modify applied migrations (create new ones)
- Never skip error handling on database operations
- Never assume queries succeed (always check results)
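One of these rules is mechanical enough to enforce in code: D1 rejects `undefined` bind values with `D1_TYPE_ERROR`. A tiny normalizer (an illustrative helper, not part of the D1 API) converts them to `null` before binding:

```typescript
// D1 accepts null but rejects undefined in bind parameters
// (D1_TYPE_ERROR), so normalize values before calling .bind().
function toBindable(values: unknown[]): unknown[] {
  return values.map((v) => (v === undefined ? null : v));
}
```

Used as `stmt.bind(...toBindable([id, maybeName]))`, optional values never reach the driver as `undefined`.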
Known Issues Prevented
This skill prevents 14 documented D1 errors:
| Issue # | Error/Issue | Description | How to Avoid | Source |
|---|---|---|---|---|
| #1 | Statement too long | Large INSERT statements exceed D1 limits | Break into batches of 100-250 rows using `batch()` | Existing |
| #2 | Transaction conflicts | `BEGIN TRANSACTION` in migration files | Remove BEGIN/COMMIT (D1 handles automatically) | Existing |
| #3 | Foreign key violations | Schema changes break foreign key constraints | Use `PRAGMA defer_foreign_keys = true` in migrations | Existing |
| #4 | Rate limiting / queue overload | Too many individual queries | Use `batch()` | Existing |
| #5 | Memory limit exceeded | Query loads too much data into memory | Add LIMIT, paginate results, shard queries | Existing |
| #6 | Type mismatch errors | Using `undefined` instead of `null` in `bind()` | Always use `null` for optional values | Existing |
| #7 | Lowercase BEGIN in triggers | Triggers with lowercase `begin`/`end` fail remotely | Use uppercase `BEGIN`/`END` keywords (Issue #10998) | TIER 1 |
| #8 | Remote bindings timeout | Connection times out after 1 hour of inactivity | Restart dev server or implement keepalive pattern (Issue #10801) | TIER 1 |
| #9 | Service bindings D1 access | Auxiliary worker can't access D1 in multi-worker dev | Run all workers in a single `wrangler dev` session | TIER 1 |
| #10 | Transient network errors | Random "Network connection lost" failures | Implement exponential backoff retry logic (D1 FAQ) | TIER 1 |
| #11 | FTS5 breaks export | Databases with FTS5 virtual tables can't export | Drop virtual tables before export, recreate after import (Issue #9519) | TIER 1 |
| #12 | Multi-line SQL in exec() | D1Database.exec() fails on multi-line SQL | Use prepared statements or external .sql files (Issue #9133) | TIER 1 |
| #13 | 10 GB database limit | Single database limited to 10 GB | Implement sharding across multiple databases (Community) | TIER 2 |
| #14 | 2 MB row size limit | Rows exceeding 2 MB fail | Use hybrid D1 + R2 storage pattern (Community) | TIER 2 |
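The exponential-backoff advice for transient failures (issue #10) can be sketched as a generic wrapper; `withRetry` and its signature are illustrative helpers, not part of the D1 client:

```typescript
// Retry an async operation with exponential backoff, but only for
// transient errors such as "Network connection lost".
async function withRetry<T>(
  run: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await run();
    } catch (err) {
      lastError = err;
      const message = err instanceof Error ? err.message : String(err);
      // Non-transient errors should surface immediately.
      if (!message.includes("Network connection lost")) throw err;
      // Back off: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A D1 call would then be wrapped as `withRetry(() => env.DB.prepare(sql).bind(id).first())`.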
Wrangler Commands Reference
```bash
# Database management
wrangler d1 create <DATABASE_NAME>
wrangler d1 list
wrangler d1 delete <DATABASE_NAME>
wrangler d1 info <DATABASE_NAME>

# Migrations
wrangler d1 migrations create <DATABASE_NAME> <MIGRATION_NAME>
wrangler d1 migrations list <DATABASE_NAME> --local|--remote
wrangler d1 migrations apply <DATABASE_NAME> --local|--remote

# Execute queries
wrangler d1 execute <DATABASE_NAME> --local|--remote --command "SELECT * FROM users"
wrangler d1 execute <DATABASE_NAME> --local|--remote --file=./query.sql

# Time Travel (view historical data)
wrangler d1 time-travel info <DATABASE_NAME> --timestamp "2025-10-20"
wrangler d1 time-travel restore <DATABASE_NAME> --timestamp "2025-10-20"
```

---

Official Documentation
- D1 Overview: https://developers.cloudflare.com/d1/
- Get Started: https://developers.cloudflare.com/d1/get-started/
- Migrations: https://developers.cloudflare.com/d1/reference/migrations/
- Workers API: https://developers.cloudflare.com/d1/worker-api/
- Best Practices: https://developers.cloudflare.com/d1/best-practices/
- Wrangler Commands: https://developers.cloudflare.com/workers/wrangler/commands/#d1
Ready to build with D1! 🚀
Last verified: 2026-01-20 | Skill version: 3.0.0 | Changes: Added 8 new known issues from community research (TIER 1-2 findings): trigger case sensitivity, remote binding timeouts, multi-worker dev patterns, transient error handling, FTS5 limitations, sharding patterns, hybrid D1+R2 storage, and database portability considerations.