cloudflare-r2-d1


Cloudflare R2, D1 & Storage Products


Comprehensive guide for Cloudflare's edge storage products: R2 (object storage), D1 (SQLite database), and KV (key-value store).

Sources


When to Use What


| Product | Best For | Limits |
| --- | --- | --- |
| R2 | Large files, media, user uploads, S3-compatible storage | No egress fees, 10GB free |
| D1 | Relational data, per-tenant databases, SQLite workloads | 10GB per database max |
| KV | Session data, config, API keys, high-read caching | 1 write/sec per key |
| Durable Objects | Real-time coordination, WebSockets, counters | Single-threaded per object |
Decision tree:
  • Need SQL queries? → D1
  • Storing files/blobs? → R2
  • High-read, low-write config? → KV
  • Real-time state coordination? → Durable Objects
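The decision tree above can be encoded as a toy routing function (illustrative only; `pickStorage` and its option names are invented for this example, not part of any Cloudflare API):

```javascript
// Illustrative encoding of the decision tree above. Invented names;
// real storage choice also weighs limits from the table.
function pickStorage({ needsSql, storesBlobs, highReadLowWrite, realtime }) {
  if (needsSql) return 'D1';
  if (storesBlobs) return 'R2';
  if (highReadLowWrite) return 'KV';
  if (realtime) return 'Durable Objects';
  return 'D1'; // default to relational storage when unsure
}
```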


D1 SQLite Database


Critical Limitations


<EXTREMELY-IMPORTANT> D1 has a **10GB maximum database size**. Design for horizontal sharding across multiple smaller databases (per-user, per-tenant). </EXTREMELY-IMPORTANT>
| Limit | Value |
| --- | --- |
| Max database size | 10 GB |
| Max connections per Worker | 6 simultaneous |
| Max databases per Worker | ~5,000 bindings |
| Import file size | 5 GB |
| JavaScript number precision | 53-bit (int64 values may lose precision) |

Performance Characteristics


  • Single-threaded: Each D1 database processes queries sequentially
  • Throughput formula: If avg query = 1ms → ~1,000 QPS; if 100ms → 10 QPS
  • Read queries: < 1ms with proper indexes
  • Write queries: Several ms (must be durably persisted)
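The throughput formula reduces to a one-liner (back-of-envelope only; real throughput also depends on batching, indexes, and network overhead):

```javascript
// Single-threaded queue: max queries/second ≈ 1000 ms / avg query time in ms.
function estimateQps(avgQueryMs) {
  return Math.floor(1000 / avgQueryMs);
}
```

So a 1ms average query yields `estimateQps(1)` = 1000, while a 100ms average collapses to `estimateQps(100)` = 10.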

Gotchas


1. No traditional transactions
```javascript
// WRONG - BEGIN TRANSACTION not supported in Workers
await db.exec('BEGIN TRANSACTION');

// CORRECT - Use batch() for atomic operations
const results = await db.batch([
  db.prepare('INSERT INTO users (name) VALUES (?)').bind('Alice'),
  db.prepare('INSERT INTO logs (action) VALUES (?)').bind('user_created'),
]);
```
2. Large migrations must be batched
```javascript
// WRONG - Will exceed execution limits
await db.exec('DELETE FROM logs WHERE created_at < ?', oldDate);

// CORRECT - Batch in chunks
while (true) {
  const result = await db.prepare(
    'DELETE FROM logs WHERE id IN (SELECT id FROM logs WHERE created_at < ? LIMIT 1000)'
  ).bind(oldDate).run();
  if (result.changes === 0) break;
}
```
3. Int64 precision loss
```javascript
// JavaScript numbers are 53-bit precision
// Storing 9007199254740993 may return 9007199254740992
// Use TEXT for large integers if precision matters
```
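The loss is easy to reproduce in any JavaScript runtime:

```javascript
// 2^53 + 1 cannot be represented as a double; the literal silently rounds.
const big = 9007199254740993;
console.log(big === 9007199254740992); // true - precision already lost
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991

// Round-tripping large ids as TEXT (strings) preserves them exactly.
const idText = '9007199254740993';
console.log(BigInt(idText) === 9007199254740993n); // true
```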
4. Cannot import MySQL/PostgreSQL dumps directly
  • Must convert to SQLite-compatible SQL
  • Cannot import raw `.sqlite3` files
  • Large string values (~500KB+) may fail due to SQL length limits

wrangler.toml Configuration


```toml
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

For local development (auto-creates if missing in wrangler 4.45+)


```toml
[[d1_databases]]
binding = "DB"
database_name = "my-database"
```

Common Patterns


Schema migrations:
```sql
-- migrations/0001_initial.sql
CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT UNIQUE NOT NULL,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
```
```bash
# Apply migrations
wrangler d1 migrations apply my-database
```

**Multi-tenant pattern:**
```javascript
// Create per-tenant database
// D1 allows thousands of databases at no extra cost
const tenantDb = env[`DB_${tenantId}`];
```



R2 Object Storage


Key Features


  • S3-compatible API (with some differences)
  • No egress fees (major cost advantage over S3)
  • Strong consistency - reads immediately see writes
  • Workers integration - direct binding, no network hop

wrangler.toml Configuration


```toml
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-bucket"
```

With jurisdiction (data residency)


```toml
[[r2_buckets]]
binding = "EU_BUCKET"
bucket_name = "eu-data"
jurisdiction = "eu"
```

Common Operations


```javascript
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    switch (request.method) {
      case 'PUT': {
        // Upload object
        await env.BUCKET.put(key, request.body, {
          httpMetadata: {
            contentType: request.headers.get('content-type'),
          },
          customMetadata: {
            uploadedBy: 'user-123',
          },
        });
        return new Response('Uploaded', { status: 201 });
      }

      case 'GET': {
        // Download object
        const object = await env.BUCKET.get(key);
        if (!object) {
          return new Response('Not Found', { status: 404 });
        }
        return new Response(object.body, {
          headers: {
            'content-type': object.httpMetadata?.contentType || 'application/octet-stream',
            'etag': object.etag,
          },
        });
      }

      case 'DELETE': {
        await env.BUCKET.delete(key);
        return new Response('Deleted', { status: 200 });
      }

      case 'HEAD': {
        const object = await env.BUCKET.head(key);
        if (!object) {
          return new Response(null, { status: 404 });
        }
        return new Response(null, {
          headers: {
            'content-length': object.size.toString(),
            'etag': object.etag,
          },
        });
      }

      default:
        return new Response('Method Not Allowed', { status: 405 });
    }
  },
};
```

Gotchas


1. Memory limits when processing large files
```javascript
// WRONG - Loads entire file into memory (128MB Worker limit)
const object = await env.BUCKET.get(key);
const data = await object.text();

// CORRECT - Stream for large files
const object = await env.BUCKET.get(key);
return new Response(object.body); // Stream directly
```
2. Request body can only be read once
```javascript
// WRONG - Body already consumed
const data = await request.text();
await env.BUCKET.put(key, request.body); // Fails!

// CORRECT - Clone request first
const clone = request.clone();
const data = await request.text();
await env.BUCKET.put(key, clone.body);
```
3. List operations return max 1000 keys
```javascript
// Paginate through all objects
let cursor;
const allKeys = [];

do {
  const listed = await env.BUCKET.list({ cursor, limit: 1000 });
  allKeys.push(...listed.objects.map(o => o.key));
  cursor = listed.truncated ? listed.cursor : null;
} while (cursor);
```
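The same pagination loop extends naturally to bulk deletes, since `delete()` on the binding also accepts an array of keys. A sketch (assuming the standard `list`/`delete` binding methods; the bucket is passed in so it can be exercised locally with a stub):

```javascript
// Delete every object under a prefix, one page (up to 1000 keys) at a time.
async function deletePrefix(bucket, prefix) {
  let cursor;
  let removed = 0;
  do {
    const page = await bucket.list({ prefix, cursor, limit: 1000 });
    const keys = page.objects.map((o) => o.key);
    if (keys.length > 0) {
      await bucket.delete(keys); // the binding accepts an array of keys
      removed += keys.length;
    }
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return removed;
}
```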

Presigned URLs (S3-compatible)


```javascript
import { AwsClient } from 'aws4fetch';

const r2 = new AwsClient({
  accessKeyId: env.R2_ACCESS_KEY,
  secretAccessKey: env.R2_SECRET_KEY,
});

// Generate presigned upload URL
const signedUrl = await r2.sign(
  new Request(`https://${env.R2_BUCKET}.r2.cloudflarestorage.com/${key}`, {
    method: 'PUT',
  }),
  { aws: { signQuery: true } }
);
```


KV (Key-Value Store)


When to Use KV

  • Session tokens / auth data
  • Feature flags / configuration
  • Cached API responses
  • Data with high reads, low writes

Critical Limitation


<EXTREMELY-IMPORTANT> KV has a **1 write per second per key** limit. Use D1 or Durable Objects for frequent writes. </EXTREMELY-IMPORTANT>
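One common workaround for write-heavy counters is key sharding: spread writes across N keys and sum the shards on read. A sketch (the helper names are invented for illustration; note KV has no atomic increment, so for accurate counters Durable Objects remain the robust choice, as the callout says):

```javascript
// Write to a random shard so each individual key stays under 1 write/sec.
function shardedKey(base, shardCount) {
  const shard = Math.floor(Math.random() * shardCount);
  return `${base}:${shard}`;
}

// On read, fetch and sum every shard key (`base:0` .. `base:N-1`).
function shardKeys(base, shardCount) {
  return Array.from({ length: shardCount }, (_, i) => `${base}:${i}`);
}
```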

wrangler.toml Configuration


```toml
[[kv_namespaces]]
binding = "CACHE"
id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

Common Operations


```javascript
// Write (with optional TTL)
await env.CACHE.put('user:123', JSON.stringify(userData), {
  expirationTtl: 3600, // 1 hour
});

// Read
const data = await env.CACHE.get('user:123', { type: 'json' });

// Delete
await env.CACHE.delete('user:123');

// List keys with prefix
const keys = await env.CACHE.list({ prefix: 'user:' });
```
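These primitives compose into the usual cache-aside pattern. A sketch (`cached` and `memoryKv` are made-up helpers; any object with KV-shaped `get`/`put` works, shown here with an in-memory stand-in so it can run outside Workers):

```javascript
// Return the cached value if present; otherwise run `loader`, cache the
// result with a TTL, and return it.
async function cached(kv, key, ttlSeconds, loader) {
  const hit = await kv.get(key);
  if (hit !== null) return hit;
  const value = await loader();
  await kv.put(key, value, { expirationTtl: ttlSeconds });
  return value;
}

// Minimal in-memory stand-in for env.CACHE (ignores TTL; local testing only).
function memoryKv() {
  const store = new Map();
  return {
    async get(key) { return store.has(key) ? store.get(key) : null; },
    async put(key, value, _options) { store.set(key, value); },
  };
}
```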


Automatic Resource Provisioning (2025)


As of wrangler 4.45+, resources are auto-created:

```toml
# wrangler.toml - No IDs needed for new resources
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "my-app-files"

[[kv_namespaces]]
binding = "CACHE"
```

```bash
# First deploy auto-creates resources
wrangler deploy
```

---

Full-Stack Pattern: D1 + R2 + KV


```javascript
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    // KV: Check cache first
    const cached = await env.CACHE.get(url.pathname);
    if (cached) {
      return new Response(cached, {
        headers: { 'content-type': 'text/html' },
      });
    }

    // D1: Query database
    const { results } = await env.DB.prepare(
      'SELECT * FROM posts WHERE slug = ?'
    ).bind(url.pathname).all();

    if (!results.length) {
      return new Response('Not Found', { status: 404 });
    }

    const post = results[0];

    // R2: Get associated image
    const image = post.image_key
      ? await env.BUCKET.get(post.image_key)
      : null;

    // Cache the response
    const html = renderPost(post, image);
    await env.CACHE.put(url.pathname, html, { expirationTtl: 300 });

    return new Response(html, {
      headers: { 'content-type': 'text/html' },
    });
  },
};
```

---

Cost Optimization


Free Tier Limits

| Product | Free Tier |
| --- | --- |
| R2 | 10 GB storage, 1M Class A ops, 10M Class B ops |
| D1 | 5M rows read/day, 100K rows written/day, 5 GB storage |
| KV | 100K reads/day, 1K writes/day, 1 GB storage |
| Workers | 100K requests/day |
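A quick way to sanity-check projected daily usage against these numbers (thresholds transcribed from the table above; `withinFreeTier` is an illustrative helper, and the limits should be verified against current Cloudflare pricing before relying on them):

```javascript
// Daily-usage check against the KV and D1 free-tier rows above.
function withinFreeTier(usage) {
  return (
    usage.kvReadsPerDay <= 100_000 &&
    usage.kvWritesPerDay <= 1_000 &&
    usage.d1RowsReadPerDay <= 5_000_000 &&
    usage.d1RowsWrittenPerDay <= 100_000
  );
}
```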

Tips


  1. Use KV for caching to reduce D1 reads
  2. Batch D1 writes to minimize write operations
  3. Stream R2 objects instead of loading into memory
  4. Set TTLs on KV to auto-expire stale data
  5. Shard D1 databases per-tenant for horizontal scale


Troubleshooting


"D1_ERROR: too many SQL variables"

Split large IN clauses into batched queries.
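A sketch of that batching (`chunk` is a generic helper; SQLite's default variable limit is 999 and D1's effective limit may differ, so a conservative chunk size is used in the usage sketch):

```javascript
// Split an array into fixed-size chunks for batched IN (...) queries.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch (assumes a `DB` binding):
// for (const ids of chunk(allIds, 100)) {
//   const placeholders = ids.map(() => '?').join(',');
//   await env.DB.prepare(`SELECT * FROM users WHERE id IN (${placeholders})`)
//     .bind(...ids).all();
// }
```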

"R2: EntityTooLarge"


Files > 5GB must use multipart upload.
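The Workers R2 binding exposes multipart uploads via `createMultipartUpload` / `uploadPart` / `complete`. A sketch of the flow (error handling and part sizing omitted; Cloudflare's docs note that all parts except the last should be the same size, and the bucket is injected so the flow can be exercised with a stub):

```javascript
// Upload `parts` (an array of chunks) to `key` via the multipart API.
async function multipartUpload(bucket, key, parts) {
  const upload = await bucket.createMultipartUpload(key);
  const uploaded = [];
  let partNumber = 1;
  for (const part of parts) {
    // uploadPart returns { partNumber, etag }, which complete() requires.
    uploaded.push(await upload.uploadPart(partNumber++, part));
  }
  return upload.complete(uploaded);
}
```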

"KV: Too many writes"


You're hitting 1 write/sec/key limit. Use D1 or Durable Objects.

"Worker exceeded CPU time limit"


  • Add indexes to D1 queries
  • Stream R2 objects instead of buffering
  • Split work across multiple requests
