r2-storage

R2 Object Storage

Store and retrieve objects at scale using Cloudflare's S3-compatible object storage.

When to Use

  • File uploads (images, videos, documents)
  • AI assets and model artifacts
  • Image and media asset storage
  • Structured data (JSON, CSV, logs)
  • User-facing uploads and downloads
  • Static asset hosting
  • Backup and archival storage
  • S3-compatible workflows with existing tools

FIRST: Create R2 Bucket

bash
# Create bucket
wrangler r2 bucket create my-bucket

# Create with location hint
wrangler r2 bucket create my-bucket --location wnam

# List buckets
wrangler r2 bucket list

Quick Reference

  • Upload object: await bucket.put(key, data, { httpMetadata })
  • Download object: const obj = await bucket.get(key); const data = await obj.text()
  • Delete object: await bucket.delete(key)
  • List objects: const list = await bucket.list({ prefix, limit })
  • Get metadata: const obj = await bucket.head(key)
  • Multipart upload: const upload = await bucket.createMultipartUpload(key)
  • Generate signed URL: use presigned URL patterns via R2's S3 compatibility

Wrangler Configuration

jsonc
// wrangler.jsonc
{
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2026-01-01",
  "r2_buckets": [
    {
      "binding": "BUCKET",
      "bucket_name": "my-bucket"
    }
  ]
}
TypeScript Types (run wrangler types to generate):
typescript
export interface Env {
  BUCKET: R2Bucket;
}

Basic Upload and Download

typescript
// R2Bucket is an ambient type generated by `wrangler types`

export interface Env {
  BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // Remove leading /

    // Upload
    if (request.method === "PUT") {
      await env.BUCKET.put(key, request.body, {
        httpMetadata: {
          contentType: request.headers.get("content-type") || "application/octet-stream",
        },
      });
      return new Response("Uploaded", { status: 201 });
    }

    // Download
    if (request.method === "GET") {
      const object = await env.BUCKET.get(key);
      
      if (!object) {
        return new Response("Not found", { status: 404 });
      }

      return new Response(object.body, {
        headers: {
          "Content-Type": object.httpMetadata?.contentType || "application/octet-stream",
          "ETag": object.httpEtag,
          "Cache-Control": object.httpMetadata?.cacheControl || "public, max-age=3600",
        },
      });
    }

    // Delete
    if (request.method === "DELETE") {
      await env.BUCKET.delete(key);
      // 204 responses must not carry a body
      return new Response(null, { status: 204 });
    }

    return new Response("Method not allowed", { status: 405 });
  },
};
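
The handler above derives the object key with url.pathname.slice(1), which keeps percent-encoding and would accept an empty key for "PUT /". One way to normalize this is a small helper; this is a sketch, and the keyFromUrl name is our own, not part of the R2 API:

```typescript
// Derive an R2 object key from a request URL.
// Decodes percent-encoding and rejects empty keys so "PUT /" cannot
// create an object with an empty name.
function keyFromUrl(rawUrl: string): string | null {
  const pathname = new URL(rawUrl).pathname;
  const key = decodeURIComponent(pathname.slice(1)); // drop leading "/"
  return key.length > 0 ? key : null;
}
```

In the handler, a null key would map to a 400 response before touching the bucket.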

Multipart Form Upload Handler

Handle file uploads from HTML forms or multipart requests.
typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }

    const formData = await request.formData();
    const file = formData.get("file") as File;

    if (!file) {
      return new Response("No file provided", { status: 400 });
    }

    // Generate unique key
    const key = `uploads/${crypto.randomUUID()}-${file.name}`;

    // Upload to R2
    await env.BUCKET.put(key, file.stream(), {
      httpMetadata: {
        contentType: file.type,
      },
      customMetadata: {
        originalName: file.name,
        uploadedAt: new Date().toISOString(),
      },
    });

    return Response.json({
      success: true,
      key,
      url: `/files/${key}`,
    });
  },
};

List Objects with Pagination

typescript
async function listAllObjects(
  bucket: R2Bucket,
  prefix: string = ""
): Promise<R2Object[]> {
  const objects: R2Object[] = [];
  let cursor: string | undefined;

  do {
    const listed = await bucket.list({
      prefix,
      cursor,
      limit: 1000,
    });

    objects.push(...listed.objects);
    cursor = listed.truncated ? listed.cursor : undefined;
  } while (cursor);

  return objects;
}

// Usage
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const prefix = url.searchParams.get("prefix") || "";

    const objects = await listAllObjects(env.BUCKET, prefix);

    return Response.json({
      count: objects.length,
      objects: objects.map((obj) => ({
        key: obj.key,
        size: obj.size,
        uploaded: obj.uploaded,
      })),
    });
  },
};
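
Once all pages are collected, summarizing the listing is a pure computation that is easy to unit test without a real bucket. A small sketch (the summarizeListing name is ours):

```typescript
// Summarize a listing result: object count and total stored bytes.
// Accepts anything with key/size fields, matching R2Object's shape.
function summarizeListing(objects: { key: string; size: number }[]) {
  const totalBytes = objects.reduce((sum, obj) => sum + obj.size, 0);
  return { count: objects.length, totalBytes };
}
```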

Conditional Operations (ETags)

Use ETags for conditional reads/writes to prevent race conditions.
typescript
// Conditional write (only if not modified)
const existingObject = await env.BUCKET.head("config.json");

if (existingObject) {
  // Update only if ETag matches
  await env.BUCKET.put("config.json", newData, {
    httpMetadata: {
      contentType: "application/json",
    },
    onlyIf: {
      etagMatches: existingObject.httpEtag,
    },
  });
}

// Conditional read (If-None-Match)
const object = await env.BUCKET.get("image.jpg", {
  onlyIf: {
    etagDoesNotMatch: cachedEtag,
  },
});

if (object === null) {
  // Object not modified - return 304
  return new Response(null, {
    status: 304,
    headers: { "ETag": cachedEtag },
  });
}
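
The 304 logic above compares against a single cached ETag; HTTP If-None-Match headers can carry a list of values or "*". A hedged sketch of a comparison helper (the isNotModified name is our own, not an R2 API):

```typescript
// Return true when the request's If-None-Match header matches the
// object's current ETag, i.e. the client's cached copy is still fresh.
function isNotModified(ifNoneMatch: string | null, currentEtag: string): boolean {
  if (!ifNoneMatch) return false;
  if (ifNoneMatch.trim() === "*") return true;
  return ifNoneMatch
    .split(",")
    .map((tag) => tag.trim().replace(/^W\//, "")) // ignore weak-validator prefix
    .includes(currentEtag);
}
```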

Custom Metadata

Store application-specific metadata alongside objects.
typescript
// Store with custom metadata
await env.BUCKET.put("document.pdf", pdfData, {
  httpMetadata: {
    contentType: "application/pdf",
  },
  customMetadata: {
    userId: "user-123",
    documentType: "invoice",
    version: "2",
    tags: "finance,2024",
  },
});

// Read metadata without downloading the body (head returns null if missing)
const object = await env.BUCKET.head("document.pdf");
console.log(object?.customMetadata?.userId); // "user-123"

Range Requests (Partial Downloads)

Efficiently download portions of large files.
typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);
    const rangeHeader = request.headers.get("range");

    if (rangeHeader) {
      // Parse range: "bytes=0-1023"
      const match = rangeHeader.match(/bytes=(\d+)-(\d*)/);
      if (match) {
        const start = parseInt(match[1], 10);
        const end = match[2] ? parseInt(match[2], 10) : undefined;

        const object = await env.BUCKET.get(key, {
          range: { offset: start, length: end ? end - start + 1 : undefined },
        });

        if (!object) {
          return new Response("Not found", { status: 404 });
        }

        // R2 returns only the requested range; Content-Length must be the
        // range's length, not the full object size
        const rangeEnd = end ?? object.size - 1;
        const rangeLength = rangeEnd - start + 1;

        return new Response(object.body, {
          status: 206,
          headers: {
            "Content-Type": object.httpMetadata?.contentType || "application/octet-stream",
            "Content-Range": `bytes ${start}-${rangeEnd}/${object.size}`,
            "Content-Length": rangeLength.toString(),
          },
        });
      }
    }

    // Regular full download
    const object = await env.BUCKET.get(key);
    if (!object) {
      return new Response("Not found", { status: 404 });
    }

    return new Response(object.body, {
      headers: {
        "Content-Type": object.httpMetadata?.contentType || "application/octet-stream",
      },
    });
  },
};
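
The inline regex above handles only "bytes=start-end" forms; real clients also send suffix ranges like "bytes=-500". A more complete parser as a standalone sketch (the parseRange name is ours; the return shape mirrors R2's { offset, length } range option):

```typescript
// Parse a single HTTP Range header value against a known object size.
// Returns an R2-style { offset, length }, or null when unsatisfiable.
function parseRange(
  header: string,
  size: number
): { offset: number; length: number } | null {
  const match = header.match(/^bytes=(\d*)-(\d*)$/);
  if (!match || (match[1] === "" && match[2] === "")) return null;

  if (match[1] === "") {
    // Suffix range: last N bytes, e.g. "bytes=-500"
    const suffix = Math.min(parseInt(match[2], 10), size);
    return suffix === 0 ? null : { offset: size - suffix, length: suffix };
  }

  const start = parseInt(match[1], 10);
  if (start >= size) return null; // unsatisfiable start
  const end = match[2] === "" ? size - 1 : Math.min(parseInt(match[2], 10), size - 1);
  return { offset: start, length: end - start + 1 };
}
```

A null result maps naturally to a 416 Range Not Satisfiable response.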

AWS SDK Integration (S3 Compatible)

R2 exposes an S3-compatible API, so you can use the official AWS SDK v3.
Install dependencies:
bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
wrangler.jsonc (add Node.js compatibility):
jsonc
{
  "compatibility_flags": ["nodejs_compat_v2"],
  "r2_buckets": [
    { "binding": "BUCKET", "bucket_name": "my-bucket" }
  ]
}
Using AWS SDK:
typescript
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export interface Env {
  BUCKET: R2Bucket;
  R2_ACCESS_KEY_ID: string;
  R2_SECRET_ACCESS_KEY: string;
  R2_ACCOUNT_ID: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Create S3 client
    const s3 = new S3Client({
      region: "auto",
      endpoint: `https://${env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
      credentials: {
        accessKeyId: env.R2_ACCESS_KEY_ID,
        secretAccessKey: env.R2_SECRET_ACCESS_KEY,
      },
    });

    // Upload using S3 API
    await s3.send(
      new PutObjectCommand({
        Bucket: "my-bucket",
        Key: "file.txt",
        Body: "Hello R2",
        ContentType: "text/plain",
      })
    );

    // Generate presigned URL (valid for 1 hour)
    const command = new GetObjectCommand({
      Bucket: "my-bucket",
      Key: "file.txt",
    });
    const signedUrl = await getSignedUrl(s3, command, { expiresIn: 3600 });

    return Response.json({ signedUrl });
  },
};
Note: Using the native R2Bucket binding is more efficient than AWS SDK for basic operations. Use AWS SDK when you need presigned URLs or have existing S3 tooling.

Presigned URLs for Direct Uploads

Allow clients to upload directly to R2 without going through your Worker.
typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export interface Env {
  R2_ACCESS_KEY_ID: string;
  R2_SECRET_ACCESS_KEY: string;
  R2_ACCOUNT_ID: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Generate presigned upload URL
    if (url.pathname === "/upload-url") {
      const filename = url.searchParams.get("filename");
      if (!filename) {
        return new Response("Missing filename", { status: 400 });
      }

      const s3 = new S3Client({
        region: "auto",
        endpoint: `https://${env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
        credentials: {
          accessKeyId: env.R2_ACCESS_KEY_ID,
          secretAccessKey: env.R2_SECRET_ACCESS_KEY,
        },
      });

      const key = `uploads/${crypto.randomUUID()}-${filename}`;
      const command = new PutObjectCommand({
        Bucket: "my-bucket",
        Key: key,
      });

      const signedUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); // 5 minutes

      return Response.json({
        uploadUrl: signedUrl,
        key,
      });
    }

    return new Response("Not found", { status: 404 });
  },
};
Client-side usage:
javascript
// 1. Get presigned URL from your Worker
const response = await fetch("/upload-url?filename=photo.jpg");
const { uploadUrl, key } = await response.json();

// 2. Upload file directly to R2
const file = document.querySelector('input[type="file"]').files[0];
await fetch(uploadUrl, {
  method: "PUT",
  body: file,
  headers: {
    "Content-Type": file.type,
  },
});

// 3. File is now available at key in R2

Multipart Upload for Large Files

For files larger than 100MB, use multipart uploads:
typescript
// Initiate multipart upload
const multipartUpload = await env.BUCKET.createMultipartUpload(key, {
  httpMetadata: { contentType: "application/zip" }
});

// `data` is the ArrayBuffer being uploaded
const uploadedParts: R2UploadedPart[] = [];
const chunkSize = 10 * 1024 * 1024; // 10MB chunks; all parts except the last must be the same size
let partNumber = 1;

// Upload parts
for (let offset = 0; offset < data.byteLength; offset += chunkSize) {
  const chunk = data.slice(offset, Math.min(offset + chunkSize, data.byteLength));
  const part = await multipartUpload.uploadPart(partNumber++, chunk);
  uploadedParts.push(part);
}

// Complete upload
const object = await multipartUpload.complete(uploadedParts);
See references/operations.md for complete multipart upload patterns
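
The chunking loop above can be factored into a pure function, which makes the part math easy to verify in isolation. A sketch (the planParts name is our own):

```typescript
// Compute (partNumber, offset, length) triples for a multipart upload.
// R2 part numbers start at 1, and all parts except the last share the
// same size.
function planParts(totalBytes: number, chunkSize: number) {
  const parts: { partNumber: number; offset: number; length: number }[] = [];
  for (let offset = 0; offset < totalBytes; offset += chunkSize) {
    parts.push({
      partNumber: parts.length + 1,
      offset,
      length: Math.min(chunkSize, totalBytes - offset),
    });
  }
  return parts;
}
```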

Detailed References

  • references/operations.md - Complete API reference, conditional operations, streaming, error handling
  • references/s3-compat.md - S3 API compatibility, AWS SDK patterns, presigned URLs, migration guide
  • references/limits.md - Storage limits, rate limits, Workers integration limits, best practices
  • references/testing.md - Vitest integration, mocking R2, multipart uploads, test isolation

Best Practices

  1. Set appropriate Content-Type: always specify httpMetadata.contentType for proper browser handling
  2. Use unique keys: prefix with UUIDs or timestamps to avoid collisions
  3. Leverage custom metadata: store searchable metadata without downloading objects
  4. Use ETags for consistency: implement conditional operations to prevent race conditions
  5. Stream large uploads: use multipart uploads for files > 100MB
  6. Cache control headers: set an appropriate cacheControl in httpMetadata
  7. Use presigned URLs for direct uploads: reduce Worker bandwidth and latency
  8. List with pagination: always handle truncated results when listing objects
  9. Use the R2Bucket binding by default: only use the AWS SDK when you need presigned URLs or S3-specific features
  10. Handle 404s gracefully: bucket.get() returns null when the object doesn't exist
Common Patterns

See references/operations.md for complete examples including:
  • Image upload and optimization with metadata
  • Structured data storage (JSON profiles)
  • Backup and archival patterns
  • Streaming large files
  • Conditional operations with ETags