cloudflare-r2
Cloudflare R2
S3-compatible object storage with zero egress bandwidth fees. Built on Cloudflare's global network for high durability (11 nines) and strong consistency.
When to Use This Skill
- Implementing object storage for applications
- Migrating from AWS S3 or other storage providers
- Setting up file uploads/downloads
- Configuring public or private buckets
- Integrating R2 with Cloudflare Workers
- Using R2 with S3-compatible tools and SDKs
- Configuring CORS, lifecycles, or event notifications
- Optimizing storage costs with zero egress fees
Prerequisites
Required:
- Cloudflare account with R2 enabled
- Account ID from Cloudflare dashboard
For API access:
- R2 Access Keys (Access Key ID + Secret Access Key)
- Generate from: Cloudflare Dashboard → R2 → Manage R2 API Tokens
For Wrangler CLI:

```bash
npm install -g wrangler
wrangler login
```

API Key Configuration
R2 requires specific credentials for S3-compatible API access. The system searches for API keys in this order:
1. Runtime environment variables (process.env)
2. Project-level environment file (<project-root>/.env)
3. Claude configuration directory (.claude/.env)
4. Skills shared configuration (.claude/skills/.env)
5. Skill-specific configuration (.claude/skills/cloudflare-r2/.env)
Required Environment Variables:

```bash
# R2-specific credentials
R2_ACCESS_KEY_ID=your_r2_access_key_id_here
R2_SECRET_ACCESS_KEY=your_r2_secret_access_key_here
R2_ACCOUNT_ID=your_account_id_here

# Or use general Cloudflare credentials
CLOUDFLARE_ACCOUNT_ID=your_account_id_here
CLOUDFLARE_API_TOKEN=your_api_token_here  # For Wrangler CLI
```
**Where to Get Credentials:**
- R2 Access Keys: Cloudflare Dashboard → R2 → Manage R2 API Tokens → Create API Token
  - Generates both Access Key ID and Secret Access Key (S3-compatible)
- Account ID: Cloudflare Dashboard → Overview → Account ID (right sidebar)
- API Token (for Wrangler): Cloudflare Dashboard → My Profile → API Tokens
**Example .env File:**

```bash
# See .claude/skills/.env.example for complete configuration
R2_ACCESS_KEY_ID=abc123...
R2_SECRET_ACCESS_KEY=xyz789...
R2_ACCOUNT_ID=def456...
```
Core Concepts
Architecture
- S3-compatible API - works with AWS SDKs and tools
- Workers API - native Cloudflare Workers integration
- Global network - strong consistency across all regions
- Zero egress fees - no bandwidth charges for data retrieval
Storage Classes
- Standard - default, optimized for frequent access
- Infrequent Access - lower storage cost, retrieval fees apply, 30-day minimum
Access Methods
- R2 Workers Binding - serverless integration (recommended for new apps)
- S3 API - compatibility with existing tools
- Public buckets - direct HTTP access via custom domains or r2.dev
- Presigned URLs - temporary access without credentials
Quick Start
1. Create Bucket
Wrangler:

```bash
wrangler r2 bucket create my-bucket
```

With location hint:

```bash
wrangler r2 bucket create my-bucket --location=wnam
```

Locations: wnam (West NA), enam (East NA), weur (West EU), eeur (East EU), apac (Asia Pacific)
2. Upload Object
Wrangler:

```bash
wrangler r2 object put my-bucket/file.txt --file=./local-file.txt
```

Workers API:

```javascript
await env.MY_BUCKET.put('file.txt', fileContents, {
  httpMetadata: {
    contentType: 'text/plain',
  },
});
```
3. Download Object
Wrangler:

```bash
wrangler r2 object get my-bucket/file.txt --file=./downloaded.txt
```

Workers API:

```javascript
const object = await env.MY_BUCKET.get('file.txt');
const contents = await object.text();
```
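A Worker serving bucket objects needs to turn the request path into an object key first. A minimal sketch; the helper name is an assumption:

```javascript
// Derive an R2 object key from a request URL: strip the leading
// slash and decode percent-escapes. The bare root path has no key.
function objectKeyFromUrl(url) {
  const key = decodeURIComponent(new URL(url).pathname.slice(1));
  return key.length > 0 ? key : null;
}
```

Inside a Worker, a non-null result would be passed straight to env.MY_BUCKET.get(key).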
Workers Integration
Binding Configuration
wrangler.toml:

```toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-bucket"
preview_bucket_name = "my-bucket-preview"
```
Common Operations
Upload with metadata:

```javascript
await env.MY_BUCKET.put('user-uploads/photo.jpg', imageData, {
  httpMetadata: {
    contentType: 'image/jpeg',
    cacheControl: 'public, max-age=31536000',
  },
  customMetadata: {
    uploadedBy: userId,
    uploadDate: new Date().toISOString(),
  },
});
```

Download with streaming:

```javascript
const object = await env.MY_BUCKET.get('large-file.mp4');
if (object === null) {
  return new Response('Not found', { status: 404 });
}
return new Response(object.body, {
  headers: {
    'Content-Type': object.httpMetadata.contentType,
    'ETag': object.etag,
  },
});
```

List objects:

```javascript
const listed = await env.MY_BUCKET.list({
  prefix: 'user-uploads/',
  limit: 100,
});
for (const object of listed.objects) {
  console.log(object.key, object.size);
}
```

Delete object:

```javascript
await env.MY_BUCKET.delete('old-file.txt');
```

Check if object exists:

```javascript
const object = await env.MY_BUCKET.head('file.txt');
if (object) {
  console.log('Exists:', object.size, 'bytes');
}
```
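Keys like user-uploads/photo.jpg above are usually built rather than taken verbatim from users. An illustrative helper, not part of the R2 API; the character policy shown is an assumption:

```javascript
// Build an object key under a common prefix from untrusted input.
// Keeps a conservative character set; object keys are case-sensitive.
function uploadKey(userId, filename) {
  const safe = filename
    .replace(/[^a-zA-Z0-9._-]+/g, "-")  // collapse disallowed runs
    .replace(/^[-.]+|[-.]+$/g, "");     // trim leading/trailing junk
  if (!safe) throw new Error("filename has no usable characters");
  return `user-uploads/${userId}/${safe}`;
}
```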
S3 SDK Integration
AWS CLI
Configure:

```bash
aws configure
```

- Access Key ID: <your-key-id>
- Secret Access Key: <your-secret>
- Region: auto

**Operations:**

```bash
# List buckets
aws s3api list-buckets --endpoint-url https://<accountid>.r2.cloudflarestorage.com

# Upload file
aws s3 cp file.txt s3://my-bucket/ --endpoint-url https://<accountid>.r2.cloudflarestorage.com

# Generate presigned URL (expires in 1 hour)
aws s3 presign s3://my-bucket/file.txt --endpoint-url https://<accountid>.r2.cloudflarestorage.com --expires-in 3600
```
JavaScript (AWS SDK v3)
```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "auto",
  endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

await s3.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "file.txt",
  Body: fileContents,
}));
```
Python (Boto3)
```python
import boto3

s3 = boto3.client(
    service_name="s3",
    endpoint_url=f'https://{account_id}.r2.cloudflarestorage.com',
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    region_name="auto",
)

# Upload file
s3.upload_fileobj(file_obj, 'my-bucket', 'file.txt')

# Download file
s3.download_file('my-bucket', 'file.txt', './local-file.txt')
```
Rclone (Large Files)
Configure:

```bash
rclone config
```

Select: Amazon S3 → Cloudflare R2, then enter credentials and endpoint.

**Upload with multipart optimization:**

```bash
# For large files (>100MB)
rclone copy large-video.mp4 r2:my-bucket/ \
  --s3-upload-cutoff=100M \
  --s3-chunk-size=100M
```
Public Buckets
Enable Public Access
Wrangler:

```bash
wrangler r2 bucket create my-public-bucket
```

Then enable in dashboard: R2 → Bucket → Settings → Public Access
Access URLs
r2.dev (development only, rate-limited):

https://pub-<hash>.r2.dev/file.txt

Custom domain (recommended for production):
- Dashboard → R2 → Bucket → Settings → Public Access
- Add custom domain
- Cloudflare handles DNS/TLS automatically
CORS Configuration
Required for:
- Browser-based uploads
- Cross-origin API calls
- Presigned URL usage from web apps

Wrangler:

```bash
wrangler r2 bucket cors put my-bucket --rules '[
  {
    "AllowedOrigins": ["https://example.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]'
```

Important: Origins must match exactly (no trailing slash).
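The exact-match rule trips people up often enough that it is worth lint-checking AllowedOrigins entries when writing rules. A hedged sketch (the helper name is an assumption):

```javascript
// Browsers send the Origin header as scheme://host[:port], with no
// trailing slash and no path, so an entry written any other way can
// never match. Returns a list of problems (empty means OK).
function validateAllowedOrigin(entry) {
  let url;
  try {
    url = new URL(entry);
  } catch {
    return ["not a valid URL"];
  }
  const problems = [];
  if (entry.endsWith("/")) problems.push("trailing slash");
  if (url.pathname !== "/" || url.search || url.hash) problems.push("has a path or query");
  if (url.origin !== entry) problems.push(`browsers send "${url.origin}"`);
  return problems;
}
```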
Multipart Uploads
For files >100MB or parallel uploads:

Workers API:

```javascript
const multipart = await env.MY_BUCKET.createMultipartUpload('large-file.mp4');

// Upload parts (5MiB - 5GiB each, max 10,000 parts)
const part1 = await multipart.uploadPart(1, chunk1);
const part2 = await multipart.uploadPart(2, chunk2);

// Complete upload
const object = await multipart.complete([part1, part2]);
```

Constraints:
- Part size: 5MiB - 5GiB
- Max parts: 10,000
- Max object size: 5TB
- Incomplete uploads auto-abort after 7 days (configurable via lifecycle)
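The constraints above pin down a workable part size for any object. An illustrative calculation (the function name and the round-to-MiB choice are assumptions):

```javascript
const MiB = 1024 * 1024;
const MIN_PART = 5 * MiB;                 // 5 MiB part-size floor
const MAX_PARTS = 10000;                  // parts per upload
const MAX_OBJECT = 5 * 1024 * 1024 * MiB; // 5 TiB object ceiling

// Smallest whole-MiB part size that fits the object in <= 10,000
// parts while respecting the 5 MiB minimum.
function choosePartSize(totalBytes) {
  if (totalBytes > MAX_OBJECT) throw new Error("exceeds the object size limit");
  const needed = Math.ceil(totalBytes / MAX_PARTS);
  const partSize = Math.max(MIN_PART, Math.ceil(needed / MiB) * MiB);
  return { partSize, parts: Math.ceil(totalBytes / partSize) };
}
```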
Data Migration
Sippy (Incremental, On-Demand)
Best for: Gradual migration, avoiding upfront egress fees

```bash
# Enable for bucket
wrangler r2 bucket sippy enable my-bucket \
  --provider=aws \
  --bucket=source-bucket \
  --region=us-east-1 \
  --access-key-id=$AWS_KEY \
  --secret-access-key=$AWS_SECRET
```

Objects migrate when first requested; subsequent requests are served from R2.
Super Slurper (Bulk, One-Time)
Best for: Complete migration, known object list
- Dashboard → R2 → Data Migration → Super Slurper
- Select source provider (AWS, GCS, Azure)
- Enter credentials and bucket name
- Start migration
Lifecycle Rules
Auto-delete or transition storage classes:

Wrangler:

```bash
wrangler r2 bucket lifecycle put my-bucket --rules '[
  {
    "action": {"type": "AbortIncompleteMultipartUpload"},
    "filter": {},
    "abortIncompleteMultipartUploadDays": 7
  },
  {
    "action": {"type": "Transition", "storageClass": "InfrequentAccess"},
    "filter": {"prefix": "archives/"},
    "daysFromCreation": 90
  }
]'
```
Event Notifications
Trigger Workers on bucket events:

Wrangler:

```bash
wrangler r2 bucket notification create my-bucket \
  --queue=my-queue \
  --event-type=object-create
```

Supported events:
- object-create - new uploads
- object-delete - deletions

Message format:

```json
{
  "account": "account-id",
  "bucket": "my-bucket",
  "object": {"key": "file.txt", "size": 1024, "etag": "..."},
  "action": "PutObject",
  "eventTime": "2024-01-15T12:00:00Z"
}
```
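A queue consumer receiving these messages might normalize them before acting. A minimal sketch; the helper name and any action values beyond the PutObject shown above are assumptions:

```javascript
// Flatten the notification payload into a record a consumer can
// switch on. Unknown actions are reported as "other".
function describeR2Event(message) {
  const created = ["PutObject", "CopyObject", "CompleteMultipartUpload"];
  const kind =
    message.action === "DeleteObject" ? "deleted" :
    created.includes(message.action) ? "created" : "other";
  return {
    kind,
    bucket: message.bucket,
    key: message.object.key,
    size: message.object.size,
    eventTime: message.eventTime,
  };
}
```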
Best Practices
Performance
- Use Cloudflare Cache with custom domains for frequently accessed objects
- Use multipart uploads for files >100MB (faster, more reliable)
- Use Rclone for batch operations (concurrent transfers)
- Choose location hints that match your users' geography
Security
- Never commit Access Keys to version control
- Use environment variables for credentials
- Bucket-scoped tokens for least privilege
- Presigned URLs for temporary access
- Enable Cloudflare Access for additional protection
Cost Optimization
- Infrequent Access storage for archives (30+ day retention)
- Lifecycle rules to auto-transition or delete
- Larger multipart chunks = fewer Class A operations
- Monitor usage via dashboard analytics
Naming
- Bucket names: lowercase, hyphens, 3-63 chars
- Avoid sequential prefixes for better performance (e.g., use hashed prefixes)
- No dots in bucket names if using custom domains with TLS
Limits
- Buckets per account: 1,000
- Object size: 5TB max
- Bucket name: 3-63 characters
- Lifecycle rules: 1,000 per bucket
- Event notification rules: 100 per bucket
- r2.dev rate limit: 1,000 req/min (use custom domains for production)
Troubleshooting
401 Unauthorized:
- Verify Access Keys are correct
- Check endpoint URL includes account ID
- Ensure region is "auto" for most operations
403 Forbidden:
- Check bucket permissions and token scopes
- Verify CORS configuration for browser requests
- Confirm bucket exists and name is correct
404 Not Found:
- Object keys are case-sensitive
- Check bucket name spelling
- Verify object was uploaded successfully
Presigned URLs not working:
- Verify CORS configuration
- Check URL expiry time
- Ensure origin matches CORS rules exactly
Multipart upload failures:
- Part size must be 5MiB - 5GiB
- Max 10,000 parts per upload
- Complete upload within 7 days (or configure lifecycle)
Reference Files
For detailed documentation, see:
- references/api-reference.md - Complete API endpoint documentation
- references/sdk-examples.md - SDK examples for all languages
- references/workers-patterns.md - Advanced Workers integration patterns
- references/pricing-guide.md - Detailed pricing and cost optimization
Additional Resources
- Documentation: https://developers.cloudflare.com/r2/
- Wrangler Commands: https://developers.cloudflare.com/r2/reference/wrangler-commands/
- S3 Compatibility: https://developers.cloudflare.com/r2/api/s3/api/
- Workers API: https://developers.cloudflare.com/r2/api/workers/workers-api-reference/