# Cloudflare Knowledge Skill
Comprehensive Cloudflare platform knowledge covering all features, pricing, and best practices. Activate this skill when users need detailed information about Cloudflare's edge computing platform.
## Activation Triggers

Activate this skill when users ask about:
- Cloudflare Workers development
- Wrangler CLI commands and configuration
- Storage services (R2, D1, KV, Durable Objects, Queues)
- Hyperdrive database connection pooling
- AI Workers (TTS, STT, LLM, image models)
- Zero Trust (tunnels, WARP, access policies)
- MCP server development and integration
- Workflows and durable execution
- Vectorize vector database
- Pages and static site deployment
- CI/CD with GitHub Actions or Workers Builds
- Observability (logs, traces, OpenTelemetry)
- Load balancing and health checks
- Cron triggers and scheduled tasks
- Cost optimization and pricing
## Platform Overview
Cloudflare is a global edge computing platform with 300+ data centers providing:
- Workers: Serverless JavaScript/TypeScript/Python/WASM at the edge
- Pages: Static site and full-stack app hosting
- R2: S3-compatible object storage with zero egress fees
- D1: Serverless SQLite database
- KV: Eventually consistent key-value store
- Durable Objects: Stateful coordination with WebSocket support
- Queues: Async message processing
- Hyperdrive: Database connection pooling
- AI Workers: Inference at the edge (LLM, TTS, STT, image)
- Zero Trust: Identity-based security platform
- Vectorize: Vector database for RAG applications
- Workflows: Durable multi-step execution
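Every service below is consumed from a Worker, so it helps to fix the basic shape first. A minimal fetch handler, sketched with no bindings (runnable on the Workers runtime, or under Node 18+ where `Request`/`Response` are global):

```typescript
// Minimal Worker: an object with a fetch handler, the shape every
// example below builds on.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/") {
      return Response.json({ message: "Hello from the edge" });
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```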
## Wrangler CLI Reference
### Project Setup

```bash
# Create new project
npm create cloudflare@latest my-worker

# Initialize in existing directory
npx wrangler init

# Login
npx wrangler login
npx wrangler whoami
```
### Development

```bash
# Local development
npx wrangler dev
npx wrangler dev --remote   # Use remote bindings
npx wrangler dev --local    # Fully local

# Test cron trigger locally
npx wrangler dev --test-scheduled
curl "http://localhost:8787/__scheduled?cron=*+*+*+*+*"
```

### Deployment
```bash
# Deploy to production
npx wrangler deploy

# Deploy to environment
npx wrangler deploy --env staging

# List versions
npx wrangler versions list

# Rollback
npx wrangler rollback
```

### D1 Database
```bash
# Create database
npx wrangler d1 create my-database

# Execute SQL
npx wrangler d1 execute my-database --local --file=schema.sql
npx wrangler d1 execute my-database --remote --command="SELECT * FROM users"

# List tables (SQLite dot-command)
npx wrangler d1 execute my-database --local --command=".tables"

# Export
npx wrangler d1 export my-database --remote --output=backup.sql
```

### R2 Buckets
```bash
# Create bucket
npx wrangler r2 bucket create my-bucket

# List buckets
npx wrangler r2 bucket list

# Upload/download
npx wrangler r2 object put my-bucket/file.txt --file=local.txt
npx wrangler r2 object get my-bucket/file.txt --file=download.txt

# Delete
npx wrangler r2 object delete my-bucket/file.txt
```

### KV Namespaces
```bash
# Create namespace
npx wrangler kv namespace create MY_KV
npx wrangler kv namespace create MY_KV --preview   # Preview namespace

# List namespaces
npx wrangler kv namespace list

# Key operations
npx wrangler kv key put --binding MY_KV key "value"
npx wrangler kv key get --binding MY_KV key
npx wrangler kv key list --binding MY_KV
npx wrangler kv key delete --binding MY_KV key

# Bulk upload
npx wrangler kv bulk put --binding MY_KV data.json
```

### Secrets
```bash
# Set secret (prompts for value)
npx wrangler secret put API_KEY

# List secrets
npx wrangler secret list

# Delete secret
npx wrangler secret delete API_KEY
```

### Queues
```bash
# Create queue
npx wrangler queues create my-queue

# List queues
npx wrangler queues list
```

### Hyperdrive
```bash
# Create Hyperdrive config
npx wrangler hyperdrive create my-hyperdrive --connection-string="postgres://..."

# List configs
npx wrangler hyperdrive list

# Update
npx wrangler hyperdrive update my-hyperdrive --connection-string="postgres://..."
```

## Wrangler Configuration (wrangler.jsonc)

### Complete Configuration Reference
```jsonc
{
  "$schema": "./node_modules/wrangler/config-schema.json",
  "name": "my-worker",
  "main": "src/index.ts",
  "compatibility_date": "2024-01-01",
  "compatibility_flags": ["nodejs_compat"],

  // Account settings
  "account_id": "<optional-account-id>",

  // Build settings
  "minify": true,

  // Environment variables
  "vars": {
    "API_URL": "https://api.example.com"
  },

  // KV Namespaces
  "kv_namespaces": [
    {
      "binding": "MY_KV",
      "id": "<namespace-id>",
      "preview_id": "<preview-namespace-id>"
    }
  ],

  // R2 Buckets
  "r2_buckets": [
    {
      "binding": "MY_BUCKET",
      "bucket_name": "my-bucket",
      "preview_bucket_name": "my-bucket-preview",
      "jurisdiction": "eu"
    }
  ],

  // D1 Databases
  "d1_databases": [
    {
      "binding": "DB",
      "database_id": "<database-id>",
      "database_name": "my-database"
    }
  ],

  // Durable Objects
  "durable_objects": {
    "bindings": [
      {
        "name": "MY_DO",
        "class_name": "MyDurableObject"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      "new_classes": ["MyDurableObject"]
    },
    {
      "tag": "v2",
      "new_sqlite_classes": ["MyDurableObjectWithSQL"]
    }
  ],

  // Queues
  "queues": {
    "producers": [
      {
        "binding": "MY_QUEUE",
        "queue": "my-queue"
      }
    ],
    "consumers": [
      {
        "queue": "my-queue",
        "max_batch_size": 10,
        "max_batch_timeout": 30,
        "max_retries": 3,
        "dead_letter_queue": "my-dlq"
      }
    ]
  },

  // Hyperdrive
  "hyperdrive": [
    {
      "binding": "MY_DB_POOL",
      "id": "<hyperdrive-config-id>"
    }
  ],

  // Workers AI
  "ai": {
    "binding": "AI"
  },

  // Vectorize
  "vectorize": [
    {
      "binding": "MY_VECTORS",
      "index_name": "my-index"
    }
  ],

  // Browser Rendering
  "browser": {
    "binding": "BROWSER"
  },

  // Service Bindings (Worker-to-Worker)
  "services": [
    {
      "binding": "OTHER_WORKER",
      "service": "other-worker-name"
    }
  ],

  // Cron Triggers
  "triggers": {
    "crons": ["0 * * * *", "0 6 * * *"]
  },

  // Routes
  "routes": [
    {
      "pattern": "example.com/*",
      "zone_name": "example.com"
    }
  ],

  // Observability
  "observability": {
    "logs": {
      "enabled": true,
      "invocation_logs": true,
      "head_sampling_rate": 1
    }
  },

  // Environments
  "env": {
    "staging": {
      "name": "my-worker-staging",
      "vars": {
        "API_URL": "https://staging-api.example.com"
      }
    },
    "production": {
      "name": "my-worker-production",
      "routes": [
        {
          "pattern": "api.example.com/*",
          "zone_name": "example.com"
        }
      ]
    }
  }
}
```

## Storage Services Deep Dive
### KV (Key-Value Store)
Characteristics:
- Eventually consistent (up to 60s propagation)
- Max value size: 25 MiB
- Max key size: 512 bytes
- Best for: Configuration, session data, caching
- Free tier: 100,000 reads/day, 1,000 writes/day
```typescript
interface Env {
MY_KV: KVNamespace;
}
// Write operations
await env.MY_KV.put("key", "string value");
await env.MY_KV.put("key", JSON.stringify(object));
await env.MY_KV.put("key", arrayBuffer);
// With TTL (seconds)
await env.MY_KV.put("session", data, { expirationTtl: 3600 });
// With absolute expiration
await env.MY_KV.put("session", data, { expiration: Math.floor(Date.now() / 1000) + 3600 });
// With metadata
await env.MY_KV.put("user:123", userData, {
metadata: { type: "user", version: 2 }
});
// Read operations
const value = await env.MY_KV.get("key"); // Returns string or null
const json = await env.MY_KV.get("key", "json"); // Parses JSON
const buffer = await env.MY_KV.get("key", "arrayBuffer");
const stream = await env.MY_KV.get("key", "stream");
// With metadata
const { value, metadata } = await env.MY_KV.getWithMetadata("key");
// List keys
const list = await env.MY_KV.list();
const filtered = await env.MY_KV.list({ prefix: "user:", limit: 100 });
// Pagination: use list.cursor for next page
// Delete
await env.MY_KV.delete("key");
```
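`list()` returns at most one page of keys, so draining a namespace means following the cursor until `list_complete`. A sketch of that loop; `KVLister` is a hypothetical minimal stand-in for the KVNamespace `list` method so the function can be exercised outside the Workers runtime:

```typescript
// Shape of one page returned by KVNamespace.list()
interface KVListResult {
  keys: { name: string }[];
  list_complete: boolean;
  cursor?: string;
}
// Hypothetical minimal interface covering just the list method.
interface KVLister {
  list(opts: { prefix?: string; limit?: number; cursor?: string }): Promise<KVListResult>;
}

// Collect every key under a prefix by following the pagination cursor.
async function listAllKeys(kv: KVLister, prefix: string): Promise<string[]> {
  const names: string[] = [];
  let cursor: string | undefined;
  do {
    const page = await kv.list({ prefix, limit: 1000, cursor });
    names.push(...page.keys.map((k) => k.name));
    cursor = page.list_complete ? undefined : page.cursor;
  } while (cursor);
  return names;
}
```

In a Worker, `env.MY_KV` satisfies this interface directly.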
### R2 (Object Storage)
Characteristics:
- S3-compatible API
- Zero egress fees
- Max object size: 5 TB
- Single upload max: 5 GB (use multipart for larger)
- Best for: Media files, backups, data lakes, large files
```typescript
interface Env {
MY_BUCKET: R2Bucket;
}
// Put object
await env.MY_BUCKET.put("path/to/file.json", JSON.stringify(data), {
httpMetadata: {
contentType: "application/json",
cacheControl: "max-age=3600",
},
customMetadata: {
uploadedBy: "worker",
version: "1.0",
},
});
// Put with checksums
await env.MY_BUCKET.put("file.bin", data, {
md5: expectedMd5, // Validates on upload
sha256: expectedSha256,
});
// Get object
const object = await env.MY_BUCKET.get("path/to/file.json");
if (object) {
const text = await object.text();
const json = await object.json();
const buffer = await object.arrayBuffer();
const blob = await object.blob();
const stream = object.body; // ReadableStream
// Metadata
console.log(object.key, object.size, object.etag);
console.log(object.httpMetadata.contentType);
console.log(object.customMetadata.uploadedBy);
}
// Head (metadata only)
const head = await env.MY_BUCKET.head("path/to/file.json");
// List objects
const list = await env.MY_BUCKET.list();
const filtered = await env.MY_BUCKET.list({
prefix: "uploads/",
delimiter: "/",
limit: 1000,
});
// Delete
await env.MY_BUCKET.delete("path/to/file.json");
await env.MY_BUCKET.delete(["file1.json", "file2.json"]); // Batch delete
// Multipart upload (for files > 5GB)
const upload = await env.MY_BUCKET.createMultipartUpload("large-file.zip");
const part1 = await upload.uploadPart(1, chunk1);
const part2 = await upload.uploadPart(2, chunk2);
await upload.complete([part1, part2]);
// Or abort
await upload.abort();
```
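For the multipart path above, each `uploadPart` call needs a byte range of the source. A small helper that computes those ranges; the 10 MiB part size is an assumption to tune (R2, like S3, requires all parts except the last to be the same size, with a 5 MiB minimum):

```typescript
const PART_SIZE = 10 * 1024 * 1024; // 10 MiB per part (assumed; min 5 MiB)

// Compute [start, end) byte ranges for a multipart upload: equal-size
// parts except possibly the last one.
function partRanges(
  totalBytes: number,
  partSize = PART_SIZE
): Array<{ start: number; end: number }> {
  const ranges: Array<{ start: number; end: number }> = [];
  for (let start = 0; start < totalBytes; start += partSize) {
    ranges.push({ start, end: Math.min(start + partSize, totalBytes) });
  }
  return ranges;
}
```

Each range would then be sliced from the source and passed to `upload.uploadPart(i + 1, chunk)`.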
### D1 (SQLite Database)
Characteristics:
- Serverless SQLite
- Strong consistency
- Max database size: 10 GB (GA)
- Best for: Relational data, complex queries, ACID transactions
```typescript
interface Env {
DB: D1Database;
}
// Prepared statements (recommended)
const stmt = env.DB.prepare("SELECT * FROM users WHERE id = ?");
const { results } = await stmt.bind(userId).all();
const user = await stmt.bind(userId).first();
const value = await stmt.bind(userId).first("name"); // Single column
// Insert/Update
const { meta } = await env.DB.prepare(
"INSERT INTO users (name, email) VALUES (?, ?)"
).bind(name, email).run();
console.log(meta.last_row_id, meta.changes);
// Batch operations (single transaction)
const results = await env.DB.batch([
env.DB.prepare("INSERT INTO users (name) VALUES (?)").bind("Alice"),
env.DB.prepare("INSERT INTO users (name) VALUES (?)").bind("Bob"),
env.DB.prepare("UPDATE counters SET value = value + 1 WHERE name = 'users'"),
]);
// Raw execution
await env.DB.exec("PRAGMA table_info(users)");
// Transaction pattern (using batch)
await env.DB.batch([
env.DB.prepare("UPDATE accounts SET balance = balance - ? WHERE id = ?").bind(100, fromId),
env.DB.prepare("UPDATE accounts SET balance = balance + ? WHERE id = ?").bind(100, toId),
]);
```

D1 Best Practices:

```sql
-- Create indexes for WHERE clause columns
CREATE INDEX idx_users_email ON users(email);
-- Use EXPLAIN QUERY PLAN to verify index usage
EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'test@example.com';
-- Batch large migrations
DELETE FROM logs WHERE created_at < '2024-01-01' LIMIT 1000;
-- Run after schema changes
PRAGMA optimize;
```
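D1 binds scalar parameters one placeholder at a time, so `IN (...)` queries need one placeholder per value. A hypothetical helper (not part of the D1 API) that generates matching SQL and bindings:

```typescript
// Build "SELECT * FROM <table> WHERE <column> IN (?, ?, ...)" with one
// placeholder per value. Table and column are trusted identifiers here;
// only the values are parameterized.
function inQuery(
  table: string,
  column: string,
  values: unknown[]
): { sql: string; bindings: unknown[] } {
  if (values.length === 0) throw new Error("IN () with no values");
  const placeholders = values.map(() => "?").join(", ");
  return {
    sql: `SELECT * FROM ${table} WHERE ${column} IN (${placeholders})`,
    bindings: values,
  };
}
```

In a Worker this would feed `env.DB.prepare(sql).bind(...bindings).all()`.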
### Durable Objects
Characteristics:
- Single-threaded, globally unique instances
- Built-in SQLite storage
- WebSocket support with Hibernation
- Best for: Real-time coordination, chat, games, counters
```typescript
// Durable Object class
export class Counter {
state: DurableObjectState;
value: number = 0;
constructor(state: DurableObjectState, env: Env) {
this.state = state;
// Restore state from storage
this.state.blockConcurrencyWhile(async () => {
this.value = (await this.state.storage.get("value")) || 0;
});
}
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
switch (url.pathname) {
case "/increment":
this.value++;
await this.state.storage.put("value", this.value);
return Response.json({ value: this.value });
case "/value":
return Response.json({ value: this.value });
default:
return new Response("Not found", { status: 404 });
}
}
}
// Worker that uses the Durable Object
export default {
async fetch(request: Request, env: Env) {
const id = env.COUNTER.idFromName("global");
const stub = env.COUNTER.get(id);
return stub.fetch(request);
},
};
```

WebSocket Hibernation:

```typescript
export class ChatRoom {
state: DurableObjectState;
constructor(state: DurableObjectState, env: Env) {
this.state = state;
}
async fetch(request: Request): Promise<Response> {
if (request.headers.get("Upgrade") === "websocket") {
const pair = new WebSocketPair();
const [client, server] = Object.values(pair);
// Use Hibernation API
this.state.acceptWebSocket(server);
return new Response(null, { status: 101, webSocket: client });
}
return new Response("Expected WebSocket", { status: 400 });
}
// Called when hibernated DO receives WebSocket message
async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
// Broadcast to all connected clients
for (const client of this.state.getWebSockets()) {
if (client !== ws && client.readyState === WebSocket.READY_STATE_OPEN) {
client.send(message);
}
}
}
async webSocketClose(ws: WebSocket, code: number, reason: string) {
// Handle disconnect
}
async webSocketError(ws: WebSocket, error: unknown) {
// Handle error
ws.close(1011, "Internal error");
}
}
```
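`idFromName` maps a stable string to the same instance everywhere, so the interesting design choice is how that name is derived. A sketch of hash-based sharding for traffic with no natural key; the hash function and shard count are illustrative assumptions, not a Cloudflare API:

```typescript
// Derive a deterministic Durable Object name from an arbitrary key
// (e.g. a client IP) so load spreads across a fixed set of shards.
function shardName(key: string, shards = 16): string {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) | 0; // cheap string hash
  return `shard:${((h % shards) + shards) % shards}`; // normalize negative modulo
}

// In a Worker: env.MY_DO.get(env.MY_DO.idFromName(shardName(clientIp)))
```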
### Queues
Characteristics:
- Async message processing
- At-least-once delivery
- Automatic retries with dead letter queues
- Best for: Decoupling, background jobs, event processing
```typescript
// Producer
interface Env {
MY_QUEUE: Queue;
}
export default {
async fetch(request: Request, env: Env) {
// Send single message
await env.MY_QUEUE.send({ type: "email", to: "user@example.com" });
// Send with options
await env.MY_QUEUE.send(
{ type: "process", id: 123 },
{ contentType: "json" }
);
// Batch send
await env.MY_QUEUE.sendBatch([
{ body: { id: 1 } },
{ body: { id: 2 } },
{ body: { id: 3 } },
]);
return new Response("Queued");
},
};
// Consumer
interface QueueMessage {
type: string;
id?: number;
to?: string;
}
export default {
async queue(batch: MessageBatch<QueueMessage>, env: Env): Promise<void> {
for (const message of batch.messages) {
try {
console.log(`Processing: ${JSON.stringify(message.body)}`);
await processMessage(message.body);
message.ack(); // Mark as processed
} catch (e) {
console.error(`Failed: ${e}`);
message.retry(); // Will retry (up to max_retries)
}
}
},
};
```
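`message.retry()` accepts a `delaySeconds` option, which pairs with the message's `attempts` count to back off instead of retrying immediately. A sketch; the base and cap values are assumptions to tune:

```typescript
// Exponential backoff with jitter: the window doubles per attempt up to
// a cap, and the actual delay is randomized across the top half of the
// window so retries from a failed batch spread out.
function backoffSeconds(attempt: number, base = 5, cap = 600): number {
  const window = Math.min(cap, base * 2 ** attempt);
  return Math.floor(window / 2 + Math.random() * (window / 2));
}

// In a consumer:
// message.retry({ delaySeconds: backoffSeconds(message.attempts) });
```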
## AI Workers Reference
### Available Models (2025-2026)
Text Generation:
| Model | Context | Best For |
|---|---|---|
| @cf/meta/llama-3.3-70b-instruct-fp8-fast | 128K | General, reasoning |
| @cf/mistral/mistral-7b-instruct-v0.2 | 32K | Fast, efficient |
| @cf/qwen/qwen2.5-72b-instruct | 128K | Multilingual |
| @cf/deepseek/deepseek-r1-distill-llama-70b | 64K | Complex reasoning |
Text-to-Speech (TTS):
| Model | Languages | Notes |
|---|---|---|
| @deepgram/aura-2-en | English | Best quality, context-aware |
| @deepgram/aura-1 | English | Fast, good quality |
| @cf/myshell-ai/melotts | en, fr, es, zh, ja, ko | Multi-lingual |
Speech-to-Text (STT):
| Model | Languages | Notes |
|---|---|---|
| @cf/openai/whisper-large-v3-turbo | 100+ | Fast, accurate |
| @cf/openai/whisper | 100+ | Original Whisper |
Image Generation:
| Model | Resolution | Notes |
|---|---|---|
| @cf/black-forest-labs/flux-1-schnell | Up to 1024x1024 | Fast |
| @cf/stabilityai/stable-diffusion-xl-base-1.0 | Up to 1024x1024 | Detailed |
Vision/Captioning:
| Model | Capabilities |
|---|---|
| @cf/meta/llama-3.2-11b-vision-instruct | Image understanding, captioning |
| @cf/llava-hf/llava-1.5-7b-hf | Visual Q&A |
Embeddings:
| Model | Dimensions | Notes |
|---|---|---|
| @cf/baai/bge-large-en-v1.5 | 1024 | Best quality |
| @cf/baai/bge-small-en-v1.5 | 384 | Faster |
### Usage Examples
```typescript
interface Env {
AI: Ai;
}
// Text generation
const response = await env.AI.run("@cf/meta/llama-3.3-70b-instruct-fp8-fast", {
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What is Cloudflare?" },
],
max_tokens: 512,
temperature: 0.7,
});
// Streaming
const stream = await env.AI.run("@cf/meta/llama-3.3-70b-instruct-fp8-fast", {
messages: [...],
stream: true,
});
return new Response(stream, {
headers: { "Content-Type": "text/event-stream" },
});
// Text-to-Speech
const audio = await env.AI.run("@deepgram/aura-2-en", {
text: "Hello, this is a test.",
});
return new Response(audio, {
headers: { "Content-Type": "audio/wav" },
});
// Speech-to-Text
const transcript = await env.AI.run("@cf/openai/whisper-large-v3-turbo", {
audio: audioArrayBuffer,
});
// Returns { text: "...", segments: [...] }
// Image generation
const image = await env.AI.run("@cf/black-forest-labs/flux-1-schnell", {
prompt: "A futuristic cityscape at sunset",
num_steps: 4,
});
return new Response(image, {
headers: { "Content-Type": "image/png" },
});
// Embeddings
const embeddings = await env.AI.run("@cf/baai/bge-large-en-v1.5", {
text: ["Hello world", "Cloudflare Workers"],
});
// Returns { data: [{ embedding: [...] }, { embedding: [...] }] }
// Image captioning
const caption = await env.AI.run("@cf/meta/llama-3.2-11b-vision-instruct", {
image: imageArrayBuffer,
prompt: "Describe this image in detail.",
});
```
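The bge embedding models above pair with Vectorize for retrieval. A sketch of that lookup path, with the bindings reduced to hypothetical minimal interfaces so it can run outside the Workers runtime; real code would call `env.AI.run` and a `VectorizeIndex` binding's `query` method:

```typescript
// Minimal stand-ins for the AI and Vectorize bindings (assumed shapes).
interface EmbeddingResult { data: number[][]; }
interface AiRunner {
  run(model: string, input: { text: string[] }): Promise<EmbeddingResult>;
}
interface VectorMatch { id: string; score: number; }
interface VectorIndex {
  query(vector: number[], opts: { topK: number }): Promise<{ matches: VectorMatch[] }>;
}

// Embed the question with the same model used at indexing time, then
// run a nearest-neighbour lookup.
async function semanticSearch(
  ai: AiRunner,
  index: VectorIndex,
  question: string
): Promise<VectorMatch[]> {
  const emb = await ai.run("@cf/baai/bge-large-en-v1.5", { text: [question] });
  const { matches } = await index.query(emb.data[0], { topK: 5 });
  return matches;
}
```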
## Hyperdrive Deep Dive
Hyperdrive accelerates database connections by maintaining connection pools close to your database.
### Setup
```bash
# Create Hyperdrive config
npx wrangler hyperdrive create my-db \
  --connection-string="postgres://user:pass@host:5432/database"

# Then add the hyperdrive binding to wrangler.jsonc (see the configuration reference above)
```

### Usage
```typescript
import { Client } from "pg";
interface Env {
MY_DB: Hyperdrive;
}
export default {
async fetch(request: Request, env: Env) {
// Connect using Hyperdrive connection string
const client = new Client({
connectionString: env.MY_DB.connectionString,
});
await client.connect();
const result = await client.query("SELECT * FROM users WHERE id = $1", [1]);
// No need to call client.end() - Hyperdrive manages pooling
return Response.json(result.rows);
},
};typescript
import { Client } from "pg";
interface Env {
MY_DB: Hyperdrive;
}
export default {
async fetch(request: Request, env: Env) {
// 使用Hyperdrive连接字符串连接
const client = new Client({
connectionString: env.MY_DB.connectionString,
});
await client.connect();
const result = await client.query("SELECT * FROM users WHERE id = $1", [1]);
// 无需调用client.end() - Hyperdrive管理连接池
return Response.json(result.rows);
},
};When to Use Hyperdrive
**Use Hyperdrive when:**
- Connecting to remote PostgreSQL/MySQL databases
- High-latency database connections (different regions)
- Frequent identical read queries (caching)
- Many concurrent database connections needed
**Don't use Hyperdrive when:**
- Using D1 (already edge-native)
- Local development (use direct connection)
- Need prepared statements across requests (transaction mode limitation)
- Using Durable Objects storage
### Performance Benefits

Without Hyperdrive:

```
Worker -> TCP handshake (1 RTT)
       -> TLS negotiation (3 RTTs)
       -> DB authentication (3 RTTs)
       -> Query (1 RTT)
Total: 8 round-trips before first query
```

With Hyperdrive:

```
Worker -> Hyperdrive pool (cached connection)
       -> Query (1 RTT to pool, reuses DB connection)
Total: 1 round-trip to query
```

## Zero Trust Reference
### Cloudflare Tunnel
Expose internal services securely without opening firewall ports.

**Installation:**

```bash
# macOS
brew install cloudflared

# Windows
winget install Cloudflare.cloudflared

# Linux
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -o cloudflared
sudo chmod +x cloudflared && sudo mv cloudflared /usr/local/bin/
```

**Setup:**

```bash
# Login
cloudflared tunnel login

# Create tunnel
cloudflared tunnel create my-tunnel

# Create config file (~/.cloudflared/config.yml)
cat << EOF > ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: $HOME/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:3000
  - hostname: api.example.com
    service: http://localhost:8080
  - service: http_status:404
EOF

# Add DNS
cloudflared tunnel route dns my-tunnel app.example.com

# Run
cloudflared tunnel run my-tunnel
```
**Run as Service:**

```bash
# Linux
sudo cloudflared service install
sudo systemctl enable cloudflared
sudo systemctl start cloudflared

# macOS
sudo cloudflared service install
sudo launchctl load /Library/LaunchDaemons/com.cloudflare.cloudflared.plist
```

### Access Policies
Configure in the Cloudflare dashboard (Zero Trust > Access > Applications):

```yaml
Application:
  name: Internal App
  type: Self-hosted
  domain: app.example.com
Policy:
  name: Allow Company
  action: Allow
  include:
    - email_domain: company.com
  require:
    - country: US
```

### WARP Client
- Device client for Zero Trust enrollment
- Routes traffic through Cloudflare network
- Enables identity-based access policies
- Split tunneling for selective routing
## MCP Servers Reference
### Building an MCP Server on Workers

```typescript
import { McpServer } from "@cloudflare/mcp-server";

interface Env {
  DB: D1Database;
}

const server = new McpServer({
  name: "my-mcp-server",
  version: "1.0.0",
});

// Define tools
server.addTool({
  name: "query_database",
  description: "Query the D1 database",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string", description: "SQL query to execute" },
    },
    required: ["query"],
  },
  handler: async ({ query }, { env }) => {
    const result = await env.DB.prepare(query).all();
    return {
      content: [{ type: "text", text: JSON.stringify(result.results) }],
    };
  },
});

// Define resources
server.addResource({
  uri: "db://tables",
  name: "Database Tables",
  description: "List of all tables",
  handler: async ({ env }) => {
    const tables = await env.DB.prepare(
      "SELECT name FROM sqlite_master WHERE type='table'"
    ).all();
    return {
      contents: [{ uri: "db://tables", text: JSON.stringify(tables.results) }],
    };
  },
});

export default {
  async fetch(request: Request, env: Env) {
    return server.handleRequest(request, env);
  },
};
```

### MCP Transport Types
- **Streamable HTTP** (Recommended, March 2025+)
  - Single HTTP endpoint
  - Bidirectional messaging
  - Standard for remote MCP
- **stdio** (Local only)
  - Standard input/output
  - For local MCP connections
- **SSE** (Deprecated)
  - Use Streamable HTTP instead

### Cloudflare's Managed MCP Servers
Available at https://mcp.cloudflare.com/:

- Workers management
- R2 bucket operations
- D1 database queries
- DNS management
- Analytics access

Connect from Claude/Cursor:

```json
{
  "mcpServers": {
    "cloudflare": {
      "url": "https://mcp.cloudflare.com/sse",
      "transport": "sse"
    }
  }
}
```

## CI/CD Reference
### GitHub Actions
```yaml
name: Deploy Worker

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Deploy to Cloudflare
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
```

### Workers Builds (Native Git Integration)
- Connect GitHub/GitLab in Cloudflare dashboard
- Select repository and branch
- Configure build command (optional)
- Automatic deployment on push
- Preview URLs for pull requests
## Pricing Reference (2025-2026)
### Workers
| Plan | Price | Requests | CPU Time |
|---|---|---|---|
| Free | $0 | 100K/day | 10ms/invocation |
| Paid | $5/mo | 10M included | 30s/invocation |
| Usage | +$0.30/M requests | - | $0.02/M ms |
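As a worked example of the paid-plan figures above, here is a rough monthly estimate. It is a sketch using only the numbers in the table: the table does not list an included CPU allotment, so none is modeled, which overestimates CPU cost.

```typescript
// Rough monthly Workers cost estimator from the paid-plan row above.
// Assumptions: $5/mo base includes 10M requests; extra requests bill at
// $0.30 per million; CPU time bills at $0.02 per million milliseconds.
export function estimateWorkersMonthlyCost(requests: number, cpuMs: number): number {
  const base = 5;
  const extraRequests = Math.max(0, requests - 10_000_000);
  const requestCost = (extraRequests / 1_000_000) * 0.3;
  const cpuCost = (cpuMs / 1_000_000) * 0.02;
  // Round to whole cents
  return Math.round((base + requestCost + cpuCost) * 100) / 100;
}

// e.g. 25M requests and 100M ms of CPU:
// $5 base + 15 * $0.30 + 100 * $0.02 = $11.50
```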
### Storage
| Service | Free Tier | Paid |
|---|---|---|
| KV | 100K reads, 1K writes/day | $0.50/M reads, $5/M writes |
| R2 | 10GB storage, 10M Class A ops | $0.015/GB, $4.50/M Class A |
| D1 | 5M rows read, 100K writes/day | $0.001/M rows, $1/M writes |
| Durable Objects | 1M requests | $0.15/M requests |
| Queues | 1M messages | $0.40/M messages |
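Applying the R2 row of the table above, a rough monthly estimate looks like this. It is a sketch assuming the free tier (10 GB storage, 10M Class A operations) is deducted first; R2 charges no egress fees, so bandwidth is omitted.

```typescript
// Rough monthly R2 cost from the storage table above.
// Assumptions: free tier of 10 GB and 10M Class A ops is deducted first;
// beyond that, $0.015/GB-month and $4.50 per million Class A ops.
export function estimateR2MonthlyCost(storageGb: number, classAOps: number): number {
  const billableGb = Math.max(0, storageGb - 10);
  const billableClassA = Math.max(0, classAOps - 10_000_000);
  const cost = billableGb * 0.015 + (billableClassA / 1_000_000) * 4.5;
  // Round to whole cents
  return Math.round(cost * 100) / 100;
}

// e.g. 110 GB stored and 20M Class A ops:
// 100 * $0.015 + 10 * $4.50 = $46.50
```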
### AI Workers
- Pay per inference
- Varies by model (check dashboard for current pricing)
- Free tier includes limited inferences
## Best Practices
### Performance
- **Use edge caching**: Cache API responses with `caches.default`
- **Minimize cold starts**: Keep Workers small, use dynamic imports
- **Use Service Bindings**: Zero-cost Worker-to-Worker calls
- **Batch operations**: Combine KV/R2/D1 operations
- **Use Hyperdrive**: For remote database connections
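The edge-caching bullet can be sketched as below. This assumes the Workers runtime (where `caches.default` and the fetch-handler shape are provided); the TTL and the list of query params worth keeping in the cache key are illustrative choices, not part of any API.

```typescript
// Pure helper: drop query params (e.g. tracking params) that would
// needlessly fragment the cache. `keep` lists the params that matter.
export function cacheKeyUrl(url: string, keep: string[] = ["page"]): string {
  const u = new URL(url);
  const kept = new URLSearchParams();
  for (const p of keep) {
    const v = u.searchParams.get(p);
    if (v !== null) kept.set(p, v);
  }
  u.search = kept.toString();
  return u.toString();
}

export default {
  async fetch(request: Request, _env: unknown, ctx: { waitUntil(p: Promise<unknown>): void }) {
    const cache = (globalThis as any).caches?.default; // Workers edge cache
    const key = new Request(cacheKeyUrl(request.url));

    if (cache) {
      const hit = await cache.match(key);
      if (hit) return hit; // served from the edge, no origin round-trip
    }

    const upstream = await fetch(request); // forward to origin
    const response = new Response(upstream.body, upstream);
    response.headers.set("Cache-Control", "max-age=300"); // illustrative TTL
    if (cache) ctx.waitUntil(cache.put(key, response.clone()));
    return response;
  },
};
```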
### Security
- **Use secrets**: Never hardcode credentials
- **Validate input**: Sanitize all user input
- **Use HTTPS**: Always use secure connections
- **Implement rate limiting**: Protect against abuse
- **Use Zero Trust**: For internal service access
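The rate-limiting bullet can be sketched with a token bucket. This is an in-memory illustration only: Worker isolates don't share memory, so in production the bucket state would live in a Durable Object. The class name and the capacity/rate numbers are illustrative.

```typescript
// Token-bucket rate limiter sketch. Capacity bounds bursts; refillPerSecond
// bounds the sustained rate. `now` is injectable so the logic is testable.
export class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,        // max burst size
    private refillPerSecond: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it should be rejected.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

In a Worker, a Durable Object keyed by client IP (or API key) would hold one `TokenBucket` per caller and reject with HTTP 429 when `allow()` returns false.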
### Cost Optimization
- **Use Static Assets**: Free, unlimited static file serving
- **Sample logs**: Use `head_sampling_rate` for high-traffic Workers
- **Use KV for caching**: Reduce D1/external API calls
- **Batch queue messages**: Reduce per-message overhead
- **Use GPU-appropriate models**: Don't overprovision AI
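The log-sampling bullet maps to Wrangler's observability settings in wrangler.jsonc; the 5% rate below is an illustrative value, not a recommendation:

```jsonc
{
  "observability": {
    "enabled": true,
    "head_sampling_rate": 0.05
  }
}
```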
## Quick Reference
| Task | Command/Code |
|---|---|
| New project | `npm create cloudflare@latest` |
| Local dev | `npx wrangler dev` |
| Deploy | `npx wrangler deploy` |
| Create D1 | `npx wrangler d1 create <name>` |
| Create KV | `npx wrangler kv namespace create <name>` |
| Create R2 | `npx wrangler r2 bucket create <name>` |
| Set secret | `npx wrangler secret put <NAME>` |
| Create queue | `npx wrangler queues create <name>` |
| Create tunnel | `cloudflared tunnel create <name>` |