# Cloudflare Workers Performance Optimization
Techniques for maximizing Worker performance and minimizing latency.
## Quick Wins
```typescript
// 1. Avoid unnecessary cloning
// ❌ Bad: clones the entire request
const body = await request.clone().json();
// ✅ Good: parse directly when not re-using the body
const body = await request.json();

// 2. Use streaming instead of buffering
// ❌ Bad: buffers the entire response
const text = await response.text();
return new Response(transform(text));
// ✅ Good: stream the transformation
return new Response(response.body.pipeThrough(new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(process(chunk));
  }
})));

// 3. Cache expensive operations
const cache = caches.default;
const cached = await cache.match(request);
if (cached) return cached;
```
## Critical Rules
- Stay under CPU limits - 10ms (free), 30ms (paid), 50ms (unbound)
- Minimize cold starts - Keep bundles < 1MB, avoid dynamic imports
- Use Cache API - Cache responses at the edge
- Stream large payloads - Don't buffer entire responses
- Batch operations - Combine multiple KV/D1 calls
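The batching rule can be sketched generically; `getValue` here is a hypothetical async getter standing in for `env.KV.get` or a D1 query:

```typescript
// A minimal sketch of the batching rule: issue lookups concurrently with
// Promise.all instead of awaiting them one by one. `getValue` is a
// hypothetical async getter standing in for env.KV.get or a D1 query.
async function batchGet<T>(
  keys: string[],
  getValue: (key: string) => Promise<T>
): Promise<Map<string, T>> {
  // All lookups start immediately, so total latency tracks the slowest
  // single call rather than the sum of all calls.
  const values = await Promise.all(keys.map((k) => getValue(k)));
  return new Map(keys.map((k, i) => [k, values[i]]));
}
```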
## Top 10 Performance Errors
| Error | Symptom | Fix |
|---|---|---|
| CPU limit exceeded | Worker terminated | Optimize hot paths, use streaming |
| Cold start latency | First request slow | Reduce bundle size, avoid top-level await |
| Memory pressure | Slow GC, timeouts | Stream data, avoid large arrays |
| KV latency | Slow reads | Use Cache API, batch reads |
| D1 slow queries | High latency | Add indexes, optimize SQL |
| Large bundles | Slow cold starts | Tree-shake, code split |
| Blocking operations | Request timeouts | Use Promise.all, streaming |
| Unnecessary cloning | Memory spike | Only clone when needed |
| Missing cache | Repeated computation | Implement caching layer |
| Sync operations | CPU spikes | Use async alternatives |
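For the "missing cache" row, a request-scoped caching layer can be as small as a memoized loader. A sketch with illustrative names, not a Workers API:

```typescript
// Memoize an expensive async computation so repeated calls within one
// isolate reuse the first result instead of recomputing it.
function memoizeAsync<T>(compute: (key: string) => Promise<T>) {
  const cache = new Map<string, Promise<T>>();
  return (key: string): Promise<T> => {
    // Store the promise itself so concurrent callers share one in-flight
    // computation instead of racing to compute it twice.
    let hit = cache.get(key);
    if (!hit) {
      hit = compute(key);
      cache.set(key, hit);
    }
    return hit;
  };
}
```

Because the map is module-scoped in practice, entries persist across requests served by the same isolate but are not shared globally.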
## CPU Optimization
### Profile Hot Paths
```typescript
async function profiledHandler(request: Request): Promise<Response> {
  const timing: Record<string, number> = {};
  // Note: Workers only advance Date.now() across I/O boundaries, so these
  // spans measure awaited I/O rather than pure CPU time.
  const time = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
    const start = Date.now();
    const result = await fn();
    timing[name] = Date.now() - start;
    return result;
  };
  const data = await time('fetch', () => fetchData());
  const processed = await time('process', () => processData(data));
  const response = await time('serialize', () => serialize(processed));
  console.log('Timing:', timing);
  return new Response(response);
}
```
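The timing wrapper can be factored out for reuse across handlers; a standalone sketch with hypothetical names:

```typescript
// Collects per-step durations into a record that can be logged or attached
// to the response for inspection.
function makeTimer() {
  const timing: Record<string, number> = {};
  const time = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
    const start = Date.now();
    const result = await fn();
    timing[name] = Date.now() - start;
    return result;
  };
  return { timing, time };
}
```

The collected durations can also be serialized into a `Server-Timing` response header (`name;dur=123`) so they show up in browser devtools.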
### Optimize JSON Operations
```typescript
// For large JSON, use a streaming parser
import { JSONParser } from '@streamparser/json';

async function parseStreamingJSON(stream: ReadableStream): Promise<unknown[]> {
  const parser = new JSONParser();
  const results: unknown[] = [];
  // Depending on the @streamparser/json version, onValue receives either the
  // parsed value directly or an info object with a .value field.
  parser.onValue = (value) => results.push(value);
  for await (const chunk of stream) {
    parser.write(chunk);
  }
  return results;
}
```
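When the payload is newline-delimited JSON (NDJSON) rather than one large document, records can be parsed as they arrive with no parser dependency at all. A dependency-free sketch:

```typescript
// Parse newline-delimited JSON from a byte stream, emitting each record as
// soon as its terminating newline arrives. Only the current partial line is
// buffered, not the whole payload.
async function parseNDJSON(stream: ReadableStream<Uint8Array>): Promise<unknown[]> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  const results: unknown[] = [];
  let buffer = '';
  let idx: number;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    buffer += decoder.decode(value, { stream: true });
    while ((idx = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 1);
      if (line) results.push(JSON.parse(line));
    }
  }
  if (buffer.trim()) results.push(JSON.parse(buffer));
  return results;
}
```

To emit records incrementally instead of collecting them, the same loop works as an async generator.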
## Memory Optimization
### Avoid Large Arrays
```typescript
// ❌ Bad: loads everything into memory
const items = await db.prepare('SELECT * FROM items').all();
const processed = items.results.map(transform);

// ✅ Good: process in batches
// For large tables, keyset pagination (WHERE id > ? ORDER BY id) avoids the
// growing scan cost of large OFFSET values.
async function* batchProcess(db: D1Database, batchSize = 100) {
  let offset = 0;
  while (true) {
    const { results } = await db
      .prepare('SELECT * FROM items LIMIT ? OFFSET ?')
      .bind(batchSize, offset)
      .all();
    if (results.length === 0) break;
    for (const item of results) {
      yield transform(item);
    }
    offset += batchSize;
  }
}
```
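The same batching shape works with the D1 query swapped for a pluggable page-fetching callback, an illustrative generalization that is easy to test in isolation:

```typescript
// Yield transformed rows page by page; `fetchPage` stands in for the
// LIMIT/OFFSET query, so only one page is resident in memory at a time.
async function* batched<T, R>(
  fetchPage: (limit: number, offset: number) => Promise<T[]>,
  transform: (item: T) => R,
  batchSize = 100
): AsyncGenerator<R> {
  let offset = 0;
  while (true) {
    const page = await fetchPage(batchSize, offset);
    if (page.length === 0) break;
    for (const item of page) yield transform(item);
    offset += batchSize;
  }
}
```

Consume it with `for await (const row of batched(fetchPage, transform))`.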
## Caching Strategies

### Multi-Layer Cache
```typescript
interface CacheLayer {
  get(key: string): Promise<unknown | null>;
  set(key: string, value: unknown, ttl?: number): Promise<void>;
}

// Layer 1: in-memory (request-scoped)
const memoryCache = new Map<string, unknown>();

// Layer 2: Cache API (edge-local)
const edgeCache: CacheLayer = {
  async get(key) {
    const response = await caches.default.match(new Request(`https://cache/${key}`));
    return response ? response.json() : null;
  },
  async set(key, value, ttl = 60) {
    await caches.default.put(
      new Request(`https://cache/${key}`),
      new Response(JSON.stringify(value), {
        headers: { 'Cache-Control': `max-age=${ttl}` }
      })
    );
  }
};

// Layer 3: KV (global)
// Use env.KV.get/put
```
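One way the layers might be composed is a read-through lookup that returns the first hit and backfills the faster layers that missed. A sketch with illustrative names (the `Layer` interface mirrors the cache-layer shape above):

```typescript
interface Layer {
  get(key: string): Promise<unknown | null>;
  set(key: string, value: unknown): Promise<void>;
}

// Check each layer in order; on a hit, backfill every faster layer that
// missed so the next lookup is served closer to the top.
async function layeredGet(layers: Layer[], key: string): Promise<unknown | null> {
  const missed: Layer[] = [];
  for (const layer of layers) {
    const value = await layer.get(key);
    if (value !== null) {
      await Promise.all(missed.map((m) => m.set(key, value)));
      return value;
    }
    missed.push(layer);
  }
  return null;
}
```

Order the array fastest-first (memory, then Cache API, then KV) so the cheapest layer is consulted before any network hop.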
## Bundle Optimization
```typescript
// 1. Tree-shake imports
// ❌ Bad
import * as lodash from 'lodash';
// ✅ Good
import { debounce } from 'lodash-es';

// 2. Lazy-load heavy dependencies
let heavyLib: typeof import('heavy-lib') | undefined;
async function getHeavyLib() {
  if (!heavyLib) {
    heavyLib = await import('heavy-lib');
  }
  return heavyLib;
}
```
## When to Load References
Load specific references based on the task:
- Optimizing CPU usage? → Load `references/cpu-optimization.md`
- Memory issues? → Load `references/memory-optimization.md`
- Implementing caching? → Load `references/caching-strategies.md`
- Reducing bundle size? → Load `references/bundle-optimization.md`
- Cold start problems? → Load `references/cold-starts.md`
## Templates
| Template | Purpose | Use When |
|---|---|---|
| Performance monitoring | Adding timing/profiling | |
| Multi-layer caching | Implementing cache | |
| Performance patterns | Starting optimized worker | |
## Scripts
| Script | Purpose | Command |
|---|---|---|
| Load testing | | |
| CPU profiling | | |