# Cloudflare Workers Performance Optimization

Techniques for maximizing Worker performance and minimizing latency.

## Quick Wins

```typescript
// 1. Avoid unnecessary cloning
// ❌ Bad: Clones entire request
const body = await request.clone().json();

// ✅ Good: Parse directly when not re-using body
const body = await request.json();

// 2. Use streaming instead of buffering
// ❌ Bad: Buffers entire response
const text = await response.text();
return new Response(transform(text));

// ✅ Good: Stream transformation
return new Response(response.body.pipeThrough(new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(process(chunk));
  }
})));

// 3. Cache expensive operations
const cache = caches.default;
const cached = await cache.match(request);
if (cached) return cached;
```
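The Cache API lookup above caches whole responses at the edge. For expensive pure computations, a small in-memory memoizer can sit in front of it. A minimal sketch, with the caveat that `ttlMemo` is an illustrative helper, not a Workers API, and isolate memory is not shared across Worker instances:

```typescript
// Best-effort in-isolate memoizer with a TTL. Illustrative only; isolate
// memory is per-instance, so treat this as a layer in front of the Cache
// API or KV, not a replacement for them.
function ttlMemo<T>(fn: (key: string) => T, ttlMs = 60_000) {
  const store = new Map<string, { value: T; expires: number }>();
  return (key: string): T => {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh hit
    const value = fn(key);
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage: wrap the expensive computation once, at module scope, so the
// memo survives across requests handled by the same isolate.
const reverseKey = ttlMemo((s) => s.split('').reverse().join(''));
```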

## Critical Rules

  1. Stay under CPU limits - 10ms (free), 30ms (paid), 50ms (unbound)
  2. Minimize cold starts - Keep bundles < 1MB, avoid dynamic imports
  3. Use Cache API - Cache responses at the edge
  4. Stream large payloads - Don't buffer entire responses
  5. Batch operations - Combine multiple KV/D1 calls
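Rule 5 (batch operations) in practice often means issuing independent lookups concurrently instead of awaiting them one at a time. A hedged sketch; `getMany` is a hypothetical helper, and in a real Worker the `lookup` argument would be something like `(k) => env.KV.get(k)`:

```typescript
// Sequential awaits pay one round trip per call; Promise.all overlaps them.
// `lookup` stands in for any async backend call (KV, D1, fetch).
async function getMany<T>(
  keys: string[],
  lookup: (key: string) => Promise<T>,
): Promise<T[]> {
  // Fire all lookups before awaiting any of them; results keep key order.
  return Promise.all(keys.map((key) => lookup(key)));
}
```

For D1 specifically, `db.batch([...preparedStatements])` goes further and sends several statements in a single round trip.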

## Top 10 Performance Errors

| Error | Symptom | Fix |
| --- | --- | --- |
| CPU limit exceeded | Worker terminated | Optimize hot paths, use streaming |
| Cold start latency | First request slow | Reduce bundle size, avoid top-level await |
| Memory pressure | Slow GC, timeouts | Stream data, avoid large arrays |
| KV latency | Slow reads | Use Cache API, batch reads |
| D1 slow queries | High latency | Add indexes, optimize SQL |
| Large bundles | Slow cold starts | Tree-shake, code split |
| Blocking operations | Request timeouts | Use Promise.all, streaming |
| Unnecessary cloning | Memory spike | Only clone when needed |
| Missing cache | Repeated computation | Implement caching layer |
| Sync operations | CPU spikes | Use async alternatives |

## CPU Optimization

### Profile Hot Paths

```typescript
async function profiledHandler(request: Request): Promise<Response> {
  const timing: Record<string, number> = {};

  const time = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
    const start = Date.now();
    const result = await fn();
    timing[name] = Date.now() - start;
    return result;
  };

  const data = await time('fetch', () => fetchData());
  const processed = await time('process', () => processData(data));
  const response = await time('serialize', () => serialize(processed));

  console.log('Timing:', timing);
  return new Response(response);
}
```

Note that in the Workers runtime, `Date.now()` advances only across I/O boundaries (a Spectre mitigation), so these timings mostly capture I/O latency rather than pure CPU time.

### Optimize JSON Operations

```typescript
// For large JSON, use streaming parser
import { JSONParser } from '@streamparser/json';

async function parseStreamingJSON(stream: ReadableStream): Promise<unknown[]> {
  const parser = new JSONParser();
  const results: unknown[] = [];

  parser.onValue = (value) => results.push(value);

  for await (const chunk of stream) {
    parser.write(chunk);
  }

  return results;
}
```

## Memory Optimization

### Avoid Large Arrays

```typescript
// ❌ Bad: Loads all into memory
const items = await db.prepare('SELECT * FROM items').all();
const processed = items.results.map(transform);

// ✅ Good: Process in batches
async function* batchProcess(db: D1Database, batchSize = 100) {
  let offset = 0;
  while (true) {
    const { results } = await db
      .prepare('SELECT * FROM items LIMIT ? OFFSET ?')
      .bind(batchSize, offset)
      .all();

    if (results.length === 0) break;

    for (const item of results) {
      yield transform(item);
    }
    offset += batchSize;
  }
}
```
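OFFSET pagination re-scans the skipped rows on every batch, so its cost grows with the offset. On large tables, keyset (cursor) pagination is usually cheaper. A sketch under the assumption that `items` has a monotonically increasing `id` column; `queryAfter` abstracts the D1 call, which in a real Worker would be something like `env.DB.prepare('SELECT * FROM items WHERE id > ? ORDER BY id LIMIT ?').bind(lastId, n).all()` (hypothetical binding name):

```typescript
// Keyset pagination: resume from the last seen id instead of an OFFSET,
// so each batch is an index seek rather than a rescan.
type Item = { id: number };

async function* keysetProcess(
  queryAfter: (lastId: number, limit: number) => Promise<Item[]>,
  batchSize = 100,
) {
  let lastId = 0;
  while (true) {
    const results = await queryAfter(lastId, batchSize);
    if (results.length === 0) break;
    yield* results;
    // The cursor is the last id of the batch, not a row count.
    lastId = results[results.length - 1].id;
  }
}
```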

## Caching Strategies

### Multi-Layer Cache

```typescript
interface CacheLayer {
  get(key: string): Promise<unknown | null>;
  set(key: string, value: unknown, ttl?: number): Promise<void>;
}

// Layer 1: In-memory (request-scoped)
const memoryCache = new Map<string, unknown>();

// Layer 2: Cache API (edge-local)
const edgeCache: CacheLayer = {
  async get(key) {
    const response = await caches.default.match(new Request(`https://cache/${key}`));
    return response ? response.json() : null;
  },
  async set(key, value, ttl = 60) {
    await caches.default.put(
      new Request(`https://cache/${key}`),
      new Response(JSON.stringify(value), {
        headers: { 'Cache-Control': `max-age=${ttl}` }
      })
    );
  }
};

// Layer 3: KV (global)
// Use env.KV.get/put
```
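One way to tie the layers together is a read-through lookup that checks each layer fastest-first and back-fills on a hit. A sketch under the `CacheLayer` interface above; the layer ordering and back-fill policy are assumptions, not a prescribed API:

```typescript
interface CacheLayer {
  get(key: string): Promise<unknown | null>;
  set(key: string, value: unknown, ttl?: number): Promise<void>;
}

// Check layers in order (e.g. memory → edge → KV); on a hit, repopulate the
// faster layers that missed. A cached null is treated as a miss here.
async function readThrough(
  layers: CacheLayer[],
  key: string,
  loader: () => Promise<unknown>,
): Promise<unknown> {
  for (let i = 0; i < layers.length; i++) {
    const hit = await layers[i].get(key);
    if (hit !== null) {
      await Promise.all(layers.slice(0, i).map((l) => l.set(key, hit)));
      return hit;
    }
  }
  // Full miss: compute once and populate every layer.
  const value = await loader();
  await Promise.all(layers.map((l) => l.set(key, value)));
  return value;
}
```

In a real Worker, the Cache API writes are good candidates for `ctx.waitUntil` so they don't block the response.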

## Bundle Optimization

```typescript
// 1. Tree-shake imports
// ❌ Bad
import * as lodash from 'lodash';

// ✅ Good
import { debounce } from 'lodash-es';

// 2. Lazy load heavy dependencies
let heavyLib: typeof import('heavy-lib') | undefined;

async function getHeavyLib() {
  if (!heavyLib) {
    heavyLib = await import('heavy-lib');
  }
  return heavyLib;
}
```

## When to Load References

Load specific references based on the task:

- Optimizing CPU usage? → Load `references/cpu-optimization.md`
- Memory issues? → Load `references/memory-optimization.md`
- Implementing caching? → Load `references/caching-strategies.md`
- Reducing bundle size? → Load `references/bundle-optimization.md`
- Cold start problems? → Load `references/cold-starts.md`

## Templates

| Template | Purpose | Use When |
| --- | --- | --- |
| `templates/performance-middleware.ts` | Performance monitoring | Adding timing/profiling |
| `templates/caching-layer.ts` | Multi-layer caching | Implementing cache |
| `templates/optimized-worker.ts` | Performance patterns | Starting an optimized worker |

## Scripts

| Script | Purpose | Command |
| --- | --- | --- |
| `scripts/benchmark.sh` | Load testing | `./benchmark.sh <url>` |
| `scripts/profile-worker.sh` | CPU profiling | `./profile-worker.sh` |

## Resources