workers-observability
Cloudflare Workers Observability
Production-grade observability for Cloudflare Workers: logging, metrics, tracing, and alerting.
Quick Start
```typescript
// Structured logging with context
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const requestId = crypto.randomUUID();
    const logger = createLogger(requestId, env);
    try {
      logger.info('Request received', { method: request.method, url: request.url });
      const result = await handleRequest(request, env); // your application logic
      logger.info('Request completed', { status: result.status });
      return result;
    } catch (error) {
      // Narrow the unknown catch variable before reading .message/.stack
      const err = error instanceof Error ? error : new Error(String(error));
      logger.error('Request failed', { error: err.message, stack: err.stack });
      throw error;
    }
  },
};

// Simple logger factory
function createLogger(requestId: string, env: Env) {
  return {
    info: (msg: string, data?: object) =>
      console.log(JSON.stringify({ level: 'info', requestId, msg, ...data, timestamp: Date.now() })),
    warn: (msg: string, data?: object) =>
      console.warn(JSON.stringify({ level: 'warn', requestId, msg, ...data, timestamp: Date.now() })),
    error: (msg: string, data?: object) =>
      console.error(JSON.stringify({ level: 'error', requestId, msg, ...data, timestamp: Date.now() })),
  };
}
```
Critical Rules
- Always use structured JSON logging - Plain text logs are hard to parse and aggregate
- Include request context - Request ID, method, path in every log entry
- Never log sensitive data - Redact tokens, passwords, PII from logs
- Use appropriate log levels - ERROR for failures, WARN for recoverable issues, INFO for operations
- Sample high-volume logs - Use 1-10% sampling for request logs in production
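The sampling rule above can be sketched as a small helper; the 5% rate and the always-keep policy for warnings and errors are illustrative assumptions, not fixed recommendations:

```typescript
// Hypothetical sampling helper: keep ~5% of routine info/debug logs,
// but never drop warnings or errors. SAMPLE_RATE is an assumed value.
const SAMPLE_RATE = 0.05;

type Level = 'debug' | 'info' | 'warn' | 'error';

function shouldLog(level: Level): boolean {
  if (level === 'warn' || level === 'error') return true; // always keep problems
  return Math.random() < SAMPLE_RATE; // sample the high-volume rest
}
```

A logger can call `shouldLog` before serializing, so sampled-out entries cost nothing.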
Observability Components
| Component | Purpose | When to Use |
|---|---|---|
| Basic logging | console.log output | Development, debugging |
| Tail Workers | Real-time log streaming | Production log aggregation |
| Analytics Engine | Custom metrics/analytics | Business metrics, performance tracking |
| Logpush | Log export to external services | Long-term storage, compliance |
| Workers Trace Events | Distributed tracing | Request flow debugging |
Top 8 Errors Prevented
| Error | Symptom | Prevention |
|---|---|---|
| Logs not appearing | No output in dashboard | Enable "Standard" logging in wrangler.jsonc |
| Log truncation | Messages cut off at 128KB | Chunk large payloads, use sampling |
| Tail Worker not receiving | No events processed | Check binding name matches wrangler.jsonc |
| Analytics Engine write fails | Data not recorded | Verify AE binding, check blobs format |
| PII in logs | Security/compliance violation | Implement redaction middleware |
| Missing request context | Can't correlate logs | Add requestId to all log entries |
| Log volume explosion | High costs, noise | Implement sampling for high-frequency events |
| Alerting gaps | Incidents not detected | Configure monitors for error rate thresholds |
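For the log-truncation row, one hedged approach is to split oversized payloads into numbered parts below the limit; the 100 KB chunk size here is an illustrative safety margin, not a documented value:

```typescript
// Sketch: split a large payload into JSON parts that stay under the console
// message limit. MAX_CHUNK is a conservative assumption.
const MAX_CHUNK = 100 * 1024;

function chunkLog(requestId: string, payload: string): string[] {
  const total = Math.ceil(payload.length / MAX_CHUNK);
  const chunks: string[] = [];
  for (let i = 0; i < payload.length; i += MAX_CHUNK) {
    chunks.push(JSON.stringify({
      requestId,
      part: chunks.length + 1, // 1-based part number
      total,
      data: payload.slice(i, i + MAX_CHUNK),
    }));
  }
  return chunks;
}
```

The shared `requestId` plus `part`/`total` fields let a downstream aggregator reassemble the payload.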
Logging Configuration
wrangler.jsonc:
```jsonc
{
  "name": "my-worker",
  "observability": {
    "enabled": true,
    "head_sampling_rate": 1 // 0-1, 1 = 100% of requests
  },
  "tail_consumers": [
    {
      "service": "log-aggregator", // Tail Worker name
      "environment": "production"
    }
  ],
  "analytics_engine_datasets": [
    {
      "binding": "ANALYTICS",
      "dataset": "my_worker_metrics"
    }
  ]
}
```
Structured Logging Pattern
```typescript
interface LogEntry {
  level: 'debug' | 'info' | 'warn' | 'error';
  message: string;
  requestId: string;
  timestamp: number;
  // Contextual data
  method?: string;
  path?: string;
  status?: number;
  duration?: number;
  // Error details
  error?: {
    name: string;
    message: string;
    stack?: string;
  };
  // Custom fields
  [key: string]: unknown;
}

class Logger {
  constructor(private requestId: string, private baseContext: object = {}) {}

  private log(level: LogEntry['level'], message: string, data?: object) {
    const entry: LogEntry = {
      level,
      message,
      requestId: this.requestId,
      timestamp: Date.now(),
      ...this.baseContext,
      ...data,
    };
    // Redact sensitive fields before serializing
    const sanitized = this.redact(entry);
    const output = JSON.stringify(sanitized);
    // Route to the matching console method so levels survive aggregation
    if (level === 'error') {
      console.error(output);
    } else if (level === 'warn') {
      console.warn(output);
    } else {
      console.log(output);
    }
  }

  private redact(entry: LogEntry): LogEntry {
    const sensitiveKeys = ['password', 'token', 'secret', 'authorization', 'cookie'];
    const redacted = { ...entry };
    for (const key of Object.keys(redacted)) {
      if (sensitiveKeys.some(s => key.toLowerCase().includes(s))) {
        redacted[key] = '[REDACTED]';
      }
    }
    return redacted;
  }

  info(message: string, data?: object) { this.log('info', message, data); }
  warn(message: string, data?: object) { this.log('warn', message, data); }
  error(message: string, data?: object) { this.log('error', message, data); }
  debug(message: string, data?: object) { this.log('debug', message, data); }
}
```
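A minimal standalone sketch of the redaction step, to show what a caller sees; the field names here are hypothetical:

```typescript
// Standalone version of the Logger's redaction: any key whose name contains a
// sensitive substring is masked before the entry is serialized.
const SENSITIVE = ['password', 'token', 'secret', 'authorization', 'cookie'];

function redact(entry: Record<string, unknown>): Record<string, unknown> {
  const out = { ...entry };
  for (const key of Object.keys(out)) {
    if (SENSITIVE.some(s => key.toLowerCase().includes(s))) {
      out[key] = '[REDACTED]';
    }
  }
  return out;
}

// 'authToken' matches 'token', so it is masked; other fields pass through.
const safe = redact({ userId: 42, authToken: 'abc123', msg: 'login ok' });
```

Note the match is substring-based and case-insensitive, so `authToken`, `API_TOKEN`, and `refreshToken` are all caught by the single `token` entry.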
Analytics Engine Usage
```typescript
interface Env {
  ANALYTICS: AnalyticsEngineDataset;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const start = Date.now();
    const url = new URL(request.url);
    try {
      const response = await handleRequest(request, env);
      // Write success metric
      env.ANALYTICS.writeDataPoint({
        blobs: [request.method, url.pathname, String(response.status)],
        doubles: [Date.now() - start], // Response time in ms
        indexes: [url.pathname.split('/')[1] || 'root'], // Index for fast queries
      });
      return response;
    } catch (error) {
      // Write error metric; narrow the unknown catch variable first
      const message = error instanceof Error ? error.message : String(error);
      env.ANALYTICS.writeDataPoint({
        blobs: [request.method, url.pathname, 'error', message],
        doubles: [Date.now() - start],
        indexes: ['error'],
      });
      throw error;
    }
  },
};
```
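Analytics Engine enforces size limits on each data point, including a bounded total blob size. A hedged guard can clip blob values before writing; the 5120-byte budget below reflects the documented limit at the time of writing and should be verified against current docs:

```typescript
// Sketch: clip blob values to a total byte budget before writeDataPoint, so a
// data point is not rejected for oversized blobs. Later blobs collapse to ''
// rather than being dropped, keeping blob positions (blob1..blobN) stable.
// The 5120-byte default is an assumption to verify against current limits.
function clipBlobs(blobs: string[], maxTotalBytes = 5120): string[] {
  const enc = new TextEncoder();
  let used = 0;
  const out: string[] = [];
  for (const blob of blobs) {
    const size = enc.encode(blob).length;
    if (used + size > maxTotalBytes) {
      out.push(blob.slice(0, maxTotalBytes - used)); // byte-accurate for ASCII
      used = maxTotalBytes;
    } else {
      out.push(blob);
      used += size;
    }
  }
  return out;
}
```

Keeping positions stable matters because Analytics Engine queries address blobs positionally (blob1, blob2, ...), so truncating in place beats dropping entries.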
Tail Worker Pattern
```typescript
// tail-worker.ts - Receives logs from other workers
interface TailEvent {
  scriptName: string;
  event: {
    request?: { method: string; url: string };
    response?: { status: number };
  };
  logs: Array<{
    level: string;
    message: unknown[];
    timestamp: number;
  }>;
  exceptions: Array<{
    name: string;
    message: string;
    timestamp: number;
  }>;
  outcome: 'ok' | 'exception' | 'exceededCpu' | 'exceededMemory' | 'canceled';
  eventTimestamp: number;
}

export default {
  async tail(events: TailEvent[], env: Env): Promise<void> {
    for (const event of events) {
      // Filter and forward logs
      const errorLogs = event.logs.filter(l => l.level === 'error');
      const exceptions = event.exceptions;
      if (errorLogs.length > 0 || exceptions.length > 0) {
        // Send to external logging service
        await fetch(env.LOGGING_ENDPOINT, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            scriptName: event.scriptName,
            timestamp: event.eventTimestamp,
            errors: errorLogs,
            exceptions,
            outcome: event.outcome,
          }),
        });
      }
    }
  },
};
```
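Forwarding one upstream request per event can itself become expensive at volume. One hedged refinement is to aggregate a batch before the outbound call; the helper below is a sketch that counts outcomes per script (MiniEvent mirrors only the TailEvent fields it needs):

```typescript
// Sketch: count outcomes per script across a batch of tail events, so a single
// upstream request can summarize the whole batch instead of one per event.
interface MiniEvent { scriptName: string; outcome: string; }

function summarizeOutcomes(events: MiniEvent[]): Record<string, Record<string, number>> {
  const summary: Record<string, Record<string, number>> = {};
  for (const e of events) {
    const byOutcome = (summary[e.scriptName] ??= {});
    byOutcome[e.outcome] = (byOutcome[e.outcome] ?? 0) + 1;
  }
  return summary;
}

const s = summarizeOutcomes([
  { scriptName: 'api', outcome: 'ok' },
  { scriptName: 'api', outcome: 'ok' },
  { scriptName: 'api', outcome: 'exception' },
]);
```

The summary could be sent as a single POST at the end of the `tail` handler, with full error payloads forwarded only for the exceptional events.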
When to Load References
Load specific references based on the task:
- Setting up logging? → Load references/logging.md for structured logging patterns, log levels, redaction
- Building custom metrics? → Load references/analytics-engine.md for Analytics Engine SQL queries, data modeling
- Implementing log aggregation? → Load references/tail-workers.md for Tail Worker patterns, external service integration
- Creating dashboards/tracking? → Load references/custom-metrics.md for business metrics, performance tracking
- Setting up alerts? → Load references/alerting.md for error rate monitoring, PagerDuty/Slack integration
Templates
| Template | Use When |
|---|---|
| Production logging class | Setting up new worker with logging |
| Analytics Engine integration | Adding custom metrics |
| Complete Tail Worker | Building log aggregation pipeline |
Scripts
| Script | Purpose | Command |
|---|---|---|
| | Configure logging settings | |
| | Query and analyze logs | |
Resources
- Workers Observability: https://developers.cloudflare.com/workers/observability/
- Analytics Engine: https://developers.cloudflare.com/analytics/analytics-engine/
- Tail Workers: https://developers.cloudflare.com/workers/observability/tail-workers/
- Logpush: https://developers.cloudflare.com/logs/get-started/