OpenTelemetry Collector configuration guide
Expert guidance for configuring and deploying the OpenTelemetry Collector to receive, process, and export telemetry.
Rules
| Rule | Description |
|---|---|
| receivers | Receivers — OTLP, Prometheus, filelog, hostmetrics |
| exporters | Exporters — OTLP/gRPC to Dash0, debug, authentication |
| processors | Processors — memory limiter, resource detection, ordering, sending queue |
| pipelines | Pipelines — service section, per-signal configuration, connectors |
| deployment | Deployment — agent vs gateway patterns, deployment method selection |
| dash0-operator | Dash0 Kubernetes Operator — automated instrumentation, Collector management, Dash0 export |
| collector-helm-chart | Collector Helm chart — presets, modes, image selection |
| opentelemetry-operator | OpenTelemetry Operator — Collector CRD, auto-instrumentation, sidecar |
| raw-manifests | Raw Kubernetes manifests — DaemonSet, Deployment, RBAC, Docker Compose |
| sampling | Sampling — head, tail, load balancing |
| red-metrics | RED metrics — span-derived request rate, error rate, duration histograms |
Key principles
- Processor ordering matters. Place `memory_limiter` first in every pipeline. Use the exporter's `sending_queue` with `file_storage` instead of the `batch` processor. Incorrect ordering causes memory exhaustion or data loss.
- One pipeline per signal type. Define separate pipelines for traces, metrics, and logs. Mixing signals in a single pipeline breaks processing and causes runtime errors.
- Every declared component must appear in a pipeline. The Collector rejects configurations that declare receivers, processors, or exporters not referenced by any pipeline.
- Consistent resource enrichment across pipelines. Apply processors that enrich resource attributes, like `resourcedetection` and `k8sattributes`, to every signal pipeline (traces, metrics, and logs), not just one. If one pipeline enriches telemetry with `k8s.namespace.name` or `host.name` but another does not, correlation between signals is compromised by incomplete metadata.
- Memory safety is non-negotiable. Always configure `memory_limiter` in production. Without it, a burst of telemetry can cause the Collector to OOM and crash.
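As a sketch of the ordering and queueing guidance above, a traces pipeline might look like the following. The Dash0 endpoint and the queue directory are placeholder assumptions, not values from this guide:

```yaml
# Sketch only: endpoint and directory values are placeholders.
extensions:
  file_storage:
    directory: /var/lib/otelcol/queue  # assumed path; must exist and be writable

receivers:
  otlp:
    protocols:
      grpc:

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 20

exporters:
  otlp:
    endpoint: ingress.dash0.com:4317   # placeholder Dash0 endpoint
    sending_queue:
      enabled: true
      storage: file_storage            # persistent queue instead of the batch processor

service:
  extensions: [file_storage]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter]     # memory_limiter always first
      exporters: [otlp]
```

Note that `file_storage` is declared under `extensions` and referenced from `sending_queue.storage`, which is what makes the queue survive Collector restarts.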
Quick reference
| What do you need? | Rule |
|---|---|
| Accept OTLP telemetry from applications | receivers |
| Scrape Prometheus endpoints | receivers |
| Collect log files or host metrics | receivers |
| Send telemetry to Dash0 | exporters |
| Configure retry, queue, or compression | exporters |
| Set processor ordering | processors |
| Add Kubernetes or cloud metadata | processors |
| Wire receivers → processors → exporters | pipelines |
| Complete working configuration | pipelines |
| Validate the pipeline with the debug exporter | collector-helm-chart, opentelemetry-operator, raw-manifests, or dash0-operator |
| Deploy as DaemonSet or Deployment | raw-manifests |
| Deploy with Helm | collector-helm-chart |
| Deploy with the OTel Operator | opentelemetry-operator |
| Deploy with the Dash0 Operator | dash0-operator |
| Auto-instrument applications in Kubernetes | opentelemetry-operator or dash0-operator |
| Local development with Docker Compose | raw-manifests |
| Reduce trace volume | sampling |
| Keep errors and slow traces, drop the rest | sampling |
| Generate RED metrics from traces | red-metrics |
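Tying several of these rows together, a complete per-signal configuration might be sketched as below. It applies `resourcedetection` and `k8sattributes` identically to all three pipelines so resource metadata stays consistent; the Dash0 endpoint and the `DASH0_AUTH_TOKEN` variable are placeholder assumptions:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 80
    spike_limit_percentage: 20
  resourcedetection:
    detectors: [env, system]
  k8sattributes: {}

exporters:
  otlp:
    endpoint: ingress.dash0.com:4317   # placeholder endpoint
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"  # assumed env var name

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, resourcedetection, k8sattributes]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, resourcedetection, k8sattributes]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, resourcedetection, k8sattributes]
      exporters: [otlp]
```

Every component declared here is referenced by a pipeline, which satisfies the Collector's validation rule noted under Key principles.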