# Grafana Beyla
Beyla is a Grafana eBPF auto-instrumentation tool that captures HTTP/gRPC traffic and generates traces
and metrics without modifying application code.
## Requirements

- Linux kernel: 5.8+ with BTF (BPF Type Format) enabled
- Privileges: root or `CAP_SYS_ADMIN`; in Kubernetes, Beyla must run in the host PID namespace
- Architectures: x86_64, ARM64

Check BTF support:

```bash
ls /sys/kernel/btf/vmlinux  # must exist
```
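The kernel-version and BTF checks above can be combined into a small pre-flight script. This is an illustrative sketch, not an official tool; it parses `uname -r`-style release strings and probes the `/sys/kernel/btf/vmlinux` path from this section:

```python
import os
import platform

def kernel_ok(release: str, minimum=(5, 8)) -> bool:
    """True if a `uname -r`-style release string meets the minimum version."""
    version = release.split("-")[0]                 # "5.15.0-91-generic" -> "5.15.0"
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= minimum

if __name__ == "__main__":
    rel = platform.release()
    print("kernel:", "OK" if kernel_ok(rel) else f"{rel} too old (need 5.8+)")
    print("BTF:", "present" if os.path.exists("/sys/kernel/btf/vmlinux") else "missing")
```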
## Supported Languages / Runtimes
| Language | HTTP | gRPC | DB queries |
|---|---|---|---|
| Go | ✅ | ✅ | ✅ |
| Java (JVM) | ✅ | ✅ | ✅ |
| Python | ✅ | ✅ | - |
| Ruby | ✅ | - | - |
| Node.js | ✅ | - | - |
| .NET | ✅ | ✅ | - |
| Rust | ✅ | ✅ | - |
| C/C++ | ✅ | - | - |
| PHP | ✅ | - | - |
## Installation

### Docker

```bash
docker run --privileged --pid=host \
  -v /sys/kernel/debug:/sys/kernel/debug:ro \
  -e BEYLA_OPEN_PORT=8080 \
  -e OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318 \
  grafana/beyla
```
### Kubernetes (Helm)

```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm install beyla grafana/beyla \
  --set discovery.services[0].open_port=8080 \
  --set otelTraces.endpoint=http://tempo:4318
```

## Configuration File
`beyla-config.yml`:

```yaml
log_level: INFO
discovery:
  services:
    - name: my-app
      open_port: 8080
      # or by process name:
      # exe_path: /usr/bin/myapp
      # or by K8s pod metadata (auto-detected in K8s)
ebpf:
  wakeup_len: 100               # batch size for events
  track_request_headers: false  # enable to capture HTTP headers (high cardinality risk)
  high_request_volume: false    # optimize for high-traffic services
```
### Distributed tracing output (OTLP)

```yaml
otel_traces_export:
  endpoint: http://tempo:4318  # HTTP OTLP endpoint
```

Or gRPC:

```yaml
otel_traces_export:
  endpoint: tempo:4317
  protocol: grpc
```
### Metrics output (Prometheus)

```yaml
prometheus_export:
  port: 9090
  path: /metrics
```

Or metrics via OTLP:

```yaml
otel_metrics_export:
  endpoint: http://prometheus-otlp:9090
```

## Kubernetes Deployment
### DaemonSet (recommended for cluster-wide instrumentation)

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: beyla
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: beyla
  template:
    metadata:
      labels:
        app: beyla
    spec:
      hostPID: true              # required for eBPF
      serviceAccountName: beyla
      containers:
        - name: beyla
          image: grafana/beyla:latest
          securityContext:
            privileged: true     # or use specific capabilities
            # Alternative (non-privileged):
            # capabilities:
            #   add: [SYS_ADMIN, SYS_PTRACE, NET_ADMIN]
          env:
            - name: BEYLA_OPEN_PORT
              value: "8080"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://alloy:4318"
          volumeMounts:
            - name: sys-kernel-debug
              mountPath: /sys/kernel/debug
              readOnly: true
      volumes:
        - name: sys-kernel-debug
          hostPath:
            path: /sys/kernel/debug
```
## Network Policies and RBAC
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: beyla
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: beyla
rules:
  - apiGroups: [""]
    resources: [nodes, pods, services, endpoints, namespaces]
    verbs: [get, list, watch]
```
## Environment Variables
| Variable | Description |
|---|---|
| `BEYLA_OPEN_PORT` | Port(s) to instrument (e.g., `8080`) |
| `BEYLA_EXECUTABLE_NAME` | Process name pattern to instrument |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint for traces and metrics |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | Protocol: `grpc` or `http/protobuf` |
| `OTEL_SERVICE_NAME` | Override service name in spans |
| `BEYLA_LOG_LEVEL` | Log level: `DEBUG`, `INFO`, `WARN`, `ERROR` |
| `BEYLA_PROMETHEUS_PORT` | Port for Prometheus metrics scrape |
| `BEYLA_PROMETHEUS_PATH` | Path for Prometheus metrics (default `/metrics`) |
## Grafana Cloud Integration

### Using Alloy as the OTLP receiver

```yaml
otel_traces_export:
  endpoint: http://alloy:4318   # Alloy forwards to Grafana Cloud Tempo
otel_metrics_export:
  endpoint: http://alloy:4318   # Alloy forwards to Grafana Cloud Prometheus
```

Via Alloy config:

```alloy
otelcol.receiver.otlp "beyla" {
  http { endpoint = "0.0.0.0:4318" }
  output {
    traces  = [otelcol.exporter.otlp.grafana_cloud.input]
    metrics = [otelcol.exporter.prometheus.local.input]
  }
}
```

## Routes Decorator (Cardinality Control)
Critical for production: prevents HTTP path cardinality explosion.

```yaml
routes:
  patterns:
    - /user/{id}
    - /api/v1/resources/{resource_id}
  ignored_patterns:
    - /health
    - /metrics
  ignore_mode: traces    # or: metrics, both
  unmatched: heuristic   # or: path, wildcard, low-cardinality
```

When a path matches no pattern, `unmatched` decides what is reported: `heuristic` derives low-cardinality route groups automatically, `wildcard` collapses all unmatched paths into `/**`, and `path` keeps the raw path (high cardinality).
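To make the cardinality mechanics concrete, here is an illustrative Python sketch (not Beyla's actual implementation) of how `{...}` patterns collapse unbounded URL paths into a fixed set of route labels, with unmatched paths grouped as `/**` in the `wildcard` style:

```python
import re

# The patterns from the config above; a {placeholder} matches one path segment.
PATTERNS = ["/user/{id}", "/api/v1/resources/{resource_id}"]

def compile_pattern(pattern: str) -> re.Pattern:
    # "/user/{id}" -> regex "^/user/[^/]+$"
    return re.compile("^" + re.sub(r"\{[^}]+\}", "[^/]+", pattern) + "$")

COMPILED = [(p, compile_pattern(p)) for p in PATTERNS]

def route_for(path: str) -> str:
    for label, rx in COMPILED:
        if rx.match(path):
            return label      # bounded, low-cardinality label
    return "/**"              # 'wildcard'-style bucket for unmatched paths
```

Every distinct user ID now maps to the single `/user/{id}` series instead of creating a new one per ID.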
## Trace Sampling
```yaml
otel_traces_export:
  sampler:
    name: "parentbased_traceidratio"  # parent-aware fraction sampling
    arg: "0.1"                        # 10% sampling; arg is a quoted string
```

Available samplers: `always_on`, `always_off`, `traceidratio`, `parentbased_always_on` (default), `parentbased_traceidratio`.
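A sketch of the decision logic behind these names (concept only, not Beyla's code): `parentbased_*` samplers follow the parent span's decision when one exists, while `traceidratio` samples a deterministic fraction of trace IDs, so every service makes the same decision for the same trace:

```python
def should_sample(trace_id: int, parent_sampled, ratio: float) -> bool:
    """parent_sampled: True/False when a parent span exists, None for root spans."""
    if parent_sampled is not None:          # "parentbased": inherit the decision
        return parent_sampled
    # "traceidratio": compare the low 63 bits of the trace ID against the ratio
    bound = int(ratio * (1 << 63))
    return (trace_id & ((1 << 63) - 1)) < bound
```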
## Generated Metrics
| Metric | Type | Description |
|---|---|---|
| `http.server.request.duration` | Histogram | Inbound HTTP request duration |
| `http.client.request.duration` | Histogram | Outbound HTTP request duration |
| `rpc.server.duration` | Histogram | gRPC server call duration |
| `rpc.client.duration` | Histogram | gRPC client call duration |
| `sql.client.duration` | Histogram | DB query duration |

Labels: `http.method`, `http.route`, `http.response.status_code`, `service.name`, `service.namespace`

## Kubernetes Auto-Discovery
In Kubernetes, Beyla auto-discovers pods and enriches telemetry with K8s metadata:

```yaml
discovery:
  services:
    - k8s_namespace: "production"     # instrument all pods in the namespace
    - k8s_pod_name: "frontend.*"      # by pod name regex
    - k8s_deployment_name: "api"      # by deployment name
    - open_port: 8080                 # or by port (any pod)
```

Auto-enriched span attributes: `k8s.namespace.name`, `k8s.pod.name`, `k8s.node.name`, `k8s.deployment.name`
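The selection semantics can be sketched as follows (an illustrative sketch, assuming the usual selector behavior: attributes inside a single `services` entry must all match, while separate entries are alternatives, so a pod is instrumented if any one entry fully matches it):

```python
import re

# Selectors mirroring the discovery.services entries above.
SERVICES = [
    {"k8s_namespace": "production"},
    {"k8s_pod_name": "frontend.*"},
]

def entry_matches(pod_meta: dict, entry: dict) -> bool:
    # every attribute within one entry must match (values are regexes)
    return all(re.fullmatch(pattern, pod_meta.get(key, ""))
               for key, pattern in entry.items())

def discovered(pod_meta: dict) -> bool:
    # entries are ORed: any single matching entry selects the pod
    return any(entry_matches(pod_meta, entry) for entry in SERVICES)
```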