Grafana Alloy

Alloy is an open-source OpenTelemetry collector distribution that unifies telemetry collection (metrics, logs, traces, profiles) in a single binary supporting Prometheus and OTel standards.

Installation



macOS


```bash
brew install grafana/grafana/alloy
```

Linux (Debian/Ubuntu)


```bash
sudo apt install alloy
```

Docker


```bash
docker run -v $(pwd)/config.alloy:/etc/alloy/config.alloy \
  grafana/alloy:latest run /etc/alloy/config.alloy
```

Kubernetes (Helm)


```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm install alloy grafana/alloy -f values.yaml
```

Run


```bash
alloy run /path/to/config.alloy
```

**Default config paths:**
- Linux: `/etc/alloy/config.alloy`
- macOS: `$(brew --prefix)/etc/alloy/config.alloy`
- Windows: `%ProgramFiles%\GrafanaLabs\Alloy\config.alloy`
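Once running, Alloy serves a debugging UI and health endpoints over HTTP (port 12345 by default). A quick liveness check can look like the sketch below; the `--server.http.listen-addr` flag and the `/-/ready` path are assumed here from recent Alloy releases, so verify against `alloy run --help` for your version:

```bash
# Run with an explicit HTTP listen address (also serves the UI)
alloy run --server.http.listen-addr=127.0.0.1:12345 /path/to/config.alloy &

# Readiness probe (assumed endpoint; check your version's docs)
curl -s http://127.0.0.1:12345/-/ready
```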

Config Language Syntax


Config files use the `.alloy` extension (UTF-8). See references/config-syntax.md for the full reference.

```alloy
// Block syntax: BLOCK_TYPE "LABEL" { ... }
prometheus.scrape "my_scraper" {
  targets    = [{"__address__" = "localhost:9090"}]
  forward_to = [prometheus.remote_write.cloud.receiver]
}

// Attribute: NAME = VALUE
scrape_interval = "30s"

// Reference another component's export
forward_to = [prometheus.remote_write.cloud.receiver]

// Environment variable
password = sys.env("GRAFANA_API_KEY")

// String concat
url = "https://" + sys.env("HOST")
```
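The CLI can normalize a config file into canonical formatting, which is handy before committing changes. A minimal sketch using the `alloy fmt` subcommand; the `-w` (write in place) flag is assumed, so check `alloy fmt --help` on your version:

```bash
# Print the canonically formatted config to stdout
alloy fmt config.alloy

# Or rewrite the file in place (assumed -w flag)
alloy fmt -w config.alloy
```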

Core Component Patterns


See references/components.md for the full component reference.

Metrics: Scrape → Remote Write


```alloy
prometheus.scrape "app" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.cloud.receiver]
  scrape_interval = "30s"
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus-xxx.grafana.net/api/prom/push"
    basic_auth {
      username = sys.env("PROM_USER")
      password = sys.env("GRAFANA_API_KEY")
    }
  }
}
```
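Alloy can also generate host metrics itself and feed them into the same pipeline. A sketch using the built-in `prometheus.exporter.unix` component (node-exporter-style metrics); its `targets` export is assumed here to plug straight into a scrape, so confirm against the component reference:

```alloy
// Built-in host metrics (node-exporter style), no separate exporter needed
prometheus.exporter.unix "node" { }

prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.node.targets
  forward_to = [prometheus.remote_write.cloud.receiver]
}
```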

Logs: File → Loki


```alloy
loki.source.file "app_logs" {
  targets = [
    {__path__ = "/var/log/app/*.log",   job = "app"},
    {__path__ = "/var/log/nginx/*.log", job = "nginx"},
  ]
  forward_to = [loki.write.cloud.receiver]
}

loki.write "cloud" {
  endpoint {
    url = "https://logs-xxx.grafana.net/loki/api/v1/push"
    basic_auth {
      username = sys.env("LOKI_USER")
      password = sys.env("GRAFANA_API_KEY")
    }
  }
}
```

Traces: OTLP Receive → Export


```alloy
otelcol.receiver.otlp "default" {
  grpc { endpoint = "0.0.0.0:4317" }
  http { endpoint = "0.0.0.0:4318" }
  output {
    traces  = [otelcol.exporter.otlp.tempo.input]
    metrics = [otelcol.exporter.prometheus.local.input]
    logs    = [otelcol.exporter.loki.cloud.input]
  }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "tempo-xxx.grafana.net/tempo:443"
    auth     = otelcol.auth.basic.grafana_cloud.handler
  }
}

otelcol.auth.basic "grafana_cloud" {
  username = sys.env("TEMPO_USER")
  password = sys.env("GRAFANA_API_KEY")
}
```

Kubernetes Discovery


```alloy
discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "pods" {
  targets = discovery.kubernetes.pods.targets
  rule {
    source_labels = ["__meta_kubernetes_pod_label_app"]
    target_label  = "app"
  }
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    target_label  = "namespace"
  }
  // Drop pods without app label
  rule {
    source_labels = ["__meta_kubernetes_pod_label_app"]
    regex         = ""
    action        = "drop"
  }
}

prometheus.scrape "kubernetes" {
  targets    = discovery.relabel.pods.output
  forward_to = [prometheus.remote_write.cloud.receiver]
}
```

Configuration Blocks (top-level)


```alloy
logging {
  level  = "info"   // debug, info, warn, error
  format = "logfmt" // logfmt, json
}

http {
  listen_addr = "0.0.0.0:12345"  // UI at http://localhost:12345
}

// Fleet Management remote config
remotecfg {
  url = "https://fleet-management.grafana.net"
  basic_auth {
    username = sys.env("FM_USERNAME")
    password = sys.env("FM_TOKEN")
  }
  poll_interval = "1m"
}

tracing {
  sampling_fraction = 0.1
  write_to = [otelcol.exporter.otlp.default.input]
}
```

Modules and Imports


```alloy
// Import from local file
import.file "utils" {
  filename = "./modules/utils.alloy"
}

// Import from Git
import.git "k8s_monitoring" {
  repository = "https://github.com/grafana/alloy-modules"
  revision   = "main"
  path       = "modules/kubernetes/"
}

// Import from HTTP
import.http "shared" {
  url            = "https://config-server/alloy/shared.alloy"
  poll_frequency = "5m"
}

// Use imported component
utils.my_component "example" {
  arg = "value"
}
```
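Modules themselves are built from `declare` blocks with `argument` and `export` sub-blocks. A minimal sketch of what `./modules/utils.alloy` could contain to back the `utils.my_component` usage above; the component body here is illustrative, only the `declare`/`argument`/`export` structure is the point:

```alloy
// A reusable custom component: takes an endpoint URL, exports a receiver
declare "my_component" {
  argument "arg" { }

  prometheus.remote_write "out" {
    endpoint {
      url = argument.arg.value  // arguments are referenced as argument.NAME.value
    }
  }

  export "receiver" {
    value = prometheus.remote_write.out.receiver
  }
}
```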

Clustering


```alloy
clustering {
  enabled = true
}

prometheus.scrape "cluster_aware" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.cloud.receiver]
  clustering { enabled = true }  // distributes scrape targets across cluster nodes
}
```

Processing: Relabeling and Transformation


```alloy
// Relabel metrics
prometheus.relabel "filter" {
  forward_to = [prometheus.remote_write.cloud.receiver]
  rule {
    source_labels = ["__name__"]
    regex         = "go_.*"
    action        = "drop"
  }
  rule {
    source_labels = ["env"]
    replacement   = "production"
    target_label  = "environment"
  }
}

// Loki pipeline processing
loki.process "parse" {
  forward_to = [loki.write.cloud.receiver]
  stage.json {
    expressions = { level = "level", msg = "message" }
  }
  stage.labels {
    values = { level = "" }
  }
  stage.drop {
    expression = ".*health check.*"
  }
}
```

Key Components Quick Reference


| Component | Purpose |
| --- | --- |
| `prometheus.scrape` | Scrape Prometheus metrics endpoints |
| `prometheus.remote_write` | Send metrics via remote write |
| `prometheus.relabel` | Relabel/filter metrics |
| `loki.source.file` | Read logs from files |
| `loki.source.kubernetes` | Read Kubernetes pod logs |
| `loki.write` | Send logs to Loki |
| `loki.process` | Process/transform logs (pipeline stages) |
| `otelcol.receiver.otlp` | Receive OTLP data (gRPC/HTTP) |
| `otelcol.exporter.otlp` | Export via OTLP gRPC |
| `otelcol.exporter.otlphttp` | Export via OTLP HTTP |
| `otelcol.processor.batch` | Batch telemetry before exporting |
| `otelcol.processor.memory_limiter` | Limit memory usage |
| `discovery.kubernetes` | Discover Kubernetes targets |
| `discovery.docker` | Discover Docker containers |
| `discovery.ec2` | Discover AWS EC2 instances |
| `discovery.relabel` | Relabel discovery targets |
| `pyroscope.scrape` | Scrape profiling data |
| `pyroscope.write` | Send profiles to Pyroscope |
| `beyla.ebpf` | eBPF auto-instrumentation |
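The table lists `otelcol.processor.batch` and `otelcol.processor.memory_limiter`, but none of the examples wire them up. A sketch of the conventional memory_limiter → batch → exporter chain; the attribute names follow the upstream OpenTelemetry Collector processors and should be verified against the Alloy component reference:

```alloy
otelcol.receiver.otlp "default" {
  grpc { }
  output {
    traces = [otelcol.processor.memory_limiter.default.input]
  }
}

// Refuse/drop data before the process runs out of memory
otelcol.processor.memory_limiter "default" {
  check_interval = "1s"
  limit          = "512MiB"
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

// Batch telemetry to reduce the number of export calls
otelcol.processor.batch "default" {
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}
```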

Complete Grafana Cloud Pipeline


```alloy
// METRICS
prometheus.scrape "all" {
  targets = array.concat(
    discovery.kubernetes.nodes.targets,
    discovery.kubernetes.pods.targets,
  )
  forward_to      = [prometheus.remote_write.grafana_cloud.receiver]
  scrape_interval = "60s"
}

prometheus.remote_write "grafana_cloud" {
  endpoint {
    url = sys.env("PROMETHEUS_URL")
    basic_auth {
      username = sys.env("PROMETHEUS_USER")
      password = sys.env("GRAFANA_API_KEY")
    }
  }
  external_labels = {
    cluster = "prod-us-east",
    env     = "production",
  }
}

// LOGS
loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.process.add_labels.receiver]
}

loki.process "add_labels" {
  forward_to = [loki.write.grafana_cloud.receiver]
  stage.static_labels {
    values = { cluster = "prod-us-east" }
  }
}

loki.write "grafana_cloud" {
  endpoint {
    url = sys.env("LOKI_URL")
    basic_auth {
      username = sys.env("LOKI_USER")
      password = sys.env("GRAFANA_API_KEY")
    }
  }
}

// TRACES
otelcol.receiver.otlp "default" {
  grpc {}
  http {}
  output {
    traces = [otelcol.exporter.otlp.grafana_cloud.input]
  }
}

otelcol.exporter.otlp "grafana_cloud" {
  client {
    endpoint = sys.env("TEMPO_ENDPOINT")
    auth     = otelcol.auth.basic.grafana_cloud.handler
  }
}

otelcol.auth.basic "grafana_cloud" {
  username = sys.env("TEMPO_USER")
  password = sys.env("GRAFANA_API_KEY")
}

// PROFILES
pyroscope.scrape "default" {
  targets    = [{"__address__" = "localhost:6060", "service_name" = "myapp"}]
  forward_to = [pyroscope.write.grafana_cloud.receiver]
}

pyroscope.write "grafana_cloud" {
  endpoint {
    url = sys.env("PYROSCOPE_URL")
    basic_auth {
      username = sys.env("PYROSCOPE_USER")
      password = sys.env("GRAFANA_API_KEY")
    }
  }
}
```

References


  • Components Reference
  • Config Language Syntax
  • Collection Patterns