Grafana Loki - Log Aggregation
Indexes only metadata (labels), not full log content — dramatically cheaper than full-text search systems.
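Because only the labels are indexed, query cost is dominated by how much raw chunk data the line filters must scan. An illustrative query shape (the `app`/`env` labels are placeholders): narrow the stream set with indexed label matchers first, then apply line filters.

```logql
{app="nginx", env="prod"} |= "error"   # indexed label lookup first, then a scan of only the matching chunks
```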
LogQL Quick Reference
Log Stream Selector (required in every query)
```logql
{app="nginx"}                # exact match
{app!="nginx"}               # not equal
{app=~"nginx|apache"}        # regex match
{app!~"debug.*"}             # regex not match
{app="nginx", env="prod"}    # AND (multiple labels)
```

Line Filters (pipeline stage 1 - put first for performance)
```logql
{app="nginx"} |= "error"            # contains string
{app="nginx"} != "info"             # does not contain
{app="nginx"} |~ "error|warn"       # regex match
{app="nginx"} !~ "health.*check"    # regex not match
{app="nginx"} |= `"status":5`       # backtick avoids escaping
```

Parsers
JSON

```logql
{app="api"} | json
{app="api"} | json status="http_status", path="request.path"
```
Logfmt

```logql
{app="api"} | logfmt
{app="api"} | logfmt --strict
{app="api"} | logfmt --keep-empty
```
Pattern (positional, _ discards)

```logql
{app="nginx"} | pattern `<ip> - - <_> "<method> <uri> <_>" <status> <bytes>`
```

Regexp (named capture groups)
```logql
{app="nginx"} | regexp `(?P<method>\w+) (?P<path>\S+) HTTP/(?P<version>\S+)`
```

Unpack (unwrap Promtail packed labels)
```logql
{app="api"} | unpack
```

Label Filters (after parsers)
```logql
{app="api"} | json | status >= 500
{app="api"} | json | status == 200 and method != "OPTIONS"
{app="api"} | logfmt | duration > 1s
{app="api"} | json | level =~ "error|warn"
{app="api"} | json | bytes > 20MB
{app="api"} | json | path != "/healthz"
```

Line Format
```logql
{app="api"} | json | line_format "{{.method}} {{.path}} -> {{.status}} ({{.duration}})"
{app="api"} | logfmt | line_format `{{.level | upper}}: {{.msg}}`
```

Label Format
```logql
{app="api"} | logfmt | label_format new_name=old_name
{app="api"} | logfmt | label_format severity=level, svc=app
{app="api"} | logfmt | label_format msg=`{{.level}}: {{.message}}`
```

Drop/Keep Labels
```logql
{app="api"} | json | drop filename, level="debug"
{app="api"} | json | keep level, status, method
```

Decolorize
```logql
{app="cli-tool"} | decolorize
```

Metric Queries
Log Range Aggregations
```logql
# Requests per second
rate({app="nginx"}[5m])

# Total log lines in window
count_over_time({app="nginx"}[1h])

# Bytes per second
bytes_rate({app="nginx"}[5m])

# Total bytes
bytes_over_time({app="nginx"}[1h])

# Returns 1 if no logs in range (for absence alerting)
absent_over_time({app="nginx"}[5m])
```

Aggregation
```logql
# Error rate by service
sum(rate({env="prod"} |= "error" [5m])) by (app)

# Top 5 most active services
topk(5, sum(rate({env="prod"}[5m])) by (app))

# Total errors across all services
sum(count_over_time({env="prod"} |= "error" [5m]))
```

Unwrapped Range Aggregations (numeric values from logs)
```logql
# Average request duration from logfmt
avg_over_time({app="api"} | logfmt | unwrap duration [5m])

# 95th percentile latency
quantile_over_time(0.95, {app="api"} | logfmt | unwrap duration [5m]) by (app)

# Sum of bytes from JSON logs
sum_over_time({app="api"} | json | unwrap bytes [5m])

# With conversion (duration string → seconds)
avg_over_time({app="api"} | logfmt | unwrap duration_seconds(duration) [5m])
```

Offset Modifier
```logql
# Compare current rate vs 1 hour ago
rate({app="nginx"}[5m]) / rate({app="nginx"}[5m] offset 1h)
```

Practical Examples
Error rate alert query
```logql
sum(rate({env="prod"} |= "error" [5m])) by (service)
  /
sum(rate({env="prod"}[5m])) by (service)
  > 0.05
```

Slow requests
```logql
{app="api"} | logfmt | duration > 1s | line_format "SLOW: {{.method}} {{.path}} {{.duration}}"
```

HTTP 5xx errors with details
```logql
{app="nginx"} | pattern `<ip> - - <_> "<method> <uri> <_>" <status> <bytes>` | status >= 500
```

Credential leak detection
```logql
{namespace="prod"} |~ `https?://\w+:\w+@`
```

Sending Logs to Loki
Via Grafana Alloy
```alloy
loki.source.file "app" {
  targets    = [{__path__ = "/var/log/app/*.log", job = "app"}]
  forward_to = [loki.process.parse.receiver]
}

loki.process "parse" {
  forward_to = [loki.write.cloud.receiver]

  stage.json {
    expressions = { level = "level", msg = "message" }
  }
  stage.labels {
    values = { level = "" }
  }
  stage.drop {
    expression = ".*healthcheck.*"
  }
}

loki.write "cloud" {
  endpoint {
    url = "https://logs-xxx.grafana.net/loki/api/v1/push"
    basic_auth {
      username = sys.env("LOKI_USER")
      password = sys.env("GRAFANA_API_KEY")
    }
  }
  external_labels = { cluster = "prod" }
}
```

Via Kubernetes (Alloy DaemonSet)
```alloy
discovery.kubernetes "pods" {
  role = "pod"
}

loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.cloud.receiver]
}
```

Loki HTTP Push API
```bash
curl -X POST https://logs-xxx.grafana.net/loki/api/v1/push \
  -u "user:apikey" \
  -H 'Content-Type: application/json' \
  -d '{
    "streams": [{
      "stream": { "app": "myapp", "env": "prod" },
      "values": [
        ["1609459200000000000", "log line here"]
      ]
    }]
  }'
```
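Note that each value is a `["<timestamp>", "<line>"]` pair and the timestamp must be a nanosecond-precision string. For scripted pushes, a minimal shell sketch of building such a payload (the stream labels, log line, endpoint, and credentials here are placeholders):

```shell
#!/bin/sh
# Pad seconds-since-epoch to nanoseconds, as the push API requires.
TS="$(date +%s)000000000"

# Assemble a one-line push payload for a single stream.
PAYLOAD=$(printf '{"streams":[{"stream":{"app":"myapp","env":"prod"},"values":[["%s","deploy finished"]]}]}' "$TS")
echo "$PAYLOAD"

# To actually send it (placeholder endpoint and credentials):
# curl -X POST https://logs-xxx.grafana.net/loki/api/v1/push \
#   -u "user:apikey" -H 'Content-Type: application/json' -d "$PAYLOAD"
```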
Architecture
Push path: Client → Distributor → Ingester (WAL) → Object Storage (chunks)
Read path: Query → Query Frontend → Querier → Ingester + Store (chunks)

Components:
- Distributor: Validates and hashes incoming log streams
- Ingester: Buffers chunks in memory, flushes to object storage
- Querier: Executes LogQL queries
- Query Frontend: Caches, splits, and parallelizes queries
- Compactor: Manages retention and deduplication
References
- LogQL Reference
- Configuration
- Sending Data