LogQL Query Generator
Overview
Interactive workflow for generating production-ready LogQL queries. LogQL is Grafana Loki's query language with indexed label selection, line filtering, parsing, and metric aggregation.
Trigger Hints
- "Write a LogQL query for error rate by service."
- "Help me build a Loki alert query."
- "Convert this troubleshooting requirement into LogQL."
- "I need step-by-step LogQL query construction."
Use this skill for query generation, dashboard queries, alerting expressions, and troubleshooting with Loki logs.
Execution Flow (Deterministic)
Always run stages in order. Do not skip required stages.
Stage 1 (Required): Capture Intent
Use AskUserQuestion to collect goal and use case.
Template:
- "What is your primary goal: debugging, alerting, dashboard metric, or investigation?"
- "Do you need a log query (raw lines) or a metric query (numeric output)?"
- "What time window should this cover (example: last 15m, 1h, 24h)?"
Fallback if AskUserQuestion is unavailable:
- Ask the same questions in plain text and continue.
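The log-vs-metric distinction above can be sketched as a pair of queries (labels and filters are illustrative):

```logql
# Log query: returns raw log lines
{app="api"} |= "error"

# Metric query: returns a numeric series
sum(rate({app="api"} |= "error" [15m]))
```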
Stage 2 (Required): Capture Log Source Details
Collect:
- Labels for stream selectors (job, namespace, app, service_name, cluster)
- Log format (JSON, logfmt, plain text, mixed)
- Known fields to filter/aggregate (status, level, duration, path, trace_id)
Ambiguity and partial-answer handling:
- If a required field is missing, ask one focused follow-up question.
- If still missing, proceed with explicit assumptions.
- Prefix assumptions with "Assumptions:" in the output so the user can correct them quickly.
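As an illustration of how the collected details fit together, labels become the stream selector and known fields become post-parse filters (all names and thresholds below are hypothetical):

```logql
{cluster="prod", namespace="payments", app="api"} | json | status >= 500 | duration > 250ms
```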
Stage 3 (Required): Discover Loki and Grafana Versions
Collect or infer:
- Loki version (example: 2.9.x, 3.0+, unknown)
- Grafana version (example: 10.x, 11.x, unknown)
- Deployment context (self-hosted Loki, Grafana Cloud, unknown)
Version compatibility policy:
- If versions are known, use the newest compatible syntax only.
- If versions are unknown, use compatibility-first syntax and avoid 3.x-only features by default.
- For unknown versions, provide an optional "3.x optimized variant" separately.
Avoid by default when version is unknown:
- Pattern match operators |> and !>
- approx_topk
- Structured metadata specific behavior (detected_level, accelerated metadata filtering assumptions)
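To make the policy concrete, a compatibility-first filter and its optional 3.x variant might look like this (the stream selector and level value are illustrative):

```logql
# Compatibility-first: regex line filter, works on Loki 2.x and 3.x
{app="api"} |~ "level=(error|fatal)"

# Optional Loki 3.x optimized variant: pattern match operator
{app="api"} |> "<_> level=error <_>"
```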
Stage 4 (Required): Plan Confirmation and Output Mode
Present a plain-English plan, then ask the user to choose output mode.
Plan template:

```text
LogQL Query Plan
Goal: <goal>
Query type: <log or metric>
Streams: <selector>
Filters/parsing: <filters + parser>
Aggregation window: <function and [range]>
Compatibility mode: <version-aware or compatibility-first>
```

Mode selection template:
- "Do you want final query only (default) or incremental build (step-by-step)?"
If user does not choose, default to final query only.
Stage 5 (Conditional, Blocking): Reference Checkpoint for Complex Queries
Complex query triggers:
- Nested aggregations (topk(sum by(...)), multiple sum by, percentiles)
- Performance-sensitive queries (high volume streams, long ranges)
- Alerting expressions
- Template functions (line_format, label_format)
- Regex-heavy extraction, IP matching, pattern parsing
- Loki 3.x feature usage
Blocking checkpoint rule:
- Read relevant files before generation using explicit file-open/read actions.
- Minimum file set:
  - examples/common_queries.logql for syntax and query patterns
  - references/best_practices.md for performance and alerting guidance
- Do not generate the final query until this checkpoint is complete.
Fallback when file-read tools are unavailable:
- State that reference files could not be read in this environment.
- Generate a conservative query (compatibility-first, simpler operators).
- Mark result as "Unverified against local references".
Stage 6 (Conditional): External Docs Lookup Policy (Context7 Before WebSearch)
Use external lookup only for version-specific behavior, unclear syntax, or advanced features not covered in local references.
Decision order:
- Context7 first:
  - mcp__context7__resolve-library-id with libraryName="grafana loki"
  - mcp__context7__query-docs for the exact topic
- WebSearch second (fallback only) when:
  - Context7 is unavailable
  - Context7 does not provide required version-specific detail
  - You need latest release/deprecation confirmation
WebSearch fallback constraints:
- Prefer official Grafana/Loki docs and release notes.
- Note which statement came from fallback search.
Stage 7 (Required): Generate Query
Stage 7A (Default): Final Query Only
Return one production-ready query plus short explanation.
Stage 7B (Optional): Incremental Build Mode
Use this when requested or when debugging complex pipelines.
Step-by-step template:
- Stream selector
- Line filter
- Parser
- Parsed-field filter
- Aggregation/window
Stage 8 (Required): Deliver Usage and Checks
Always include:
- Final query or incremental sequence
- How to run it (Grafana Explore/panel or logcli)
- Tunables (labels, thresholds, range)
- Any assumptions and compatibility notes
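For the logcli path, a minimal invocation sketch (the address, query, and flag values below are placeholders to adapt to the user's deployment):

```shell
# Placeholder address; point LOKI_ADDR at the actual Loki instance.
export LOKI_ADDR="http://localhost:3100"

# Run the generated query over the agreed time window.
logcli query --since=15m --limit=100 '{job="app"} |= "error"'
```

Single quotes around the query keep the label matchers and line filter intact in the shell.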
AskUserQuestion Templates
Intake Template
- "What system/service should this query target?"
- "Which labels are reliable for stream selection?"
- "What defines a match (error text, status code, latency threshold, user path)?"
- "Should output be raw logs or a metric for alert/dashboard?"
Version Template
- "What Loki version are you running?"
- "What Grafana version are you using?"
- "If unknown, should I generate a compatibility-first query and add an optional 3.x variant?"
Ambiguity Follow-up Template
- "I am missing <field>. Should I assume <default> so I can continue?"
Core Patterns
Stream Selection and Filtering
```logql
{job="app"} |= "error" |= "timeout"
{job="app"} |~ "error|fatal|critical"
{job="app"} != "debug"
```

Parsing
```logql
{app="api"} | json | level="error" | status_code >= 500
{app="api"} | logfmt | caller="database.go"
{job="nginx"} | pattern "<ip> - - [<_>] \"<method> <path>\" <status> <size>"
```

Metric Aggregation
```logql
rate({job="app"} | json | level="error" [5m])
sum by (app) (count_over_time({namespace="prod"} | json [5m]))
sum(rate({app="api"} | json | level="error" [5m])) / sum(rate({app="api"}[5m])) * 100
quantile_over_time(0.95, {app="api"} | json | unwrap duration [5m])
topk(10, sum by (error_type) (count_over_time({job="app"} | json | level="error" [1h])))
```

Formatting and IP Matching
```logql
{job="app"} | json | line_format "{{.level}}: {{.message}}"
{job="app"} | json | label_format env=`{{.environment}}`
{job="nginx"} | logfmt | remote_addr = ip("192.168.4.0/24")
```

Query Construction Rules
- Use specific stream selectors (indexed labels first).
- Prefer filter order: line filter -> parse -> parsed-field filter.
- Prefer parser cost order: pattern > logfmt > json > regexp.
- For unknown Loki version, stay on compatibility-first syntax.
- For complex/critical queries, complete Stage 5 checkpoint before final output.
Advanced Techniques
Multiple Parsers
```logql
{app="api"} | json | regexp "user_(?P<user_id>\\d+)"
```

Unwrap for Numeric Metrics
```logql
sum(sum_over_time({app="api"} | json | unwrap duration [5m]))
```

Pattern Match Operators (Loki 3.0+, 10x faster than regex)
```logql
{service_name=`app`} |> "<_> level=debug <_>"
```

Logical Operators
```logql
{app="api"} | json | (status_code >= 400 and status_code < 500) or level="error"
```

Offset Modifier
```logql
sum(rate({app="api"} | json | level="error" [5m])) - sum(rate({app="api"} | json | level="error" [5m] offset 1d))
```

Label Operations
```logql
{app="api"} | json | keep namespace, pod, level
{app="api"} | json | drop pod, instance
```

Note: LogQL has no distinct or dedup operators. Use metric aggregations like sum by (field) for programmatic deduplication.
Loki 3.x Key Features
Structured Metadata
High-cardinality data without indexing (trace_id, user_id, request_id):

```logql
# Filter AFTER stream selector, NOT in it
{app="api"} | trace_id="abc123" | json | level="error"
```

Query Acceleration (Bloom Filters)
Place structured metadata filters BEFORE parsers:

```logql
# ACCELERATED
{cluster="prod"} | detected_level="error" | logfmt | json

# NOT ACCELERATED
{cluster="prod"} | logfmt | json | detected_level="error"
```

approx_topk (Probabilistic)
```logql
approx_topk(10, sum by (endpoint) (rate({app="api"}[5m])))
```

vector() for Alerting
```logql
sum(count_over_time({app="api"} | json | level="error" [5m])) or vector(0)
```

Automatic Labels
- service_name: Auto-populated from container name
- detected_level: Auto-detected when discover_log_levels: true (stored as structured metadata)
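When these automatic labels are present, they can be used directly; a minimal sketch assuming log level discovery is enabled on the target Loki:

```logql
{service_name="api"} | detected_level="error"
```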
Function Reference
Log Range Aggregations
| Function | Description |
|---|---|
| rate | Entries per second |
| count_over_time | Count entries |
| bytes_rate | Bytes per second |
| bytes_over_time | Total bytes in time range |
| absent_over_time | Returns 1 if no logs |

Rule:
- Use bytes_over_time(<log-range>) for raw log-byte volume.
- Use | unwrap bytes(field) with unwrapped range aggregations for numeric byte fields extracted from log content.
Unwrapped Range Aggregations
| Function | Description |
|---|---|
| sum_over_time, avg_over_time, min_over_time, max_over_time | Aggregate numeric values |
| quantile_over_time | φ-quantile (0 ≤ φ ≤ 1) |
| first_over_time, last_over_time | First/last value in interval |
| stddev_over_time | Population standard deviation of unwrapped values |
| stdvar_over_time | Population variance of unwrapped values |
| rate_counter | Per-second rate treating values as a monotonically increasing counter |
Aggregation Operators
sum, avg, min, max, count, stddev, topk, bottomk, approx_topk, sort, sort_desc
With grouping: sum by (label1, label2) or sum without (label1)

Conversion Functions
| Function | Description |
|---|---|
| duration, duration_seconds | Convert duration string |
| bytes | Convert byte string (KB, MB) |
label_replace()

```logql
label_replace(rate({job="api"} |= "err" [1m]), "foo", "$1", "service", "(.*):.*")
```

Parser Reference
logfmt
```logql
| logfmt [--strict] [--keep-empty]
```

- --strict: Error on malformed entries
- --keep-empty: Keep standalone keys
JSON
```logql
| json # All fields
| json method="request.method", status="response.status" # Specific fields
| json servers[0], headers="request.headers[\"User-Agent\"]" # Nested/array
```

pattern
```logql
| pattern "<ip> - - [<timestamp>] \"<method> <path> <_>\" <status> <size>"
```

Named placeholders become extracted labels; <_> discards a field.

regexp
```logql
| regexp "(?P<level>\\w+): (?P<message>.+)"
```

Uses named capture groups (?P<name>). Slower than pattern/logfmt/json.

decolorize
```logql
| decolorize
```

Strips ANSI color escape codes. Apply before parsing when logs come from terminal output.

unpack
```logql
| unpack
```

Unpacks log entries that were packed by Promtail's pack pipeline stage. Restores the original log line and any embedded labels.

Template Functions
Common functions for line_format and label_format:
String: trim, upper, lower, replace, trunc, substr, printf, contains, hasPrefix
Math: add, sub, mul, div, addf, subf, floor, ceil, round
Date: date, now, unixEpoch, toDate, duration_seconds
Regex: regexReplaceAll, count
Other: fromJson, default, int, float64, __line__, __timestamp__
See examples/common_queries.logql for detailed usage.

Alerting Rules
```logql
# Alert when error rate exceeds 5%
(sum(rate({app="api"} | json | level="error" [5m])) / sum(rate({app="api"}[5m]))) > 0.05

# With vector() to avoid "no data"
sum(rate({app="api"} | json | level="error" [5m])) or vector(0) > 10
```
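When an expression like this feeds the Loki ruler, it slots into a Prometheus-style rule group. A sketch, with illustrative group/alert names and thresholds:

```yaml
groups:
  - name: api-alerts            # hypothetical group name
    rules:
      - alert: HighErrorRate    # hypothetical alert name
        expr: |
          (sum(rate({app="api"} | json | level="error" [5m]))
            / sum(rate({app="api"}[5m]))) > 0.05
        for: 5m
        labels:
          severity: critical
```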
Error Handling
| Issue | Solution |
|---|---|
| No results | Check labels exist, verify time range, test stream selector alone |
| Query slow | Use specific selectors, filter before parsing, reduce time range |
| Parse errors | Verify log format matches parser, test JSON validity |
| High cardinality | Use line filters not label filters for unique values, aggregate |
Documentation Lookup
Use Stage 6 policy. Trigger external docs for:

| Trigger | Topic to Search | Tool to Use |
|---|---|---|
| User mentions Loki 3.x features | | Context7 first |
| approx_topk needed | | Context7 first |
| Pattern match operators (\|>, !>) | | Context7 first |
| vector() used in alerting | | Context7 first |
| Recording rules configuration | | Context7 first |
| Unclear syntax or edge cases | Specific function/operator | Context7 first |
| Version-specific behavior questions | Version + feature | WebSearch fallback |
| Grafana Alloy integration | | WebSearch fallback |
Resources
- examples/common_queries.logql: Query patterns, template function examples
- references/best_practices.md: Optimization, anti-patterns, alerting guidance
Example Flows
Example A: Final Query Only (Default)
- User asks for 5xx rate by service over 15m.
- Capture labels and format (json).
- Confirm version and mode (final query only).
- Generate one query:

```logql
sum by (service) (rate({namespace="prod", app="api"} | json | status_code >= 500 [15m]))
```

Example B: Incremental Build (Optional)
- User asks to debug login failures and requests step-by-step mode.
- Provide staged build:

```logql
{app="auth"}
{app="auth"} |= "login failed"
{app="auth"} |= "login failed" | json
sum(count_over_time({app="auth"} |= "login failed" | json [5m]))
```

- Explain where to stop if any step returns zero results.
Done Criteria
Mark task done only when all checks pass:
- Required stages (1, 2, 3, 4, 7, 8) were completed.
- Stage 5 checkpoint was completed for any complex query.
- Stage 6 lookup order followed Context7 before WebSearch when external docs were needed.
- Output mode was explicitly selected or defaulted (final query only).
- Loki/Grafana compatibility assumptions were stated when versions were unknown.
- Final output includes query text, usage note, tunables, and assumptions.
Version Notes
- Loki 3.0+: Bloom filters, structured metadata, pattern match operators (|>, !>)
- Loki 3.3+: approx_topk function
- Loki 3.5+: Promtail deprecated (use Grafana Alloy)
- Loki 3.6+: Horizontally scalable compactor, Loki UI as Grafana plugin

Deprecations: Promtail (use Alloy), BoltDB store (use TSDB with v13 schema)