Troubleshooting with Observability Skills

Core Principle


Random querying wastes time and produces misleading results. Empty results from wrong metric names look identical to "no problem exists." Jumping between signals without a hypothesis leads to thrashing.
Discover before you query. Hypothesize before you correlate. Confirm before you conclude.
If you haven't completed Phase 1, you cannot propose root causes. If you haven't correlated across at least two signal types, your conclusion is a guess.

The Investigation Protocol


Complete each phase before proceeding to the next.
```
Phase 1: Gather Signals        → What's already known? What's alerting?
Phase 2: Discover and Scope    → What data exists? What are the real names?
Phase 3: Hypothesize and Test  → Form one theory, query to confirm or refute
Phase 4: Correlate and Confirm → Cross-reference across signal types, find root cause
```

Phase 1: Gather Signals


Before writing any query, establish what's already known.
1. Check env var availability — run the gating check from the Subagent Dispatch section.
2. Dispatch signal-gathering subagents in parallel:

| Subagent | Condition | What it does |
|---|---|---|
| AlertManager check | `VM_ALERTMANAGER_URL` available | Checks VM alerts + AlertManager alerts and silences |
| Metrics discovery (alerts only) | `VM_METRICS_URL` available AND `VM_ALERTMANAGER_URL` NOT available | Checks VM alerts as fallback when the AlertManager agent can't be dispatched |

If `VM_ALERTMANAGER_URL` IS available, the AlertManager check agent handles BOTH VM alerts and AlertManager queries — no need to dispatch a separate metrics agent for alerts.
Read the agent prompt files and dispatch in a single Agent tool call. Include in each subagent's prompt:
  • The agent file content
  • The investigation target (namespace, service, or component name if known)
3. Synthesize results — once subagents return:
  • Combine alert findings from all sources
  • Establish a timeline: when did symptoms start? What changed?
  • If the timeline is unclear, ask the user
4. Identify which signal type to start with:

| Symptom | Start with | Then correlate with |
|---|---|---|
| Resource/rate issue | Metrics | Logs |
| Errors/crashes | Logs | Metrics |
| Latency/slow requests | Traces | Logs |
| Alert firing | Metrics (alert details) | Logs + Traces |
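The alert-agent selection rule above can be sketched as a tiny gate. The function name is invented for illustration; the env vars are the ones this skill already defines.

```bash
# Sketch of the Phase 1 alert-agent selection rule (illustrative only).
pick_alert_agent() {
  if [ -n "${VM_ALERTMANAGER_URL-}" ]; then
    # Covers BOTH VM alerts and AlertManager alerts/silences
    echo "alertmanager-check"
  elif [ -n "${VM_METRICS_URL-}" ]; then
    # Fallback: VM alerts via the metrics API only
    echo "metrics-discovery (alerts only)"
  else
    echo "none"
  fi
}
```

With `VM_ALERTMANAGER_URL` set, only the combined agent is dispatched; the metrics fallback fires only when the metrics URL alone is available.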

Phase 2: Discover and Scope


Never guess metric names, log field names, or service names. Discovery is not optional — it prevents the single most common investigation failure: drawing conclusions from empty results caused by wrong names.
Dispatch discovery subagents in parallel for ALL available backends. Read each agent prompt file and dispatch in a single Agent tool call. Include in each subagent's prompt:
  • The agent file content
  • Target namespace and/or service name (from Phase 1 findings)
  • Time range for the investigation (RFC3339 format)
  • Any specific keywords or components to search for
| Subagent | Condition |
|---|---|
| Metrics discovery | `VM_METRICS_URL` available |
| Logs discovery | `VM_LOGS_URL` available |
| Traces discovery | `VM_TRACES_URL` available |
Synthesize discovery results:
  • Merge discovered names across all backends
  • Note which backends have data for the target and which don't
  • Identify the richest signal source for Phase 3 hypothesis testing
Consult skill references for complex queries. You do NOT know LogsQL syntax from training data — it is NOT Loki LogQL. For complex queries beyond what the subagents already ran, invoke the corresponding `*-query` skill or use the LogsQL Quick Reference below.
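Each discovery prompt needs an explicit RFC3339 time range. A minimal sketch of building a one-hour window, assuming GNU date (as on Linux):

```bash
# Build a one-hour RFC3339 window for discovery subagent prompts.
# GNU date syntax; on BSD/macOS use `date -u -v-1H ...` instead.
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
START=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
echo "time range: $START to $END"
```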

Phase 3: Hypothesize and Test


After discovery, form a specific hypothesis before querying further.
State it clearly: "I think [component X] is [failing/slow/OOM] because [evidence Y from Phase 1]."
Test minimally:
  • Query ONE thing to confirm or refute the hypothesis
  • Don't query everything at once — you'll drown in data
  • Use instant queries first (cheaper, faster) before range queries
If the hypothesis is wrong:
  • Don't add more queries on top — form a NEW hypothesis
  • Re-examine what Phase 1 and Phase 2 revealed
  • Ask: did discovery show anything unexpected?
After 3 failed hypotheses: STOP. Three wrong guesses means you're missing something fundamental. Either:
  • A key data source hasn't been discovered yet
  • The scope is wrong (different namespace, different service, different time range)
  • You need to ask the user for more context
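The three-strikes rule can be sketched as a small counter guard; the function name and messages are invented for illustration:

```bash
# Stop after three refuted hypotheses instead of piling on more queries.
FAILED_HYPOTHESES=0
record_failed_hypothesis() {
  FAILED_HYPOTHESES=$((FAILED_HYPOTHESES + 1))
  if [ "$FAILED_HYPOTHESES" -ge 3 ]; then
    echo "STOP: re-check discovery, scope, or ask the user for context"
  else
    echo "form a NEW hypothesis (failed so far: $FAILED_HYPOTHESES)"
  fi
}
```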

Phase 4: Correlate and Confirm


A single signal type is not proof. Correlate across at least two before concluding.
Dispatch correlation subagents in parallel for the signal types you need. Reuse the same agent prompt files from `agents/`, but provide specific queries rather than discovery tasks. Include in each subagent's prompt:
  • The agent file content
  • The specific query to run (metric expression, log filter, trace search parameters)
  • The exact time range to query (narrowed from Phase 3 findings)
  • What to look for (the confirmed hypothesis from Phase 3)
Example parallel dispatch for correlation:
  • Metrics agent: "Query `rate(http_requests_total{code=~'5..', namespace='myapp'}[5m])` from T1 to T2"
  • Logs agent: "Search `{namespace='myapp'} error` from T1 to T2, return sample messages"
  • Traces agent: "Search traces for service `myapp` with `minDuration=1s` from T1 to T2"
Correlation techniques:
  • Time-based: Identify anomaly timestamp in metrics, query logs/traces at that time
  • Trace ID: Find trace IDs in traces, search logs for `trace_id:"<id>"`
  • Pod name: Get pod name from metrics labels, use it in log stream filters
Only after correlation: propose root cause and remediation.
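The trace-ID technique reduces to string assembly. A sketch; the namespace and trace ID here are made-up examples:

```bash
# Build the LogsQL query that finds all log lines for one trace.
TRACE_ID="4bf92f3577b34da6a3ce929d0e0e4736"   # example value, not from a real system
LOGSQL_QUERY="{namespace=\"myapp\"} trace_id:\"$TRACE_ID\""
echo "$LOGSQL_QUERY"
```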

Red Flags — STOP and Return to Phase 1


If you catch yourself:
  • Proposing a root cause after querying only one signal type
  • Writing a LogsQL query from memory without checking syntax
  • Querying a metric name you haven't confirmed exists via discovery
  • Getting empty results and concluding "no problem"
  • Skipping the alerts check because "it's probably not that"
  • Running five different queries hoping one shows something
  • Saying "let me just try..." instead of forming a hypothesis
All of these mean: STOP. You're guessing, not investigating.

Common Rationalizations


| Excuse | Reality |
|---|---|
| "I know the metric name" | Maybe. Discovery takes 2 seconds and prevents 20 minutes of chasing empty results. |
| "Alerts won't help here" | Alerts are free to check and frequently contain the exact answer. Skip at your peril. |
| "Just need to check logs quickly" | Quick log checks without discovery produce wrong field names and misleading results. |
| "Empty results = no problem" | Empty results more often mean a wrong query than an absent problem. Verify names first. |
| "I'll correlate later" | Single-signal conclusions are guesses. Correlate before claiming root cause. |
| "LogsQL is like LogQL/Elasticsearch" | It's not. The syntax differences cause silent failures. Consult the reference. |

Environment


Environment is controlled by env vars. Check current state:
```bash
echo "VM_METRICS_URL:      $VM_METRICS_URL"
echo "VM_LOGS_URL:         $VM_LOGS_URL"
echo "VM_TRACES_URL:       $VM_TRACES_URL"
echo "VM_ALERTMANAGER_URL: $VM_ALERTMANAGER_URL"
if [ -n "${VM_AUTH_HEADER-}" ]; then
  echo "VM_AUTH_HEADER:      (set)"
else
  echo "VM_AUTH_HEADER:      (empty - no auth)"
fi
```
If unsure which environment the application runs in, ask the user.

Subagent Dispatch


This skill dispatches parallel subagents at phase boundaries to speed up investigations. Each subagent carries embedded API reference and returns structured findings.

Env Var Gating


Before each dispatch round, check which backends are available:
```bash
echo "METRICS:${VM_METRICS_URL:+available}"
echo "LOGS:${VM_LOGS_URL:+available}"
echo "TRACES:${VM_TRACES_URL:+available}"
echo "ALERTMANAGER:${VM_ALERTMANAGER_URL:+available}"
```
Only dispatch subagents for backends that report `available`. Do not dispatch a subagent if its env var is empty or unset.
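The gating output maps directly onto a dispatch list. A sketch with placeholder URLs standing in for real endpoints:

```bash
# Placeholder env for illustration: metrics and logs available,
# traces and AlertManager not. Real runs read whatever env provides.
VM_METRICS_URL="http://vmselect:8481"
VM_LOGS_URL="http://vlselect:9471"
unset VM_TRACES_URL VM_ALERTMANAGER_URL

# Build the dispatch list from the same emptiness checks as above.
AGENTS=""
if [ -n "${VM_METRICS_URL-}" ];      then AGENTS="$AGENTS metrics-discovery"; fi
if [ -n "${VM_LOGS_URL-}" ];         then AGENTS="$AGENTS logs-discovery"; fi
if [ -n "${VM_TRACES_URL-}" ];       then AGENTS="$AGENTS traces-discovery"; fi
if [ -n "${VM_ALERTMANAGER_URL-}" ]; then AGENTS="$AGENTS alertmanager-check"; fi
echo "dispatch:$AGENTS"   # → dispatch: metrics-discovery logs-discovery
```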

How to Dispatch


  1. Read the agent prompt file from the `agents/` directory (relative to this skill's directory)
  2. Use the Agent tool to dispatch, including:
    • The agent prompt file content as the base instructions
    • Investigation context: target namespace, service name, time range (RFC3339)
    • Any specific queries or metrics to look for
  3. Dispatch independent subagents in the SAME tool-call message for parallel execution
  4. Set `allowed-tools: Bash(curl:*)` on each subagent

Agent Files


| Agent | File | Requires | Used in |
|---|---|---|---|
| AlertManager check | `agents/alertmanager-check.md` | `VM_ALERTMANAGER_URL` + `VM_METRICS_URL` | Phase 1 |
| Metrics discovery | `agents/metrics-discovery.md` | `VM_METRICS_URL` | Phase 2, 4 |
| Logs discovery | `agents/logs-discovery.md` | `VM_LOGS_URL` | Phase 2, 4 |
| Traces discovery | `agents/traces-discovery.md` | `VM_TRACES_URL` | Phase 2, 4 |

Skill-to-API Mapping


```
victoriametrics-query  = Metrics only (MetricsQL/PromQL)   → $VM_METRICS_URL
victorialogs-query     = Logs only (LogsQL)                → $VM_LOGS_URL
victoriatraces-query   = Traces only (Jaeger API)          → $VM_TRACES_URL
alertmanager-query     = AlertManager (silences, routing)  → $VM_ALERTMANAGER_URL
```
Never cross APIs between backends. Use the correct env var and endpoint for each data type.
AlertManager provides what VM alerts cannot: silences and inhibition state. But it's in-cluster and may be down — fall back to `$VM_METRICS_URL/api/v1/alerts` when unavailable.

Timestamp Formats


| Backend | Parameter | Format | Example |
|---|---|---|---|
| VictoriaMetrics | `start`/`end` | RFC3339 string | `2026-02-06T09:00:00Z` |
| VictoriaLogs | `start` (REQUIRED), `end` | RFC3339 string | `2026-02-06T09:00:00Z` |
| VictoriaTraces | `start`/`end` | Unix microseconds NUMBER | `1738836000000000` (16 digits) |
| VictoriaTraces (dependencies) | `endTs`/`lookback` | Unix milliseconds NUMBER | `1738836000000` (13 digits) / `3600000` |

VictoriaLogs `start` is always required — omitting it scans ALL stored data (extremely expensive).
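Converting one instant into each backend's numeric format is plain arithmetic from Unix seconds. A sketch, assuming GNU date:

```bash
T="2026-02-06T09:00:00Z"       # RFC3339: VictoriaMetrics / VictoriaLogs
SECS=$(date -u -d "$T" +%s)    # Unix seconds
MS=$((SECS * 1000))            # 13 digits: dependencies endTs/lookback
US=$((SECS * 1000000))         # 16 digits: trace search start/end
echo "$T -> ms=$MS us=$US"
```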

Discovery Protocol


Follow this order for each signal type. For full API details and additional endpoints, invoke the corresponding query skill.

Metrics Discovery → victoriametrics-query skill

  1. Search metadata by keyword: `$VM_METRICS_URL/api/v1/metadata?metric=<keyword>&limit=10`
  2. Label values for scoping: `$VM_METRICS_URL/api/v1/label/<label_name>/values` (filter with `match[]`)
  3. Series for a namespace: `$VM_METRICS_URL/api/v1/series?limit=20` with `match[]={namespace="X"}`
  4. Then query — instant at `api/v1/query` or range at `api/v1/query_range` (range requires `start`, RFC3339)
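Step 3's `match[]` selector must be URL-encoded on the wire. A rough sketch of what the encoding produces; real calls should simply use `curl -G --data-urlencode` rather than hand-encoding:

```bash
SEL='{namespace="myapp"}'
# Minimal hand-encoding for illustration; covers only the { } " = characters.
ENC=$(printf '%s' "$SEL" | sed 's/{/%7B/; s/}/%7D/; s/"/%22/g; s/=/%3D/g')
echo "GET /api/v1/series?limit=20&match[]=$ENC"
```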

Logs Discovery → victorialogs-query skill

ALL VictoriaLogs endpoints require `start` (RFC3339). Use `--data-urlencode` for the query parameter.
  1. Stream field names: `$VM_LOGS_URL/select/logsql/stream_field_names?start=<RFC3339>`
  2. Stream field values: `$VM_LOGS_URL/select/logsql/stream_field_values?start=<RFC3339>&field=namespace`
  3. Facets (best discovery tool — all field distributions in one call): `$VM_LOGS_URL/select/logsql/facets?start=<RFC3339>`
  4. Non-stream field names: `$VM_LOGS_URL/select/logsql/field_names?start=<RFC3339>`
  5. Then query (JSON Lines response): `$VM_LOGS_URL/select/logsql/query?start=<RFC3339>&limit=100`

Traces Discovery → victoriatraces-query skill

Trace discovery endpoints accept NO time-range parameters:
  1. List services: `$VM_TRACES_URL/api/services`
  2. Operations for a service: `$VM_TRACES_URL/api/services/<service>/operations`
  3. Dependencies (Unix milliseconds, 13 digits): `$VM_TRACES_URL/api/dependencies?endTs=<ms>&lookback=3600000`
  4. Then search traces (`service` required, times in Unix microseconds, 16 digits): `$VM_TRACES_URL/api/traces?service=<svc>&start=<µs>&end=<µs>&limit=20`
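A one-hour trace-search window in the 16-digit microsecond format can be computed like this (GNU date assumed):

```bash
END_US=$(( $(date -u +%s) * 1000000 ))   # now, in Unix microseconds
START_US=$(( END_US - 3600 * 1000000 ))  # one hour earlier
echo "start=$START_US&end=$END_US"
```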

LogsQL Quick Reference


For full LogsQL syntax, invoke the `victorialogs-query` skill. Key points:
  • LogsQL is space-separated (AND by default). Pipes use `|`.
  • Stream filters: `{namespace="myapp"}`
  • Word filters: `{namespace="myapp"} error`
  • OR: `(error OR warning)`, Regex: `~"err|warn"`, Field-specific: `level:error`
  • Time filter: `_time:1h` (alternative to API `start`/`end` params — use one OR the other, never both)
  • Negation: `-"expected error"`
  • Stats: `| stats by (level) count() as total`
Common mistakes: `| grep` does NOT exist (use word filters or `~"regex"`). `| filter` is valid ONLY after `| stats`. Stream field names depend on ingestion config — discover them first.
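The pieces above compose into a single query; the namespace and excluded string here are example values:

```bash
# Stream filter + word filters + negation + time filter + stats pipe.
Q='{namespace="myapp"} (error OR warning) -"expected error" _time:1h | stats by (level) count() as total'
echo "$Q"
```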

Investigation Playbooks


"Application is slow"


  1. Phase 1: Check alerts. Establish timeline — when did latency increase?
  2. Phase 2: Discover traced services and metrics matching the app
  3. Phase 3: Hypothesize — "latency is in [service X] based on [alert/user report]"
    • Query latency/error rate metrics for that service
    • Search traces with `minDuration` filter to find slow spans
  4. Phase 4: Correlate trace timestamps with logs around those times

"Pod crash looping"


  1. Phase 1: Check alerts (may already show KubePodCrashLooping). Get pod name.
  2. Phase 2: Discover metrics for restart counts, memory usage. Discover log streams for the pod.
  3. Phase 3: Hypothesize — OOM? Liveness probe failure? Startup crash?
    • Regular interval crashes → liveness probe. Memory spike before crash → OOM.
  4. Phase 4: Correlate error logs with metric timestamps to confirm cause.

"Resource growing"


  1. Phase 1: Check alerts. How fast is it growing?
  2. Phase 2: Discover resource usage metrics for the namespace/pod
  3. Phase 3: Hypothesize — leak? Increased load? Missing limits?
    • Use `deriv()` or `increase()` to quantify growth rate
    • Check per-pod breakdown to isolate the culprit
  4. Phase 4: Correlate with deployment events in logs. Did growth start after a deploy?
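The growth-rate step might use MetricsQL expressions like the following. The metric names are common Kubernetes conventions, not confirmed by discovery, so Phase 2 still applies:

```bash
# Quantify growth, then break it down per pod (query strings only;
# nothing is executed against a live backend here).
GROWTH_Q='deriv(container_memory_working_set_bytes{namespace="myapp"}[30m])'
PER_POD_Q='sum by (pod) (increase(container_memory_working_set_bytes{namespace="myapp"}[1h]))'
echo "$GROWTH_Q"
echo "$PER_POD_Q"
```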

Common Mistakes


| Mistake | Fix |
|---|---|
| Guessing metric names | Use metadata endpoint: `$VM_METRICS_URL/api/v1/metadata?metric=keyword` |
| Writing LogsQL from memory | Consult LogsQL Quick Reference above or victorialogs-query skill |
| Wrong timestamp format | See Timestamp Formats table above |
| Skipping alerts check | Query `$VM_METRICS_URL/api/v1/alerts` first — it's free |
| Empty results → "no problem" | Verify metric/field names exist via discovery first |
| Not using `facets` for log exploration | `facets` returns field distributions in one call |
| Not URL-encoding queries | Use `--data-urlencode 'query=...'` for POST requests |
| Missing `start` on VictoriaLogs | Omitting `start` scans ALL data (extremely expensive) |
| Forgetting `match[]` needs `[]` | `match` alone won't work — must be `match[]` |
| Wrong timestamp type for traces | Search uses MICROSECONDS (16 digits), dependencies use MILLISECONDS (13 digits) |
| Confusing `stats_query` vs `stats_query_range` | Instant uses `time`, range uses `start`/`end`/`step` |
| Mixing `_time:` filter with API `start` | Use one OR the other, never both |
| Searching "error" catching vmselect noise | Add `-"vm_slow_query_stats"` to exclude PromQL text |
| Grouping logs by `cluster` field | Vector logs lack `cluster` stream field — use `kubernetes.pod_namespace` |
| Blocking on AlertManager failure | Use VM alerts as primary, AlertManager as best-effort |
| Single-signal conclusion | Correlate across at least two signal types before claiming root cause |