alibabacloud-sls-query

Compare original and translation side by side

🇺🇸

Original

English
🇨🇳

Translation

Chinese

# Alibaba Cloud SLS Query & Analysis

## Scenario Description

Use this skill when the user wants to:

- Explain, rewrite, optimize, or execute an existing query
- Translate a natural-language requirement into an SLS index query, SQL, or SPL statement


## Prerequisites

### Install Aliyun CLI

Run `aliyun version` to verify that the version is >= 3.3.8. If the CLI is not installed or is outdated, follow references/cli-installation-guide.md to install or update it.
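The version gate can be scripted; a minimal sketch (the helper name `version_ok` is ours, and GNU `sort -V` is assumed):

```shell
# version_ok CUR MIN : succeed when version CUR >= version MIN.
# Works because `sort -V` orders version strings; if MIN sorts first,
# CUR is at least MIN.
version_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

CUR="$(aliyun version 2>/dev/null || echo 0.0.0)"
if version_ok "$CUR" "3.3.8"; then
  echo "aliyun CLI $CUR is new enough"
else
  echo "aliyun CLI $CUR missing or older than 3.3.8, see references/cli-installation-guide.md" >&2
fi
```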

### Ensure AI Mode Enabled

Before executing any CLI commands, enable AI-Mode, set the User-Agent, and update plugins:

```bash
aliyun configure ai-mode enable
aliyun configure ai-mode set-user-agent --user-agent "AlibabaCloud-Agent-Skills/alibabacloud-sls-query"
aliyun plugin update
```

### Check That Alibaba Cloud Credentials Are Configured

Run `aliyun configure list` to check whether credentials are configured. If no valid profile is shown, STOP here and ask the user to run `aliyun configure` outside of this session.

Security rules:

- NEVER read, echo, or print AK/SK values
- NEVER ask the user to paste AK/SK into the conversation
- ONLY use `aliyun configure list` to check credential status


## RAM Permission Requirements

| API | CLI | Action | Purpose |
|-----|-----|--------|---------|
| GetLogsV2 | `get-logs-v2` | `log:GetLogStoreLogs` | Run query / SQL / SPL and read results |
| GetIndex | `get-index` | `log:GetIndex` | Read index config to verify prerequisites |

For the minimum and complete RAM policy JSON, see references/ram-policies.md.

**Permission failure handling:** if a call returns an `Unauthorized` permission error, stop and surface references/ram-policies.md to the user. Do not retry with a different account without explicit user confirmation.
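As a sketch only (the authoritative JSON lives in references/ram-policies.md; the resource pattern below is a placeholder), a minimal policy granting just these two actions could look like:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["log:GetLogStoreLogs", "log:GetIndex"],
      "Resource": "acs:log:*:*:project/<project>/logstore/<logstore>"
    }
  ]
}
```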


## Core Workflow

  1. Read index configuration (GetIndex)
  2. Pick query mode
  3. Build statement
  4. Resolve time range
  5. Execute query
  6. Extract data from response
  7. Present CLI command and results

## Step 1: Read the Index Configuration (Mandatory)

Always call `get-index` first — the index config decides which query modes are available in Step 2.

```bash
aliyun sls get-index \
  --project <project> --logstore <logstore>
```

Two sections in the response drive every later decision:

| Section | Meaning |
|---------|---------|
| `line` | Full-text index — absence means full-text search is disabled |
| `keys` | Field indexes — map of field → `{ type, doc_value, token, caseSensitive, chn, ... }`. `doc_value: true` means statistics are enabled on that field |

If the call returns `IndexConfigNotExist` (HTTP 404), or the response has neither `line` nor `keys` populated, the Logstore has no index at all — stop immediately and tell the user they must create an index before any query / SQL / SPL can run.

- The response can be large — extract only the fields relevant to the current query. Cache per logstore and reuse within the session.

For field types, tokenization, and how `get-index` maps to capabilities, see references/related-apis.md and references/query-analysis.md.
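The no-index stop condition can be checked mechanically; a rough sketch (the helper name `has_index` is ours, and the sample response is illustrative, not a real `get-index` payload):

```shell
# Treat the Logstore as unindexed when the get-index response populates
# neither "line" (full-text) nor "keys" (field indexes).
has_index() {
  printf '%s' "$1" | grep -Eq '"(line|keys)"[[:space:]]*:'
}

INDEX_JSON='{"keys": {"status": {"type": "long", "doc_value": true}}}'
if has_index "$INDEX_JSON"; then
  echo "index present"
else
  echo "no index: ask the user to create one first" >&2
fi
```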


## Step 2: Pick the Query Mode (Critical)

The query statement takes one of the following forms:

| Priority | Mode | Statement Form | Use when | Requires |
|----------|------|----------------|----------|----------|
| 1 | Index search | `<index-search>` | Filtering raw logs; return time-ordered and paginated logs | Full-text (`line`) or any field index (`keys.<field>`) |
| 2 | SQL | `<index-search> \| <SQL>` | Aggregation, `GROUP BY`, sort, window, top-N, projection, and other analytical operations | Target field has `keys.<field>` with `doc_value: true` |
| 3 | SQL scan | `<index-search> \| <SQL scan>` | User requested | None |
| 4 | SPL | `<index-search> \| <SPL>` | User requested | None |

Selection rules:

- Always prefer Index search for fastest speed.
- Use Index search + SQL when the user needs analytical operations or field projection rather than full raw-log retrieval, such as aggregation, `GROUP BY`, sorting, window analysis, top-N, or returning only the required fields/columns.
- Do not proactively choose SQL scan or SPL; use them only when the user explicitly requests them.

For the full decision guide, see references/query-analysis.md.


## Step 3: Write the Statement

### 3.1 Build the index-search segment first (left of `|`)

Collect every filter that can be expressed in index-search syntax and place it before the first `|`. Use `*` if no filter applies.

```text
* and "payment failed" and status: "500" and not path: "/healthz"
```

- `*` matches all; `"..."` is full-text (needs full-text index).
- `key: "value"` is a field filter (needs field index).
- Combine with `and` / `or` / `not`; group with parentheses.
- `key: *` means the field exists. Range (`>`, `>=`, `[a, b]`) works only on `long` / `double`.

If the requirement can be fully answered without aggregation or row-level processing, stop here — this is already a complete index search. For full index-search syntax, see references/query-analysis.md.
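Combining these operators, a slightly richer filter could look like this (field names are hypothetical, and `latency` is assumed to be indexed as `long` so the range comparison is valid):

```text
latency > 1000 and (status: "500" or status: "502") and not path: "/healthz"
```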

### 3.2 Append SQL — for aggregation / analytics

```sql
status: 500 | SELECT date_trunc('minute', __time__) AS minute,
                    count(*) AS errors
              FROM log
              GROUP BY minute
              ORDER BY minute
```

- Read references/query-analysis.md for query & SQL rules.
- The table name is `log` (recommended to omit).
- SQL respects the indexed field type from `get-index` — a `long` / `double` field can be compared directly (`status >= 500`). Cast only when a field is indexed as `text` but numeric semantics are needed (`try_cast` to suppress errors).
- Read references/functions-guide.md for unusual function selection (aggregate, JSON, regex, datetime, IP geo …).
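As an example of the cast rule (hypothetical: `latency` is indexed as `text` but holds numbers):

```sql
* | SELECT avg(try_cast(latency AS bigint)) AS avg_latency FROM log
```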

### 3.3 Append SPL — for row-level processing / flexible filtering

```spl
status: 500 and service: payment
| where try_cast(latency as BIGINT) > 1000
| extend latency_ms = try_cast(latency as BIGINT)
| project service, latency_ms, message
```

For SPL syntax, pipeline commands, and field-handling rules, read references/spl-guide.md.

### 3.4 Append SQL scan — fallback when the target field has no index / statistics

Syntax follows regular SQL (see 3.2), with one difference: every field is `varchar`, so always `cast()` / `try_cast()` before numeric comparison or arithmetic. See references/query-analysis.md for scan semantics.

```sql
* | set session mode=scan; SELECT api, count(1) AS pv FROM log GROUP BY api
```


## Step 4: Resolve the Time Range

Generate `--from` / `--to` as Unix timestamps in seconds before building the CLI command. `--from` is inclusive and `--to` is exclusive.

Choose one of three input patterns:

1. Relative time — the user says "recent / last N minutes|hours|days".
2. Natural-language absolute time without timezone — normalize to `YYYY-MM-DD HH:MM:SS`, then parse using the machine's local timezone.
3. Absolute time with explicit timezone — parse using the user-provided timezone or UTC offset.

**1. Relative time**

```bash
# recent 15 minutes
FROM=$(($(date +%s) - 900)) TO=$(date +%s)
```

**2. Natural-language absolute time without timezone**

If the user gives a date/time but no timezone, use the machine's local timezone. First normalize natural language such as `2026年3月13日12点` to `2026-03-13 12:00:00`, then parse it as local time.

```bash
# Example: 2026年3月13日12点 -> 2026-03-13 12:00:00

# Linux (GNU date): local timezone
FROM=$(date -d "2026-03-13 12:00:00" +%s)

# macOS (BSD date): local timezone
FROM=$(date -j -f "%Y-%m-%d %H:%M:%S" "2026-03-13 12:00:00" +%s)
```

For a time range such as "2026年3月13日12点到13点", compute both endpoints the same way. For a single point-in-time request, infer a practical window from the user's intent; if unclear, ask for the range before executing.

**3. Absolute time with explicit timezone**

To convert a local date/time to a Unix timestamp: parse the input as UTC with `date -u`, then **subtract** the timezone's UTC offset in seconds.

Formula: `unix_ts = date_utc_parse(input) − (UTC_offset_hours × 3600)`

```bash
# Example: 2025-01-15 10:30:00 Beijing Time (UTC+8)
# Beijing is UTC+8, so subtract 8 × 3600 = 28800

# Linux (GNU date)
FROM=$(( $(date -u -d "2025-01-15 10:30:00" +%s) - 28800 ))

# macOS (BSD date)
FROM=$(( $(date -u -j -f "%Y-%m-%d %H:%M:%S" "2025-01-15 10:30:00" +%s) - 28800 ))
```

```bash
# Example: 2025-01-15 10:30:00 New York Time (UTC-5)
# New York is UTC-5, so subtract -5 × 3600 = -18000, i.e. add 18000

# Linux (GNU date)
FROM=$(( $(date -u -d "2025-01-15 10:30:00" +%s) + 18000 ))

# macOS (BSD date)
FROM=$(( $(date -u -j -f "%Y-%m-%d %H:%M:%S" "2025-01-15 10:30:00" +%s) + 18000 ))
```

Common UTC offsets (value to subtract):

| Timezone         | UTC offset hours | Seconds to subtract |
|------------------|------------------|---------------------|
| Beijing (UTC+8)  | +8               | `28800`             |
| Tokyo (UTC+9)    | +9               | `32400`             |
| London (UTC)     | 0                | `0`                 |
| New York (UTC-5) | -5               | `-18000`            |

---
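On Linux, the subtract-the-offset formula can be wrapped in a small helper (a sketch; the function name is ours and GNU `date` is assumed):

```shell
# ts_with_offset "YYYY-MM-DD HH:MM:SS" OFFSET_HOURS
# Prints Unix seconds: parse the input as UTC, then subtract offset_hours * 3600,
# exactly the formula above. A negative offset (e.g. -5 for New York) adds time.
ts_with_offset() {
  local utc
  utc=$(date -u -d "$1" +%s)
  echo $(( utc - $2 * 3600 ))
}

FROM=$(ts_with_offset "2025-01-15 10:30:00" 8)   # Beijing, UTC+8
TO=$(ts_with_offset "2025-01-15 11:30:00" 8)
```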

## Step 5: Execute via `get-logs-v2`

Use `aliyun sls get-logs-v2` to execute queries. Run `aliyun help sls get-logs-v2` to see CLI parameter usage; read references/related-apis.md for detailed API parameter descriptions.

Required CLI flags:

- `--project`: SLS project name
- `--logstore`: Logstore name within the project
- `--from`: start of time range, Unix timestamp in seconds (inclusive)
- `--to`: end of time range, Unix timestamp in seconds (exclusive)
- `--query`: statement built in Step 3

Pagination works differently depending on whether the statement has a `|`:

### 5.1 Index-search only — paginate with `--offset` / `--line`

```bash
aliyun sls get-logs-v2 \
  --project my-project --logstore my-logstore \
  --from 1740000000 --to 1740003600 \
  --query '* and "payment failed" and status: "500"' \
  --line 100 --offset 0 --reverse true
```

- Pagination: `--line` is the page size (1–100, required); `--offset` is the start row (optional, default 0).
- Ordering: `--reverse true` returns newest first; the default `false` is oldest first.

### 5.2 With SQL — paginate with `LIMIT` inside the statement

```bash
aliyun sls get-logs-v2 \
  --project my-project --logstore my-logstore \
  --from 1740000000 --to 1740003600 \
  --query 'status: "500" | SELECT request_uri, count(*) AS cnt FROM log GROUP BY request_uri ORDER BY cnt DESC LIMIT 20'
```

- The SQL default result cap is 100 rows. To get more results or paginate:
  - `LIMIT count` — raise the cap (e.g., `LIMIT 500` returns up to 500 rows)
  - `LIMIT offset, count` — paginate (e.g., `LIMIT 20, 20` for rows 21–40; `LIMIT 40, 20` for rows 41–60). Max offset+count is 1000000.
  - Do not use `LIMIT count OFFSET offset` syntax — it is not supported. Always use `LIMIT offset, count`.
- Ordering: use `ORDER BY <field> DESC/ASC` to sort.

Result completeness check: every response contains `meta.progress`. If it is `Incomplete`, re-issue the same request until it returns `Complete`.
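The completeness check can be turned into a bounded retry loop; a sketch (the wrapper name is ours; pass it the full CLI invocation as arguments):

```shell
# query_until_complete CMD... : run CMD, re-issuing the same request until the
# response reports meta.progress = "Complete" (up to 5 attempts).
query_until_complete() {
  local attempt result
  for attempt in 1 2 3 4 5; do
    result="$("$@")"
    if printf '%s' "$result" | grep -q '"progress":[[:space:]]*"Complete"'; then
      printf '%s\n' "$result"
      return 0
    fi
    sleep 1
  done
  echo "still Incomplete after $attempt attempts" >&2
  return 1
}
```

Usage: `query_until_complete aliyun sls get-logs-v2 --project my-project --logstore my-logstore --from "$FROM" --to "$TO" --query '*' --line 100 --offset 0`.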


## Step 6: Extract Data from the Response

`get-logs-v2` returns:

```json
{
  "meta": { "progress": "Complete", "count": 10, ... },
  "data": [ { "field1": "value1", ... }, ... ]
}
```

| Field | Meaning |
|-------|---------|
| `meta.progress` | `Complete` or `Incomplete` (see Step 5) |
| `meta.count` | Number of rows returned |
| `data` | Array of log entries or aggregation rows; may contain `__time__` (Unix seconds, string) |

Use `jq` (preferred) or `--cli-query` (JMESPath) to extract the fields the user needs:

| Extract | `jq` | `--cli-query` (JMESPath) |
|---------|------|--------------------------|
| Data rows | `\| jq '.data'` | `--cli-query 'data'` |
| Progress | `\| jq '.meta.progress'` | `--cli-query 'meta.progress'` |
| Row count | `\| jq '.meta.count'` | `--cli-query 'meta.count'` |
| Specific fields | `\| jq '.data[] \| {LogStore, read_mb}'` | `--cli-query 'data[].{LogStore: LogStore, read_mb: read_mb}'` |
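If `jq` is unavailable, a rough fallback can scrape `meta.progress` with `sed` (a sketch only; this is pattern matching, not real JSON parsing, and it assumes the layout shown above):

```shell
# progress_of: read a get-logs-v2 response on stdin, print meta.progress.
progress_of() {
  sed -n 's/.*"progress"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

echo '{"meta": { "progress": "Complete", "count": 10 }, "data": [] }' | progress_of
# prints "Complete"
```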


## Step 7: Present the CLI Command and Results

**CLI command** — always show the full, copy-paste-ready `aliyun sls get-logs-v2 ...` command. Redact any AK/SK. If the query was not executed (write / explain scenario), present the command the user should run.

**Results** — when a query was executed, use Step 6 to extract `data` and format it according to the user's request (table, list, summary, etc.). Append one sentence explaining the query mode choice.


## Cleanup

Whether operations succeed or fail, you MUST disable AI-Mode before ending the session:

```bash
aliyun configure ai-mode disable
```


## Global Rules

- Always prefer Index search for fastest raw-log retrieval, and use Index search + SQL for analysis or field projection.
- When the user only needs specific fields, use `SELECT` to **project them** rather than fetching full raw logs — this reduces network overhead. Requires `doc_value: true` on the target fields (confirmed in Step 1).
- Do not hard-code `__time__` filters — pass the time range via `--from` / `--to`.
- Deprecated API: never call `get-logs`; always use `get-logs-v2`.
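For instance, the projection rule could produce a statement like this instead of a raw-log fetch (field names are hypothetical and must have `doc_value: true`):

```sql
service: payment | SELECT request_id, status, latency FROM log LIMIT 100
```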


## Troubleshooting

When the user reports "no data", "wrong result", or a CLI error, walk through the checklist in this exact order:

1. Time range — wrong `--from` / `--to`? Milliseconds instead of seconds? Recent writes still indexing?
2. Index configuration — field index missing? Full-text index off? Target field not in `keys`?
3. Field type / statistics — range query on a `text` field? SQL on a field without `doc_value`?
4. Syntax — mixed SQL and SPL? Leading `*` in fuzzy match? SPL string escaping?
5. Mode choice — scanning when an index-based query would do? Aggregating in SPL instead of SQL?
6. Completeness — `meta.progress = Incomplete` and the caller did not retry (see Step 5).
7. ProjectNotExist — region or endpoint is wrong. See references/regions.md.
8. Network failure (timeout, connection refused) — try switching to the internal endpoint. See references/regions.md.

For the full catalog of failure modes and error codes, see references/troubleshooting.md and the Common Errors table in references/related-apis.md.


## Reference Documents

| Document | Description |
|----------|-------------|
| references/query-analysis.md | Mode decision, index-search / SQL rules, scan semantics |
| references/spl-guide.md | SPL pipeline syntax, common commands, field handling |
| references/functions-guide.md | Function categories, SQL/SPL differences, templates |
| references/troubleshooting.md | "No data / wrong result / error" playbook |
| references/related-apis.md | `GetLogsV2` and `GetIndex` API & CLI reference |
| references/ram-policies.md | Minimum and complete RAM policies |
| references/cli-installation-guide.md | Aliyun CLI install, auth modes, profiles |
| references/regions.md | Region / endpoint configuration, internal endpoint, ProjectNotExist troubleshooting |
| references/acceptance-criteria.md | CLI invocation acceptance tests |
| references/query_analysis/*.yaml · references/spl/*.yaml · references/functions/*.yaml | Source-of-truth YAMLs bundled with this skill |