# Alibaba Cloud SLS Query & Analysis

## Scenario Description

Use this skill when the user wants to:

- Explain, rewrite, optimize, or execute an existing query
- Translate a natural-language requirement into an SLS index query, SQL, or SPL statement
## Prerequisites

### Install Aliyun CLI

Run `aliyun version` to verify the installed CLI version. If the CLI is not installed or is outdated, follow references/cli-installation-guide.md to install or update.
### Ensure AI Mode Is Enabled

Before executing any CLI commands, enable AI mode, set the User-Agent, and update plugins:

```bash
aliyun configure ai-mode enable
aliyun configure ai-mode set-user-agent --user-agent "AlibabaCloud-Agent-Skills/alibabacloud-sls-query"
aliyun plugin update
```
### Check That Alibaba Cloud Credentials Are Configured

Run `aliyun configure list` to check whether credentials are configured. If no valid profile is shown, STOP here and ask the user to run `aliyun configure` outside of this session.

Security rules:

- NEVER read, echo, or print AK/SK values
- NEVER ask the user to paste AK/SK into the conversation
- ONLY use `aliyun configure list` to check credential status
### RAM Permission Requirements

| API | CLI | Purpose |
|---|---|---|
| GetLogsV2 | `aliyun sls get-logs-v2` | Run query / SQL / SPL and read results |
| GetIndex | `aliyun sls get-index` | Read index config to verify prerequisites |

For the required RAM actions and the minimum and complete RAM policy JSON, see references/ram-policies.md.
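As a rough illustration only — the action names and resource format below are assumptions, and references/ram-policies.md remains the source of truth — a minimal read-only policy could look like:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["log:GetLogStoreLogs", "log:GetIndex"],
      "Resource": "acs:log:*:*:project/<project>/logstore/<logstore>"
    }
  ]
}
```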
Permission failure handling: if a call returns a permission error, stop and surface references/ram-policies.md to the user. Do not retry with a different account without explicit user confirmation.
## Core Workflow

1. Read index configuration (GetIndex)
2. Pick query mode
3. Build statement
4. Resolve time range
5. Execute query
6. Extract data from response
7. Present CLI command and results
## Step 1: Read the Index Configuration (Mandatory)

Always call `get-index` first — the index config decides which query modes are available in Step 2.

```bash
aliyun sls get-index \
  --project <project> --logstore <logstore>
```

Two sections in the response drive every later decision:

| Section | Meaning |
|---|---|
| `line` | Full-text index — absence means full-text search is disabled |
| `keys` | Field indexes — map of field → { type, doc_value, token, caseSensitive, chn, ... }. `doc_value: true` means statistics are enabled on that field |

If the call returns an index-not-exist error (HTTP 404), or the response has neither `line` nor `keys` populated, the Logstore has no index at all — stop immediately and tell the user they must create an index before any query / SQL / SPL can run.

The response can be large — extract only the fields relevant to the current query. Cache per project/logstore pair and reuse within the session.

For field types, tokenization, and how `doc_value` maps to capabilities, see references/related-apis.md and references/query-analysis.md.
## Step 2: Pick the Query Mode (Critical)

The query statement takes one of the following forms:

| Priority | Mode | Statement Form | Use when | Requires |
|---|---|---|---|---|
| 1 | Index search | `<index-search>` | Filtering raw logs; return time-ordered and paginated logs | Full-text index (`line`) or any field index (`keys`) |
| 2 | SQL | `<index-search> \| <SQL>` | Aggregation, GROUP BY, sort, window, top-N, projection, and other analytical operations | Target field indexed with `doc_value: true` |
| 3 | SQL scan | `<index-search> \| set session mode=scan; <SQL>` | User requested | None |
| 4 | SPL | `<index-search> \| <SPL>` | User requested | None |

Selection rule:

- Always prefer Index search for fastest speed.
- Use Index search + SQL when the user needs analytical operations or field projection rather than full raw-log retrieval, such as aggregation, GROUP BY, sorting, window analysis, top-N, or returning only the required fields/columns.
- Do not proactively choose SQL scan or SPL; use them only when the user explicitly requests them.

For the full decision guide, see references/query-analysis.md.
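To make the contrast concrete, here is one hypothetical requirement ("errors from the payment service") expressed in three of the modes above — field names `status`, `service`, and `request_uri` are illustrative, not taken from any real Logstore:

```text
# Priority 1 — index search only: raw, time-ordered logs
status: 500 and service: payment

# Priority 2 — index search + SQL: aggregate per URI
status: 500 and service: payment | SELECT request_uri, count(*) AS cnt FROM log GROUP BY request_uri

# Priority 4 — index search + SPL: row-level projection
status: 500 and service: payment | project request_uri, message
```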
## Step 3: Write the Statement

### 3.1 Build the index-search segment first (left of the first `|`)

Collect every filter that can be expressed in index-search syntax and place it before the first `|`. Use `*` if no filter applies.

```text
* and "payment failed" and status: "500" and not path: "/healthz"
```

- `*` matches all; `"payment failed"` is a full-text keyword (needs the full-text index).
- `status: "500"` is a field filter (needs a field index on `status`).
- Combine with `and` / `or` / `not`; group with parentheses.
- `field: *` means the field exists. Range comparisons (`>`, `<`, `in`) work only on `long` / `double` fields.

If the requirement can be fully answered without aggregation or row-level processing, stop here — this is already a complete index search. For full index-search syntax, see references/query-analysis.md.
### 3.2 Append SQL — for aggregation / analytics

```sql
status: 500 | SELECT date_trunc('minute', __time__) AS minute,
                     count(*) AS errors
              FROM log
              GROUP BY minute
              ORDER BY minute
```

- Read references/query-analysis.md for query & SQL rules.
- The table name is `log` (it can be omitted, and omitting it is recommended).
- SQL respects the indexed field type from the index config — a `long` / `double` field can be compared directly (e.g. `latency > 1000`). Cast only when a field is indexed as `text` but numeric semantics are needed (use `try_cast` to suppress conversion errors).
- Read references/functions-guide.md for unusual function selection (aggregate, JSON, regex, datetime, IP geo, ...).
### 3.3 Append SPL — for row-level processing / flexible filtering

```spl
status: 500 and service: payment
| where try_cast(latency as BIGINT) > 1000
| extend latency_ms = try_cast(latency as BIGINT)
| project service, latency_ms, message
```

For SPL syntax, pipeline commands, and field-handling rules, read references/spl-guide.md.
### 3.4 Append SQL scan — fallback when the target field has no index / statistics

Syntax follows regular SQL (see 3.2), with one difference: every field is treated as `varchar`, so always `try_cast` / `cast` before numeric comparison or arithmetic. See references/query-analysis.md for scan semantics.

```sql
* | set session mode=scan; SELECT api, count(1) AS pv FROM log GROUP BY api
```
## Step 4: Resolve the Time Range

Generate the `--from` / `--to` values as Unix timestamps in seconds before building the CLI command. `--from` is inclusive and `--to` is exclusive.

Choose one of three input patterns:

- Relative time — the user says "recent / last N minutes|hours|days".
- Natural-language absolute time without timezone — normalize to `YYYY-MM-DD HH:MM:SS`, then parse using the machine's local timezone.
- Absolute time with explicit timezone — parse using the customer-provided timezone or UTC offset.
#### 1. Relative time

```bash
# recent 15 minutes
FROM=$(($(date +%s) - 900))
TO=$(date +%s)
```
#### 2. Natural-language absolute time without timezone

If the user gives a date/time but no timezone, use the machine's local timezone. First normalize natural language such as "2026年3月13日12点" (March 13, 2026, 12:00) to `2026-03-13 12:00:00`, then parse it as local time.

```bash
# Example: "March 13, 2026, 12:00" -> 2026-03-13 12:00:00
# Linux (GNU date): local timezone
FROM=$(date -d "2026-03-13 12:00:00" +%s)
# macOS (BSD date): local timezone
FROM=$(date -j -f "%Y-%m-%d %H:%M:%S" "2026-03-13 12:00:00" +%s)
```
For a time range such as "2026年3月13日12点到13点" (March 13, 2026, 12:00 to 13:00), compute both endpoints the same way. For a single point-in-time request, infer a practical window from the user's intent; if unclear, ask for the range before executing.
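As a sketch (GNU date, machine-local timezone), the two endpoints of that one-hour range could be computed as:

```shell
# Both endpoints of "2026-03-13 12:00 to 13:00", machine-local timezone
FROM=$(date -d "2026-03-13 12:00:00" +%s)
TO=$(date -d "2026-03-13 13:00:00" +%s)
echo $((TO - FROM))   # 3600 seconds, i.e. exactly one hour
```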
#### 3. Absolute time with explicit timezone

To convert a local date/time to a Unix timestamp: parse the input as UTC with `date -u`, then subtract the timezone's UTC offset in seconds.

Formula:

```text
unix_ts = date_utc_parse(input) − (UTC_offset_hours × 3600)
```

```bash
# Example: 2025-01-15 10:30:00 Beijing Time (UTC+8)
# Beijing is UTC+8, so subtract 8 × 3600 = 28800
# Linux (GNU date)
FROM=$(( $(date -u -d "2025-01-15 10:30:00" +%s) - 28800 ))
# macOS (BSD date)
FROM=$(( $(date -u -j -f "%Y-%m-%d %H:%M:%S" "2025-01-15 10:30:00" +%s) - 28800 ))
```

```bash
# Example: 2025-01-15 10:30:00 New York Time (UTC-5)
# New York is UTC-5, so subtract -5 × 3600 = -18000, i.e. add 18000
# Linux (GNU date)
FROM=$(( $(date -u -d "2025-01-15 10:30:00" +%s) + 18000 ))
# macOS (BSD date)
FROM=$(( $(date -u -j -f "%Y-%m-%d %H:%M:%S" "2025-01-15 10:30:00" +%s) + 18000 ))
```
Common UTC offsets (value to subtract):

| Timezone | UTC offset (hours) | Seconds to subtract |
|---|---|---|
| Beijing (UTC+8) | +8 | 28800 |
| Tokyo (UTC+9) | +9 | 32400 |
| London (UTC) | 0 | 0 |
| New York (UTC-5) | -5 | -18000 (i.e. add 18000) |
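With GNU date, an alternative sketch avoids the manual offset arithmetic entirely by letting the `TZ` environment variable do the conversion (the IANA zone names below are assumptions; substitute the user's zone). Unlike fixed offsets, named zones also account for DST automatically:

```shell
# Parse "2025-01-15 10:30:00" directly as Beijing wall-clock time
FROM=$(TZ="Asia/Shanghai" date -d "2025-01-15 10:30:00" +%s)
# The same wall-clock time in New York is a later instant: UTC+8 vs UTC-5
NY=$(TZ="America/New_York" date -d "2025-01-15 10:30:00" +%s)
echo $((NY - FROM))   # 46800 (13 hours × 3600)
```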
## Step 5: Execute via `get-logs-v2`

Use `aliyun sls get-logs-v2` to execute queries. Run `aliyun help sls get-logs-v2` to see CLI parameter usage; read references/related-apis.md for detailed API parameter descriptions.

Required CLI flags:

- `--project`: SLS project name
- `--logstore`: Logstore name within the project
- `--from`: Start of time range, Unix timestamp in seconds (inclusive)
- `--to`: End of time range, Unix timestamp in seconds (exclusive)
- `--query`: Statement built in Step 3

Pagination works differently depending on whether the statement has a SQL part after `|`:

### 5.1 Index-search only — paginate with `--line` / `--offset`

```bash
aliyun sls get-logs-v2 \
  --project my-project --logstore my-logstore \
  --from 1740000000 --to 1740003600 \
  --query '* and "payment failed" and status: "500"' \
  --line 100 --offset 0 --reverse true
```

- Pagination: `--line` is the page size (max 100, required); `--offset` is the start row (optional, default 0).
- Ordering: `--reverse true` returns newest first; the default is oldest first.
### 5.2 With SQL — paginate with `LIMIT` inside the statement

```bash
aliyun sls get-logs-v2 \
  --project my-project --logstore my-logstore \
  --from 1740000000 --to 1740003600 \
  --query 'status: "500" | SELECT request_uri, count(*) AS cnt FROM log GROUP BY request_uri ORDER BY cnt DESC LIMIT 20'
```

- The SQL default result cap is 100 rows. To get more results or paginate:
  - `LIMIT <count>` — raise the cap (e.g., `LIMIT 500` returns up to 500 rows)
  - `LIMIT <offset>, <count>` — paginate (e.g., `LIMIT 20, 20` for rows 21–40; `LIMIT 40, 20` for rows 41–60). Max offset + count is 1000000.
- Do not use `LIMIT count OFFSET offset` syntax — it is not supported. Always use `LIMIT <offset>, <count>`.
- Ordering: use `ORDER BY <field> DESC/ASC` to sort.

Result completeness check: every response contains `meta.progress`. If it is `Incomplete`, re-issue the same request until it returns `Complete`.
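The completeness check above can be sketched as a small retry helper. This is illustrative only — the function name is made up, a 5-attempt cap and 1-second pause are arbitrary choices, and it assumes `jq` is installed; `"$@"` stands for whatever command emits the get-logs-v2 JSON response:

```shell
# Re-issue the identical request until meta.progress reports Complete.
run_until_complete() {
  local attempt response progress
  for attempt in 1 2 3 4 5; do
    response=$("$@") || return 1                       # run the full CLI command
    progress=$(printf '%s\n' "$response" | jq -r '.meta.progress')
    if [ "$progress" = "Complete" ]; then
      printf '%s\n' "$response"                        # hand back the final JSON
      return 0
    fi
    sleep 1                                            # brief pause before retrying
  done
  echo "query still Incomplete after 5 attempts" >&2
  return 1
}
```

Usage would look like `run_until_complete aliyun sls get-logs-v2 --project my-project ... | jq '.data'`.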
## Step 6: Extract Data from the Response

```json
{
  "meta": { "progress": "Complete", "count": 10, ... },
  "data": [ { "field1": "value1", ... }, ... ]
}
```

| Field | Meaning |
|---|---|
| `meta.progress` | `Complete` or `Incomplete` (see Step 5) |
| `meta.count` | Number of rows returned |
| `data` | Array of log entries or aggregation rows; may contain `__time__` (Unix seconds, string) |

Use `jq` (preferred) or `--cli-query` (JMESPath) to extract the fields the user needs:

| Extract | `jq` | `--cli-query` (JMESPath) |
|---|---|---|
| Data rows | `jq '.data'` | `--cli-query 'data'` |
| Progress | `jq -r '.meta.progress'` | `--cli-query 'meta.progress'` |
| Row count | `jq '.meta.count'` | `--cli-query 'meta.count'` |
| Specific fields | `jq '.data[] \| {LogStore, read_mb}'` | `--cli-query 'data[].{LogStore: LogStore, read_mb: read_mb}'` |
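A minimal sketch of the jq route, run against a canned response (the field names `api` and `pv` are invented for illustration):

```shell
# Canned get-logs-v2-style response for illustration
RESPONSE='{"meta":{"progress":"Complete","count":2},"data":[{"api":"/checkout","pv":"3"},{"api":"/cart","pv":"1"}]}'

# Row count from the meta section
printf '%s\n' "$RESPONSE" | jq '.meta.count'                     # 2
# One tab-separated line per row, selected fields only
printf '%s\n' "$RESPONSE" | jq -r '.data[] | [.api, .pv] | @tsv'
```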
## Step 7: Present the CLI Command and Results

CLI command — always show the full, copy-paste-ready `aliyun sls get-logs-v2 ...` command. Redact any AK/SK. If the query was not executed (write / explain scenario), present the command the user should run.

Results — when a query was executed, use Step 6 to extract `data` and format it according to the user's request (table, list, summary, etc.). Append one sentence explaining the query-mode choice.
## Cleanup

Whether operations succeed or fail, you MUST disable AI mode before ending the session:

```bash
aliyun configure ai-mode disable
```
## Global Rules

- Always prefer Index search for fastest raw-log retrieval, and use Index search + SQL for analysis or field projection.
- When the user only needs specific fields, use `SELECT` to project them rather than fetching full raw logs — this reduces network overhead. Requires `doc_value: true` on the target fields (confirmed in Step 1).
- Do not hard-code time filters into the statement — pass the time range via `--from` / `--to`.
- Deprecated API: never call `GetLogs`; always use `GetLogsV2`.
## Troubleshooting

When the user reports "no data", "wrong result", or a CLI error, walk through the checklist in this exact order:

1. Time range — wrong `--from` / `--to`? Milliseconds instead of seconds? Recent writes still indexing?
2. Index configuration — field index missing? Full-text index off? Target field not in `keys`?
3. Field type / statistics — range query on a `text` field? SQL on a field without `doc_value: true`?
4. Syntax — mixed SQL and SPL? Leading `*` in a fuzzy match? SPL string escaping?
5. Mode choice — scanning when an index-based query would do? Aggregating in SPL instead of SQL?
6. Completeness — `meta.progress = Incomplete`, but the caller did not retry (see Step 5).
7. ProjectNotExist — region or endpoint is wrong. See references/regions.md.
8. Network failure (timeout, connection refused) — try switching to the internal endpoint. See references/regions.md.

For the full catalog of failure modes and error codes, see references/troubleshooting.md and the error-code table in references/related-apis.md.
## Reference Documents

| Document | Description |
|---|---|
| references/query-analysis.md | Mode decision, index-search / SQL rules, scan semantics |
| references/spl-guide.md | SPL pipeline syntax, common commands, field handling |
| references/functions-guide.md | Function categories, SQL/SPL differences, templates |
| references/troubleshooting.md | "No data / wrong result / error" playbook |
| references/related-apis.md | GetLogsV2 and GetIndex API & CLI reference |
| references/ram-policies.md | Minimum and complete RAM policies |
| references/cli-installation-guide.md | Aliyun CLI install, auth modes, profiles |
| references/regions.md | Region / endpoint configuration, internal endpoint, ProjectNotExist troubleshooting |
| references/query_analysis/*.yaml · references/functions/*.yaml | Source-of-truth YAMLs bundled with this skill |