elasticsearch-esql

Elasticsearch ES|QL

Execute ES|QL queries against Elasticsearch.

What is ES|QL?


ES|QL (Elasticsearch Query Language) is a piped query language for Elasticsearch. It is NOT the same as:
  • Elasticsearch Query DSL (JSON-based)
  • SQL
  • EQL (Event Query Language)
ES|QL uses pipes (`|`) to chain commands:

```esql
FROM index | WHERE condition | STATS aggregation BY field | SORT field | LIMIT n
```
Prerequisite: ES|QL requires `_source` to be enabled on queried indices. Indices with `_source` disabled (e.g., `"_source": { "enabled": false }`) will cause ES|QL queries to fail.
Version Compatibility: ES|QL was introduced in 8.11 (tech preview) and became GA in 8.14. Features like `LOOKUP JOIN` (8.18+), `MATCH` (8.17+), and `INLINE STATS` (9.2+) were added in later versions. On pre-8.18 clusters, use `ENRICH` as a fallback for `LOOKUP JOIN` (see generation tips). `INLINE STATS` and counter-field `RATE()` have no fallback before 9.2. Check references/esql-version-history.md for feature availability by version.
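A version gate like the one described above can be sketched as a small helper. This is a hypothetical function (not part of `scripts/esql.js`), using only the minimum versions stated in this document:

```javascript
// Hypothetical helper: gate ES|QL features on the cluster version.
// Minimum versions are taken from the compatibility notes above.
const FEATURE_MIN_VERSION = {
  ESQL_GA: [8, 14],
  MATCH: [8, 17],
  LOOKUP_JOIN: [8, 18],
  INLINE_STATS: [9, 2],
};

function supportsFeature(versionNumber, feature) {
  // Compare only major.minor; patch releases do not change feature availability.
  const [maj, min] = versionNumber.split(".").map(Number);
  const [needMaj, needMin] = FEATURE_MIN_VERSION[feature];
  return maj > needMaj || (maj === needMaj && min >= needMin);
}
```

On a pre-8.18 cluster this check is what should route a join request to the `ENRICH` fallback instead of `LOOKUP JOIN`.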
Cluster Detection: Use the `GET /` response to determine the cluster type and version:
  • `build_flavor: "serverless"` — Elastic Cloud Serverless. Reports version `8.11.0` but supports all latest ES|QL features. Ignore the version number and assume all GA and preview features are available.
  • `build_flavor: "default"` — Self-managed or Elastic Cloud Hosted. Use `version.number` for feature availability.
  • Snapshot builds have `version.number` like `9.4.0-SNAPSHOT`. Strip the `-SNAPSHOT` suffix and use the major.minor for version checks. Snapshot builds include all features from that version plus potentially unreleased features from development — if a query fails with an unknown function/command, it may simply not have landed yet. Elastic employees commonly use snapshot builds for testing.

Environment Configuration


The Elasticsearch connection is configured via environment variables. Run `node scripts/esql.js test` to verify the connection. If the test fails, suggest these setup options to the user, then stop. Do not explore further until the connection test succeeds.
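The four options below can be resolved with a simple precedence check. This is an illustrative sketch of how a client could pick a connection mode from the documented variables; the actual resolution logic inside `scripts/esql.js` may differ:

```javascript
// Resolve a connection config from environment variables, in the
// order the options are documented: Cloud ID, URL + API key, URL + basic auth.
function resolveConnection(env) {
  if (env.ELASTICSEARCH_CLOUD_ID && env.ELASTICSEARCH_API_KEY) {
    return {
      mode: "cloud",
      cloudId: env.ELASTICSEARCH_CLOUD_ID,
      auth: { apiKey: env.ELASTICSEARCH_API_KEY },
    };
  }
  if (env.ELASTICSEARCH_URL && env.ELASTICSEARCH_API_KEY) {
    return {
      mode: "url",
      node: env.ELASTICSEARCH_URL,
      auth: { apiKey: env.ELASTICSEARCH_API_KEY },
    };
  }
  if (env.ELASTICSEARCH_URL && env.ELASTICSEARCH_USERNAME && env.ELASTICSEARCH_PASSWORD) {
    return {
      mode: "basic",
      node: env.ELASTICSEARCH_URL,
      auth: { username: env.ELASTICSEARCH_USERNAME, password: env.ELASTICSEARCH_PASSWORD },
    };
  }
  throw new Error("No Elasticsearch connection configured; see the setup options.");
}
```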

Option 1: Elastic Cloud (recommended for production)


```bash
export ELASTICSEARCH_CLOUD_ID="deployment-name:base64encodedcloudid"
export ELASTICSEARCH_API_KEY="base64encodedapikey"
```

Option 2: Direct URL with API Key


```bash
export ELASTICSEARCH_URL="https://elasticsearch:9200"
export ELASTICSEARCH_API_KEY="base64encodedapikey"
```

Option 3: Basic Authentication


```bash
export ELASTICSEARCH_URL="https://elasticsearch:9200"
export ELASTICSEARCH_USERNAME="elastic"
export ELASTICSEARCH_PASSWORD="changeme"
```

Option 4: Local Development with start-local


For local development and testing, use start-local to quickly spin up Elasticsearch and Kibana using Docker or Podman:

```bash
curl -fsSL https://elastic.co/start-local | sh
```

After installation completes, Elasticsearch runs at `http://localhost:9200` and Kibana at `http://localhost:5601`. The script generates a random password for the `elastic` user and an API key, both stored in the `.env` file inside the created `elastic-start-local` folder.
To configure the environment variables for this skill, source the `.env` file and export the connection settings:

```bash
source elastic-start-local/.env
export ELASTICSEARCH_URL="$ES_LOCAL_URL"
export ELASTICSEARCH_API_KEY="$ES_LOCAL_API_KEY"
```

Then run `node scripts/esql.js test` to verify the connection.

Optional: Skip TLS verification (development only)


```bash
export ELASTICSEARCH_INSECURE="true"
```

Usage


Get Index Information (for schema discovery)


```bash
node scripts/esql.js indices                    # List all indices
node scripts/esql.js indices "logs-*"           # List matching indices
node scripts/esql.js schema "logs-2024.01.01"   # Get field mappings for an index
```

Execute Raw ES|QL


```bash
node scripts/esql.js raw "FROM logs-* | STATS count = COUNT(*) BY host.name | SORT count DESC | LIMIT 5"
```

Execute with TSV Output


```bash
node scripts/esql.js raw "FROM logs-* | STATS count = COUNT(*) BY component | SORT count DESC" --tsv
```

TSV Output Options:
  • `--tsv` or `-t`: Output as tab-separated values (clean, no decorations)
  • `--no-header`: Omit the header row
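The TSV output is a direct projection of the ES|QL query API response, which returns a `columns` array (name/type pairs) and a `values` array of rows. A minimal sketch of the conversion (illustrative, not the actual implementation in `scripts/esql.js`):

```javascript
// Convert an ES|QL _query API response ({ columns, values }) to TSV.
// `header: false` mirrors the --no-header flag.
function toTsv(response, { header = true } = {}) {
  const lines = [];
  if (header) {
    lines.push(response.columns.map((c) => c.name).join("\t"));
  }
  for (const row of response.values) {
    lines.push(row.join("\t"));
  }
  return lines.join("\n");
}
```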

Test Connection


```bash
node scripts/esql.js test
```

Guidelines


  1. Detect deployment type: Always run `node scripts/esql.js test` first. This detects whether the deployment is a Serverless project (all features available) or a versioned cluster (features depend on version). The `build_flavor` field from `GET /` is the authoritative signal — if it equals `"serverless"`, ignore the reported version number and use all ES|QL features freely.
  2. Discover schema (required — never guess index or field names):

     ```bash
     node scripts/esql.js indices "pattern*"
     node scripts/esql.js schema "index-name"
     ```

     Always run schema discovery before generating queries. Index names and field names vary across deployments and cannot be reliably guessed. Even common-sounding data (e.g., "logs") may live in indices named `logs-test`, `logs-app-*`, or `application_logs`. Field names may use ECS dotted notation (`source.ip`, `service.name`) or flat custom names — the only way to know is to check.
     The `schema` command reports the index mode. If it shows `Index mode: time_series`, the output includes the data stream name and copy-pasteable TS syntax — use `TS <data-stream>` (not `FROM`), `TBUCKET(interval)` (not `DATE_TRUNC`), and wrap counter fields with `SUM(RATE(...))`. Read the full TS section in Generation Tips before writing any time series query. You can also check the index mode directly via the Elasticsearch index settings API:

     ```bash
     curl -s "$ELASTICSEARCH_URL/<index-name>/_settings/index.mode" -H "Authorization: ApiKey $ELASTICSEARCH_API_KEY"
     ```

  3. Choose the right ES|QL feature for the task: Before writing queries, match the user's intent to the most appropriate ES|QL feature. Prefer a single advanced query over multiple basic ones.
     • "find patterns," "categorize," "group similar messages" → `CATEGORIZE(field)`
     • "spike," "dip," "anomaly," "when did X change" → `CHANGE_POINT value ON key`
     • "trend over time," "time series" → `STATS ... BY BUCKET(@timestamp, interval)` or `TS` for TSDB
     • "search," "find documents matching" → `MATCH`, `QSTR`, `KQL`
     • "count," "average," "breakdown" → `STATS` with aggregation functions
  4. Read the references before generating queries:
     • Generation Tips - key patterns (TS/TBUCKET/RATE, per-agg WHERE, LOOKUP JOIN, CIDR_MATCH), common templates, and ambiguity handling
     • Time Series Queries - read before any TS query: inner/outer aggregation model, TBUCKET syntax, RATE constraints
     • ES|QL Complete Reference - full syntax for all commands and functions
     • ES|QL Search Reference - for full-text search queries (MATCH, QSTR, KQL, scoring)
  5. Generate the query following ES|QL syntax:
     • Start with `FROM index-pattern` (or `TS index-pattern` for time series indices)
     • Add `WHERE` for filtering (use `TRANGE` for time ranges on 9.3+)
     • Use `EVAL` for computed fields
     • Use `STATS ... BY` for aggregations
     • For time series metrics: `TS` with `SUM(RATE(...))` for counters, `AVG(...)` for gauges, and `TBUCKET(interval)` for time bucketing — see the TS section in Generation Tips for the three critical syntax rules
     • For detecting spikes, dips, or anomalies, use `CHANGE_POINT` after time-bucketed aggregation
     • Add `SORT` and `LIMIT` as needed
  6. Execute with the TSV flag:

     ```bash
     node scripts/esql.js raw "FROM index | STATS count = COUNT(*) BY field" --tsv
     ```

ES|QL Quick Reference


Version availability: This section omits version annotations for readability. Check ES|QL Version History for feature availability by Elasticsearch version.

Basic Structure


```esql
FROM index-pattern
| WHERE condition
| EVAL new_field = expression
| STATS aggregation BY grouping
| SORT field DESC
| LIMIT n
```

Common Patterns


Filter and limit:

```esql
FROM logs-*
| WHERE @timestamp > NOW() - 24 hours AND level == "error"
| SORT @timestamp DESC
| LIMIT 100
```

Aggregate by time:

```esql
FROM metrics-*
| WHERE @timestamp > NOW() - 7 days
| STATS avg_cpu = AVG(cpu.percent) BY bucket = DATE_TRUNC(1 hour, @timestamp)
| SORT bucket DESC
```

Top N with count:

```esql
FROM web-logs
| STATS count = COUNT(*) BY response.status_code
| SORT count DESC
| LIMIT 10
```

Text search: Use `MATCH`, `QSTR`, or `KQL` for full-text search instead of `LIKE`/`RLIKE` — they are significantly faster and support relevance scoring. See ES|QL Search Reference for the full guide.

```esql
FROM documents METADATA _score
| WHERE MATCH(content, "search terms")
| SORT _score DESC
| LIMIT 20
```

Log categorization (Platinum license): Use `CATEGORIZE` to auto-cluster log messages into pattern groups. Prefer this over running multiple `STATS ... BY field` queries when exploring or finding patterns in unstructured text.

```esql
FROM logs-*
| WHERE @timestamp > NOW() - 24 hours
| STATS count = COUNT(*) BY category = CATEGORIZE(message)
| SORT count DESC
| LIMIT 20
```

Change point detection (Platinum license): Use `CHANGE_POINT` to detect spikes, dips, and trend shifts in a metric series. Prefer this over manual inspection of time-bucketed counts.

```esql
FROM logs-*
| STATS c = COUNT(*) BY t = BUCKET(@timestamp, 30 seconds)
| SORT t
| CHANGE_POINT c ON t
| WHERE type IS NOT NULL
```

Time series metrics:

```esql
// Counter metric: SUM(RATE(...)) with TBUCKET(duration)
TS metrics-tsds
| WHERE TRANGE(1 hour)
| STATS SUM(RATE(requests)) BY TBUCKET(1 hour), host

// Gauge metric: AVG(...) — no RATE needed
TS metrics-tsds
| STATS avg_cpu = AVG(cpu) BY service.name, bucket = TBUCKET(5 minutes)
| SORT bucket
```

Data enrichment with LOOKUP JOIN: Use `RENAME` when the join key has a different name in the source vs the lookup index. Use multiple fields in `ON` when the lookup table has a composite key.

```esql
// Field name mismatch — RENAME before joining
FROM support_tickets
| RENAME product AS product_name
| LOOKUP JOIN knowledge_base ON product_name

// Multi-field join (9.2+)
FROM application_logs
| LOOKUP JOIN service_registry ON service_name, environment
| KEEP service_name, environment, owner_team
```

Multivalue field filtering: Use `MV_CONTAINS` to check if a multivalue field contains a specific value. Use `MV_COUNT` to count values.

```esql
// Filter by multivalue membership
FROM employees
| WHERE MV_CONTAINS(languages, "Python")

// Find entries matching multiple values
FROM employees
| WHERE MV_CONTAINS(languages, "Java") AND MV_CONTAINS(languages, "Python")

// Count multivalue entries
FROM employees
| EVAL num_languages = MV_COUNT(languages)
| SORT num_languages DESC
```

Change point detection (alternate example): Use when the user asks about spikes, dips, or anomalies. Requires time-bucketed aggregation, `SORT`, then `CHANGE_POINT`.

```esql
FROM logs-*
| STATS error_count = COUNT(*) BY bucket = DATE_TRUNC(1 hour, @timestamp)
| SORT bucket
| CHANGE_POINT error_count ON bucket AS type, pvalue
```

Full Reference


For complete ES|QL syntax including all commands, functions, and operators, read:
  • ES|QL Complete Reference
  • ES|QL Search Reference - Full-text search: MATCH, QSTR, KQL, MATCH_PHRASE, scoring, semantic search
  • ES|QL Version History - Feature availability by Elasticsearch version
  • Query Patterns - Natural language to ES|QL translation
  • Generation Tips - Best practices for query generation
  • Time Series Queries - TS command, time series aggregation functions, TBUCKET
  • DSL to ES|QL Migration - Convert Query DSL to ES|QL

Error Handling


When query execution fails, the script returns:
  • The generated ES|QL query
  • The error message from Elasticsearch
  • Suggestions for common issues
Common issues:
  • Field doesn't exist → Check schema with `node scripts/esql.js schema "index"`
  • Type mismatch → Use type conversion functions (TO_STRING, TO_INTEGER, etc.)
  • Syntax error → Review ES|QL reference for correct syntax
  • No results → Check time range and filter conditions
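Triage of these failures can be automated with a simple dispatch over the error message. This is a hypothetical sketch: the substrings matched below are illustrative guesses at common Elasticsearch error wording, not an exact or exhaustive mapping of real server output.

```javascript
// Hypothetical triage helper: map an Elasticsearch error message to the
// corresponding suggestion from the "Common issues" list above.
// The matched substrings are illustrative, not guaranteed server wording.
function suggestFix(errorMessage) {
  if (/unknown column|unknown field/i.test(errorMessage)) {
    return 'Check the schema: node scripts/esql.js schema "index"';
  }
  if (/cannot (cast|convert)|incompatible type/i.test(errorMessage)) {
    return "Use a type conversion function such as TO_STRING or TO_INTEGER";
  }
  if (/parsing_exception|mismatched input/i.test(errorMessage)) {
    return "Review the ES|QL reference for correct syntax";
  }
  // Queries that succeed but return nothing fall through to a data check.
  return "Check the time range and filter conditions";
}
```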

Examples



Schema discovery


```bash
node scripts/esql.js test
node scripts/esql.js indices "logs-*"
node scripts/esql.js schema "logs-2024.01.01"
```

Execute queries


```bash
node scripts/esql.js raw "FROM logs-* | STATS count = COUNT(*) BY host.name | LIMIT 10"
node scripts/esql.js raw "FROM metrics-* | STATS avg = AVG(cpu.percent) BY hour = DATE_TRUNC(1 hour, @timestamp)" --tsv
```