eventhouse-consumption-cli

**Update Check — ONCE PER SESSION (mandatory).** The first time this skill is used in a session, run the check-updates skill before proceeding.

- GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare the local vs. remote package.json version.
- Skip if the check was already performed earlier in this session.

**CRITICAL NOTES**

1. To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath.
2. To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then filter with JMESPath.
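The lookup pattern can be sketched offline. The listing below is a minimal synthetic `/v1/workspaces` response (IDs and names invented for illustration); the `select()` filter is the same one used in the Connection section, and an equivalent JMESPath expression could be passed to `az rest --query` instead.

```shell
# Synthetic workspace listing in the shape returned by GET /v1/workspaces
# (IDs and displayNames are invented for illustration).
cat > /tmp/workspaces.json << 'EOF'
{"value":[{"id":"aaaa-1111","displayName":"Finance"},
          {"id":"bbbb-2222","displayName":"MyWorkspace"}]}
EOF

# Note 1: list all workspaces, then filter by displayName to get the ID.
WS_ID=$(jq -r '.value[] | select(.displayName=="MyWorkspace") | .id' /tmp/workspaces.json)
echo "$WS_ID"   # → bbbb-2222
```

Note 2 (item lookup) is the same pattern applied to the items listing for that workspace.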

eventhouse-consumption-cli — Read-Only KQL Queries via CLI


Table of Contents


| Task | Reference | Notes |
| --- | --- | --- |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | **Mandatory** — READ link first [needed for finding workspace ID by its name, or item ID by its name, item type, and workspace ID] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI is per-item |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires `storage.azure.com` token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | `az login` flows and token acquisition |
| Fabric Control-Plane API via `az rest` | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass `--resource https://api.fabric.microsoft.com` or `az rest` fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via `curl` | COMMON-CLI.md § OneLake Data Access via curl | Use `curl`, not `az rest` (different token audience) |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | `az rest` audience, shell escaping, token expiry |
| Quick Reference: `az rest` Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience ↔ CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which `--resource` + tool for each service |
| Connection Fundamentals | EVENTHOUSE-CONSUMPTION-CORE.md § Connection Fundamentals | Cluster URI discovery, `az rest`, REST API |
| Schema Discovery and Security | EVENTHOUSE-CONSUMPTION-CORE.md § Schema Discovery and Security | Schema Discovery; Security — workspace roles + KQL DB roles |
| Monitoring and Diagnostics | EVENTHOUSE-CONSUMPTION-CORE.md § Monitoring and Diagnostics | |
| Performance Best Practices | EVENTHOUSE-CONSUMPTION-CORE.md § Performance Best Practices | Read before writing KQL — time filters, `has` vs `contains` |
| Common Consumption Patterns | EVENTHOUSE-CONSUMPTION-CORE.md § Common Consumption Patterns | Time-series, Top-N, percentile, dynamic fields |
| Gotchas, Troubleshooting, and Quick Reference | EVENTHOUSE-CONSUMPTION-CORE.md § Gotchas, Troubleshooting, and Quick Reference | Gotchas and Troubleshooting (12 issues); Quick Reference: Consumption Capabilities by Scenario |
| Table and Column Discovery | discovery-queries.md § Table and Column Discovery | Table Discovery, Column Statistics |
| Function and View Discovery | discovery-queries.md § Function and View Discovery | Function Discovery, Materialized View Discovery |
| Policy Discovery | discovery-queries.md § Policy Discovery | |
| External Tables and Ingestion Mappings | discovery-queries.md § External Tables and Ingestion Mappings | External Table Discovery, Ingestion Mapping Discovery |
| Security Discovery | discovery-queries.md § Security Discovery | |
| Database Overview Script | discovery-queries.md § Database Overview Script | |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | Eventhouse-specific `az rest` connection steps |
| Agentic Exploration ("Chat With My Data") | SKILL.md § Agentic Exploration | Start here for data exploration |
| Running Queries | SKILL.md § Running Queries | `az rest`, output formatting, export |
| Monitoring | SKILL.md § Monitoring | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |


Tool Stack


| Tool | Purpose | Install |
| --- | --- | --- |
| az cli | KQL queries and management commands via the Kusto REST API; Fabric control-plane discovery | `winget install Microsoft.AzureCLI` |
| jq | JSON processing and output formatting | `winget install jqlang.jq` |
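A minimal preflight sketch — note the `winget` commands above are Windows-specific; on macOS/Linux, install the same tools via your package manager instead:

```shell
# Confirm the required tools are on PATH before running any recipe below.
for tool in az jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing — install it first"
  fi
done
```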

Connection


Step 1 — Discover KQL Database Query URI


```bash
# Get workspace ID (if not known)
WS_ID=$(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --resource "https://api.fabric.microsoft.com" \
  | jq -r '.value[] | select(.displayName=="MyWorkspace") | .id')

# List KQL Databases and get connection properties
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, id: .id, queryUri: .properties.queryServiceUri, dbName: .properties.databaseName}'
```

Step 2 — Set Connection Variables


```bash
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyKqlDatabase"
```

Step 3 — Verify Connection


**Important — body-file pattern:** KQL queries contain `|` (pipe) characters, which break shell escaping in both bash and PowerShell. Always write the JSON body to a temp file and reference it with `--body @<file>`. This is the recommended approach for all `az rest` KQL calls. On PowerShell, use `@{db="X";csl="..."} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM`, then `--body "@$env:TEMP\kql_body.json"`.

```bash
# Write body to temp file (avoids pipe escaping issues)
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyKqlDatabase","csl":"print Message = 'Connected successfully', Cluster = current_cluster_endpoint(), Timestamp = now()"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
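The verification call above needs a live cluster, but the body-file pattern itself can be sanity-checked offline. This sketch (query text illustrative) shows that the quoted `'EOF'` heredoc writes the body byte-for-byte, so the KQL pipes never reach the shell parser:

```shell
# Quoted heredoc delimiter ('EOF') disables expansion: | stays literal.
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | count"}
EOF

# Both pipes survive inside the file, untouched by the shell.
grep -o '|' /tmp/kql_body.json | wc -l   # → 2
```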

---

Agentic Exploration


"Chat With My Data" — Discovery Sequence


When the user asks to explore or query an Eventhouse without specifying tables:

```kql
Step 1 → .show tables                                    // discover tables
Step 2 → .show table <TABLE> schema as json              // understand columns + types
Step 3 → <TABLE> | take 10                               // see sample data
Step 4 → <TABLE> | summarize count() by bin(Timestamp, 1h) | render timechart  // shape of data
Step 5 → Formulate targeted query based on user's question
```

Schema-Aware Query Generation


After schema discovery, generate queries using actual column names and types:

```kql
// Example: user asks "show me errors in the last hour"
// After discovering table "AppEvents" with columns: Timestamp, Level, Message, Source
AppEvents
| where Timestamp > ago(1h)
| where Level == "Error"
| summarize ErrorCount = count() by Source, bin(Timestamp, 5m)
| order by ErrorCount desc
```

Running Queries


Via `az rest`

Always use the temp-file pattern for `--body` — KQL pipes (`|`) break inline shell escaping.

```bash
# Run a KQL query
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | count"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

Output Formatting


```bash
# Pretty-print results as a table with jq
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0] | [.Columns[].ColumnName] as $cols | .Rows[] | [$cols, .] | transpose | map({(.[0]): .[1]}) | add'
```
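To see what that jq expression produces without a live cluster, here is a sketch against a synthetic response in the Kusto v1 shape (table name, columns, and values invented): `Columns` carries the names, `Rows` holds positional arrays, and the `transpose`/`add` pipeline zips them into one object per row.

```shell
# Synthetic Kusto v1 response (shape only; values are made up).
cat > /tmp/sample_response.json << 'EOF'
{"Tables":[{"TableName":"Table_0",
  "Columns":[{"ColumnName":"EventType","DataType":"String"},
             {"ColumnName":"count_","DataType":"Int64"}],
  "Rows":[["Click",42],["View",7]]}]}
EOF

# Same formatting expression as above, applied to the sample file.
jq -c '.Tables[0] | [.Columns[].ColumnName] as $cols
       | .Rows[] | [$cols, .] | transpose | map({(.[0]): .[1]}) | add' \
  /tmp/sample_response.json
# → {"EventType":"Click","count_":42}
# → {"EventType":"View","count_":7}
```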

```bash
# Save results to file
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json
```

---

Monitoring


```kql
// Active queries
.show queries

// Recent commands (last hour)
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 80), Duration, State
| order by StartedOn desc

// Ingestion failures (for context when data seems stale)
.show ingestion failures
| where FailedOn > ago(24h)
| summarize count() by ErrorCode
| top 5 by count_
```

Must / Prefer / Avoid / Troubleshooting


Must


- **Always include time filters** — `where Timestamp > ago(...)` must be present on time-series tables.
- **Discover schema before querying** — run `.show tables` and `.show table T schema as json` first.
- **Use `has` for term search** — indexed and fast; only fall back to `contains` for substring needs.
- **Verify cluster URI** — KQL Database URIs are per-item; always resolve via the Fabric REST API.

Prefer


- **`az rest`** for CLI query sessions; the Fabric KQL MCP server for agent-integrated workflows.
- **`project` early** to drop unneeded columns before aggregation.
- **`materialize()`** when a sub-expression is used multiple times.
- **`take 100`** for initial exploration; avoid full table scans.
- **`render timechart`** for time-series; `render piechart` for distributions.

Avoid


- **`contains` on large tables** — full scan, not indexed. Use `has` or `has_cs`.
- **`join` without filtering both sides first** — causes memory explosion.
- **`SELECT *` equivalent** (`project` all columns) on wide tables.
- **Missing `bin()` in time-series `summarize`** — produces one row per unique timestamp.
- **Hardcoded cluster URIs** — always resolve from the Fabric REST API or environment variables.

Troubleshooting


| Symptom | Fix |
| --- | --- |
| `az rest` auth fails | Run `az login` first; ensure `--resource "https://kusto.kusto.windows.net"` is set |
| Empty results on a valid table | Check database context; may need `database("name").table` |
| Query timeout | Add a tighter time filter; check `.show queries` for competing queries |
| Forbidden (403) | Request the `viewer` role on the KQL Database |
| Results truncated | Default limit is 500K rows; add `set truncationmaxrecords = N;` before the query |
| KQL pipe `\|` breaks PowerShell or bash | Never inline KQL in `--body`. Write JSON to a temp file and use `--body @file.json` (see Running Queries) |

Examples


Example 1: Discover and Query


```bash
# 1. Set connection variables (after discovering the URI via Step 1)
CLUSTER_URI="https://<your-cluster>.kusto.fabric.microsoft.com"
DB_NAME="SalesDB"
```

```bash
# 2. Discover tables
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

```bash
# 3. Explore schema
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show table Orders schema as json"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

```bash
# 4. Sample data
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":"Orders | take 10"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

```kql
// 5. Analytical query (via az rest --body @file)
Orders
| where OrderDate > ago(30d)
| summarize
    TotalOrders = count(),
    TotalRevenue = sum(Amount)
    by bin(OrderDate, 1d)
| render timechart
```

Example 2: Cross-Database Query


```kql
// Query across KQL databases in the same Eventhouse
let orders = database("SalesDB").Orders | where OrderDate > ago(7d);
let products = database("CatalogDB").Products;
orders
| join kind=inner (products) on ProductId
| summarize Revenue = sum(Amount) by ProductName
| top 10 by Revenue desc
```

Example 3: Export Results to File


```bash
# Run query and save results to JSON
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1d) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json
```

```bash
# Convert to CSV with jq
cat results.json \
  | jq -r '.Tables[0] | (.Columns | map(.ColumnName)), (.Rows[]) | @csv' > results.csv
```

---

Agent Integration Notes


- This skill is **read-only** — it does not create, alter, or drop database objects.
- For authoring operations (table management, ingestion, policies), delegate to **eventhouse-authoring-cli**.
- For cross-workload orchestration (Spark + SQL + KQL), delegate to the **FabricDataEngineer** agent.
- The Fabric KQL MCP server (`fabric-kql` in `mcp-setup/mcp-config-template.json`) can be used as an alternative to `az rest` for agent-integrated query execution.