eventhouse-consumption-cli
Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the check-updates skill before proceeding.
- GitHub Copilot CLI / VS Code: invoke the check-updates skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
- Skip if the check was already performed earlier in this session.
CRITICAL NOTES
- To find workspace details (including its ID) from a workspace name: list all workspaces, then use JMESPath filtering.
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then use JMESPath filtering.
eventhouse-consumption-cli — Read-Only KQL Queries via CLI
Table of Contents
| Task | Reference | Notes |
|---|---|---|
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for finding workspace id by its name or item id by its name, item type, and workspace id] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI is per-item |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience ↔ CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which |
| Connection Fundamentals | EVENTHOUSE-CONSUMPTION-CORE.md § Connection Fundamentals | Cluster URI discovery, |
| Schema Discovery and Security | EVENTHOUSE-CONSUMPTION-CORE.md § Schema Discovery and Security | Schema Discovery, Security — workspace roles + KQL DB roles |
| Monitoring and Diagnostics | EVENTHOUSE-CONSUMPTION-CORE.md § Monitoring and Diagnostics | |
| Performance Best Practices | EVENTHOUSE-CONSUMPTION-CORE.md § Performance Best Practices | Read before writing KQL — time filters, |
| Common Consumption Patterns | EVENTHOUSE-CONSUMPTION-CORE.md § Common Consumption Patterns | Time-series, Top-N, percentile, dynamic fields |
| Gotchas, Troubleshooting, and Quick Reference | EVENTHOUSE-CONSUMPTION-CORE.md § Gotchas, Troubleshooting, and Quick Reference | Gotchas and Troubleshooting (12 issues), Quick Reference: Consumption Capabilities by Scenario |
| Table and Column Discovery | discovery-queries.md § Table and Column Discovery | Table Discovery, Column Statistics |
| Function and View Discovery | discovery-queries.md § Function and View Discovery | Function Discovery, Materialized View Discovery |
| Policy Discovery | discovery-queries.md § Policy Discovery | |
| External Tables and Ingestion Mappings | discovery-queries.md § External Tables and Ingestion Mappings | External Table Discovery, Ingestion Mapping Discovery |
| Security Discovery | discovery-queries.md § Security Discovery | |
| Database Overview Script | discovery-queries.md § Database Overview Script | |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | eventhouse-specific |
| Agentic Exploration ("Chat With My Data") | SKILL.md § Agentic Exploration | Start here for data exploration |
| Running Queries | SKILL.md § Running Queries | |
| Monitoring | SKILL.md § Monitoring | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |
Tool Stack
| Tool | Purpose | Install |
|---|---|---|
| az cli | KQL queries and management commands via Kusto REST API; Fabric control-plane discovery | |
| jq | JSON processing and output formatting | |
Connection
Step 1 — Discover KQL Database Query URI
```bash
# Get workspace ID (if not known)
WS_ID=$(az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --resource "https://api.fabric.microsoft.com" \
  | jq -r '.value[] | select(.displayName=="MyWorkspace") | .id')
```
```bash
# List KQL Databases and get connection properties
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, id: .id, queryUri: .properties.queryServiceUri, dbName: .properties.databaseName}'
```
Step 2 — Set Connection Variables
```bash
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyKqlDatabase"
```

Step 3 — Verify Connection
Important — body file pattern: KQL queries contain | (pipe) characters, which break shell escaping in both bash and PowerShell. Always write the JSON body to a temp file and reference it with --body @<file>. This is the recommended approach for all az rest KQL calls. On PowerShell, use @{db="X";csl="..."} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM, then --body "@$env:TEMP\kql_body.json".

```bash
# Write body to temp file (avoids pipe escaping issues)
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyKqlDatabase","csl":"print Message = 'Connected successfully', Cluster = current_cluster_endpoint(), Timestamp = now()"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
---

Agentic Exploration
"Chat With My Data" — Discovery Sequence
When the user asks to explore or query an Eventhouse without specifying tables:
```kql
Step 1 → .show tables                               // discover tables
Step 2 → .show table <TABLE> schema as json         // understand columns + types
Step 3 → <TABLE> | take 10                          // see sample data
Step 4 → <TABLE> | summarize count() by bin(Timestamp, 1h) | render timechart   // shape of data
Step 5 → Formulate targeted query based on user's question
```

Schema-Aware Query Generation
After schema discovery, generate queries using actual column names and types:
```kql
// Example: user asks "show me errors in the last hour"
// After discovering table "AppEvents" with columns: Timestamp, Level, Message, Source
AppEvents
| where Timestamp > ago(1h)
| where Level == "Error"
| summarize ErrorCount = count() by Source, bin(Timestamp, 5m)
| order by ErrorCount desc
```

Running Queries
Via az rest

Always use the temp-file pattern for --body — KQL pipes (|) break inline shell escaping.
```bash
# Run a KQL query
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | count"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
Output Formatting

```bash
# Pretty-print results as a table with jq
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0] | [.Columns[].ColumnName] as $cols | .Rows[] | [$cols, .] | transpose | map({(.[0]): .[1]}) | add'
```
```bash
# Save results to file
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1h) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json
```
---

Monitoring
```kql
// Active queries
.show queries

// Recent commands (last hour)
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 80), Duration, State
| order by StartedOn desc

// Ingestion failures (for context when data seems stale)
.show ingestion failures
| where FailedOn > ago(24h)
| summarize count() by ErrorCode
| top 5 by count_
```

Must / Prefer / Avoid / Troubleshooting
Must
- Always include time filters — where Timestamp > ago(...) must be present on time-series tables.
- Discover schema before querying — run .show tables and .show table T schema as json first.
- Use has for term search — indexed and fast; only fall back to contains for substring needs.
- Verify cluster URI — KQL Database URIs are per-item; always resolve via the Fabric REST API.
Prefer
- az rest for CLI query sessions; the Fabric KQL MCP server for agent-integrated workflows.
- project early to drop unneeded columns before aggregation.
- materialize() when a sub-expression is used multiple times.
- take 100 for initial exploration; avoid full table scans.
- render timechart for time-series; render piechart for distribution.
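As a sketch of the project-early and materialize() guidance (the Events table and its columns are hypothetical):

```kql
// Hypothetical Events table — materialize() caches the shared sub-expression
let base = materialize(
    Events
    | where Timestamp > ago(1d)
    | project EventType, DurationMs);   // drop unneeded columns before aggregating
base
| summarize Count = count() by EventType
| join kind=inner (base | summarize AvgMs = avg(DurationMs) by EventType) on EventType
| top 10 by Count
```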
Avoid
- contains on large tables — full scan, not indexed. Use has or has_cs.
- join without filtering both sides first — causes memory explosion.
- A SELECT * equivalent (project all columns) on wide tables.
- Missing bin() in a time-series summarize — produces one row per unique timestamp.
- Hardcoded cluster URIs — always resolve from the Fabric REST API or environment variables.
Troubleshooting
| Symptom | Fix |
|---|---|
| Run |
| Empty results on valid table | Check database context; may need |
| Query timeout | Add tighter time filter; check |
| Request |
| Results truncated | Default limit is 500K rows; add |
| KQL pipe | Never inline KQL in --body; use the temp-file pattern |
Examples
Example 1: Discover and Query
```bash
# 1. Set connection variables (after discovering URI via Step 1)
CLUSTER_URI="https://<your-cluster>.kusto.fabric.microsoft.com"
DB_NAME="SalesDB"
```
```bash
# 2. Discover tables
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show tables"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
```bash
# 3. Explore schema
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show table Orders schema as json"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
```bash
# 4. Sample data
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":"Orders | take 10"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

```kql
// 5. Analytical query (via az rest --body @file)
Orders
| where OrderDate > ago(30d)
| summarize
    TotalOrders = count(),
    TotalRevenue = sum(Amount)
    by bin(OrderDate, 1d)
| render timechart
```

Example 2: Cross-Database Query
```kql
// Query across KQL databases in the same Eventhouse
let orders = database("SalesDB").Orders | where OrderDate > ago(7d);
let products = database("CatalogDB").Products;
orders
| join kind=inner (products) on ProductId
| summarize Revenue = sum(Amount) by ProductName
| top 10 by Revenue desc
```

Example 3: Export Results to File
```bash
# Run query and save results to JSON
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":"Events | where Timestamp > ago(1d) | summarize count() by EventType"}
EOF

az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/query" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  --output-file results.json
```
```bash
# Convert to CSV with jq
cat results.json \
  | jq -r '.Tables[0] | (.Columns | map(.ColumnName)), (.Rows[]) | @csv' > results.csv
```

---

Agent Integration Notes
- This skill is read-only — it does not create, alter, or drop database objects.
- For authoring operations (table management, ingestion, policies), delegate to eventhouse-authoring-cli.
- For cross-workload orchestration (Spark + SQL + KQL), delegate to the FabricDataEngineer agent.
- The Fabric KQL MCP server (fabric-kql in mcp-setup/mcp-config-template.json) can be used as an alternative to az rest for agent-integrated query execution.