Execute KQL management commands (table management, ingestion, policies, functions, materialized views) against Fabric Eventhouse and KQL Databases via CLI. Use when the user wants to:
1. Create or alter KQL tables, columns, or functions
2. Ingest data into an Eventhouse (inline, from storage, streaming)
3. Configure retention, caching, or partitioning policies
4. Create or manage materialized views and update policies
5. Manage data mappings for ingestion pipelines
6. Deploy KQL schema via scripts

Triggers: "create kql table", "kql ingestion", "ingest into eventhouse", "kql function", "materialized view", "kql retention policy", "eventhouse schema", "kql authoring", "create eventhouse table", "kql mapping"
npx skill4agent add microsoft/skills-for-fabric eventhouse-authoring-cli

Update Check — ONCE PER SESSION (mandatory): the first time this skill is used in a session, run the check-updates skill before proceeding.
- GitHub Copilot CLI / VS Code: invoke skill.check-updates.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
- Skip if the check was already performed earlier in this session.
CRITICAL NOTES
- To find workspace details (including its ID) from a workspace name: list all workspaces, then use JMESPath filtering
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then use JMESPath filtering
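As a sketch of the lookup described above (the workspace names and IDs below are invented), the List Workspaces response can be filtered with jq to resolve an ID from a display name; az's global --query flag accepts an equivalent JMESPath expression.

```shell
# Hypothetical List Workspaces response; in practice this JSON comes from:
#   az rest --method GET --url https://api.fabric.microsoft.com/v1/workspaces ...
RESP='{"value":[{"id":"aaa-111","displayName":"Sales"},{"id":"bbb-222","displayName":"Telemetry"}]}'

# Select the workspace whose displayName matches, and print its id
WS_ID=$(echo "$RESP" | jq -r '.value[] | select(.displayName=="Telemetry") | .id')
echo "$WS_ID"   # → bbb-222
```

The same filter as JMESPath on the az call itself would be `--query "value[?displayName=='Telemetry'].id | [0]" -o tsv`.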
| Task | Reference | Notes |
|---|---|---|
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for workspace/item ID resolution] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | Hierarchy, Finding Things in Fabric |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI, KQL Ingestion URI |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; KQL audience: https://kusto.kusto.windows.net |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | List Workspaces, List Items, Item Creation |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which |
| Authoring Capability Matrix | EVENTHOUSE-AUTHORING-CORE.md § Authoring Capability Matrix | Read first — KQL Database vs Shortcut (read-only); connection requires Admin/Ingestor role |
| Table Management and Schema Evolution | EVENTHOUSE-AUTHORING-CORE.md § Table Management and Schema Evolution | Create Table, Create-Merge (idempotent), Alter / Rename / Drop, Schema Evolution (Rename, Swap/Blue-Green) |
| Ingestion and Data Mappings | EVENTHOUSE-AUTHORING-CORE.md § Ingestion and Data Mappings | Inline, Set-or-Append/Replace, From Storage, Streaming, Data Mappings (CSV, JSON) |
| Policies | EVENTHOUSE-AUTHORING-CORE.md § Policies | Retention, Caching, Partitioning, Merge |
| Materialized Views | EVENTHOUSE-AUTHORING-CORE.md § Materialized Views | Create, Alter, Lifecycle, Supported aggregations |
| Stored Functions and Update Policies | EVENTHOUSE-AUTHORING-CORE.md § Stored Functions and Update Policies | Stored Functions, Update Policies (auto-transform on ingestion) |
| External Tables | EVENTHOUSE-AUTHORING-CORE.md § External Tables | OneLake / ADLS External Table, Query External Table |
| Permission Model | EVENTHOUSE-AUTHORING-CORE.md § Permission Model | Database Roles, Grant Permissions |
| Authoring Gotchas and Troubleshooting | EVENTHOUSE-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting Reference | 10 numbered issues with cause + fix |
| Bash Templates | authoring-script-templates.md § Bash Templates | Create Table + Ingest, Schema Deployment, Export Schema, Set Retention/Caching |
| PowerShell Templates | authoring-script-templates.md § PowerShell Templates | Create Table + Ingest, Schema Deployment |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | |
| Authoring Scope | SKILL.md § Authoring Scope | |
| Execute KQL Command | SKILL.md § Execute KQL Command | |
| Table Management via CLI | SKILL.md § Table Management via CLI | Create Table, Add Column, Drop Table |
| Data Ingestion via CLI | SKILL.md § Data Ingestion via CLI | Inline, From Storage, From OneLake, Set-or-Append |
| Policies via CLI | SKILL.md § Policies via CLI | Retention, Caching, Streaming Ingestion |
| Materialized Views via CLI | SKILL.md § Materialized Views via CLI | |
| Functions and Update Policies via CLI | SKILL.md § Functions and Update Policies via CLI | Create Function, Create Update Policy |
| Schema Evolution via CLI | SKILL.md § Schema Evolution via CLI | Safe Schema Deployment Script, Export Current Schema |
| Monitoring Authoring Operations | SKILL.md § Monitoring Authoring Operations | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Agentic Workflows | SKILL.md § Agentic Workflows | Exploration Before Authoring, Script Generation Workflow |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |
| Tool | Purpose | Install |
|---|---|---|
| az cli | KQL management commands via Kusto REST API; Fabric control-plane discovery | |
| jq | JSON processing and output formatting | |
# Discover KQL Database query URI
WS_ID="<workspace-id>"
az rest --method GET \
--url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
--resource "https://api.fabric.microsoft.com" \
| jq '.value[] | {name: .displayName, queryUri: .properties.queryServiceUri}'
# Set connection variables
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyDatabase"
# Verify admin access
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show database ${DB_NAME} principals | where Role == 'Admin'"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq '.Tables[0].Rows'

| Operation | Command Pattern |
|---|---|
| Create table | .create-merge table |
| Add column | .alter-merge table |
| Drop table | .drop table ... ifexists |
| Ingest data | .ingest inline into table / .ingest into table |
| Set retention | .alter table ... policy retention |
| Set caching | .alter table ... policy caching hot = <N>d |
| Create function | .create-or-alter function |
| Create materialized view | .create materialized-view |
| Create update policy | .alter table ... policy update |
| Create data mapping | .create table ... ingestion json mapping |
Run the command with az rest against CLUSTER_URI, targeting database DB, with the body in /tmp/kql_body.json:

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"<KQL management command>"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq '.Tables[0].Rows'

Nested JSON — For commands whose KQL contains embedded JSON (policies, mappings), use << 'EOF' (single-quoted) to prevent shell expansion of backslash-escaped quotes, and replace ${DB} with the literal database name.
PowerShell equivalent — @{db=$Database;csl=$Command} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM, then --body "@$env:TEMP\kql_body.json". See PowerShell Templates.
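The pattern above can be wrapped in a small helper. This is a sketch: kql_body and kql_exec are names invented here, and CLUSTER_URI and DB are assumed to be set. Building the body with jq --arg also sidesteps the nested-JSON escaping problem, because jq quotes the command string correctly whatever it contains.

```shell
# Build the {db, csl} request body as valid JSON, whatever the command contains
kql_body() {
  jq -cn --arg db "$DB" --arg csl "$1" '{db: $db, csl: $csl}'
}

# POST a management command to the cluster and print the result rows
kql_exec() {
  kql_body "$1" > /tmp/kql_body.json
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
}

# Usage: kql_exec '.show tables'
```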
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic, Duration: real)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter-merge table Events (Region: string)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".drop table Events ifexists"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest inline into table Events <| 2025-01-15T10:00:00Z,Login,user1,{},0.5\n2025-01-15T10:01:00Z,Click,user2,{},0.2"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'https://mystorage.blob.core.windows.net/data/events.csv.gz;impersonate') with (format='csv', ingestionMappingReference='EventsCsvMapping', ignoreFirstRecord=true)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'abfss://workspace@onelake.dfs.fabric.microsoft.com/lakehouse.Lakehouse/Files/events.parquet;impersonate') with (format='parquet')"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".set-or-append CleanEvents <| RawEvents | where IsValid == true | project Timestamp, EventType, UserId"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
# Set 365-day retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
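As an alternative sketch for the same retention command, jq can build the policy object and the request body, so no backslash escaping is needed by hand (MyDB is a placeholder database name):

```shell
# Construct the retention policy JSON programmatically
POLICY=$(jq -cn '{SoftDeletePeriod: "365.00:00:00", Recoverability: "Enabled"}')

# Embed it in the KQL command (single quotes around the JSON literal, per KQL syntax)
CMD=".alter table Events policy retention '${POLICY}'"

# Assemble the request body without manual quote escaping
jq -cn --arg db "MyDB" --arg csl "$CMD" '{db: $db, csl: $csl}' > /tmp/kql_body.json
```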
# Keep last 30 days in hot cache
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy caching hot = 30d"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy streamingingestion enable"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
# Create materialized view with backfill
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create materialized-view with (backfill=true) HourlyEventCounts on table Events { Events | summarize Count = count(), LastSeen = max(Timestamp) by EventType, bin(Timestamp, 1h) }"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
# Check health
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show materialized-view HourlyEventCounts statistics"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-or-alter function with (docstring='Parse raw events', folder='ETL') ParseRawEvents() { RawEvents | extend Parsed = parse_json(RawData) | project Timestamp = todatetime(Parsed.timestamp), EventType = tostring(Parsed.eventType), UserId = tostring(Parsed.userId) }"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table ParsedEvents policy update @'[{\"IsEnabled\":true,\"Source\":\"RawEvents\",\"Query\":\"ParseRawEvents()\",\"IsTransactional\":true}]'"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
Save the commands to a .kql file and deploy them one at a time with az rest:

# deploy_schema.kql contains one command per line:
# .create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic)
# .create-merge table ParsedEvents (Timestamp: datetime, EventType: string, UserId: string, PageName: string)
# .alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'
# .alter table Events policy caching hot = 30d
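A quick sanity pass over such a file can catch stray lines before deployment (a sketch; the sample file below stands in for the real deploy_schema.kql):

```shell
# Create a sample schema file (stand-in for the real deploy_schema.kql)
cat > /tmp/deploy_schema.kql << 'EOF'
// tables
.create-merge table Events (Timestamp: datetime, EventType: string)
.alter table Events policy caching hot = 30d
EOF

# Every non-comment, non-blank line must begin with '.', else abort the deploy
BAD=$(grep -vE '^(//|$)' /tmp/deploy_schema.kql | grep -cv '^\.')
echo "invalid lines: $BAD"   # → invalid lines: 0
```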
# Execute each command from the file (see "Execute KQL Command" section)
while IFS= read -r cmd; do
[[ "$cmd" =~ ^// ]] && continue # skip comment lines
[[ -z "$cmd" ]] && continue # skip blank lines
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"${cmd}"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq '.Tables[0].Rows'
done < deploy_schema.kql

Export the current schema:

cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show database ${DB} schema as csl script"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq -r '.Tables[0].Rows[][0]' > current_schema.kql

// Recent management commands
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 100), State, Duration
| order by StartedOn desc
// Ingestion failures
.show ingestion failures
| where FailedOn > ago(24h)
| summarize FailureCount = count() by ErrorCode, Table
| order by FailureCount desc
// Materialized view health
.show materialized-views
| project Name, IsEnabled, IsHealthy, MaterializedTo

Must / Prefer / Avoid — PREFER idempotent .create-merge table and .create-or-alter function (or .create table ifnotexists) over plain .create table; ensure the Admin or Ingestor role before authoring; append ;impersonate to storage URIs; execute via az rest; export schema with .show database DB schema as csl script; use .drop table ... ifexists; note that .alter table replaces the whole schema while .alter-merge table only adds columns.

| Symptom | Fix |
|---|---|
| | Use |
| Ingestion succeeds but table empty | Check data mappings: |
| Update policy not firing | Verify function runs standalone; check |
| Request | |
| Materialized view stuck | Check |
| OneLake ingest auth error | Add |
Step 0 → Is the request specific? Does it name a table, operation, and/or schema?
→ NO → Ask: "What would you like to set up? Options: create tables,
configure policies, set up ingestion mappings, create materialized views."
STOP — do not proceed until user specifies.
→ YES → Continue to Step 1.
Step 1 → .show tables details // what exists?
Step 2 → .show table <TABLE> schema as json // current columns
Step 3 → .show table <TABLE> policy retention // current policies
Step 4 → Plan changes (create-merge, alter, etc.)
Step 5 → Execute changes
Step 6 → Verify: .show table <TABLE> schema as json // confirm changes

Step 1 → Understand requirements from user
Step 2 → Generate KQL management commands
Step 3 → Save to .kql file
Step 4 → Deploy via az rest (one command at a time)
Step 5 → Verify deployed state matches intent

# Create table
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table SensorData (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real, Location: dynamic)"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
# Set retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table SensorData policy retention '{\"SoftDeletePeriod\":\"90.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
# Set caching
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table SensorData policy caching hot = 7d"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
# Create JSON mapping
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".create table SensorData ingestion json mapping 'SensorJsonMapping' '[{\"column\":\"Timestamp\",\"path\":\"$.ts\",\"datatype\":\"datetime\"},{\"column\":\"DeviceId\",\"path\":\"$.deviceId\",\"datatype\":\"string\"},{\"column\":\"Temperature\",\"path\":\"$.temp\",\"datatype\":\"real\"},{\"column\":\"Humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"Location\",\"path\":\"$.location\",\"datatype\":\"dynamic\"}]'"}
EOF

Execute /tmp/kql_body.json — see Execute KQL Command.
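The mapping array can likewise be assembled with jq rather than hand-escaped quotes. This sketch uses an abbreviated three-column mapping; column names and paths match the example above:

```shell
# Build the ingestion mapping as a JSON array; jq guarantees valid quoting
MAPPING=$(jq -cn '[
  {column: "Timestamp",   path: "$.ts",       datatype: "datetime"},
  {column: "DeviceId",    path: "$.deviceId", datatype: "string"},
  {column: "Temperature", path: "$.temp",     datatype: "real"}
]')

# Single-quote the array inside the KQL command, per mapping syntax
CMD=".create table SensorData ingestion json mapping 'SensorJsonMapping' '${MAPPING}'"
jq -cn --arg db "MyDB" --arg csl "$CMD" '{db: $db, csl: $csl}' > /tmp/kql_body.json
```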
// 1. Target table
.create-merge table ParsedLogs (Timestamp: datetime, Level: string, Message: string, Source: string)
// 2. Transform function
.create-or-alter function ParseRawLogs() {
RawLogs
| extend J = parse_json(RawMessage)
| project
Timestamp = todatetime(J.timestamp),
Level = tostring(J.level),
Message = tostring(J.message),
Source = tostring(J.source)
}
// 3. Attach update policy
.alter table ParsedLogs policy update
@'[{"IsEnabled":true,"Source":"RawLogs","Query":"ParseRawLogs()","IsTransactional":true}]'