REQUIRED before generating any DQL queries. Provides critical syntax rules, common pitfalls, and patterns. Load this skill BEFORE writing DQL to avoid syntax errors.
```
npx skill4agent add dynatrace/dynatrace-for-ai dt-dql-essentials
```

| Use case | Reference |
|---|---|
| Useful expressions in DQL | references/useful-expressions.md |
| Smartscape topology navigation syntax and patterns | references/smartscape-topology-navigation.md |
| Dynatrace Semantic Dictionary: field namespaces, data models, stability levels, query patterns, and best practices | references/semantic-dictionary.md |
| Various applications of summarize and makeTimeseries commands | references/summarization.md |
| Operators (`in`, time alignment) | references/operators.md |
| Array and timeseries manipulation (creation, modifications, use in filters) using DQL | references/iterative-expressions.md |
| Query optimization (filter early, time ranges, field selection, performance) | references/optimization.md |
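Before loading the references, a minimal end-to-end pipeline shows the shape of DQL: commands chained with `|`, grouping fields in curly braces. This is an illustrative sketch assuming standard Dynatrace log fields (`loglevel`, `dt.entity.host`); adjust to your environment.

```
fetch logs, from: now() - 1h
| filter loglevel == "ERROR"
| summarize error_count = count(), by: {dt.entity.host}
| sort error_count desc
```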
| Description | Items |
|---|---|
| Data Types | |
| Parameter Value Types | |
| Commands | |
| Functions — Aggregation | |
| Functions — Array | |
| Functions — Bitwise | |
| Functions — Boolean | |
| Functions — Cast | |
| Functions — Constant | |
| Functions — Conversion | |
| Functions — Create | |
| Functions — Cryptographic | |
| Functions — Entities | |
| Functions — Time series aggregation for expressions | |
| Functions — Flow | |
| Functions — General | |
| Functions — Get | |
| Functions — Iterative | |
| Functions — Mathematical | |
| Functions — Network | |
| Functions — Smartscape | |
| Functions — String | |
| Functions — Time | |
| Functions — Time series aggregation for metrics | |
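As a sketch of combining commands with aggregation, string, and conditional functions from the catalog above (field names assumed; `countIf` inside `summarize` mirrors its use with `makeTimeseries`):

```
fetch logs
| filter contains(content, "timeout")
| summarize total = count(),
            errors = countIf(loglevel == "ERROR"),
            by: {dt.entity.host}
| fieldsAdd error_rate = errors / total * 100
```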
| ❌ Wrong | ✅ Right | Issue |
|---|---|---|
| | | No array literal syntax |
| | | Multiple grouping fields require curly braces |
| | | There's no function for |
| | | Mid-string wildcards not allowed; use |
| | | |
| | | The correct function in DQL is called |
| | | |
| | | |
| Chained | | Each |
| | | DQL functions use named parameters — positional args cause errors |
| | | Log severity field is `loglevel` |
| | | Fields with special characters must use backticks |
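Several of these pitfalls applied correctly in one query. A sketch: `my-custom.field` is a hypothetical field name used only to demonstrate backtick quoting, and `dt.entity.host` is assumed as the host entity field.

```
fetch logs, from: now() - 2h                      // named parameter, not positional
| filter loglevel == "ERROR"                      // severity lives in loglevel
| filter matchesValue(dt.tags, "env:production")  // matchesValue(), not contains(), for tags
| fields timestamp, content, `my-custom.field`    // backticks for special characters
| summarize count(), by: {dt.entity.host, loglevel}  // multiple grouping fields need {}
```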
| Fetch Command | Data Model | Key Fields / Notes |
|---|---|---|
| | Distributed tracing | |
| | Log events | |
| | Davis / infra events | |
| | Business events | |
| | Security events | |
| | RUM sessions | |
| `timeseries` | Metrics | NOT `fetch dt.metrics` |
Entity fields live in the `dt.entity.*` and `dt.smartscape.*` namespaces, and Smartscape topology is queried with `smartscapeNodes`. Metric keys containing special characters must be backtick-quoted, e.g. ``timeseries sum(`my.metric-name`)``. Records are read with `fetch <data_object>`; metrics are read with `timeseries`. There is no `fetch dt.metric` or `fetch dt.metrics`.

| Data Object | Description |
|---|---|
| | Log entries |
| | Distributed traces / spans |
| | Platform events |
| | Business events |
| | RUM individual events (page views, clicks, requests, errors) |
| | RUM session-level aggregates |
| | Session replay recordings |
| | Security events |
| | Application snapshots |
| | Smartscape entity fields (e.g., |
| | DAVIS-detected problems |
| | DAVIS events |
Metric queries use `timeseries` rather than `fetch`, and can group directly by Smartscape dimensions:

```
timeseries cpu = avg(dt.host.cpu.usage), by: {dt.smartscape.host}
```

Smartscape topology is queried with `smartscapeNodes`, not `fetch`:

```
smartscapeNodes "HOST"
```

To discover available data objects, run `fetch dt.system.data_objects | fields name, display_name, type`. Metric metadata lives in `metric.series`:

```
fetch metric.series, from: now() - 1h
| filter contains(metric.key, "replay")
| summarize count(), by: {metric.key}
| sort `count()` desc
```

There is no `fetch dt.metric` or `fetch dt.metrics`. Entity IDs appear in fields such as `entity.id` and `dt.smartscape.host`:

| Entity | ID field |
|---|---|
| Host | |
| Service | |
| Process | |
| Kubernetes cluster | |
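The ID fields above can be used directly in filters. A sketch assuming `dt.entity.host` as the host ID field and a made-up host ID (verify both against your environment):

```
fetch logs
| filter dt.entity.host == "HOST-ABCDEF1234567890"
| summarize count(), by: {loglevel}
```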
When querying via `smartscapeNodes` instead of `fetch`, the field names differ:

| Entity | Field name |
|---|---|
| Host | |
| Service | |
| K8s cluster | |
Convert entity IDs with `toSmartscapeId()`. Filter on tags with `matchesValue()` against `dt.tags`, not `contains()`:

```
| filter matchesValue(dt.tags, "env:production")
```

Results of a `lookup` arrive in `lookup.`-prefixed fields; when chaining lookups, `fieldsRename` the fields you need before the next `lookup`, or they get overwritten:

```
fetch bizevents
// Step 1: First lookup — enrich orders with product info
| lookup [fetch bizevents
    | filter event.type == "product_catalog"
    | fields product_id, category],
  sourceField: product_id, lookupField: product_id
// Step 2: Rename BEFORE next lookup — or lookup.category gets wiped
| fieldsRename product_category = lookup.category
// Step 3: Second lookup — lookup.* is now clean for new results
| lookup [fetch bizevents
    | filter event.type == "warehouse_stock"
    | fields category, warehouse_region],
  sourceField: product_category, lookupField: category
// Both product_category and lookup.warehouse_region are available
```

To chart event data (logs, spans, bizevents), use `makeTimeseries`; `timeseries` is for stored metrics only:

```
fetch logs
| makeTimeseries count = count(), by: {loglevel}, interval: 5m
```

| Parameter | Required | Description |
|---|---|---|
| | Yes | Aggregation to compute per bucket (e.g. `count()`) |
| | No | Bucket size, e.g. `5m` |
| | No | Optional grouping dimensions (same |
| | No | Explicit time range; defaults to the query timeframe |
| | No | Number of time buckets (alternative to `interval:`) |
| | No | Field to use as the timestamp; defaults to `timestamp` |
| | No | Timeframe expression for bucket calculation; alternative to `from:` / `to:` |
| | No | Boolean; when |
```
fetch logs
| makeTimeseries
    total = count(),
    errors = countIf(loglevel == "ERROR"),
    interval: 5m,
    by: {k8s.cluster.name}
| fieldsAdd error_rate = errors / total * 100
```

The `spread:` parameter spreads each record over every bucket its timeframe field overlaps:

```
smartscapeNodes "HOST"
| makeTimeseries concurrently_existing_hosts = count(), spread: lifetime
```

Here `spread: lifetime` counts a host in each bucket that intersects its lifetime.

Time ranges are set with `from:` / `to:` relative to `now()`, or with `timeframe:`; the `@` suffix aligns to a unit boundary:

```
from:now()-1h@h, to:now()@h              // last complete hour
from:now()-1d@d, to:now()@d              // yesterday complete
from:now()@M                             // this month so far, till now
from:now()-2h@h                          // go back 2 hours, then align to hour boundary
from:"2024-01-15T08:00:00Z", to:"2024-01-15T09:00:00Z"
```

Timestamp and duration arithmetic: `timestamp - timestamp → duration`; `2h / 1m` yields `120.0`; `no_of_h * 1h → duration`. Use `formatTimestamp` to render timestamps, `in` for set membership, and `@` for time alignment.