Found 41 Skills
Debugs and fixes dbt errors systematically. Use when working with dbt errors for: (1) Task mentions "fix", "error", "broken", "failing", "debug", "wrong", or "not working" (2) Compilation Error, Database Error, or test failures occur (3) Model produces incorrect output or unexpected results (4) Need to troubleshoot why a dbt command failed. Reads the full error, checks upstream first, and runs dbt build (not just compile) to verify the fix.
Documents dbt models and columns in schema.yml. Use when working with dbt documentation for: (1) Adding model descriptions or column definitions to schema.yml (2) Task mentions "document", "describe", "description", "dbt docs", or "schema.yml" (3) Explaining business context, grain, meaning of data, or business rules (4) Preparing for dbt docs generate or improving model discoverability. Matches existing project documentation style and conventions before writing.
Develops and troubleshoots dbt incremental models. Use when working with incremental materialization for: (1) Creating new incremental models (choosing strategy, unique_key, partition) (2) Task mentions "incremental", "append", "merge", "upsert", or "late arriving data" (3) Troubleshooting incremental failures (merge errors, partition pruning, schema drift) (4) Optimizing incremental performance or deciding between table and incremental. Guides strategy selection and handles common incremental gotchas.
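For orientation, a minimal merge-strategy incremental model might look like the sketch below; the model, column, and timestamp names (stg_events, event_id, loaded_at) are placeholders, and merge is only one of the strategies this skill can recommend.

```sql
-- Hypothetical incremental model; names and strategy are illustrative only.
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='event_id'
) }}

select
    event_id,
    user_id,
    event_type,
    loaded_at
from {{ ref('stg_events') }}

{% if is_incremental() %}
  -- On incremental runs, only pick up rows newer than what the target already holds.
  where loaded_at > (select max(loaded_at) from {{ this }})
{% endif %}
```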
Expert-level dbt (data build tool) guidance: models, tests, documentation, incremental models, macros, and Jinja templating.
Expert guidance for creating, modifying, and optimizing dbt pipelines for BigQuery. Use this skill whenever the user asks to generate or modify a dbt model or project. Activate this skill when the user: creates, modifies, or troubleshoots **dbt models or pipelines**; needs to **optimize SQL** within a dbt project; or is **setting up a new dbt project** or configuring an existing one.
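As a rough sketch of the BigQuery-specific tuning this skill covers, a model config might pair partitioning with clustering as below; the field names and granularity are assumptions, not settings from any real project.

```sql
-- Illustrative only: partition and cluster choices depend on actual query patterns.
{{ config(
    materialized='table',
    partition_by={
        'field': 'event_date',
        'data_type': 'date',
        'granularity': 'day'
    },
    cluster_by=['customer_id']
) }}

select
    customer_id,
    date(event_timestamp) as event_date,
    count(*) as event_count
from {{ ref('stg_events') }}
group by 1, 2
```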
Creates dbt models following project conventions. Use when working with dbt models for: (1) Creating new models (any layer; discovers the project's naming conventions first) (2) Task mentions "create", "build", "add", "write", "new", or "implement" with model, table, or SQL (3) Modifying existing model logic, columns, joins, or transformations (4) Implementing a model from schema.yml specs or expected output requirements. Discovers project conventions before writing. Runs dbt build (not just compile) to verify.
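A convention-following staging model often ends up looking like the sketch below; the source, model, and column names (crm, stg_customers, customer_id) are hypothetical, since this skill reads the actual project's conventions before writing anything.

```sql
-- Hypothetical staging model (models/staging/stg_customers.sql); real naming follows the project.
with source as (

    select * from {{ source('crm', 'customers') }}

),

renamed as (

    select
        id as customer_id,
        first_name,
        last_name,
        created_at
    from source

)

select * from renamed
```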
Safely refactors dbt models with downstream impact analysis. Use when restructuring dbt models for: (1) Task mentions "refactor", "restructure", "extract", "split", "break into", or "reorganize" (2) Extracting CTEs to intermediate models or creating macros (3) Modifying model logic that has downstream consumers (4) Renaming columns, changing types, or reorganizing model dependencies. Analyzes all downstream dependencies BEFORE making changes.
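As one example of the extractions this skill handles, a CTE embedded in a mart can be promoted to its own intermediate model so every downstream consumer refs a single definition; the model names here (int_order_totals, stg_order_items, fct_orders) are hypothetical.

```sql
-- Hypothetical extraction: models/intermediate/int_order_totals.sql
-- This logic previously lived as a CTE inside fct_orders; every downstream
-- ref should be checked before and after the move.
select
    order_id,
    sum(amount) as order_total
from {{ ref('stg_order_items') }}
group by 1
```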
dbt (data build tool) patterns for model organization, incremental strategies, and testing.
Use this skill when the user or agent wants to take a DBTI personality assessment, do a trading personality quiz, or get an investment style label, or when the request mentions: DBTI test, 做题 (take the quiz), DBTI测评 (DBTI assessment), 投资人格测试 (investment personality test), 交易风格测评 (trading style assessment), Agent人格标签 (agent personality label), agent personality, or investment personality test.
Primary entry point for building, managing, and orchestrating data pipelines on Google Cloud. Guides users to the appropriate skill for dbt, Dataflow (Apache Beam), Dataform, Spark (Dataproc Serverless), BigQuery Data Transfer Service (DTS), or orchestration pipelines using Cloud Composer. Clarifies requirements and resolves ambiguity when creating, updating, and running data pipelines.
Wire a semantic layer into a nao agent so that metric queries are routed through a single source of truth. Supports dbt MetricFlow (dbt Cloud with Semantic Layer), Snowflake (views or semantic views via MCP), an in-house nao YAML semantic layer, or other tools (via MCP discovery). Installs the right MCP server, updates RULES.md to route metric queries through the semantic layer, and (for the nao YAML option) generates starter metric files. Use after a first round of tests has shown the agent struggling with metric reliability. Do not use for raw rule writing (write-context-rules) or first-time setup (setup-context).
Builds data infrastructure — ETL/ELT pipelines, data warehousing, stream processing, data quality, orchestration (Airflow/Dagster), and analytics engineering (dbt). Use when the user asks to build data pipelines, set up ETL/ELT workflows, design a data warehouse, configure stream processing, or implement analytics engineering with dbt, Airflow, or Dagster.