Use this skill when designing data warehouses, building star or snowflake schemas, implementing slowly changing dimensions (SCDs), writing analytical SQL for Snowflake or BigQuery, creating fact and dimension tables, or planning ETL/ELT pipelines for analytics. Triggers on dimensional modeling, surrogate keys, conformed dimensions, warehouse architecture, data vault, partitioning strategies, materialized views, and any task requiring OLAP schema design or warehouse query optimization.
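For orientation, here is a minimal sketch of the star-schema and SCD Type 2 patterns this skill covers; the dim_customer and fact_orders tables are hypothetical, and DuckDB stands in for the warehouse engine.

```python
import duckdb  # stand-in engine; the same DDL pattern applies to Snowflake or BigQuery

con = duckdb.connect()

# Hypothetical Type 2 slowly changing dimension: a surrogate key plus validity columns.
con.execute("""
    CREATE TABLE dim_customer (
        customer_sk   BIGINT,        -- surrogate key referenced by facts
        customer_id   VARCHAR,       -- natural/business key from the source system
        customer_name VARCHAR,
        segment       VARCHAR,
        valid_from    TIMESTAMP,
        valid_to      TIMESTAMP,     -- NULL (or a far-future date) for the current row
        is_current    BOOLEAN
    )
""")

# Hypothetical fact table joining to the dimension on the surrogate key.
con.execute("""
    CREATE TABLE fact_orders (
        order_id     VARCHAR,
        customer_sk  BIGINT,         -- FK to dim_customer.customer_sk
        order_date   DATE,
        order_amount DECIMAL(12, 2)
    )
""")
```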
Optimizes Snowflake query performance using a query ID from query history. Use when: (1) the user provides a Snowflake query_id (UUID format) to analyze or optimize; (2) the task mentions "slow query", "optimize", "query history", or "query profile" together with a query ID; (3) analyzing query performance metrics such as bytes scanned, spillage, or partition pruning; (4) the user references a previously run query that needs optimization. Fetches the query profile, identifies bottlenecks, and returns optimized SQL with expected improvements.
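A minimal sketch of pulling those metrics for a given query ID, assuming snowflake-connector-python and access to the ACCOUNT_USAGE.QUERY_HISTORY view; the credentials and query_id below are placeholders.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from your own config or secrets store.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="...")

query_id = "01b2c3d4-0000-1111-2222-333344445555"  # hypothetical query_id (UUID format)

cur = conn.cursor()
cur.execute(
    """
    SELECT bytes_scanned,
           bytes_spilled_to_local_storage,
           bytes_spilled_to_remote_storage,
           partitions_scanned,
           partitions_total
    FROM snowflake.account_usage.query_history
    WHERE query_id = %s
    """,
    (query_id,),
)
scanned, spill_local, spill_remote, parts_scanned, parts_total = cur.fetchone()

# A high partitions_scanned / partitions_total ratio suggests poor pruning;
# any spillage suggests the warehouse is undersized or the query needs rework.
print(f"pruning: {parts_scanned}/{parts_total} partitions, spillage: {spill_local + spill_remote} bytes")
```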
Install and initialize task-master for AI-powered task management and specification-driven development. Use this skill when users ask you to parse a new PRD, when starting a new project that needs structured task management, when users mention wanting task breakdown or project planning, or when implementing specification-driven development workflows.
Write correct, performant SQL across all major data warehouse dialects (Snowflake, BigQuery, Databricks, PostgreSQL, etc.). Use when writing queries, optimizing slow SQL, translating between dialects, or building complex analytical queries with CTEs, window functions, or aggregations.
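As an illustration of the kind of query this covers, a small CTE-plus-window-function example; the orders table is hypothetical, and DuckDB is used only so the snippet runs as-is with portable ANSI constructs.

```python
import duckdb  # illustrative engine; the query sticks to syntax shared by the major dialects

con = duckdb.connect()

# Tiny hypothetical orders table so the example runs end to end.
con.execute("""
    CREATE TABLE orders AS
    SELECT * FROM (VALUES
        ('c1', DATE '2024-01-01', 10.0),
        ('c1', DATE '2024-01-02',  5.0),
        ('c2', DATE '2024-01-01',  7.5)
    ) AS t(customer_id, order_date, order_amount)
""")

# CTE plus window function: aggregate per day, then compute a running total per customer.
print(con.execute("""
    WITH daily AS (
        SELECT customer_id, order_date, SUM(order_amount) AS daily_total
        FROM orders
        GROUP BY customer_id, order_date
    )
    SELECT customer_id,
           order_date,
           daily_total,
           SUM(daily_total) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total
    FROM daily
    ORDER BY customer_id, order_date
""").fetchall())
```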
Creates and maintains dlt (data load tool) pipelines from APIs, databases, and other sources. Use when the user wants to build or debug pipelines; use verified sources (e.g. Salesforce, GitHub, Stripe), the declarative REST API source, or custom Python; configure destinations (e.g. DuckDB, BigQuery, Snowflake); implement incremental loading; or edit .dlt config and secrets. Use when the user mentions data ingestion, dlt pipeline, dlt init, rest_api_source, incremental load, or pipeline dashboard.
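A minimal dlt sketch showing a merge-loaded resource with incremental loading into DuckDB; the issues resource and its fields are hypothetical stand-ins for a real source.

```python
import dlt

@dlt.resource(primary_key="id", write_disposition="merge")
def issues(
    # dlt tracks the max `updated_at` seen and passes it back on the next run.
    updated_at=dlt.sources.incremental("updated_at", initial_value="2024-01-01T00:00:00Z"),
):
    # Hypothetical records; a real resource would page through an API or database here.
    yield [
        {"id": 1, "title": "first", "updated_at": "2024-06-01T12:00:00Z"},
        {"id": 2, "title": "second", "updated_at": "2024-06-02T09:30:00Z"},
    ]

pipeline = dlt.pipeline(
    pipeline_name="issues_pipeline",
    destination="duckdb",          # swap for "bigquery", "snowflake", etc.
    dataset_name="issues_data",
)
print(pipeline.run(issues))
```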
Write optimized SQL for your dialect with best practices. Use when translating a natural-language data need into SQL, building a multi-CTE query with joins and aggregations, optimizing a query against a large partitioned table, or getting dialect-specific syntax for Snowflake, BigQuery, Postgres, etc.
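One concrete flavor of the dialect differences involved: equivalent month-truncation expressions rendered into the same aggregate query. The ts column and orders table are placeholders.

```python
# Equivalent "truncate timestamp to month" expressions per dialect; `ts` is a placeholder column.
DATE_TRUNC_BY_DIALECT = {
    "snowflake":  "DATE_TRUNC('month', ts)",
    "postgres":   "DATE_TRUNC('month', ts)",
    "databricks": "DATE_TRUNC('MONTH', ts)",
    "bigquery":   "TIMESTAMP_TRUNC(ts, MONTH)",  # DATE_TRUNC(ts, MONTH) for DATE columns
}

def monthly_totals(dialect: str) -> str:
    """Render the same monthly aggregate for a given dialect (hypothetical `orders` table)."""
    trunc = DATE_TRUNC_BY_DIALECT[dialect]
    return f"SELECT {trunc} AS month, SUM(order_amount) AS total FROM orders GROUP BY 1"

print(monthly_totals("bigquery"))
```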
Iterable platform help — cross-channel customer engagement with Studio journey builder, AI suite (Brand Affinity, STO, Predictive Goals), and Smart Ingest from 23+ data sources. Use when configuring Studio journeys, setting up campaigns or experiments, managing email/SMS/push/in-app/WhatsApp channels, using Iterable AI features, configuring Smart Ingest or Snowflake sync, or troubleshooting Iterable. Do NOT use for general email marketing strategy (use /sales-email-marketing), push notification strategy (use /sales-push-notification), in-app messaging strategy (use /sales-in-app-messaging), transactional email strategy (use /sales-transactional-email), cross-platform deliverability (use /sales-deliverability), or connecting tools generically (use /sales-integration).
Use when "data pipelines", "ETL", "data warehousing", "data lakes", or asking about "Airflow", "Spark", "dbt", "Snowflake", "BigQuery", "data modeling"
Transition from static LLM chats to autonomous agents that execute multi-step tasks. Use this when you need to automate cross-platform reports (e.g., Snowflake to Google Docs), build self-service tools for non-technical teams, or create "anticipatory" engineering workflows that draft PRs based on Slack discussions.
Creates Robot Framework test cases for SnapLogic account creation. Use when the user wants to create accounts (Oracle, PostgreSQL, Snowflake, Kafka, S3, etc.), needs to know what environment variables to configure, or wants to see account test case examples.
Connect Spice to data sources and query across them with federated SQL. Use when connecting to databases (Postgres, MySQL, DynamoDB), data lakes (S3, Delta Lake, Iceberg), warehouses (Snowflake, Databricks), files, APIs, or catalogs; configuring datasets; creating views; writing data; or setting up cross-source queries.
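A rough sketch of querying a locally running Spice runtime over HTTP; the endpoint path, port, and orders dataset are assumptions to check against the Spice docs rather than confirmed details.

```python
import requests

# Assumed defaults: a local Spice runtime exposing its SQL API at this address;
# the `orders` dataset is a hypothetical dataset defined in spicepod.yaml.
SPICE_SQL_URL = "http://localhost:8090/v1/sql"

resp = requests.post(
    SPICE_SQL_URL,
    data="SELECT customer_id, SUM(order_amount) AS total FROM orders GROUP BY customer_id",
    headers={"Content-Type": "text/plain"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # rows returned by the runtime
```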
Creates new Robot Framework test cases for SnapLogic pipeline testing. Use when the user wants to create a new test file, add test cases to existing files, or needs test templates for specific system types (Oracle, PostgreSQL, Snowflake, etc.).