Found 49 Skills
PostgreSQL database documentation - SQL queries, database design, administration, performance tuning, and advanced features. Use when working with PostgreSQL databases, writing SQL, or managing database systems.
Deep-dive data profiling for a specific table. Use when the user asks to profile a table, wants statistics about a dataset, asks about data quality, or needs to understand a table's structure and content. Requires a table name.
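A minimal sketch of the kind of profiling query this skill might run, assuming a PostgreSQL database and a hypothetical orders table with status and created_at columns:

    -- Hypothetical table/column names; PostgreSQL syntax assumed.
    SELECT
      count(*)                               AS row_count,
      count(*) FILTER (WHERE status IS NULL) AS null_status_rows,
      count(DISTINCT status)                 AS distinct_statuses,
      min(created_at)                        AS earliest_row,
      max(created_at)                        AS latest_row
    FROM orders;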
Expert-level PostgreSQL database administration, advanced queries, performance tuning, and production operations
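As one illustration of the performance-tuning side, a sketch that lists the most expensive statements via the pg_stat_statements extension (assumes the extension is installed; column names follow PostgreSQL 13+):

    -- Requires: CREATE EXTENSION pg_stat_statements; plus shared_preload_libraries config.
    SELECT query,
           calls,
           total_exec_time,   -- total_time on PostgreSQL 12 and earlier
           mean_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;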
Quick data freshness check. Use when the user asks if data is up to date, when a table was last updated, if data is stale, or needs to verify data currency before using it.
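A minimal freshness-check sketch, assuming a hypothetical events table with an updated_at timestamp column (PostgreSQL syntax):

    -- Hypothetical table/column; flags data as stale if nothing changed in 24 hours.
    SELECT max(updated_at)                                 AS last_update,
           now() - max(updated_at)                         AS age,
           now() - max(updated_at) > interval '24 hours'   AS is_stale
    FROM events;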
SQLiteData queries, @Table models, Point-Free SQLite, RETURNING clause, FTS5 full-text search, CloudKit sync, CTEs, JSON aggregation, @DatabaseFunction
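The Swift-side @Table and @DatabaseFunction APIs are library-specific, but the FTS5 piece reduces to plain SQLite SQL; a sketch assuming a hypothetical notes table:

    -- SQLite FTS5: hypothetical virtual table mirroring a notes table.
    CREATE VIRTUAL TABLE notes_fts USING fts5(title, body);

    INSERT INTO notes_fts (rowid, title, body)
    SELECT id, title, body FROM notes;

    -- Full-text search, best matches first (bm25 rank ascending).
    SELECT rowid, title
    FROM notes_fts
    WHERE notes_fts MATCH 'cloudkit sync'
    ORDER BY rank;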
Expert in Microsoft SQL Server development and administration. Use when writing T-SQL queries, optimizing database performance, designing schemas, configuring SQL Server, or integrating SQL Server with Node.js using mssql package.
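For the T-SQL side, a sketch of OFFSET/FETCH pagination against a hypothetical dbo.Orders table (the Node.js mssql integration is omitted here):

    -- T-SQL: page 3 at 25 rows per page; table and columns are hypothetical.
    SELECT OrderId, CustomerId, OrderDate, TotalDue
    FROM dbo.Orders
    ORDER BY OrderDate DESC, OrderId DESC
    OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY;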
SQL database queries, joins, aggregations, subqueries, and optimization. Use for .sql files and database operations.
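A small sketch combining a join, an aggregation, and a subquery, using hypothetical customers and orders tables:

    -- Customers whose total spend exceeds the overall average order value.
    SELECT c.customer_id,
           c.name,
           SUM(o.amount) AS total_spend
    FROM customers AS c
    JOIN orders    AS o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.name
    HAVING SUM(o.amount) > (SELECT AVG(amount) FROM orders)
    ORDER BY total_spend DESC;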
PostgreSQL database helper. Use when writing SQL queries, exploring schema, or working with the database.
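A typical schema-exploration sketch against the standard information_schema catalog (the target table name is hypothetical):

    -- List columns, types, and nullability for one table.
    SELECT column_name, data_type, is_nullable, column_default
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND table_name   = 'orders'
    ORDER BY ordinal_position;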
BigQuery Expert Engineer Skill - Comprehensive guide for GoogleSQL queries, data management, performance optimization, and cost management. Use when: running bq commands (query, load, extract), writing GoogleSQL queries (functions, JOINs, CTEs), designing partitioned/clustered tables, or using BigQuery ML or external data sources.
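A sketch of the partitioned/clustered table design the entry mentions, in GoogleSQL with hypothetical dataset and column names:

    -- Partition by event date, cluster by the most selective filter columns.
    CREATE TABLE my_dataset.events
    (
      event_ts   TIMESTAMP,
      user_id    STRING,
      event_name STRING,
      payload    JSON
    )
    PARTITION BY DATE(event_ts)
    CLUSTER BY user_id, event_name;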
Execute ES|QL (Elasticsearch Query Language) queries. Use when the user wants to query Elasticsearch data, analyze logs, aggregate metrics, explore data, or create charts and dashboards from ES|QL results.
Execute read-only T-SQL queries against Fabric Data Warehouse, Lakehouse SQL Endpoints, and Mirrored Databases via CLI. Default skill for any lakehouse data query (row counts, SELECT, filtering, aggregation) unless the user explicitly requests PySpark or Spark DataFrames. Use when the user wants to: (1) query warehouse/lakehouse data, (2) count rows or explore lakehouse tables, (3) discover schemas/columns, (4) generate T-SQL scripts, (5) monitor SQL performance, (6) export results to CSV/JSON. Triggers: "warehouse", "SQL query", "T-SQL", "query warehouse", "show warehouse tables", "show lakehouse tables", "query lakehouse", "lakehouse table", "how many rows", "count rows", "SQL endpoint", "describe warehouse schema", "generate T-SQL script", "warehouse performance", "export SQL data", "connect to warehouse", "lakehouse data", "explore lakehouse".
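Two of the listed tasks (row counts and schema discovery) reduce to plain read-only T-SQL; a sketch with a hypothetical lakehouse table name:

    -- Row count for one lakehouse table (read-only).
    SELECT COUNT(*) AS row_count FROM dbo.sales;

    -- Discover its columns via INFORMATION_SCHEMA.
    SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'sales'
    ORDER BY ORDINAL_POSITION;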
Generates optimized read queries using Dapper. Includes multi-mapping for joins, pagination, dynamic filtering, CTEs, and best practices for high-performance data access.
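The C# Dapper calls are omitted here; the SQL such a skill would execute for a filtered, paginated read might look like the following sketch (T-SQL, hypothetical table, parameters supplied by Dapper):

    -- Filtered page plus total count in one round trip.
    WITH filtered AS (
        SELECT OrderId, CustomerId, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE (@CustomerId IS NULL OR CustomerId = @CustomerId)
    )
    SELECT *, COUNT(*) OVER () AS TotalRows
    FROM filtered
    ORDER BY OrderDate DESC
    OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY;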