This skill should be used when the user asks to "validate a DataFrame with pandera", "write a pandera schema", "use pandera DataFrameModel", "add data validation to a pipeline", or needs guidance on pandera best practices for data quality.
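A minimal pandera DataFrameModel sketch; the column names and constraints are hypothetical, chosen only to illustrate the API:

```python
import pandas as pd
import pandera as pa
from pandera.typing import Series

class OrderSchema(pa.DataFrameModel):
    # Hypothetical columns and constraints for illustration
    order_id: Series[int] = pa.Field(unique=True)
    amount: Series[float] = pa.Field(ge=0)
    status: Series[str] = pa.Field(isin=["placed", "shipped", "completed"])

    class Config:
        strict = True  # reject columns not declared in the schema

df = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [9.99, 24.50, 0.0],
    "status": ["placed", "shipped", "completed"],
})

validated = OrderSchema.validate(df)  # raises pa.errors.SchemaError on failure
```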
Data pipeline expert for ETL, Apache Spark, Airflow, dbt, and data quality
Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data. Use when the user wants to set up, fix, or evaluate analytics tracking (GA4, GTM, product analytics, events, conversions, UTMs). This skill focuses on measurement strategy, signal quality, and validation, not just firing events.
Adds schema tests and data quality validation to dbt models. Use when working with dbt tests for: (1) adding or modifying tests in schema.yml files; (2) tasks that mention "test", "validate", "data quality", "unique", "not_null", or "accepted_values"; (3) ensuring data integrity via primary keys, foreign keys, and relationships; (4) debugging test failures or understanding why `dbt test` failed. Matches existing project test patterns and YAML style before adding new tests.
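A schema.yml sketch with the tests named above; the model and column names are hypothetical, and newer dbt versions (1.8+) also accept `data_tests:` as the key:

```yaml
version: 2

models:
  - name: orders            # hypothetical model name
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - not_null
          - relationships:        # foreign-key check against another model
              to: ref('customers')
              field: customer_id
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'completed', 'returned']
```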
pytest, data validation, Great Expectations, and quality assurance for data systems
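A minimal pytest sketch using Great Expectations' legacy pandas-dataset API (newer releases moved to a context/validator API); the fixture data and column names are hypothetical:

```python
import great_expectations as ge
import pandas as pd
import pytest

@pytest.fixture
def users_df():
    # Hypothetical data; a real suite would load actual pipeline output
    return pd.DataFrame({
        "user_id": [1, 2, 3],
        "email": ["a@example.com", "b@example.com", "c@example.com"],
    })

def test_users_data_quality(users_df):
    df = ge.from_pandas(users_df)  # wraps the frame in a legacy PandasDataset
    assert df.expect_column_values_to_be_unique("user_id").success
    assert df.expect_column_values_to_not_be_null("email").success
```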
Principal backend engineering intelligence for Python AI/ML systems. Actions: plan, design, build, implement, review, fix, optimize, refactor, debug, secure, scale ML services and pipelines. Focus: data quality, reproducibility, reliability, performance, security, observability, model evaluation, MLOps.
Analyze datasets to discover patterns, anomalies, and relationships. Use when exploring data files, generating statistical summaries, checking data quality, or creating visualizations. Supports CSV, Excel, JSON, Parquet, and more.
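A minimal exploration sketch with pandas, assuming a hypothetical sales.csv; the 3-sigma outlier rule is one common convention, not the only choice:

```python
import pandas as pd

df = pd.read_csv("sales.csv")  # hypothetical input file

print(df.shape)                     # rows x columns
print(df.dtypes)                    # column types
print(df.describe(include="all"))   # summary statistics for every column
print(df.isna().mean().sort_values(ascending=False))  # null fraction per column

# Flag numeric values more than 3 standard deviations from the column mean
numeric = df.select_dtypes("number")
outlier_counts = ((numeric - numeric.mean()).abs() > 3 * numeric.std()).sum()
print(outlier_counts)
```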
Use when implementing data governance frameworks, building data catalogs, establishing data lineage, defining data quality rules, or setting up data stewardship programs. Covers metadata management, data quality, and compliance.
Optimize provider selection, routing, and credit usage across 150+ enrichment sources for company/contact intelligence.
Bronze/Silver/Gold layer design patterns and templates for building scalable data lakehouse architectures. Includes incremental processing, data quality checks, and optimization strategies.
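A minimal PySpark sketch of the layer flow, assuming hypothetical paths and columns; a production lakehouse would typically use Delta or Iceberg with incremental merges rather than full overwrites:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingest, stored as-is (path and schema are hypothetical)
bronze = spark.read.json("s3://lake/bronze/orders/")

# Silver: deduplicate, enforce basic quality rules, add audit metadata
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("_ingested_at", F.current_timestamp())
)

# Gold: business-level aggregate ready for analytics consumers
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))

silver.write.mode("overwrite").parquet("s3://lake/silver/orders/")
gold.write.mode("overwrite").parquet("s3://lake/gold/customer_ltv/")
```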
Profile datasets to understand schema, quality, and characteristics. Use when analyzing data files (CSV, JSON, Parquet), discovering dataset properties, assessing data quality, or when user mentions data profiling, schema detection, data analysis, or quality metrics. Provides basic and intermediate profiling including distributions, uniqueness, and pattern detection.
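A small profiling helper in pandas, assuming a hypothetical events.parquet; it computes the uniqueness and null metrics mentioned above, one row per column:

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row of profile metrics per column."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_pct": df.isna().mean().round(3),
        "unique": df.nunique(),
        "unique_pct": (df.nunique() / len(df)).round(3),
        "sample": df.iloc[0] if len(df) else None,  # first value as an example
    })

df = pd.read_parquet("events.parquet")  # hypothetical input
print(profile(df))
```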
Audit and improve CRM data quality by identifying missing fields, inconsistent values, duplicate records, and stale data
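A pandas sketch of the three audit checks, assuming a hypothetical CRM export with email, phone, company, and last_activity columns:

```python
import pandas as pd

# Hypothetical CRM export; column names are assumptions
contacts = pd.read_csv("crm_contacts.csv", parse_dates=["last_activity"])

# Missing required fields: fraction of null values per field
missing = contacts[["email", "phone", "company"]].isna().mean()
print("Missing-field rates:\n", missing)

# Likely duplicates: records sharing the same normalized email
contacts["email_norm"] = contacts["email"].str.strip().str.lower()
dupes = contacts[contacts.duplicated("email_norm", keep=False)]
print(f"{len(dupes)} records share an email with another record")

# Stale records: no activity in the last 180 days (threshold is arbitrary)
cutoff = pd.Timestamp.now() - pd.Timedelta(days=180)
stale = contacts[contacts["last_activity"] < cutoff]
print(f"{len(stale)} stale records")
```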