Stream-JSON chaining for multi-agent pipelines, data transformation, and sequential workflows.
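As a minimal sketch of the chaining pattern (an illustration, not this skill's implementation), each stage of a stream-JSON pipeline can read newline-delimited JSON on stdin, transform each record, and write it to stdout, so stages compose with ordinary pipes. The `name` field and stage tag below are hypothetical.

```python
import json
import sys

# One stage of a stream-JSON (NDJSON) chain: one JSON object per input line,
# transformed and re-emitted for the next stage to consume.
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue  # tolerate blank lines between records
    record = json.loads(line)
    record["name"] = record.get("name", "").strip().lower()  # hypothetical transform
    record["stage"] = "normalize"                            # tag the producing stage
    sys.stdout.write(json.dumps(record) + "\n")
```

Stages written this way chain as `producer | normalize.py | enrich.py`, each one agnostic of what runs before or after it.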
Ingest and transform data files (CSV/JSON/Parquet/Arrow IPC) into Elasticsearch with stream processing, custom transforms, and cross-version reindexing. Use when loading files, batch importing data, or migrating indices across versions — not for general ingest pipeline design or bulk API patterns.
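A minimal sketch of the CSV-to-Elasticsearch path using the official Python client's bulk helper; the host, index name, and `price` coercion below are assumptions for illustration, not part of the skill.

```python
import csv
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def actions(path: str, index: str):
    """Stream CSV rows as bulk-index actions, applying a custom transform."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["price"] = float(row["price"])  # hypothetical per-row transform
            yield {"_index": index, "_source": row}

# Passing a generator means the whole file never has to fit in memory.
ok, errors = bulk(es, actions("products.csv", "products"), raise_on_error=False)
print(f"indexed {ok} docs, {len(errors)} errors")
```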
Integrate external APIs and services with error handling, retry logic, and data transformation. Use when connecting to payment processors, messaging services, analytics platforms, or other third-party providers.
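One common way to get retry behavior in Python (an illustration, not this skill's required stack) is mounting a `urllib3` `Retry` policy on a `requests` session; the endpoint and payload are hypothetical, and `allowed_methods` requires urllib3 1.26+.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=3,
    backoff_factor=0.5,  # exponential backoff: 0.5s, 1s, 2s
    status_forcelist=(429, 500, 502, 503, 504),
    # Real payment APIs also need idempotency keys before retrying POSTs.
    allowed_methods=("GET", "POST"),
)
session.mount("https://", HTTPAdapter(max_retries=retry))

resp = session.post(
    "https://api.example-payments.test/v1/charges",  # hypothetical endpoint
    json={"amount_cents": 1099, "currency": "usd"},
    timeout=10,
)
resp.raise_for_status()
charge = resp.json()
```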
Master of the Modern Utility Toolbelt, specializing in AI-enhanced CLI tooling, structured data transformation, and advanced Unix forensics.
Creates dbt models following project conventions. Use when working with dbt models for: (1) Creating new models (any layer - discovers project's naming conventions first) (2) Task mentions "create", "build", "add", "write", "new", or "implement" with model, table, or SQL (3) Modifying existing model logic, columns, joins, or transformations (4) Implementing a model from schema.yml specs or expected output requirements. Discovers project conventions before writing. Runs dbt build (not just compile) to verify.
Pyspark Transformer - Auto-activating skill for Data Pipelines. Triggers on: "pyspark transformer". Part of the Data Pipelines skill category.
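A small sketch of the transformer idea in PySpark: pure functions chained with `DataFrame.transform` (available since Spark 3.0). The input path and column names are made up for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transformer-sketch").getOrCreate()

df = spark.read.csv("events.csv", header=True, inferSchema=True)  # hypothetical input

def with_event_date(df):
    """Derive an event_date column from a timestamp."""
    return df.withColumn("event_date", F.to_date("event_ts"))

def only_completed(df):
    """Keep rows whose status is 'completed'."""
    return df.filter(F.col("status") == "completed")

# DataFrame.transform chains the steps without intermediate variables.
result = df.transform(with_event_date).transform(only_completed)
result.write.mode("overwrite").parquet("out/events")
```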
Structured data extraction from web pages using claude-in-chrome MCP with sequential-thinking planning. Focus on READ operations, data transformation, and pagination handling for multi-page extraction.
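The skill itself drives claude-in-chrome MCP, which the sketch below does not reproduce; it shows only the generic multi-page READ loop, with `fetch`, `extract_rows`, and `next_page_url` as hypothetical callables supplied by page-specific extraction code.

```python
from typing import Callable, Optional

def extract_all(
    fetch: Callable[[str], dict],
    extract_rows: Callable[[dict], list],
    next_page_url: Callable[[dict], Optional[str]],
    start_url: str,
    max_pages: int = 50,
) -> list:
    """Generic pagination loop: fetch a page, extract rows, follow the next link."""
    url, rows = start_url, []
    for _ in range(max_pages):  # hard cap guards against pagination loops
        page = fetch(url)
        rows.extend(extract_rows(page))
        url = next_page_url(page)
        if url is None:  # no next link means the last page was reached
            break
    return rows
```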
Transform raw data into analytical assets using ETL/ELT patterns, SQL (dbt), Python (pandas/polars/PySpark), and orchestration (Airflow). Use when building data pipelines, implementing incremental models, migrating from pandas to polars, or orchestrating multi-step transformations with testing and quality checks.
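As a small illustration of the pandas-to-polars migration this skill covers, here is the same aggregation in both libraries (polars spells it `group_by` since 0.19); the data is invented.

```python
import pandas as pd
import polars as pl

# pandas version of a simple grouped sum...
pdf = pd.DataFrame({"region": ["eu", "eu", "us"], "revenue": [10.0, 20.0, 5.0]})
pandas_out = pdf.groupby("region", as_index=False)["revenue"].sum()

# ...and the polars equivalent, expression-based and index-free.
plf = pl.DataFrame({"region": ["eu", "eu", "us"], "revenue": [10.0, 20.0, 5.0]})
polars_out = plf.group_by("region").agg(pl.col("revenue").sum())

print(pandas_out)
print(polars_out)
```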
dbt (data build tool) patterns for model organization, incremental strategies, and testing.
Pandas for time series analysis, OrcaFlex results processing, and marine engineering data workflows.
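A minimal pandas sketch of the kind of workflow described: resampling a fabricated 10 Hz tension signal and tracking a rolling peak. The column name and rates are assumptions, not actual OrcaFlex output.

```python
import pandas as pd

# Fabricated 10 Hz signal standing in for an OrcaFlex tension time history.
idx = pd.date_range("2024-01-01", periods=600, freq="100ms")
tension = pd.Series(range(600), index=idx, name="tension_kN", dtype="float64")

per_second = tension.resample("1s").mean()      # downsample to 1 s means
rolling_peak = per_second.rolling("10s").max()  # peak over a sliding 10 s window
print(rolling_peak.tail())
```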
Convert between physical units (length, mass, temperature, time, etc.). Use for scientific calculations, data transformation, or unit standardization.
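A sketch of one sound design for such converters: route every conversion through a canonical unit (Kelvin, for temperature) so each unit needs one mapping rather than one per pair. The function is illustrative, not this skill's API.

```python
def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between C, F, and K via Kelvin as the common intermediate."""
    to_kelvin = {
        "C": lambda v: v + 273.15,
        "F": lambda v: (v - 32) * 5 / 9 + 273.15,
        "K": lambda v: v,
    }
    from_kelvin = {
        "C": lambda k: k - 273.15,
        "F": lambda k: (k - 273.15) * 9 / 5 + 32,
        "K": lambda k: k,
    }
    return from_kelvin[to_unit](to_kelvin[from_unit](value))

assert abs(convert_temperature(100, "C", "F") - 212.0) < 1e-9  # boiling point check
```

The same two-map shape extends to factor-based quantities (length, mass, time) with a base unit such as metres, kilograms, or seconds.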