Found 26 Skills
Integrate external APIs and services with error handling, retry logic, and data transformation. Use when connecting to payment processors, messaging services, analytics platforms, or other third-party providers.
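A minimal sketch of the retry-with-backoff pattern this skill covers, using only the Python standard library; the endpoint URL, retry count, and backoff values are illustrative assumptions, not part of the skill itself.

```python
import time
import urllib.request
import urllib.error

def fetch_with_retry(url, retries=3, backoff=1.0):
    """GET a URL, retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.URLError:
            if attempt == retries - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(backoff * 2 ** attempt)  # 1s, 2s, 4s, ...

# Illustrative endpoint; any third-party API would be called the same way.
data = fetch_with_retry("https://api.example.com/v1/charges")
```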
Creates dbt models following project conventions. Use when working with dbt models for: (1) creating new models (any layer; discovers the project's naming conventions first); (2) tasks that mention "create", "build", "add", "write", "new", or "implement" with model, table, or SQL; (3) modifying existing model logic, columns, joins, or transformations; (4) implementing a model from schema.yml specs or expected output requirements. Discovers project conventions before writing, and runs dbt build (not just compile) to verify.
Pyspark Transformer - Auto-activating skill for Data Pipelines. Triggers on: pyspark transformer. Part of the Data Pipelines skill category.
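As a hedged illustration of the kind of transformation such a skill might generate (the input path, column names, and aggregation are assumptions, not the skill's actual output):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-transform").getOrCreate()

# Hypothetical input: raw order events with a string timestamp column.
orders = spark.read.json("s3://bucket/raw/orders/")  # path is illustrative

transformed = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)                      # drop refunds/noise
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spent"))
)
transformed.write.mode("overwrite").parquet("s3://bucket/curated/orders/")
```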
Structured data extraction from web pages using claude-in-chrome MCP with sequential-thinking planning. Focus on READ operations, data transformation, and pagination handling for multi-page extraction.
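The skill itself drives the claude-in-chrome MCP server, but the pagination pattern it handles is the standard fetch-until-empty loop, sketched here with requests and BeautifulSoup for illustration; the URL, page parameter, and CSS selector are assumptions.

```python
import requests
from bs4 import BeautifulSoup

def extract_all_pages(base_url, max_pages=50):
    """Walk numbered pages, collecting rows until a page comes back empty."""
    rows = []
    for page in range(1, max_pages + 1):
        resp = requests.get(base_url, params={"page": page}, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        items = soup.select("div.result")   # selector is illustrative
        if not items:
            break                           # past the last page
        rows.extend(item.get_text(strip=True) for item in items)
    return rows
```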
dbt (data build tool) patterns for model organization, incremental strategies, and testing.
Pandas for time series analysis, OrcaFlex results processing, and marine engineering data workflows.
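A small pandas sketch of the time-series operations this implies, with made-up tension values standing in for OrcaFlex results; the column name and frequencies are assumptions.

```python
import pandas as pd

# Hypothetical sensor log; in practice this would come from OrcaFlex output.
ts = pd.DataFrame(
    {"tension_kN": [101.2, 99.8, 103.4, 98.1]},
    index=pd.date_range("2024-01-01", periods=4, freq="100ms"),
)

# Resample to 1-second windows, keeping the peak tension per window,
# and compute a time-based rolling mean over the raw samples.
peaks = ts["tension_kN"].resample("1s").max()
rolling_mean = ts["tension_kN"].rolling("500ms").mean()
```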
Convert between physical units (length, mass, temperature, time, etc.). Use for scientific calculations, data transformation, or unit standardization.
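A minimal sketch of factor-based conversion: routing every conversion through one base unit (metres here) keeps the table linear in the number of units. Temperature is the exception, since Celsius/Fahrenheit conversions are affine (factor plus offset), not purely multiplicative.

```python
# Conversion factors to the SI base unit; a small subset for illustration.
LENGTH_TO_M = {"m": 1.0, "km": 1000.0, "ft": 0.3048, "in": 0.0254}

def convert_length(value, src, dst):
    """Convert via metres so only one factor per unit is needed."""
    return value * LENGTH_TO_M[src] / LENGTH_TO_M[dst]

assert abs(convert_length(1.0, "ft", "in") - 12.0) < 1e-9
```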
Work with JSONB data: queries, indexing, and transformations.
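A hedged sketch of querying JSONB from Python with psycopg2; the DSN, table, and column names are assumptions. The ->> operator extracts a field as text, @> tests containment, and a GIN index on the column (CREATE INDEX ... USING gin (attrs)) is what makes @> lookups fast.

```python
import psycopg2

conn = psycopg2.connect("dbname=app")   # DSN is illustrative
with conn, conn.cursor() as cur:
    # ->> extracts a JSONB field as text; @> tests JSONB containment.
    cur.execute(
        """
        SELECT id, attrs ->> 'color'
        FROM products
        WHERE attrs @> %s::jsonb
        """,
        ('{"in_stock": true}',),
    )
    print(cur.fetchall())
```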
Guide for creating Nushell plugins in Rust using nu_plugin and nu_protocol crates. Use when users want to build custom Nushell commands, extend Nushell with new functionality, create data transformations, or integrate external tools/APIs into Nushell. Covers project setup, command implementation, streaming data, custom values, and testing.
Ingest and transform large data files (CSV/JSON) into Elasticsearch indices. Stream-based processing for files up to 30GB, cross-version migration (ES 8.x ↔ 9.x), custom JavaScript transformations, and reindexing with transforms. Use when you need to load data into Elasticsearch, migrate indices, or transform data during ingestion.
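A minimal sketch of the streaming pattern, using the official elasticsearch-py bulk helpers so the whole file never sits in memory; the endpoint, file name, index name, and per-row transform are assumptions.

```python
import csv
from elasticsearch import Elasticsearch
from elasticsearch.helpers import streaming_bulk

es = Elasticsearch("http://localhost:9200")   # endpoint is illustrative

def actions(path, index):
    """Yield one index action per CSV row; rows stream straight through."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["price"] = float(row["price"])   # hypothetical transform
            yield {"_index": index, "_source": row}

for ok, result in streaming_bulk(es, actions("big.csv", "products")):
    if not ok:
        print("failed:", result)
```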
Process large datasets efficiently using Laravel Eloquent's chunk(), chunkById(), lazy(), and cursor() methods to reduce memory consumption and improve performance.
Prefect Flow Builder - Auto-activating skill for Data Pipelines. Triggers on: prefect flow builder. Part of the Data Pipelines skill category.
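A hedged sketch of the minimal kind of Prefect flow such a skill builds; the flow name, task bodies, and retry settings are assumptions.

```python
from prefect import flow, task

@task(retries=3, retry_delay_seconds=10)
def extract(path: str) -> list[dict]:
    # Stand-in for reading real source data from `path`.
    return [{"id": 1, "value": 42}]

@task
def transform(rows: list[dict]) -> list[dict]:
    # Trivial transform for illustration: double every value.
    return [{**r, "value": r["value"] * 2} for r in rows]

@flow(name="example-pipeline")   # flow name is illustrative
def pipeline(path: str = "data.csv"):
    rows = extract(path)
    transform(rows)

if __name__ == "__main__":
    pipeline()
```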