Develops data processing pipelines, integrations, and machine learning scenarios in SAP Data Intelligence Cloud. Use when building graphs/pipelines with operators, integrating ABAP/S4HANA systems, creating replication flows, developing ML scenarios with JupyterLab, or using Data Transformation Language functions. Covers Gen1/Gen2 operators, subengines (Python, Node.js, C++), structured data operators, and repository objects.
Expert guidance for working with Dagster and the dg CLI. ALWAYS use before doing any task that requires knowledge specific to Dagster, or that references assets, materialization, or data pipelines. Common tasks may include creating a new project, adding new definitions, understanding the current project structure, answering general questions about the codebase (finding asset, schedule, sensor, component or job definitions), debugging issues, or providing deep information about a specific Dagster concept.
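For orientation, a minimal sketch of the asset pattern Dagster centers on: two software-defined assets, with the downstream depending on the upstream by parameter name. The asset names and the in-process `materialize` call are illustrative; real projects are typically scaffolded and run through the dg CLI.

```python
from dagster import asset, materialize

@asset
def raw_numbers() -> list[int]:
    # Upstream asset; stands in for data loaded from a source system.
    return [1, 2, 3, 4, 5]

@asset
def summed_numbers(raw_numbers: list[int]) -> int:
    # Downstream asset: Dagster wires the dependency via the parameter name.
    return sum(raw_numbers)

if __name__ == "__main__":
    # Materialize both assets in-process, which is handy for local testing.
    result = materialize([raw_numbers, summed_numbers])
    print(result.success)
```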
Prefect Flow Builder - Auto-activating skill for Data Pipelines. Triggers on: prefect flow builder. Part of the Data Pipelines skill category.
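A minimal sketch of the kind of flow this skill would scaffold, assuming Prefect 2 or later; the task and flow names are illustrative.

```python
from prefect import flow, task

@task
def extract() -> list[int]:
    # Stand-in for pulling rows from a source system.
    return [1, 2, 3]

@task
def load(rows: list[int]) -> None:
    print(f"loaded {len(rows)} rows")

@flow(log_prints=True)
def etl():
    # Task calls inside a flow are tracked as a run graph by Prefect.
    load(extract())

if __name__ == "__main__":
    etl()  # runs locally; deployments and schedules come later
```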
Data Quality Checker - Auto-activating skill for Data Pipelines. Triggers on: data quality checker. Part of the Data Pipelines skill category.
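As a sketch of the checks such a skill might run, here is a small pandas validator; the required columns and rules are hypothetical.

```python
import pandas as pd

def check_quality(df: pd.DataFrame, required: list[str]) -> list[str]:
    """Return a list of human-readable quality issues (empty means clean)."""
    issues = []
    for col in required:
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif df[col].isna().any():
            issues.append(f"{col}: {int(df[col].isna().sum())} null values")
    if df.duplicated().any():
        issues.append(f"{int(df.duplicated().sum())} duplicate rows")
    return issues

if __name__ == "__main__":
    df = pd.DataFrame({"id": [1, 2, 2], "amount": [10.0, None, 5.0]})
    for issue in check_quality(df, ["id", "amount", "region"]):
        print(issue)
```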
Database development and operations workflow covering SQL, NoSQL, database design, migrations, optimization, and data engineering.
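One recurring pattern in this workflow is schema migrations. Below is a sketch using only the stdlib sqlite3 module, where a version table records which DDL statements have already been applied; the table and column names are illustrative.

```python
import sqlite3

MIGRATIONS = [
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)",
    "ALTER TABLE users ADD COLUMN created_at TEXT",
]

def migrate(conn: sqlite3.Connection) -> None:
    # Track the highest migration applied so reruns are idempotent.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, stmt in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(stmt)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    migrate(conn)
    print(conn.execute("PRAGMA table_info(users)").fetchall())
```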
Football data analytics — the single entry point. Use whenever the user mentions football data, xG, expected goals, match analysis, player stats, scouting, match reports, shot maps, passing networks, Premier League data, Champions League stats, scraping FBref/Understat/Transfermarkt, building football charts, or anything football analytics related. Routes to specialised sub-skills automatically. Also handles first-time setup and profile management.
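As an illustration of the analytics this routes to, a short pandas sketch that aggregates hypothetical shot-level data (the shape Understat-style scrapers typically yield) into per-player xG totals; all players and values are made up.

```python
import pandas as pd

# Hypothetical shot-level data: one row per shot, with the shot's xG value
# and whether it resulted in a goal.
shots = pd.DataFrame({
    "player": ["Haaland", "Haaland", "Salah", "Salah", "Salah"],
    "xg": [0.76, 0.12, 0.33, 0.08, 0.45],
    "goal": [1, 0, 0, 0, 1],
})

summary = (
    shots.groupby("player")
    .agg(shots=("xg", "size"), total_xg=("xg", "sum"), goals=("goal", "sum"))
    .assign(xg_overperformance=lambda d: d["goals"] - d["total_xg"])
)
print(summary)
```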
RudderStack HTTP integration. Manage data and records, and automate workflows. Use when the user wants to interact with data via the RudderStack HTTP API.
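A hedged sketch of sending an event through the RudderStack HTTP API, assuming the documented pattern of posting JSON to /v1/track on the data plane with the write key as the basic-auth username; the URL, write key, and event fields are placeholders.

```python
import requests

DATA_PLANE_URL = "https://example.dataplane.rudderstack.com"  # placeholder
WRITE_KEY = "YOUR_WRITE_KEY"  # placeholder

payload = {
    "userId": "user-123",
    "event": "Order Completed",
    "properties": {"revenue": 49.99, "currency": "USD"},
}

resp = requests.post(
    f"{DATA_PLANE_URL}/v1/track",
    json=payload,
    auth=(WRITE_KEY, ""),  # write key as basic-auth username, blank password
    timeout=10,
)
resp.raise_for_status()
```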
Google Cloud Dataflow integration. Manage data and records, and automate workflows. Use when the user wants to interact with Google Cloud Dataflow data.
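Dataflow pipelines are usually authored with Apache Beam. A minimal local sketch follows; it runs on the DirectRunner and would target Dataflow by switching the runner and supplying project/region options. The step names and data are illustrative.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# DirectRunner executes locally; swap in DataflowRunner (plus project,
# region, and staging options) to run on Google Cloud Dataflow.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as p:
    (
        p
        | "Create" >> beam.Create(["alpha", "beta", "gamma"])
        | "Lengths" >> beam.Map(lambda word: (word, len(word)))
        | "Print" >> beam.Map(print)
    )
```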
Develop Microsoft Fabric Spark/data engineering workflows with intelligent routing to specialized resources. Provides core workspace/lakehouse management and routes to: data engineering patterns, development workflow, or infrastructure orchestration. Use when the user wants to: (1) manage Fabric workspaces and resources, (2) develop notebooks and PySpark applications, (3) design data pipelines and orchestration, (4) provision infrastructure as code. Triggers: "develop notebook", "data engineering", "workspace setup", "pipeline design", "infrastructure provisioning", "Delta Lake patterns", "Spark development", "lakehouse configuration", "organize lakehouse tables", "create Livy session", "notebook deployment".
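A sketch of a Delta Lake write as it might appear in a Fabric notebook, where the `spark` session is pre-created by the runtime and picked up here via getOrCreate(); the lakehouse and table names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook this returns the pre-created session.
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("2024-01-01", "north", 120.0), ("2024-01-02", "south", 80.5)],
    ["order_date", "region", "amount"],
)

(
    df.withColumn("order_date", F.to_date("order_date"))
      .write.format("delta")          # Fabric lakehouse tables are Delta
      .mode("overwrite")
      .saveAsTable("daily_orders")    # hypothetical table name
)
```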
You are a **Data Engineer**, an expert in designing, building, and operating the data infrastructure that powers analytics, AI, and business intelligence. You turn raw, messy data from diverse sour...
You are the **Sales Data Extraction Agent** — an intelligent data pipeline specialist who monitors, parses, and extracts sales metrics from Excel files in real time. You are meticulous, accurate, a...
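A sketch of the extraction step with pandas, assuming a hypothetical workbook layout of one sheet per region with rep/units/revenue columns; the file name is a placeholder.

```python
import pandas as pd
from pathlib import Path

def extract_sales(path: Path) -> pd.DataFrame:
    # sheet_name=None loads every sheet into a dict of DataFrames.
    sheets = pd.read_excel(path, sheet_name=None)
    frames = [df.assign(region=name) for name, df in sheets.items()]
    combined = pd.concat(frames, ignore_index=True)
    # Roll up the hypothetical metric columns per region.
    return combined.groupby("region", as_index=False)[["units", "revenue"]].sum()

if __name__ == "__main__":
    print(extract_sales(Path("sales_report.xlsx")))  # placeholder file
```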
Airbyte integration. Manage data and records, and automate workflows. Use when the user wants to interact with Airbyte data.
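A hedged sketch of triggering a connection sync through the Airbyte OSS Config API (POST /api/v1/connections/sync); the base URL and connection ID are placeholders, the response shape is assumed from that API, and Airbyte Cloud authenticates differently.

```python
import requests

AIRBYTE_URL = "http://localhost:8000"  # assumed local OSS deployment
CONNECTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

resp = requests.post(
    f"{AIRBYTE_URL}/api/v1/connections/sync",
    json={"connectionId": CONNECTION_ID},
    timeout=30,
)
resp.raise_for_status()
# Response shape assumed: job info with a nested job object carrying status.
print(resp.json().get("job", {}).get("status"))
```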