Found 18 Skills
Use this skill for data pipeline work — ingestion with dlt, transformations with sqlmesh, analytics with DuckDB/MotherDuck, DataFrames with polars, notebooks with marimo, and project management with uv.
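A minimal sketch of the DuckDB/polars interop this skill leans on: `duckdb.sql()` can query an in-scope Polars DataFrame by variable name, and `.pl()` returns the result as Polars. The frame contents and column names are illustrative.

```python
import duckdb
import polars as pl

# duckdb.sql() resolves in-scope DataFrames (pandas or Polars) by variable name,
# and .pl() materializes the result as a Polars DataFrame.
events = pl.DataFrame({"name": ["a", "a", "b"], "amount": [10, 5, 7]})
totals = duckdb.sql("SELECT name, SUM(amount) AS total FROM events GROUP BY name").pl()
print(totals)
```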
Expert in high-performance CSV processing, parsing, and data cleaning using Python, DuckDB, and command-line tools. Use when working with CSV files, cleaning data, transforming datasets, or processing large tabular data files.
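As a sketch of the DuckDB side of this workflow, here is a one-pass cleanup; the file and column names (`raw.csv`, `name`, `amount`, `id`) are placeholders.

```python
import duckdb

# One-pass cleanup: auto-detect the CSV schema, trim stray whitespace,
# coerce a numeric column safely, drop rows missing an id, and write out.
duckdb.sql("""
    COPY (
        SELECT trim(name) AS name,
               try_cast(amount AS DOUBLE) AS amount
        FROM read_csv_auto('raw.csv')
        WHERE id IS NOT NULL
    ) TO 'clean.csv' (HEADER, DELIMITER ',')
""")
```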
Self-modifying AI agent configuration via ruler + MCP + DuckDB; all behavior modifications become one-liners.
Unified intelligent query interface for the CDM DuckDB database. Use this skill when the user wants to query the linkml-coral CDM database. Automatically chooses between fast SQL translation and schema-aware intelligent queries based on complexity. Supports natural language questions, schema exploration, and data analysis.
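A minimal sketch of the schema-exploration step that precedes query translation; the database path and table name are hypothetical.

```python
import duckdb

con = duckdb.connect("cdm.duckdb")  # hypothetical path to the CDM database file

# Schema exploration is the cheap first step before translating a question to SQL.
print(con.sql("SELECT table_name FROM information_schema.tables").df())
print(con.sql("DESCRIBE samples").df())  # 'samples' is a placeholder table name
```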
Using DuckDB with remote cloud storage via HTTPFS extension, fsspec, and Delta Lake integration. Covers S3, GCS, Azure, and S3-compatible endpoints.
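A hedged sketch of the HTTPFS path for S3: install and load the extension, register credentials with DuckDB's secrets manager, then query Parquet in place. The key, bucket, and prefix are placeholders.

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

# Register S3 credentials with DuckDB's secrets manager (placeholder values).
con.sql("""
    CREATE SECRET s3_creds (
        TYPE S3,
        KEY_ID 'AKIA...',
        SECRET '...',
        REGION 'us-east-1'
    )
""")

# Query Parquet over the network without downloading it first.
df = con.sql("SELECT count(*) FROM read_parquet('s3://my-bucket/events/*.parquet')").df()
```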
Guide for querying and filtering CSV files using DuckDB SQL.
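The core move, sketched below with illustrative file and column names: DuckDB treats a CSV path as a table reference, so filtering is plain SQL.

```python
import duckdb

# A CSV path works as a table reference; filter and project with plain SQL.
adults = duckdb.sql("""
    SELECT name, city
    FROM 'people.csv'
    WHERE age >= 30
    ORDER BY name
""").df()
print(adults)
```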
Best practices for developing tools, dashboards, and interactive data apps with HoloViz Panel. Create reactive, component-based UIs with widgets, layouts, templates, and real-time updates. Use when developing interactive data exploration tools, dashboards, data apps, or any interactive Python web application. Supports file uploads, streaming data, multi-page apps, and integration with HoloViews, hvPlot, Pandas, Polars, DuckDB and the rest of the HoloViz and PyData ecosystems.
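A minimal reactive Panel app as a sketch: a widget, a function bound to it with `pn.bind`, and a servable layout. Serve it with `panel serve app.py`.

```python
import panel as pn

pn.extension()

n = pn.widgets.IntSlider(name="n", start=1, end=10, value=3)

def view(value: int) -> str:
    # Re-runs whenever the slider moves.
    return f"{value} squared is {value ** 2}"

# pn.bind ties the function to the widget; Panel re-renders the result reactively.
pn.Column(n, pn.bind(view, n)).servable()
```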
Creates and maintains dlt (data load tool) pipelines from APIs, databases, and other sources. Use when the user wants to build or debug pipelines; use verified sources (e.g. Salesforce, GitHub, Stripe) or declarative REST API or custom Python; configure destinations (e.g. DuckDB, BigQuery, Snowflake); implement incremental loading; or edit .dlt config and secrets. Use when the user mentions data ingestion, dlt pipeline, dlt init, rest_api_source, incremental load, or pipeline dashboard.
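A sketch of the declarative REST API path into DuckDB; the GitHub endpoint is just a public example, and the pipeline and dataset names are placeholders.

```python
import dlt
from dlt.sources.rest_api import rest_api_source

# Declarative REST source: client config plus a list of resources.
source = rest_api_source({
    "client": {"base_url": "https://api.github.com/"},
    "resources": [
        {"name": "issues", "endpoint": {"path": "repos/dlt-hub/dlt/issues"}},
    ],
})

pipeline = dlt.pipeline(
    pipeline_name="github_issues",
    destination="duckdb",
    dataset_name="github",
)
print(pipeline.run(source))
```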
SmartACE (Agentic Context Engineering) workflow engine with MCP-B (Master Client Bridge) and AMUM-QCI-ETHIC module. Dual database architecture using DuckDB (analytics) + SurrealDB (graph). Uses Blender 5.0 (bpy) and UE5 Remote Control. Use for (1) MCP-B agent-to-agent communication (INQC protocol), (2) AMUM 3→6→9 progressive alignment, (3) QCI quantum coherence states, (4) ETHIC principles enforcement (Marcel/Anthropic/EU AI Act), (5) SurrealDB graph relationships, (6) DuckDB SQL workflows, (7) ML inference with infera/vss, (8) Blender 5.0 headless processing, (9) UE5 scene control, (10) DuckLake time travel.
Configure data accelerators for local materialization and caching in Spice (Arrow, DuckDB, SQLite, Cayenne, PostgreSQL, Turso). Use when asked to "accelerate data", "enable caching", "materialize dataset", "configure refresh", "set up local storage", "improve query performance", "choose an accelerator", or "configure snapshots".
Use this skill when the user uploads Excel (.xlsx/.xls) or CSV files and wants to perform data analysis, generate statistics, create summaries, pivot tables, SQL queries, or any form of structured data exploration. Supports multi-sheet Excel workbooks, aggregation, filtering, joins, and exporting results to CSV/JSON/Markdown.
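A sketch of the multi-sheet workflow, assuming a hypothetical workbook with an "Orders" sheet and `region`/`amount` columns (reading .xlsx requires openpyxl):

```python
import duckdb
import pandas as pd

# sheet_name=None loads every sheet into a dict of DataFrames.
sheets = pd.read_excel("report.xlsx", sheet_name=None)  # hypothetical workbook
orders = sheets["Orders"]  # assumed sheet name

# DuckDB picks up in-scope DataFrames by variable name.
summary = duckdb.sql("""
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    ORDER BY total DESC
""").df()
summary.to_csv("summary.csv", index=False)
```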
Create and manage datasets on Hugging Face Hub. Supports initializing repos, defining configs/system prompts, streaming row updates, and SQL-based dataset querying/transformation. Designed to work alongside HF MCP server for comprehensive dataset workflows.
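A hedged sketch of both halves: pushing rows to the Hub with the `datasets` library, then querying the hosted Parquet with DuckDB's `hf://` paths. The repo id is hypothetical, and pushing assumes a prior `huggingface-cli login`.

```python
import duckdb
from datasets import Dataset

# Push a tiny dataset to the Hub (repo id is a placeholder).
ds = Dataset.from_dict({"prompt": ["hello"], "completion": ["world"]})
ds.push_to_hub("your-username/demo-dataset")

# DuckDB can read Hub-hosted Parquet directly via hf:// paths (httpfs extension).
con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
rows = con.sql(
    "SELECT * FROM 'hf://datasets/your-username/demo-dataset/**/*.parquet' LIMIT 5"
).df()
print(rows)
```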