Found 314 Skills
Bun implementation guide for PMA-managed backend and full-stack projects. Covers project layout (src/modules), strict linting with ESLint + @antfu/eslint-config, database access (Drizzle ORM + bun:sqlite or PostgreSQL), HTTP patterns (OpenAPIHono + Bun.serve), layered config with environment variables, dual logging (consola + pino), single-binary compilation with embedded assets, and CI quality gates.
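A minimal sketch of the HTTP pattern this guide names (an OpenAPIHono app handed to Bun.serve); the /health route, its schema, and the port are illustrative assumptions, not taken from the guide itself:

```ts
// Sketch: OpenAPIHono app served by Bun's native HTTP server.
import { OpenAPIHono, createRoute, z } from "@hono/zod-openapi";

const app = new OpenAPIHono();

// Hypothetical health-check route with an OpenAPI-documented response.
const healthRoute = createRoute({
  method: "get",
  path: "/health",
  responses: {
    200: {
      description: "Service is up",
      content: { "application/json": { schema: z.object({ ok: z.boolean() }) } },
    },
  },
});

app.openapi(healthRoute, (c) => c.json({ ok: true }));

// Hono's fetch handler plugs directly into Bun.serve.
Bun.serve({ port: 3000, fetch: app.fetch });
```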
Expert skill for Xata, the open-source cloud-native Postgres platform with copy-on-write branching, scale-to-zero, and Kubernetes deployment
Expert knowledge for Drizzle ORM - the lightweight, type-safe SQL ORM for edge and serverless. Use when "drizzle, drizzle orm, drizzle-kit, drizzle schema, drizzle migration, drizzle relations, sql orm typescript, edge database, d1 database, orm, database, typescript, sql, edge, serverless, d1, postgres, mysql, sqlite" mentioned.
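A minimal sketch of a Drizzle schema and a type-safe query over bun:sqlite; the users table and its columns are illustrative assumptions, and the table is presumed to already exist (e.g. created via drizzle-kit migrations):

```ts
// Sketch: Drizzle schema + query against Bun's built-in SQLite driver.
import { Database } from "bun:sqlite";
import { drizzle } from "drizzle-orm/bun-sqlite";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";
import { eq } from "drizzle-orm";

// Hypothetical table definition; column types flow into query results.
const users = sqliteTable("users", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  email: text("email").notNull().unique(),
});

const db = drizzle(new Database("app.sqlite"));

// Fully typed result: { id: number; email: string }[]
const rows = db.select().from(users).where(eq(users.email, "a@b.co")).all();
```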
Automatically discover database skills when working with SQL, PostgreSQL, MongoDB, Redis, database schema design, query optimization, migrations, connection pooling, ORMs, or database selection. Activates for database design, optimization, and implementation tasks.
Database operations including querying, schema exploration, and data analysis. Activates for tasks involving PostgreSQL, MySQL, MariaDB, SQLite, MongoDB, Redis, Elasticsearch, or ClickHouse databases.
Comprehensive guide for implementing Supabase Realtime features with best practices, scalable patterns, and migration strategies. Use when building realtime features in Supabase applications including messaging, notifications, presence, live updates, collaborative features, or migrating from postgres_changes to broadcast. Covers client setup, database triggers with realtime.broadcast_changes, RLS authorization, naming conventions, and performance optimization.
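A minimal client-side sketch of the broadcast pattern this guide covers, using supabase-js; the project URL, key, channel name, and event name are placeholders, and the server side would pair this with a database trigger calling realtime.broadcast_changes:

```ts
// Sketch: subscribing to an RLS-authorized broadcast channel.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://YOUR-PROJECT.supabase.co", // placeholder
  "YOUR-ANON-KEY",                    // placeholder
);

// Private channels require realtime auth so RLS policies can be enforced.
await supabase.realtime.setAuth();

const channel = supabase
  .channel("room:42", { config: { private: true } }) // hypothetical channel name
  .on("broadcast", { event: "new-message" }, (payload) => {
    console.log("message:", payload);
  })
  .subscribe();
```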
This skill should be used when the user asks to "create a new FastAPI project", "setup a fastapi api", "new fastapi project", "scaffold a fastapi app", "initialize a fastapi backend", or "start a new python api". Scaffolds a complete production-ready FastAPI project with SQLAlchemy, PostgreSQL, JWT auth, Pydantic v2 settings, and uv package management.
Add a Docker dev service to this project. Supported services: Redis, RabbitMQ, PostgreSQL, MySQL/MariaDB, MongoDB. Writes Docker Compose and Taskfile configs to .devtools/.
Use when deploying a database to Zeabur. Use when user needs MySQL, PostgreSQL, MongoDB, or Redis. Use when user says "I need a database", "add database", "deploy postgres", "set up MySQL", "add Redis", "add MongoDB", or "connect to database". Also use when user mentions data persistence issues like "data lost after restart", "data not saved", "data disappears", "need persistent storage for data", or "how to persist data". Also use when integrating a database with an existing service.
GEOFlow, an open-source GEO/SEO content production system with AI generation, a review workflow, and a publishing pipeline, built on PHP and PostgreSQL.
Implement AI Coaching best practices on AnalyticDB for PostgreSQL (ADBPG): Leverage Supabase projects (for training data management) and ADBPG instances with vector optimization to build RAG-driven coaching systems that guide users through domain-specific workflows, decision-making, or skill development. Use when: User wants to create Supabase projects (spb-xxx), ADBPG instances (gp-xxx), vector knowledge bases, or RAG-driven coaching systems on ADBPG. Triggers: "Supabase", "ADBPG", "vector database", "knowledge base", "RAG", "AI coaching", "coaching system", "spb-xxx", "gp-xxx"
Import data into the AWS data lake from S3 files, local uploads, JDBC databases (Oracle, SQL Server, PostgreSQL, MySQL, RDS, Aurora), Amazon Redshift, Snowflake, BigQuery, DynamoDB, or existing Glue catalog tables (migration). Default target is S3 Tables; standard Iceberg on a general purpose bucket is supported where S3 Tables is not adopted. Handles one-time loads, recurring pipelines, migrations. Triggers on: import data, load data, ingest, sync database, migrate table, move data to AWS, set up pipeline, ETL, pull from Snowflake, query BigQuery into S3, export DynamoDB, CTAS, convert to Iceberg. Do NOT use for setting up or troubleshooting Glue connections (use connecting-to-data-source), creating empty tables (use creating-data-lake-table), running queries (use querying-data-lake), finding tables by fuzzy name (use finding-data-lake-assets), catalog audit (use exploring-data-catalog), or SaaS platforms like Salesforce, ServiceNow, SAP, MongoDB, Kafka.