Found 304 Skills
Instantly provision production-ready Postgres databases with Neon Instagres. Use when setting up databases, when users mention PostgreSQL/Postgres or database setup, or when a development database is needed. Works with Drizzle, Prisma, and raw SQL.
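By way of illustration, a minimal sketch of querying a freshly provisioned database, assuming the connection string is exposed as DATABASE_URL and the @neondatabase/serverless driver is used; the skill may instead wire things up through Drizzle or Prisma.

```ts
import { neon } from '@neondatabase/serverless';

// Assumes the provisioned Neon connection string is exported as DATABASE_URL.
const sql = neon(process.env.DATABASE_URL!);

// Tagged-template query against the serverless Postgres endpoint.
const result = await sql`SELECT now() AS provisioned_at`;
console.log(result);
```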
SQL query optimization and database performance specialist. Use when optimizing slow queries, fixing N+1 problems, designing indexes, implementing caching, or improving database performance. Works with PostgreSQL, MySQL, and other databases.
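As an illustration of the N+1 problem this skill targets, a hedged sketch using node-postgres and hypothetical authors/posts tables: the slow version issues one query per author, while the rewrite pushes the join and aggregation into a single statement.

```ts
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// N+1 pattern: one query for the authors, then one additional query per author.
async function postsPerAuthorSlow(): Promise<Map<number, string[]>> {
  const authors = await pool.query('SELECT id FROM authors');
  const result = new Map<number, string[]>();
  for (const { id } of authors.rows) {
    const posts = await pool.query('SELECT title FROM posts WHERE author_id = $1', [id]);
    result.set(id, posts.rows.map((r) => r.title));
  }
  return result;
}

// Rewrite: a single statement that joins and aggregates inside the database.
async function postsPerAuthorFast(): Promise<Map<number, string[]>> {
  const { rows } = await pool.query(
    `SELECT a.id,
            COALESCE(array_agg(p.title) FILTER (WHERE p.id IS NOT NULL), '{}') AS titles
       FROM authors a
       LEFT JOIN posts p ON p.author_id = a.id
      GROUP BY a.id`
  );
  const byAuthor = new Map<number, string[]>();
  for (const r of rows) byAuthor.set(r.id, r.titles);
  return byAuthor;
}
```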
Guides the agent through async database integration with SQLAlchemy and Alembic migrations for FastAPI applications. Triggered when users ask to "set up a database", "create database models", "add SQLAlchemy", "create migrations", "run Alembic", "connect to PostgreSQL", "add a database layer", "create CRUD operations", "set up async database", or mention SQLAlchemy, Alembic, ORM, database models, async database, connection pool, or database migrations.
Relational database implementation across Python, Rust, Go, and TypeScript. Use when building CRUD applications, transactional systems, or structured data storage. Covers PostgreSQL (primary), MySQL, SQLite, ORMs (SQLAlchemy, Prisma, SeaORM, GORM), query builders (Drizzle, sqlc, SQLx), migrations, connection pooling, and serverless databases (Neon, PlanetScale, Turso).
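A minimal sketch of the transactional pattern this entry covers, shown here with node-postgres, a connection pool, and a hypothetical accounts table; the ORMs and query builders listed above expose equivalent transaction APIs.

```ts
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 10 });

// Move funds between two accounts atomically: both updates commit or neither does.
async function transfer(fromId: number, toId: number, amount: number): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```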
Better Auth — framework-agnostic TypeScript authentication & authorization library. Covers setup, email/password, social OAuth (40+ providers), passkeys, magic links, 2FA, organizations, sessions, plugins, admin, hooks, and security hardening. Use when implementing auth with Better Auth: configuring auth instances, adding providers, setting up database adapters (Prisma, Drizzle, PostgreSQL, MySQL, SQLite, MongoDB), integrating with frameworks (Next.js, Nuxt, SvelteKit, Astro, Hono, Express, Elysia, Fastify, Expo), managing sessions, or extending with plugins.
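For a sense of what configuring an auth instance looks like, a hedged sketch assuming a PostgreSQL pool as the database and a GitHub social provider; option names should be checked against the current Better Auth documentation.

```ts
import { betterAuth } from 'better-auth';
import { Pool } from 'pg';

// Central auth instance: email/password plus one social provider, backed by PostgreSQL.
// The environment variable names are assumptions for this sketch.
export const auth = betterAuth({
  database: new Pool({ connectionString: process.env.DATABASE_URL }),
  emailAndPassword: { enabled: true },
  socialProviders: {
    github: {
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
  },
});
```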
Best practices, coding conventions, and patterns for backend projects using TypeScript. Use when writing code, tests, or new features in TypeScript backends built around a src/ layout, Express, PostgreSQL/MongoDB, and Mocha+tsx.
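For instance, a hedged sketch of the layout such a project might use: an Express route under src/ and a Mocha test exercising it. The supertest dependency and the exact tsx invocation are assumptions for this sketch, not part of the skill description.

```ts
// src/routes/health.ts: a minimal Express route in the assumed src/ layout.
import { Router, type Request, type Response } from 'express';

export const healthRouter = Router();

healthRouter.get('/health', (_req: Request, res: Response) => {
  res.json({ status: 'ok' });
});

// test/health.test.ts: run with Mocha through the tsx loader
// (the exact invocation depends on the project's Mocha config).
import assert from 'node:assert/strict';
import express from 'express';
import request from 'supertest';
import { healthRouter } from '../src/routes/health.js';

describe('GET /health', () => {
  it('responds with status ok', async () => {
    const app = express().use(healthRouter);
    const res = await request(app).get('/health');
    assert.equal(res.status, 200);
    assert.deepEqual(res.body, { status: 'ok' });
  });
});
```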
Provides comprehensive Drizzle ORM patterns for schema definition, CRUD operations, relations, queries, transactions, and migrations. Proactively use for any Drizzle ORM development including defining database schemas, writing type-safe queries, implementing relations, managing transactions, and setting up migrations with Drizzle Kit. Supports PostgreSQL, MySQL, SQLite, MSSQL, and CockroachDB.
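A brief sketch of the kind of code this covers, using a hypothetical users table: schema definition with pg-core, a node-postgres connection, and a type-safe insert/select.

```ts
import { pgTable, serial, text, integer } from 'drizzle-orm/pg-core';
import { drizzle } from 'drizzle-orm/node-postgres';
import { eq } from 'drizzle-orm';
import { Pool } from 'pg';

// Schema definition: column types are inferred for queries and inserts.
export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  age: integer('age'),
});

const db = drizzle(new Pool({ connectionString: process.env.DATABASE_URL }));

// Type-safe CRUD: insert a row, then read it back with a filter.
async function demo() {
  await db.insert(users).values({ name: 'Ada', age: 36 });
  const matches = await db.select().from(users).where(eq(users.name, 'Ada'));
  console.log(matches);
}
```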
Cal.com self-hosted deployment to GCP Cloud Run with Supabase PostgreSQL; Docker Compose for local dev. TRIGGERS: deploy calcom, cloud run, self-hosted, docker compose, supabase, gcp deploy, infrastructure, cal.com hosting.
Nuxt 4 server-side development with Nitro: API routes, server middleware, database integration, and backend patterns. Use when: creating server API routes, implementing server middleware, integrating databases (D1, PostgreSQL, Drizzle), handling file uploads, implementing WebSockets, or building backend logic with Nitro. Keywords: server routes, API routes, Nitro, defineEventHandler, getRouterParam, getQuery, readBody, setCookie, createError, server middleware, D1, Drizzle, PostgreSQL, WebSocket, file upload
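For example, a minimal Nitro route sketch using the helpers named above; the todos route and its payload shape are hypothetical.

```ts
// server/api/todos/[id].put.ts: defineEventHandler, getRouterParam, readBody,
// and createError are auto-imported in Nuxt server routes.
export default defineEventHandler(async (event) => {
  const id = getRouterParam(event, 'id');
  if (!id) {
    throw createError({ statusCode: 400, statusMessage: 'Missing id' });
  }

  const body = await readBody<{ title?: string; done?: boolean }>(event);

  // Persistence is stubbed out here; the skill covers real D1/PostgreSQL/Drizzle integration.
  return { id, ...body, updatedAt: new Date().toISOString() };
});
```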
Use this skill when a user wants to store, manage, or work with Goldsky secrets — the named credential objects used by pipeline sinks. This includes: creating a new secret from a connection string or credentials, listing or inspecting existing secrets, updating or rotating credentials after a password change, and deleting secrets that are no longer needed. Trigger for any query where the user mentions 'goldsky secret', wants to securely store database credentials for a pipeline, or is working with sink authentication for PostgreSQL, Neon, Supabase, ClickHouse, Kafka, S3, Elasticsearch, DynamoDB, SQS, OpenSearch, or webhooks.
Pipeline state management for Goldsky Turbo — pause, resume, restart, and delete commands with their rules and safety behavior. Use this skill when the user asks: will deleting my pipeline lose the data already in my postgres/clickhouse table, how do I pause a pipeline while doing database maintenance, how do I restart from block zero to reprocess all historical data, can I update a running streaming pipeline in place or do I have to delete and redeploy, will resuming a paused pipeline pick up from where it left off (checkpoint), how do I re-run a completed job pipeline from the beginning, can I pause or restart a job-mode pipeline. Also covers what happens to checkpoint state on delete, and job auto-deletion 1 hour after termination. For actively diagnosing why a pipeline is broken or erroring, use /turbo-doctor instead.
Goldsky Turbo pipeline YAML reference — the authoritative source for field names, required vs optional fields, and valid values. Use whenever the user asks about specific YAML fields: what does `start_at: earliest` vs `latest` do, what fields does a postgres/clickhouse/kafka sink require, what is the `from:` field in a sink, how does `checkpoint` work, what's the syntax for `batch_size` or `primary_key`. Also use for validation errors like 'unknown field' or 'missing required field'. For interactive pipeline building end-to-end, use /turbo-builder instead.