Found 1,575 Skills
Plan, create, and configure production-ready Google Kubernetes Engine (GKE) clusters using the golden path Autopilot configuration. Covers Day-0 checklist, Autopilot vs Standard, networking (private clusters, VPC-native, Gateway API), security (Workload Identity, Secret Manager, RBAC hardening), observability, scaling, cost optimization, and AI/ML inference. WHEN: create GKE cluster, provision GKE environment, design GKE networking, secure GKE, optimize GKE cost, GKE autoscaling, GKE inference, GKE upgrade, GKE observability, GKE multi-tenancy, GKE batch, GKE HPC, GKE compute class.
Enter this sub-process when performing code optimization: tasks where behavior stays unchanged while structure changes (structure, performance, or readability). It shifts single-module internal optimization from ad-hoc AI refactoring to a disciplined workflow: first scan the module to generate a checklist, confirm each item with the user, execute step by step according to the method library, and require manual approval at each step. Trigger scenarios: the user says things like "optimize it", "refactor", "rewrite", "split it", "poor performance", or "the code is too long" without requesting any behavior change. Do not handle new requirements (route to feature), bugs (route to issue), or cross-module architectural restructuring (route to architecture + decisions).
Grafana Professional Services tool for identifying which Prometheus metrics drive high Data Points per Minute (DPM). Analyzes metric-level DPM with per-label breakdown to help optimize Grafana Cloud costs. Use when the user asks about DPM analysis, high-cardinality metrics, metric cost optimization, finding noisy metrics, or running dpm-finder against a Grafana Cloud Prometheus endpoint.
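As a rough back-of-the-envelope illustration of what such an analysis computes (this is not the dpm-finder tool itself, and the metric names and counts below are invented), DPM for a metric family can be estimated from its active series count and scrape interval:

```python
# Rough DPM estimate: each active series produces one sample per scrape,
# so DPM ≈ active_series * (60 / scrape_interval_seconds).
# Metric names and counts below are hypothetical examples.
def estimate_dpm(active_series: int, scrape_interval_seconds: float) -> float:
    return active_series * (60.0 / scrape_interval_seconds)

metrics = {
    "http_requests_total": (12_000, 15),    # 12k series scraped every 15s
    "node_cpu_seconds_total": (4_800, 60),  # 4.8k series scraped every 60s
}

dpm = {name: estimate_dpm(series, interval)
       for name, (series, interval) in metrics.items()}

# Report noisiest metrics first, as a DPM finder would.
for name, value in sorted(dpm.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:,.0f} DPM")
```

Note how a 15-second scrape interval quadruples DPM relative to a 60-second one for the same series count, which is why interval and cardinality are the two main cost levers.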
Grafana Cloud cost management — usage monitoring, cost attribution by label, usage alerts, invoice management, and optimization strategies. Covers Adaptive Metrics (cardinality reduction), Adaptive Logs (log filtering), cost attribution labels, and the FOCUS-compliant billing application. Use when analyzing Grafana Cloud spending, setting up cost alerts, attributing costs to teams, reducing metric/log cardinality, or forecasting observability budgets.
Analyze token usage patterns and recommend cost optimizations with estimated savings
Create and execute Goal-Oriented Action Plans (GOAP) with precondition analysis, cost optimization, and adaptive replanning
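The core of GOAP is a cost-aware search over world states, where each action carries preconditions, effects, and a cost. A minimal sketch with a made-up domain (the actions and state keys here are illustrative, not part of any real planner API):

```python
# Minimal GOAP sketch: uniform-cost search over world states. Each action
# is (name, cost, preconditions, effects); the planner returns the
# cheapest action sequence reaching the goal. Domain is invented.
import heapq

ACTIONS = [
    ("chop_wood", 4, {"has_axe": True},  {"has_wood": True}),
    ("buy_axe",   2, {},                 {"has_axe": True}),
    ("make_fire", 1, {"has_wood": True}, {"warm": True}),
]

def plan(start: dict, goal: dict):
    frontier = [(0, 0, tuple(sorted(start.items())), [])]
    seen = set()
    tie = 0  # tiebreaker so the heap never compares dicts/lists
    while frontier:
        cost, _, state_t, steps = heapq.heappop(frontier)
        state = dict(state_t)
        if all(state.get(k) == v for k, v in goal.items()):
            return cost, steps
        if state_t in seen:
            continue
        seen.add(state_t)
        for name, c, pre, eff in ACTIONS:
            if all(state.get(k) == v for k, v in pre.items()):
                nxt = tuple(sorted({**state, **eff}.items()))
                tie += 1
                heapq.heappush(frontier, (cost + c, tie, nxt, steps + [name]))
    return None  # goal unreachable

print(plan({}, {"warm": True}))  # → (7, ['buy_axe', 'chop_wood', 'make_fire'])
```

Replanning after a failed action amounts to calling `plan` again from the observed current state, which is what "adaptive replanning" boils down to.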
Analyze a Materialize environment for health, performance, and optimization opportunities using the MCP Developer endpoint. Use this skill when someone wants to check environment health, investigate performance issues, troubleshoot stale materialized views, diagnose memory pressure, audit resource utilization, or get optimization recommendations. Trigger this even if the user just says "check my environment", "why is my MV stale", "why is my cluster slow", or "what can I optimize".
Core principles of SEO including E-E-A-T, Core Web Vitals, technical foundations, content quality, and how modern search engines evaluate pages. This skill explains *why* SEO works, not how to execute specific optimizations.
Analyze and optimize individual pages for conversion performance. Use when the user wants to improve conversion rates, diagnose why a page is underperforming, or increase the effectiveness of marketing pages (homepage, landing pages, pricing, feature pages, or blog posts). This skill focuses on diagnosis, prioritization, and testable recommendations, not blind optimization.
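"Testable recommendations" ultimately means checking whether an observed lift is statistically real. A minimal sketch using a standard two-proportion z-test, with invented traffic numbers:

```python
# Compare control vs variant conversion rates with a two-proportion
# z-test. Visitor and conversion counts are invented for illustration.
import math

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se             # z-score of the lift

p_a, p_b, z = z_test(conv_a=120, n_a=4000, conv_b=168, n_b=4000)
print(f"control {p_a:.2%}, variant {p_b:.2%}, z = {z:.2f}")
# |z| > 1.96 is significant at the 5% level (two-sided).
```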
Analyze Google Analytics data, review website performance metrics, identify traffic patterns, and suggest data-driven improvements. Use when the user asks about analytics, website metrics, traffic analysis, conversion rates, user behavior, or performance optimization.
Comprehensive React and Next.js performance optimization guide with 40+ rules for eliminating waterfalls, optimizing bundles, and improving rendering. Use when optimizing React apps, reviewing performance, or refactoring components.
Train Mixture of Experts (MoE) models using DeepSpeed or HuggingFace. Use when training large-scale models with limited compute (5× cost reduction vs dense models), implementing sparse architectures like Mixtral 8x7B or DeepSeek-V3, or scaling model capacity without proportional compute increase. Covers MoE architectures, routing mechanisms, load balancing, expert parallelism, and inference optimization.
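The sparsity that makes MoE cheap comes from top-k routing: a gating network scores all experts per token, but only the k best run, and their outputs are combined with renormalized softmax weights. A dependency-free sketch of that routing step (toy logits, not a real model):

```python
# MoE top-k token routing sketch (Mixtral-style sparse layer): score all
# experts, keep the top-k, renormalize their weights with a softmax.
# Logits below are toy values.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, k=2):
    """Return [(expert_index, weight)] for the top-k experts of one token."""
    top = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:k]
    weights = softmax([gate_logits[i] for i in top])  # renormalize over top-k
    return list(zip(top, weights))

# One token's gate logits over 4 experts; only 2 experts execute,
# which is where the compute saving over a dense layer comes from.
picks = route([0.1, 2.0, -1.0, 1.5], k=2)
print(picks)  # experts 1 and 3, with weights summing to 1
```

Load balancing (keeping tokens spread across experts) is handled during training by an auxiliary loss on the gate's expert-usage distribution, which this routing sketch omits.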