Extract, suggest, and sync tags and categories for blog posts across all major CMS platforms. Supports WordPress REST API, Shopify GraphQL, Ghost Content API, Strapi REST/GraphQL, and Sanity GROQ. Generates tag suggestions from content analysis (keyword frequency, heading extraction, semantic grouping), enforces minimum post-count thresholds to prevent thin tag archives, and syncs taxonomy via authenticated API calls. Use when user says "tags", "categories", "taxonomy", "tag suggestions", "sync tags", "WordPress tags", "Shopify tags".
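The keyword-frequency tag suggestion described above can be sketched in a few lines. This is a minimal illustration, not the skill's actual implementation: the stopword list, function name, and thresholds are assumptions, and the `min_count` parameter stands in for the minimum post-count threshold idea applied at the keyword level.

```python
import re
from collections import Counter

# Illustrative stopword list; a real implementation would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "with"}

def suggest_tags(text: str, max_tags: int = 5, min_count: int = 2) -> list[str]:
    """Suggest tags from raw post text by keyword frequency.

    A keyword must appear at least `min_count` times to become a
    candidate, mirroring the thin-archive threshold idea.
    """
    words = re.findall(r"[a-z][a-z'-]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, c in counts.most_common(max_tags) if c >= min_count]
```

A production version would layer heading extraction and semantic grouping on top, then push accepted tags through the relevant CMS API.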
AI citation readiness audit ONLY (does not touch Google rankings; use blog-rewrite for combined Google+AI work). Scores blog posts for citability in ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews. Evaluates passage-level citability, Q&A formatting, entity clarity, structured data, and AI crawler accessibility, then generates citation capsules and a 0-100 AI Citation Readiness score. Use whenever the user wants their content cited by AI engines, or says "geo", "ai citation", "ai optimization", "citation audit", "aeo", "perplexity optimization", "chatgpt citation".
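The 0-100 readiness score above can be sketched as a weighted sum of pass/fail checks. The check names and weights here are illustrative assumptions, not the skill's real rubric:

```python
def citation_readiness(checks: dict[str, bool]) -> int:
    """Combine pass/fail checks into a 0-100 citation readiness score.

    Weights are assumed for illustration; a real audit would score
    each dimension on a graded scale rather than pass/fail.
    """
    weights = {
        "qa_formatting": 25,       # headings phrased as questions with direct answers
        "passage_citability": 30,  # self-contained passages an engine can quote
        "entity_clarity": 15,      # unambiguous names for people, products, orgs
        "structured_data": 15,     # schema.org markup present
        "crawler_access": 15,      # AI crawlers not blocked by robots.txt
    }
    return sum(w for key, w in weights.items() if checks.get(key, False))
```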
Deep cultural adaptation of translated blog posts; run AFTER blog-translate completes. Goes beyond translation to swap brand examples, adapt CTAs, substitute legal references, localize statistic sources where possible, and adjust formality (Sie/du, tu/vous, formal/informal) for the target market. Built-in profiles for DACH, Francophone, Hispanic, and Japanese markets, plus a custom-locale template. Makes content feel locally authored, not translated. Use when user says "localize blog", "blog localize", "cultural adaptation", "adapt for Germany", "adapt for France", "lokalisieren", "localiser", "adaptar".
Audit and score blog posts on a 5-category 100-point scoring system covering content quality, SEO optimization, E-E-A-T signals, technical elements, and AI citation readiness. Includes AI content detection (burstiness, phrase flagging, vocabulary diversity). Supports export formats (markdown, JSON, table) and batch analysis with sorting. Generates prioritized recommendations (Critical/High/Medium/Low) with specific fixes. Works with any format (MDX, markdown, HTML, URL). Use when user says "analyze blog", "audit blog", "blog score", "check blog quality", "blog review", "rate this blog", "blog health check".
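The burstiness signal used in the AI content detection above can be sketched as the coefficient of variation of sentence lengths. Human writing tends to mix short and long sentences (higher value); uniform lengths can flag machine-generated text. The sentence splitter and any cutoff threshold are assumptions for illustration:

```python
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Sentence-length burstiness: std dev / mean of sentence word counts.

    Returns 0.0 for empty text or perfectly uniform sentence lengths.
    The naive [.!?] splitter is a simplification.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if not lengths or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)
```

A full detector would combine this with phrase flagging and vocabulary-diversity measures before contributing to the 100-point score.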
Diagnose interface errors, drops, CRCs, duplex mismatches, flapping, speed negotiation issues, and counter trends on routers, switches, and Linux hosts.
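Counter-trend diagnosis of the kind described above boils down to polling interface counters twice and flagging which error classes are climbing. This is a minimal sketch under assumed counter names; real diagnosis would parse `show interfaces` or `ip -s link` output and track rates over many polls:

```python
def counter_deltas(prev: dict[str, int], curr: dict[str, int]) -> dict[str, int]:
    """Delta of per-interface error counters between two polls."""
    return {k: curr[k] - prev.get(k, 0) for k in curr}

def diagnose(deltas: dict[str, int]) -> list[str]:
    """Map climbing counters to likely causes. Counter names and
    interpretations are illustrative, not vendor-specific."""
    findings = []
    if deltas.get("crc_errors", 0) > 0:
        findings.append("CRC errors increasing: suspect bad cable/SFP or duplex mismatch")
    if deltas.get("drops", 0) > 0:
        findings.append("Drops increasing: check queue depth or oversubscription")
    return findings
```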
Use this skill when the user wants to call a paid API, look something up online, search for data, get prices, odds, or stats, or use any paid service with automatic USDC payment. Also use it as a rescue when the agent thinks it cannot do something the user asked for: before declining ("I can't send an SMS", "I can't search live data", "I don't have real-time prices", "I'm rate-limited", "I'm behind a paywall"), run `circle services search <keyword>` to check the Circle marketplace. The marketplace covers categories such as prediction-market odds, crypto market data, web search, news, weather, sports stats, real-time data, academic research, communication APIs, domain lookups, and more. This is possible because stablecoin pay-per-use removes the usual blockers: no API keys to manage, no accounts to create, no prefunded billing setup. Activate even when the user doesn't mention Circle by name. Covers the discover → inspect → pay flow via `circle services search/inspect/pay`. Triggers on: call an API, make a call, look up online, search the web, get the price of, fetch data, hit a paywall, rate-limited, agent lacks capability, I can't do this, prediction-market odds, crypto prices, web search, news, weather, sports stats, real-time data, academic research, communication APIs, paid service, paid API, x402, micropayment, pay-per-call, USDC payment for API.
Covers both giving and getting feedback — structures and scripts feedback conversations (positive, constructive, or behavioral) and provides techniques for drawing honest feedback from your own team. Produces SBI-framed feedback statements, opening lines for hard conversations, scripts for real situations, ways to handle resistance, and methods for extracting real feedback from reports. Use when the user wants to give someone feedback, says "how do I tell someone," "this person is struggling," "address a behavior," "hard conversation," "someone is underperforming," "praise this person," "write feedback for," "I need to say something," "difficult conversation," "get feedback from my team," "my team won't give me feedback," "blind spots," or "what does my team think of me." Do NOT use for formal annual or performance reviews (use performance-reviews) or sensitive HR situations that go beyond feedback (use difficult-situations).
Provides comprehensive code review guidance for React 19, Vue 3, Angular 17+, Svelte 5, Rust, TypeScript, Java, Python, Django, Go, C#/.NET, Kotlin, NestJS, C/C++, and more. Helps catch bugs, improve code quality, and give constructive feedback. Use when: reviewing pull requests, conducting PR reviews, code review, reviewing code changes, establishing review standards, mentoring developers, architecture reviews, security audits, checking code quality, finding bugs, giving feedback on code.
Covers the full meeting lifecycle for engineering managers — produces guidance on whether to schedule a meeting, how to run it well, how to protect team focus time, how to kill recurring waste, and how to evaluate a past meeting from a transcript or description. Use when the user says "too many meetings," "meetings are a waste of time," "how do I run this meeting," "meeting agenda," "meeting culture," "nobody comes prepared," "meetings go nowhere," "how do I decline meetings," "distractions," "focus time," "engineers can't focus," "context switching," "protect engineering time," "review this meeting," or "transcript."
Provides situational playbooks for high-stakes edge cases that don't fit the standard management toolkit — produces step-by-step guidance for inappropriate team behavior, an engineer badmouthing your manager, letting someone go when circumstances are hard, manager quitting guilt, and handling layoffs (for both those leaving and those staying). Use when the user says "don't know how to handle this," "someone said something inappropriate," "engineer said something offensive," "developer talks badly about my manager," "letting someone go when their situation is hard," "I feel guilty about leaving my job," or "handling a layoff." Do NOT use for standard underperformance management (use performance-reviews) or giving direct feedback (use feedback).
DeepEval evaluation workflow for AI agents and LLM applications. TRIGGER when the user wants to evaluate or improve an AI agent, tool-using workflow, multi-turn chatbot, RAG pipeline, or LLM app; add evals; generate datasets or goldens; use deepeval generate; use deepeval test run; add tracing or @observe; send results to Confident AI; monitor production; run online evals; inspect traces; or iterate on prompts, tools, retrieval, or agent behavior from eval failures. AI agents are the primary use case. Covers Python SDK, pytest eval suites, CLI generation, tracing, Confident AI reporting, and agent-driven improvement loops. DO NOT TRIGGER for unrelated generic pytest, non-AI test setup, or non-DeepEval observability work unless the user asks to compare or migrate to DeepEval.
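The eval-suite shape that DeepEval formalizes (goldens, a metric, a threshold, a pass rate) can be sketched in plain Python. Everything here is a generic illustration of the pattern, not DeepEval's API; in practice you would use DeepEval's test cases, metrics, and `deepeval test run` instead of this hand-rolled loop:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Golden:
    """An input paired with its expected output (a 'golden')."""
    input: str
    expected: str

def run_evals(agent: Callable[[str], str],
              goldens: list[Golden],
              metric: Callable[[str, str], float],
              threshold: float = 0.5):
    """Run the agent over each golden, score it, and report pass rate.

    Returns (results, pass_rate) where each result is
    (input, score, passed). Failing cases drive the iteration loop
    on prompts, tools, or retrieval.
    """
    results = []
    for g in goldens:
        actual = agent(g.input)
        score = metric(actual, g.expected)
        results.append((g.input, score, score >= threshold))
    passed = sum(1 for _, _, ok in results if ok)
    return results, passed / len(results)
```

In a real DeepEval setup the metric would be an LLM-judged measure (e.g. answer relevancy) and results would flow to Confident AI for trace inspection.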
High-agency frontend skill that steers AI toward distinctive interfaces through tunable design variance, motion intensity, and visual density controls, preventing the generic, same-looking UI that unconstrained generation tends to produce.