Total 44,222 skills; the AI & Machine Learning category has 7,033 skills
Generate hierarchical AGENTS.md structures for codebases. Use when user asks to create AGENTS.md files, analyze codebase for AI agent documentation, set up AI-friendly project documentation, or generate context files for AI coding assistants. Triggers on "create AGENTS.md", "generate agents", "analyze codebase for AI", "AI documentation setup", "hierarchical agents".
Deep codebase initialization with hierarchical AGENTS.md documentation
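Generating a hierarchical set of AGENTS.md files boils down to deciding which directories need one. A minimal sketch of that planning step, assuming repo-relative paths and a hypothetical helper name (the skill's real analysis is far richer than extension matching):

```python
from pathlib import Path

def plan_agents_dirs(files: list[str],
                     exts: frozenset[str] = frozenset({".py", ".ts"})) -> list[str]:
    """Given repo-relative file paths, return the directories
    (shallowest first) that should each receive an AGENTS.md,
    i.e. those directly containing source files."""
    dirs = {str(Path(f).parent) for f in files if Path(f).suffix in exts}
    return sorted(dirs, key=lambda d: (len(Path(d).parts), d))
```

Emitting the files root-first lets each child AGENTS.md link back to its parent, which is what makes the structure hierarchical rather than a flat set of notes.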
Enables Claude to conduct comprehensive research using Gemini Deep Research for in-depth analysis and reports
This skill should be used for structured feature development with codebase understanding. Triggers on /do command. Provides a 5-phase workflow (Understand, Clarify, Design, Implement, Complete) using codeagent-wrapper to orchestrate code-explorer, code-architect, code-reviewer, and develop agents in parallel.
CASS Memory System - procedural memory for AI coding agents. A three-layer cognitive architecture with confidence decay, anti-pattern learning, cross-agent knowledge transfer, and a trauma-guard safety system. Bun/TypeScript CLI.
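"Confidence decay" here means a stored memory loses weight unless it is reinforced. One common way to model that is exponential decay with a half-life; this is a hypothetical sketch, not the actual CASS decay function:

```python
def decayed_confidence(initial: float, days_since_reinforced: float,
                       half_life_days: float = 30.0) -> float:
    """Exponential decay: confidence halves every `half_life_days`
    unless the memory is reinforced (illustrative model only)."""
    return initial * 0.5 ** (days_since_reinforced / half_life_days)
```

Under this model a memory at confidence 1.0, untouched for one half-life, drops to 0.5; reinforcing it resets the clock.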
Create validated LLM-as-a-Judge evaluators following best practices — binary Pass/Fail judges with TPR/TNR validation for measuring specific failure modes. Use when you need to automate quality checks, build guardrails, or measure a specific failure mode identified during trace analysis. Do NOT use when failures are fixable with prompt changes (use optimize-prompt) or when failure modes are unknown (use analyze-trace-failures first).
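Validating a binary Pass/Fail judge against human labels reduces to computing the true-positive and true-negative rates. A minimal sketch with assumed data shapes (the skill's actual validation harness is not shown here):

```python
def tpr_tnr(judge: list[bool], human: list[bool]) -> tuple[float, float]:
    """True-positive rate (sensitivity) and true-negative rate
    (specificity) of judge verdicts against human ground truth.
    True = the failure mode is present."""
    tp = sum(j and h for j, h in zip(judge, human))
    tn = sum((not j) and (not h) for j, h in zip(judge, human))
    pos = sum(human)
    neg = len(human) - pos
    return tp / pos, tn / neg
```

A judge is only trustworthy as a guardrail when both rates are high; a high TPR with a low TNR means it flags failures but also rejects good outputs.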
Set up orq.ai observability for LLM applications. Use when setting up tracing, adding the AI Router proxy, integrating OpenTelemetry, auditing existing instrumentation, or enriching traces with metadata.
Analyze and optimize system prompts using a structured prompting guidelines framework — AI-powered analysis and rewriting. Use when a prompt needs improvement, experiment results show quality gaps, or you want a structured review of an existing system prompt. Do NOT use when production traces show failures (use analyze-trace-failures first to identify patterns). Do NOT use to build evaluators (use build-evaluator).
Generate and curate evaluation datasets — structured generation via dimensions-tuples-NL, quick from description, expansion from existing data, plus dataset maintenance through deduplication, rebalancing, and gap-filling. Use when creating eval data, expanding test coverage, or cleaning datasets. Do NOT use when sufficient real production data exists (use analyze-trace-failures instead). Do NOT use for evaluator creation (use build-evaluator).
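Of the maintenance steps listed, deduplication is the most mechanical: normalize each example and keep the first occurrence. A minimal exact-match sketch (real pipelines often use embedding similarity instead, and the function name is illustrative):

```python
def dedupe(examples: list[str]) -> list[str]:
    """Keep the first occurrence of each example after
    lowercasing and collapsing whitespace (exact-match dedup)."""
    seen: set[str] = set()
    out: list[str] = []
    for ex in examples:
        key = " ".join(ex.lower().split())
        if key not in seen:
            seen.add(key)
            out.append(ex)
    return out
```

Keeping the original (un-normalized) text in the output preserves the dataset as written while still collapsing near-identical entries.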
Append small pieces of project knowledge that are "too short to warrant a separate file but need to be known by the AI every time" to fixed sections of AGENTS.md / CLAUDE.md, such as special compilation flags, services that must be started before running, path pitfalls, command aliases, and environment-variable conventions. Triggers: when the user says "make a note", "add to AGENTS", "save to CLAUDE.md", "the project requires X to compile", or "must do Y every time from now on", or when a project-specific setting has just come up that can be explained in one sentence.
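Appending a note to a fixed section of AGENTS.md might look like the following; the section heading and bullet format are assumptions for illustration, not the skill's actual file layout:

```python
def append_note(text: str, note: str, section: str = "## Project Notes") -> str:
    """Insert `note` as a bullet at the end of `section` in a
    markdown document, creating the section if it is missing."""
    lines = text.splitlines()
    if section not in lines:
        return text.rstrip("\n") + f"\n\n{section}\n- {note}\n"
    idx = lines.index(section)
    end = idx + 1
    # advance to the next heading or end of file
    while end < len(lines) and not lines[end].startswith("#"):
        end += 1
    # back up over trailing blank lines so the bullet joins the section
    while end > idx + 1 and lines[end - 1] == "":
        end -= 1
    lines.insert(end, f"- {note}")
    return "\n".join(lines) + "\n"
```

Writing into one fixed section keeps these one-liners from scattering across the file, so the agent reads them all in a single place every session.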
Generates blog post thumbnail images for Orbitant following the brand's visual identity, using Google's Imagen API (Nano Banana 2). Activates when creating blog images, generating thumbnails, designing featured images for articles, or when someone needs a visual for an Orbitant insight/blog post. Use this skill even if the user just says "I need an image for this article", "create a thumbnail", "generate a hero image", or "make a featured image". Also triggers when the user mentions "Nano Banana 2", "image generation", or asks for a prompt for an AI image tool.
Interview the user and inspect coding-agent skill trigger counts to recommend unused K-skills for removal.
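The trigger-count half of that recommendation reduces to filtering skills at or below a usage threshold. A sketch with assumed data shapes (the interview half, of course, has no code):

```python
def unused_skills(trigger_counts: dict[str, int], threshold: int = 0) -> list[str]:
    """Return skill names whose trigger count is at or below
    `threshold`, least-used first (illustrative helper)."""
    return sorted(
        (name for name, n in trigger_counts.items() if n <= threshold),
        key=lambda name: trigger_counts[name],
    )
```

The interview step then confirms with the user which of these candidates are genuinely dead weight before anything is removed.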