Found 232 Skills
Learning designer quality check for Canvas LMS courses. Audits module structure, content completeness, publishing state, date consistency, and rubric coverage. Use when asked to "QC a course", "is this course ready", "pre-semester check", or "quality review".
Run a comprehensive multi-perspective code review on current changes. Activates the Review Council (security, quality, documentation, domain review) and runs automated security scanning. Use before creating a pull request or when you want a thorough review of your work.
Use this skill when generating AI-agent-friendly documentation for a git repo or directory, answering questions about a codebase from existing docs, or incrementally updating documentation after code changes. Triggers on codedocs:generate, codedocs:ask, codedocs:update, "document this codebase", "generate docs for this repo", "what does this project do", "update the docs after my changes", or any task requiring structured codebase documentation that serves AI agents, developers, and new team members.
AI agent with retrieval tool for document Q&A using RAG and LangGraph.
Expert guidance for Playwright end-to-end testing with TypeScript and JavaScript best practices
Create and deliver effective technical presentations, demos, and talks. Provides frameworks for structuring content, designing slides, and handling live demos.
Agent skill for the tester role; invoke with $agent-tester.
Document Q&A with RAG using Supabase pgvector store.
Help users create custom batch image generation Skills through interactive Q&A. Users don't need to write code; they can generate fully functional image generation Skills just by answering questions. Triggered when users say "Help me create an image generation Skill", "I want to make an image matching Skill", "Create a batch image generation Skill", "How to make an image generation Skill", or "Help me make an AI image generation Skill". Supports any image scenario, such as article illustrations, logo design, storyboards, social media images, posters, etc.
Use when analyzing PR review comments to determine relevance, identify ambiguities, and generate a comprehensive report without making code changes. Useful for understanding feedback landscape and initiating collaborative Q&A discussions about unclear or potentially outdated comments.
Use when validating automation builds before launch or after significant changes.
Enables grounded question answering by automatically executing the Google Search tool within Gemini models. Use when the required information is recent (after the model's knowledge cutoff) or requires verifiable citations.