Found 1,008 Skills
Cross-model second opinion from Google Gemini — a different AI reviewing the same changes, with deep Google ecosystem knowledge. Three modes: review (pass/fail gate for Google Ads campaigns, SEO metadata, or code), challenge (adversarial stress-test that tries to break your changes), and consult (open Q&A with Gemini on Google Ads strategy, SEO best practices, or implementation questions). Use when the user says "gemini review", "ask gemini", "gemini challenge", "second opinion from gemini", "consult gemini", "stress test with gemini", "what would gemini say", "cross-model review", or "get another opinion". Voice aliases: "gem", "gemini check". Especially useful for Google Ads changes, SEO metadata updates, campaign structure decisions, keyword strategies, and bid/budget changes — Gemini has native Google ecosystem knowledge that complements Claude's analysis.
Adversarial code review with anti-trust verification: verifies every implementer claim against the actual diff. Use when reviewing PRs, branches, or recent commits.
Portable Zod schema design and validation guidance. Default to `zod/mini` for new work and preserve established classic `zod` surfaces. Use when Codex needs to create, extend, refactor, or review Zod schemas; choose strict or loose object contracts; model nullability, unions, intersections, recursion, or runtime-validated values; or debug surprising Zod behavior and serialization boundaries.
Use this skill when writing or reviewing framework-agnostic TypeScript and you need strict typing, tsconfig/lint decisions, safer refactors, or guidance on generics, unions, and typed boundaries.
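A minimal sketch of the kind of typed boundary this skill targets: a discriminated union narrowed by its tag, with a `never`-based exhaustiveness check so adding a variant becomes a compile error. The `Shape` type and `area` function are hypothetical examples, not part of the skill.

```typescript
// Discriminated union: the compiler narrows `shape` by its `kind` tag.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius ** 2;
    case "rect":
      return shape.width * shape.height;
    default: {
      // Exhaustiveness check: this assignment only type-checks if
      // every variant of Shape is handled above.
      const unreachable: never = shape;
      return unreachable;
    }
  }
}
```

Under `strict` tsconfig settings, forgetting a variant in the `switch` surfaces here as a type error rather than a runtime bug.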
Proceed with implementation based on the TODO list in Spec.md. Execute review→check→commit after each task is completed.
Use when the user asks to leverage Claude or Claude Code for a task (e.g., implementing a feature design or reviewing code). Provides a non-interactive automation mode for hands-off task execution without approval prompts.
Request/execute structured code review: use after completing important tasks, at the end of each execution batch, or before merge. Based on a git diff range, compares the changes against the plan and requirements, outputs an issue list ranked by Critical/Important/Minor severity, and gives a clear verdict on merge readiness. Trigger words: request code review, PR review, merge readiness, production readiness.
Consult an advisory council of three AI personas — Cato (skeptic), Ada (optimist), Marcus (pragmatist) — backed by different frontier LLM agents (Gemini, Claude, Codex). Each persona runs as a separate agent process with full repo context and returns independent feedback. Use when the user says "/council", asks for a second opinion, wants feedback on code changes, needs a premortem, wants to pressure-test a decision, or asks "what do you think about this approach?" Claude may also proactively suggest consulting the council before major architectural decisions, risky deploys, or ambiguous trade-offs (but should ask for user approval first).
Perform a code review with linting, standards checking, and priority-ranked findings.
Detect common code smells and anti-patterns, providing feedback on quality issues a senior developer would catch during review. Use when the user opens/views code files, asks for code review or quality assessment, mentions code quality/refactoring/improvements, when files contain code smell patterns, or during code review discussions.
Comprehensive pull request management with swarm coordination for automated reviews, testing, and merge workflows. Use for PR lifecycle management, multi-reviewer coordination, conflict resolution, and intelligent branch management.
Applies Google's engineering practices for strict, health-focused code reviews.