Systematic 4-phase debugging: understand the failure, form hypotheses, test one change at a time, fix with confidence. Activate when tests fail unexpectedly, errors occur, behavior is wrong, or something that worked before is now broken. Triggers on: "debug", "why is this failing", "test failure", "unexpected error", "bug", "broken".
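To make the four phases concrete, here is a minimal Python sketch; the failing parser and its test are hypothetical, invented for illustration rather than taken from the skill itself:

```python
# Hypothetical failing function used to walk through the four phases.
def parse_price(raw: str) -> float:
    # Phase 1 (understand): the test below failed on "1,299.00"
    #   with ValueError: could not convert string to float.
    # Phase 2 (hypothesize): the thousands separator is not handled.
    # Phase 3 (test one change): strip commas only, then rerun the test.
    return float(raw.replace(",", ""))

def test_parse_price_with_thousands_separator():
    # Phase 4 (fix with confidence): the test passes and now guards
    #   against regression.
    assert parse_price("1,299.00") == 1299.00
```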
Conduct rigorous, adversarial code reviews with zero tolerance for mediocrity. Use when users ask to critically review their code or a PR, with prompts like "critique my code", "find issues in my code", or "what's wrong with this code". Identifies security holes, lazy patterns, edge case failures, and bad practices across Python, R, JavaScript/TypeScript, SQL, and front-end code. Scrutinizes error handling, type safety, performance, accessibility, and code quality. Provides structured feedback with severity tiers (Blocking, Required, Suggestions) and specific, actionable recommendations.
Loads org- and repo-level coding rules from Qodo before code tasks begin, ensuring all generation and modification follows team standards. Use before any code generation or modification task when rules are not already loaded. Invoke when user asks to write, edit, refactor, or review code, or when starting implementation planning.
Perform a refactor pass focused on simplicity after recent changes. Use when the user asks for a refactor/cleanup pass, simplification, or dead-code removal and expects build/tests to verify behavior.
Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development.
Use when implementing any code in RLM Phase 3. Enforces strict RED-GREEN-REFACTOR discipline with The Iron Law: no production code without a failing test first.
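As a concrete illustration of the RED-GREEN-REFACTOR cycle this skill enforces, a minimal Python sketch; the `slugify` example is invented for illustration, not taken from RLM Phase 3:

```python
# RED: write the failing test first. It fails because slugify
# does not exist yet, so The Iron Law is satisfied.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# GREEN: the smallest implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# REFACTOR: with the test green, clean up the implementation
# (e.g., collapse repeated whitespace) while the test guards behavior.
```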
Applies a modified Fagan Inspection methodology to systematically resolve persistent bugs and complex issues. Use when multiple previous fix attempts have failed repeatedly, when dealing with intricate system interactions, or when a methodical root cause analysis is needed. Do not use for simple troubleshooting. Triggers after multiple failed debugging attempts on the same complex issue.
Architecture analysis, violation detection, and pattern validation. USE WHEN: reviewing code architecture, identifying violations, verifying patterns, updating technical documentation. Reference: docs/02-architecture/ARCHITECTURE.md
Examples:
<example>
Context: User wants to check if code follows architecture.
user: "Analyze if the payment module follows our architecture"
assistant: "I'll use architecture-analyzer to review against ARCHITECTURE.md."
<commentary>Architectural review is the architecture-analyzer's specialty.</commentary>
</example>
<example>
Context: Need to identify technical debt.
user: "Find architectural violations in the services layer"
assistant: "I'll use architecture-analyzer to scan for violations."
<commentary>Violation detection is the architecture-analyzer's responsibility.</commentary>
</example>
Analyzes Java code against industry best practices and evaluates design principles including SOLID, exception handling, thread safety, and resource management. Reviews naming conventions, Stream API usage, Optional patterns, and general code quality. Use when reviewing Java files, checking code quality, evaluating exception handling, or auditing resource management.
Enforces the discipline of thinking about tests, features, and maintainability BEFORE writing implementation code. Use when starting new classes/methods, refactoring existing code, or when asked to "think about tests first", "design for testability", "what tests do I need", "test-first approach", or "TDD thinking". Promotes simple, maintainable designs by considering testability upfront. Works with any codebase requiring test coverage and quality standards.
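One common way to apply this discipline is to decide up front which dependencies a unit needs injected so its tests stay deterministic; a minimal sketch, with an invented expiry check for illustration:

```python
from datetime import datetime, timedelta
from typing import Callable

# Injecting the clock (rather than calling datetime.now() inside)
# is a design decision made before implementation, for testability.
def is_expired(created: datetime, ttl: timedelta,
               now: Callable[[], datetime] = datetime.now) -> bool:
    return now() - created > ttl

def test_is_expired_uses_injected_clock():
    created = datetime(2024, 1, 1)
    fixed_now = lambda: datetime(2024, 1, 2)  # deterministic "now"
    assert is_expired(created, timedelta(hours=1), now=fixed_now)
```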
Provides reflective questioning framework to challenge assumptions about work completeness, catching incomplete implementations before they're marked "done". Use before claiming features complete, before moving ADRs to completed status, during self-review, or when declaring work finished. Triggers on "is this really done", "self-review my work", "challenge my assumptions", "verify completeness", or proactively before marking tasks complete. Works with any type of implementation work. Enforces critical thinking about integration, testing, and execution proof.
Captures quality metrics baseline (tests, coverage, type errors, linting, dead code) by running quality gates and storing results in memory for regression detection. Use at feature start, before refactor work, or after major changes to establish baseline. Triggers on "capture baseline", "establish baseline", or PROACTIVELY at start of any feature/refactor work. Works with pytest output, pyright errors, ruff warnings, vulture results, and memory MCP server for baseline storage.
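A minimal sketch of what the capture step could look like, assuming the standard CLI entry points of each tool; the output file name, metric keys, and "src/" layout are illustrative, and the actual skill stores results via a memory MCP server rather than a local JSON file:

```python
import json
import subprocess

def run(cmd: list[str]) -> str:
    # Capture stdout from a quality tool without failing on nonzero exit.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

baseline = {
    "tests": run(["pytest", "-q", "--tb=no"]).splitlines()[-1:],  # summary line
    "types": run(["pyright", "--outputjson"]),  # machine-readable diagnostics
    "lint": run(["ruff", "check", "."]),
    "dead_code": run(["vulture", "src/"]),  # "src/" is an assumed layout
}

with open("quality_baseline.json", "w") as f:  # illustrative storage target
    json.dump(baseline, f, indent=2)
```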