This skill should be used when generating comprehensive test cases from PRD documents or user requirements. Triggers when users request test case generation, QA planning, test scenario creation, or need structured test documentation. Produces detailed test cases covering functional, edge case, error handling, and state transition scenarios.
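For illustration, a minimal Python sketch of what one piece of "structured test documentation" covering those four scenario types could look like. The schema and field names are assumptions for this example only, not the skill's actual output format:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    FUNCTIONAL = "functional"
    EDGE_CASE = "edge case"
    ERROR_HANDLING = "error handling"
    STATE_TRANSITION = "state transition"

@dataclass
class TestCase:
    id: str
    title: str
    category: Category
    preconditions: list[str]
    steps: list[str]
    expected: str

# Example derived from a hypothetical "user login" PRD requirement:
tc = TestCase(
    id="TC-001",
    title="Login rejects an expired session token",
    category=Category.ERROR_HANDLING,
    preconditions=["User holds a session token past its TTL"],
    steps=["Submit the login form with the expired token attached"],
    expected="Request is rejected and the user is prompted to re-authenticate",
)
```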
Compares a single test case's behavior across two branches, analyzing pass/fail status, duration, flakiness, and failure details. Useful for investigating test regressions introduced by a feature branch.
Use this skill when you need to create high-quality test cases with normal, exception, and boundary scenarios; triggers include test case writing and test design.
Use this skill when you need to review test cases for completeness, clarity, maintainability, and missing scenarios; triggers include 'test case review'.
Design test cases based on requirements. Use when users need test case design, testing strategy, or QA planning. Triggers on keywords like "test cases", "test design", "unit test", "integration test", "e2e test".
NeuroForge QA is a QA/UX review system grounded in the 30 Laws of UX and QA engineering standards. Works with ANY framework, language, or software — React, Vue, iOS, Android, APIs, wireframes, or plain descriptions. On activation it scans the project and creates (or reads existing) files in a /neuroforge/ folder: project analysis, UX audit, risk register, accessibility audit, and test cases in /neuroforge/test-cases/. Treats these files as single source of truth, updating incrementally. Trigger on: "review my UI", "audit this design", "write test cases", "check my UX", "QA this flow", "critique my wireframe", "write tests for", "find bugs in", any screenshot shared for feedback, or any request for QA or UX analysis of a product, screen, flow, or codebase. When in doubt, trigger.
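As a rough illustration of the file layout this description implies, a Python sketch that scaffolds the /neuroforge/ folder. The individual file names are assumptions; the description names only the artifact types and the /neuroforge/test-cases/ subfolder:

```python
from pathlib import Path

# Hypothetical file names for the artifacts the skill describes
# (project analysis, UX audit, risk register, accessibility audit).
NEUROFORGE_ENTRIES = [
    "neuroforge/project-analysis.md",
    "neuroforge/ux-audit.md",
    "neuroforge/risk-register.md",
    "neuroforge/accessibility-audit.md",
    "neuroforge/test-cases/",  # one file per reviewed screen or flow
]

def scaffold(root: Path = Path(".")) -> None:
    """Create the /neuroforge/ structure if it does not already exist."""
    for entry in NEUROFORGE_ENTRIES:
        path = root / entry
        if entry.endswith("/"):
            path.mkdir(parents=True, exist_ok=True)
        else:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch(exist_ok=True)
```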
Fixes flaky tests by analyzing failure patterns from Tuist test insights, identifying root causes, and applying targeted corrections. Can be invoked with a specific test case URL (e.g. `https://tuist.dev/{account}/{project}/tests/test-cases/{id}`) or without arguments to discover and fix all flaky tests in the project.
Fixes a specific flaky test by analyzing its failure patterns from Tuist, identifying the root cause, and applying a targeted correction. Typically invoked with a Tuist test case URL of the form `https://tuist.dev/{account}/{project}/tests/test-cases/{id}`.
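The URL format is the only interface detail these two descriptions give. As a hedged sketch, a small Python helper (its name and return shape are assumptions, not part of either skill) that extracts the account, project, and test-case id from such a URL:

```python
from urllib.parse import urlparse

def parse_tuist_test_case_url(url: str) -> dict[str, str]:
    """Split a URL of the form
    https://tuist.dev/{account}/{project}/tests/test-cases/{id}
    into its account, project, and test-case id components."""
    parts = urlparse(url).path.strip("/").split("/")
    # Expected path segments: account / project / "tests" / "test-cases" / id
    if len(parts) != 5 or parts[2:4] != ["tests", "test-cases"]:
        raise ValueError(f"not a Tuist test-case URL: {url}")
    return {"account": parts[0], "project": parts[1], "id": parts[4]}

# parse_tuist_test_case_url("https://tuist.dev/acme/app/tests/test-cases/42")
# -> {"account": "acme", "project": "app", "id": "42"}
```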
Derive security requirements from threat models and business context. Use when translating threats into actionable requirements, creating security user stories, or building security test cases.
Eval enablement accelerator: helps customers think through "what does good look like" for their AI agent, then generates a structured eval plan and test cases they can use immediately. No running agent required. Works from a description, an idea, or even a vague goal. Use when anyone mentions agent evaluation, eval planning, "what should we test", "how do we know if the agent is good", test case generation, or interpreting eval results.
Generates eval test cases from an eval suite plan (output of /eval-suite-planner) or a plain-English agent description. Supports both single-response and conversation (multi-turn) evaluation modes. Outputs a Copilot Studio test set table, a CSV file for import (single-response only), and a docx report for human review.
Test Case Generator - Applies the theories of Equivalence Partitioning and Boundary Value Analysis to generate high-quality test cases in batches, grouped by Test Point (POINT), and outputs them in Markdown format. Use when users run the /testcase-gen command or need to generate test cases.
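To make the two theories concrete, a minimal Python sketch of how equivalence partitions and boundary values might be derived for a numeric test point. This is a textbook illustration under assumed inputs, not the skill's actual algorithm, which the description does not specify:

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary value analysis for a closed integer range [lo, hi]:
    probe just outside, on, and just inside each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo: int, hi: int) -> dict[str, range]:
    """One representative partition below, inside, and above the valid range;
    any single value from a partition stands in for the whole class."""
    return {
        "invalid_low": range(lo - 10, lo),
        "valid": range(lo, hi + 1),
        "invalid_high": range(hi + 1, hi + 11),
    }

# For a test point "age must be 18-65":
# boundary_values(18, 65) -> [17, 18, 19, 64, 65, 66]
```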