Derive security requirements from threat models and business context. Use when translating threats into actionable requirements, creating security user stories, or building security test cases.
Eval enablement accelerator: helps customers think through "what does good look like" for their AI agent, then generates a structured eval plan and test cases they can use immediately. No running agent required. Works from a description, an idea, or even a vague goal. Use when anyone mentions agent evaluation, eval planning, "what should we test", "how do we know if the agent is good", test case generation, or interpreting eval results.
Generates eval test cases from an eval suite plan (output of /eval-suite-planner) or a plain-English agent description. Supports both single-response and conversation (multi-turn) evaluation modes. Outputs a Copilot Studio test set table, a CSV file for import (single-response only), and a docx report for human review.
Explore websites for testing purposes using Playwright MCP.
Add a test case to the web renderer
Test quality inspection framework for reviewing test coverage, identifying gaps, and ensuring comprehensive validation
Use when establishing a comprehensive QA testing process for any software project: creating test strategies, writing test cases following Google Testing Standards, executing test plans, tracking bugs with P0-P4 classification, calculating quality metrics, or generating progress reports. Includes autonomous execution capability via master prompts and complete documentation templates for third-party QA team handoffs. Implements OWASP security testing and targets 90% test coverage.
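As a rough illustration of the quality-metric side, a TypeScript sketch; the priority labels and the 90% threshold mirror the description above, while the result shape and function names are assumptions, not the skill's actual data model:

```typescript
// Illustrative only: the TestResult shape is an assumption.
interface TestResult {
  passed: boolean;
  priority: 'P0' | 'P1' | 'P2' | 'P3' | 'P4';
}

// Pass rate as a percentage of executed test cases.
function passRate(results: TestResult[]): number {
  if (results.length === 0) return 0;
  return (results.filter((r) => r.passed).length / results.length) * 100;
}

// Coverage check against the 90% target named in the description.
function meetsCoverageTarget(covered: number, total: number): boolean {
  return total > 0 && (covered / total) * 100 >= 90;
}
```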
Design comprehensive test cases using PICT (Pairwise Independent Combinatorial Testing) for any piece of requirements or code. Analyzes inputs, generates PICT models with parameters, values, and constraints for valid scenarios using pairwise testing. Outputs the PICT model, markdown table of test cases, and expected results.
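For illustration, a minimal sketch of the kind of PICT model the skill generates; the parameters, values, and constraint are hypothetical:

```
# Hypothetical parameters and values.
OS:      Windows, Linux, macOS
Browser: Chrome, Firefox, Safari
Network: Online, Offline

# Constraint: exclude invalid combinations (Safari only ships on macOS).
IF [Browser] = "Safari" THEN [OS] = "macOS";
```

Running `pict model.txt` emits a tab-separated table of pairwise test cases, which maps directly onto the markdown table and expected results the skill outputs.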
Find branch coverage gaps in changed code and fix them by writing missing tests. Two analysis layers: Source ↔ Test (logic branches vs test cases) and Spec ↔ Test (requirement scenarios vs test cases, when a spec file is provided). Use when verifying test completeness after implementing a feature or fixing a bug, when auditing whether tests match a spec, or when suspecting untested branches.
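A small TypeScript sketch of the gap this skill closes, using Node's built-in test runner (an assumption; any runner works). The function and its untested branch are hypothetical:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical source under test: the `amount <= 0` guard is the kind
// of logic branch the skill flags when no test case exercises it.
function applyDiscount(amount: number, rate: number): number {
  if (amount <= 0) throw new RangeError('amount must be positive');
  return amount * (1 - rate);
}

// The missing test the skill would add to cover the guard branch.
test('rejects non-positive amounts', () => {
  assert.throws(() => applyDiscount(0, 0.1), RangeError);
});

test('applies the discount on the happy path', () => {
  assert.equal(applyDiscount(100, 0.1), 90);
});
```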
NeuroForge QA is a QA/UX review system grounded in the 30 Laws of UX and QA engineering standards. Works with ANY framework, language, or software: React, Vue, iOS, Android, APIs, wireframes, or plain descriptions. On activation it scans the project and creates (or reads existing) files in a /neuroforge/ folder: project analysis, UX audit, risk register, accessibility audit, and test cases in /neuroforge/test-cases/. Treats these files as the single source of truth, updating them incrementally. Trigger on: "review my UI", "audit this design", "write test cases", "check my UX", "QA this flow", "critique my wireframe", "write tests for", "find bugs in", any screenshot shared for feedback, or any request for QA or UX analysis of a product, screen, flow, or codebase. When in doubt, trigger.
Use when writing new Playwright E2E tests or adding test cases. Provides testing philosophy, patterns, and best practices from the Playwright Developer Handbook.
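A minimal sketch of the kind of test those patterns produce, assuming the handbook's usual guidance (role-based locators, web-first assertions); the page URL and element names are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical page and locators; the pattern is what matters:
// role-based locators and web-first assertions that auto-wait.
test('user can search the catalog', async ({ page }) => {
  await page.goto('https://example.com');
  await page.getByRole('searchbox', { name: 'Search' }).fill('widgets');
  await page.getByRole('button', { name: 'Search' }).click();
  await expect(page.getByRole('heading', { name: /results/i })).toBeVisible();
});
```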
Creates Robot Framework test cases that create and execute SnapLogic triggered tasks. Use when the user wants to create triggered tasks for on-demand pipeline execution, execute triggered tasks with parameters, or see triggered task test case examples.
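A sketch of what such a test case might look like, assuming RequestsLibrary and a bearer-token-protected trigger URL; the org/project/task path, parameter names, and environment variable are hypothetical:

```robotframework
*** Settings ***
Library    RequestsLibrary

*** Variables ***
${BASE_URL}     https://elastic.snaplogic.com
# Hypothetical org/project/task path.
${TASK_PATH}    /api/1/rest/slsched/feed/MyOrg/MyProject/shared/MyTriggeredTask

*** Test Cases ***
Execute Triggered Task With Parameters
    Create Session    snaplogic    ${BASE_URL}
    ${headers}=    Create Dictionary    Authorization=Bearer %{SNAPLOGIC_BEARER_TOKEN}
    ${params}=     Create Dictionary    region=emea    run_mode=full
    ${resp}=    POST On Session    snaplogic    ${TASK_PATH}
    ...    headers=${headers}    params=${params}
    Status Should Be    200    ${resp}
```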