Found 28 Skills
Persistent browser interaction through a plain Node.js Playwright script, for fast iterative web UI debugging.
Creates, updates, and fixes Cypress tests (end-to-end/E2E and component tests). Use when the user asks to create tests, add tests, write tests, update tests, test this file/component, write a new spec, or fix a failing or flaky test. Apply even when the user does not say 'Cypress' (e.g. 'create tests for this file'). Prefer cypress-explain when the user only wants tests explained or reviewed without changing code.
Run, watch, debug, and extend OpenClaw QA testing with qa-lab and qa-channel. Use when Codex needs to execute the repo-backed QA suite, inspect live QA artifacts, debug failing scenarios, add new QA scenarios, or explain the OpenClaw QA workflow. Prefer the live OpenAI lane with regular openai/gpt-5.4 in fast mode; do not use gpt-5.4-pro or gpt-5.4-mini unless the user explicitly overrides that policy.
Generate end-to-end automated tests for existing features. Use when the user says "create qa automated tests for [feature]".
Generate test and suite specifications in the strict FinalRun YAML format. Handles automated test planning, folder grouping by feature, repo app configuration, environment-specific overrides in .finalrun/env/*.yaml, and validation via finalrun check.
Solidroad platform help — AI-powered QA and training for CX teams. Use when reps are ramping too slowly and need AI practice simulations, when QA only covers 2% of conversations and you want 100% automated scoring, when training and QA are disconnected and insights don't turn into coaching, when setting up Solidroad scorecards or custom quality rubrics, when connecting Solidroad to Salesforce Service Cloud or Zendesk or Intercom, or when evaluating Solidroad vs Observe.AI vs Balto vs Cresta for contact center QA. Do NOT use for general coaching strategy without a specific platform (use /sales-coaching).
CallMiner platform help — enterprise conversation analytics (Eureka) with omnichannel interaction capture, automated QA scoring, agent coaching, real-time alerts, compliance monitoring, and CX automation. Use when QA scoring is inconsistent or takes too long across agents, when needing to analyze 100% of customer interactions instead of sampling, when setting up automated compliance monitoring for regulated industries (healthcare, finance, collections), when CallMiner Coach scorecards aren't surfacing the right coaching moments, when CallMiner RealTime alerts aren't triggering during live calls, when ingesting audio or text into CallMiner via the Ingestion API, when CallMiner Analyze categories aren't matching expected interactions, or when evaluating CallMiner vs Observe.AI or NICE CXone analytics. Do NOT use for CCaaS platform selection (use /sales-ccaas-selection) or for sales-specific coaching strategy (use /sales-coaching).
Convin platform help — AI-powered contact center QA, coaching, and conversation intelligence. Use when setting up Convin automated QA scoring, Convin Real-Time Assist not surfacing prompts, Convin transcription missing speakers or inaccurate with accents, Convin audits hanging or calls delayed on dashboard, Convin AI Phone Call agent for outbound, Convin LMS agent training, or evaluating Convin vs Observe.AI vs Cresta vs Balto vs Enthu.AI for contact center QA. Do NOT use for CCaaS platform selection (use /sales-ccaas-selection) or building a coaching program (use /sales-coaching).
Senior QA Automation Engineer with 10+ years of E2E testing experience. Use when writing end-to-end tests for web apps with Playwright or mobile apps with Detox, testing critical user flows, cross-browser testing, or visual regression testing.
Guides QA engineers through daily testing activities — morning review, test case creation, automation, exploratory testing, bug reporting, and end-of-day wrap-up. Use when planning or executing day-to-day testing, or when the user asks about the daily testing workflow.
Run adversarial browser tests against code changes. Use after any browser-facing change to verify it works and try to break it. Prefer this over raw browser tools (Playwright MCP, chrome tools).