Found 330 Skills
Qt Model/View architecture — QAbstractItemModel, table/list/tree views, item delegates, and proxy models. Use when displaying tabular data, building a list with custom items, implementing a tree, creating a sortable/filterable table, or writing a custom item delegate. Trigger phrases: "QAbstractItemModel", "table view", "list model", "QTableView", "QListView", "tree view", "item delegate", "sort table", "filter model", "QSortFilterProxyModel", "custom model", "model data"
Team orchestrator skill for refactoring existing PeachSolution modules to the test-data pattern. Triggered by the keywords "팀 리팩토링" (team refactoring), "레거시 코드 정리" (legacy code cleanup), and "test-data 패턴으로 변환" (convert to the test-data pattern). Supports layer=backend|frontend|all; uses independent QA to prevent confirmation bias.
This skill should be used when a developer or QA engineer wants to report a bug, create a bug ticket, document a test failure, log a defect, file an issue found during a QA session, or report something that is broken — for example "report a bug", "create a bug ticket", "I found a defect", "something is broken in task
Solidroad platform help — AI-powered QA and training for CX teams. Use when reps are ramping too slowly and need AI practice simulations, when QA covers only 2% of conversations and you want 100% automated scoring, when training and QA are disconnected and insights don't turn into coaching, when setting up Solidroad scorecards or custom quality rubrics, when connecting Solidroad to Salesforce Service Cloud, Zendesk, or Intercom, or when evaluating Solidroad vs Observe.AI vs Balto vs Cresta for contact center QA. Do NOT use for general coaching strategy without a specific platform (use /sales-coaching).
CallMiner platform help — enterprise conversation analytics (Eureka) with omnichannel interaction capture, automated QA scoring, agent coaching, real-time alerts, compliance monitoring, and CX automation. Use when QA scoring is inconsistent or takes too long across agents, when needing to analyze 100% of customer interactions instead of sampling, when setting up automated compliance monitoring for regulated industries (healthcare, finance, collections), when CallMiner Coach scorecards aren't surfacing the right coaching moments, when CallMiner RealTime alerts aren't triggering during live calls, when ingesting audio or text into CallMiner via the Ingestion API, when CallMiner Analyze categories aren't matching expected interactions, or when evaluating CallMiner vs Observe.AI or NICE CXone analytics. Do NOT use for CCaaS platform selection (use /sales-ccaas-selection) or for sales-specific coaching strategy (use /sales-coaching).
Convin platform help — AI-powered contact center QA, coaching, and conversation intelligence. Use when setting up Convin automated QA scoring, Convin Real-Time Assist not surfacing prompts, Convin transcription missing speakers or inaccurate with accents, Convin audits hanging or calls delayed on dashboard, Convin AI Phone Call agent for outbound, Convin LMS agent training, or evaluating Convin vs Observe.AI vs Cresta vs Balto vs Enthu.AI for contact center QA. Do NOT use for CCaaS platform selection (use /sales-ccaas-selection) or building a coaching program (use /sales-coaching).
Observe.AI platform help — enterprise contact center intelligence with Auto QA scoring on 100% of interactions, Agent Copilot real-time guidance, Coaching Copilot post-call performance management, VoiceAI and ChatAI virtual agents, screen recording, Insights Copilot. Use when setting up Observe.AI Auto QA scorecards for contact center agents, Agent Copilot not surfacing guidance during live calls, transcription accuracy issues or speaker attribution errors, comparing Observe.AI vs Balto or Cresta or CallMiner for contact center QA, integrating Observe.AI with Five9 or Amazon Connect or Talkdesk, or configuring compliance monitoring and regulatory audit trails. Do NOT use for building a general coaching program (use /sales-coaching) or reviewing a specific call transcript (use /sales-call-review).
Documents edge cases, error states, boundary conditions, and recovery paths for a feature. Use during specification to ensure comprehensive coverage, or during QA planning to identify test scenarios.
Generates structured Given/When/Then acceptance criteria for a user story or feature slice. Use when translating product requirements into testable scenarios that cover the happy path, edge cases, error states, and non-functional expectations for engineering handoff and QA.
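The Given/When/Then shape that skill produces can be illustrated with a small, hypothetical helper (not part of any named tool) that renders one scenario:

```python
# Illustrative sketch of the Given/When/Then scenario structure.
# The Scenario class and its fields are assumptions for this example,
# not the skill's actual output format.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    given: list[str]
    when: str
    then: list[str] = field(default_factory=list)

    def render(self) -> str:
        # Emit one Gherkin-style scenario block.
        lines = [f"Scenario: {self.name}"]
        lines += [f"  Given {g}" for g in self.given]
        lines.append(f"  When {self.when}")
        lines += [f"  Then {t}" for t in self.then]
        return "\n".join(lines)

happy_path = Scenario(
    name="Successful login",
    given=["a registered user", "the login page is open"],
    when="the user submits valid credentials",
    then=["the dashboard is shown", "a session cookie is set"],
)
print(happy_path.render())
```

Edge cases, error states, and non-functional expectations would each become additional `Scenario` instances in the same handoff document.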
Quality review of test files and manual evidence documents. Goes beyond existence checks — evaluates assertion coverage, edge case handling, naming conventions, and evidence completeness. Produces ADEQUATE/INCOMPLETE/MISSING verdict per story. Run before QA sign-off or on demand.
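One piece of the assertion-coverage check described above can be sketched in pure Python (an assumed approach, not the skill's actual implementation): walk a test file's AST and count `assert` statements per test function, flagging tests that assert nothing.

```python
# Sketch: count assert statements per test_* function to flag assertion-free tests.
# This is an illustrative approximation of "assertion coverage", not the
# skill's real analysis.
import ast

def assertion_counts(source: str) -> dict[str, int]:
    tree = ast.parse(source)
    counts: dict[str, int] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            counts[node.name] = sum(
                isinstance(inner, ast.Assert) for inner in ast.walk(node)
            )
    return counts

sample = """
def test_ok():
    assert 1 + 1 == 2

def test_no_assertions():
    value = compute()
"""
print(assertion_counts(sample))  # {'test_ok': 1, 'test_no_assertions': 0}
```

A function with a count of zero would contribute to an INCOMPLETE verdict; a missing test file would yield MISSING.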
Screenshot-obsessed, fantasy-allergic QA specialist — defaults to finding 3-5 issues per review and requires visual proof for every claim.