Found 10 Skills
Multi-agent adversarial verification with a convergence loop. Two independent review agents must both pass before output ships.
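A minimal sketch of the convergence loop this implies; `review_agent_a`, `review_agent_b`, and `revise` are hypothetical callables standing in for the skill's real agents:

```python
# Adversarial verification loop: both independent reviewers must pass
# before the draft ships. Each reviewer returns (passed, feedback).
# All names here are illustrative, not the skill's actual interface.
def converge(draft, review_agent_a, review_agent_b, revise, max_rounds=5):
    for _ in range(max_rounds):
        ok_a, notes_a = review_agent_a(draft)
        ok_b, notes_b = review_agent_b(draft)
        if ok_a and ok_b:
            return draft                          # both reviewers passed: ship it
        draft = revise(draft, notes_a + notes_b)  # fold feedback into a new draft
    raise RuntimeError("reviewers did not converge within max_rounds")
```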
Create a structured format for documenting feature requirements as user stories: JSON files with testable acceptance criteria that AI agents can verify and track.
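One plausible shape for such a file, written from Python; the field names are assumptions for illustration, not the skill's mandated schema:

```python
import json

# Illustrative user-story record with machine-checkable acceptance
# criteria. Field names are hypothetical, not the skill's schema.
story = {
    "id": "US-001",
    "story": "As a user, I can reset my password via email.",
    "acceptance_criteria": [
        {
            "id": "AC-1",
            "given": "a registered email address",
            "when": "the reset form is submitted",
            "then": "a reset link is emailed within 60 seconds",
            "status": "unverified",  # an agent flips this after checking
        }
    ],
}

with open("US-001.json", "w") as f:
    json.dump(story, f, indent=2)
```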
Integrate oh-my-ag with MCP for ulw-style multi-agent workflows. Covers install, setup, bridge mode, and verification steps.
Build and use the verification infrastructure coding agents need to prove their work. Use when a repo has no bootable dev environment, no real-surface tests, or no interaction layer an agent can use; when auditing or grading a repo's agent-readiness; when verifying that changes work end to end on real surfaces; or when harness gaps block reliable agent output.
Comprehensive verification with parallel test agents. Use when verifying implementations or validating changes.
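A sketch of the fan-out, assuming a hypothetical `run_agent(check)` that returns True on pass; the skill's real orchestration may differ:

```python
from concurrent.futures import ThreadPoolExecutor

# Run each verification check in its own worker and require all to pass.
# run_agent is a stand-in for dispatching a check to a test agent.
def verify_all(checks, run_agent):
    with ThreadPoolExecutor(max_workers=len(checks)) as pool:
        results = list(pool.map(run_agent, checks))
    return all(results)
```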
Execute a task with sub-agent implementation and LLM-as-a-judge verification in an automatic retry loop.
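The cycle might look like this sketch, where `implement` and `judge` are hypothetical stand-ins for the sub-agent and the judge:

```python
# Implement, judge, and retry: the judge returns (passed, critique), and
# the critique is fed back to the sub-agent on the next attempt.
def run_with_verification(task, implement, judge, max_retries=3):
    critique = None
    for attempt in range(max_retries):
        result = implement(task, feedback=critique)  # sub-agent does the work
        passed, critique = judge(task, result)       # independent judge scores it
        if passed:
            return result
    raise RuntimeError(f"judge rejected all {max_retries} attempts")
```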
Epistemic verification framework for AI-generated assertions. Requires evidence before acting on LLM claims about code behavior, system state, API responses, or factual statements. Use when an AI agent makes claims that will drive decisions, before acting on research results, or when an agent asserts something is true without showing evidence.
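A minimal sketch of the evidence gate, with an illustrative `Claim` type that is not the skill's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# A claim may only drive an action if it carries inspectable evidence,
# e.g. captured test output. Claim and act_on are illustrative names.
@dataclass
class Claim:
    statement: str           # e.g. "all tests pass"
    evidence: Optional[str]  # e.g. the captured pytest transcript

def act_on(claim: Claim, action):
    if not claim.evidence:
        raise ValueError(f"refusing to act: no evidence for {claim.statement!r}")
    return action()
```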
Mandatory rules for generating verification documents, common to all Chapter Generator agents.
LLM-as-judge evaluation framework with a 5-dimension rubric (accuracy, groundedness, coherence, completeness, helpfulness) for scoring AI-generated content quality, with weighted composite scores and evidence citations.
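The composite is a straightforward weighted sum; the weights below are illustrative, since the skill defines its own:

```python
# Weighted composite over the five rubric dimensions (weights are
# illustrative and must sum to 1.0).
WEIGHTS = {
    "accuracy": 0.30,
    "groundedness": 0.25,
    "coherence": 0.15,
    "completeness": 0.15,
    "helpfulness": 0.15,
}

def composite(scores):
    """scores maps each dimension to a judge rating on a 1-5 scale."""
    assert set(scores) == set(WEIGHTS), "all five dimensions are required"
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

print(composite({"accuracy": 5, "groundedness": 4, "coherence": 4,
                 "completeness": 3, "helpfulness": 4}))  # ≈ 4.15
```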
Verifies the agent's current work against a specific question by analyzing unstaged changes, staged changes, recent commits, and codebase context. Answers succinctly for a senior audience. Use when user says "/check", "verify that", "confirm that", "check if", "is X done?", or asks about current session changes.
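Gathering those three surfaces is plain git plumbing; a sketch (the analysis and answering step is not shown):

```python
import subprocess

# Collect the change surfaces the skill inspects: unstaged edits,
# staged edits, and the last few commits.
def session_context(n_commits=5):
    def run(*args):
        return subprocess.run(["git", *args],
                              capture_output=True, text=True).stdout
    return {
        "unstaged": run("diff"),                             # working-tree changes
        "staged": run("diff", "--cached"),                   # index changes
        "recent": run("log", "--oneline", f"-{n_commits}"),  # recent commits
    }
```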