# /groom

Orchestrate interactive backlog grooming. Explore the product landscape with the user,
brainstorm directions, then synthesize into prioritized issues.
## Philosophy

**Exploration before synthesis.** Understand deeply, discuss with the user, THEN create issues.

**Orchestrator pattern.** `/groom` invokes skills and agents; it doesn't reimplement their logic.

**AI-augmented analysis.** External tools provide specialized capabilities:

- **Gemini** — web-grounded research, current best practices, huge context
- **Codex** — implementation recommendations, concrete code suggestions
- **Thinktank** — multi-model consensus, diverse expert perspectives

**Opinionated recommendations.** Don't just present options. Recommend and justify.
## Org-Wide Standards

All issues MUST comply with `groom/references/org-standards.md`. Load that file before creating any issues.
## Process

### Phase 1: Context

#### Step 1: Load or Gather Vision
Check for `vision.md` in the project root.

**If vision.md exists:**

- Read and display the current vision
- Ask: "Is this still accurate? Any updates?"
- If there are updates, rewrite vision.md

**If vision.md doesn't exist:**

- Interview: "What's your vision for this product? Where should it go?"
- Write the response to `vision.md`

**vision.md format:**
```markdown
# Vision

## One-Liner
[Single sentence: what this product is and who it's for]

## North Star
[The dream state — what does success look like in 2 years?]

## Key Differentiators
[What makes this different from alternatives?]

## Target User
[Who specifically is this for?]

## Current Focus
[Immediate priority this quarter?]

---
*Last updated: YYYY-MM-DD*
*Updated during: /groom session*
```
Store the vision content for agent context throughout the session.
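The Step 1 branch can be sketched in a few lines of shell; this is a minimal illustration, assuming `vision.md` sits in the project root as described above.

```shell
# Minimal sketch of the Step 1 check; assumes vision.md is in the project root.
if [ -f vision.md ]; then
  cat vision.md        # display the current vision, then ask the user for updates
else
  echo "No vision.md found: start the vision interview."
fi
```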
#### Step 2: Capture What's On Your Mind

Before structured analysis, ask:

> Anything on your mind? Bugs, UX friction, missing features, nitpicks?
> These become issues alongside the automated findings.
> (Skip if nothing comes to mind.)

For each item: clarify if needed (one follow-up max), assign a tentative priority.
Don't create issues yet — collect them for Phase 4.
#### Step 3: Quick Backlog Audit

```bash
gh issue list --state open --limit 100 --json number,title,labels,body,createdAt,updatedAt
```
Evaluate each existing issue:

- **Still relevant?** Given the current vision and codebase state
- **Priority correct?** Focus may have shifted
- **Duplicate?** Will new findings cover this?
- **Actionable?** Can someone pick this up?

Present findings. Don't auto-close anything yet.

> "Here's where we stand: X open issues, Y look stale, Z may need reprioritization."
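The staleness check can be mechanized by filtering on `updatedAt`. This sketch assumes `jq` is available; the cutoff date is an arbitrary example, and a small inline snapshot stands in for live `gh` output.

```shell
# Illustrative snapshot; in practice this comes from the gh command above, e.g.
#   gh issue list --state open --limit 100 --json number,title,updatedAt > issues.json
cat > issues.json <<'EOF'
[{"number":12,"updatedAt":"2024-06-01T00:00:00Z"},
 {"number":41,"updatedAt":"2025-06-01T00:00:00Z"}]
EOF

# Flag issues not touched since an (arbitrary) cutoff date:
CUTOFF="2025-01-01"
jq --arg cutoff "$CUTOFF" \
  '{open: length, stale: [.[] | select(.updatedAt < $cutoff) | .number]}' issues.json
```

ISO-8601 timestamps sort lexicographically, so plain string comparison against the cutoff is enough here.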
### Phase 2: Discovery

Launch agents in parallel:

| Agent | Focus |
|---|---|
| Product strategist | Gaps vs vision, user value opportunities |
| Technical archaeologist | Code health, architectural debt, improvement patterns |
| Domain auditors | Run skills (audit-only, no issue creation) |
| Growth analyst | Acquisition, activation, retention opportunities |
Domain auditors invoke their audit skills in parallel.

Synthesize findings into 3-5 strategic themes with evidence.
Examples: "reliability foundation," "onboarding redesign," "API expansion."

Present: "Here are the themes I see across the analysis. Which interest you?"
### Phase 3: Exploration Loop

For each theme the user wants to explore:

- **Pitch** — Agents brainstorm approaches: what it looks like, what it costs, what it enables.
- **Present** — 3-5 competing approaches with tradeoffs. Recommend one.
- **Discuss** — User steers. "What about X?" "I prefer Y because Z."
- **Refine** — Agents dig deeper on the selected direction: architecture, toolchain, risk.
- **Decide or iterate** — Lock the direction or explore more.

The loop repeats per theme. Revisits are allowed. Continue until the user says "lock it in."

Use AskUserQuestion for structured decisions; plain conversation for exploration.
**Team-Accelerated Exploration** (for large sessions):

| Teammate | Focus |
|---|---|
| Infra & quality | Production, quality gates, observability |
| Product & growth | Landing, onboarding, virality, strategy |
| Payments & integrations | Stripe, Bitcoin, Lightning |
| AI enrichment | Gemini research, Codex implementation recs |

Teammates share findings via messages. Cross-pollination is encouraged:
when Infra finds a P0, Growth checks whether it affects onboarding.
### Phase 4: Synthesis

Once directions are locked for the explored themes:

#### Step 1: Create Issues

Create atomic, implementable GitHub issues from the agreed directions.
Include the user observations from Phase 1 Step 2.

Invoke `log-*` skills for domains where automated issue creation helps,
e.g. `/log-observability-issues` and `/log-product-standards-issues`.

For strategic issues from exploration: create them directly with full context.
#### Step 2: Enrich

Each issue gets:

- Problem statement (from the exploration discussion)
- Context and evidence
- Recommended approach (from the locked direction)
- Acceptance criteria
- Effort estimate

Use Codex for implementation recommendations on P0/P1 issues.
Use Gemini for current best-practices research.
Use Thinktank for architecture validation on complex issues.
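The enrichment fields above can be stitched into an issue body with a small helper. The function name and section headings here are illustrative, not a prescribed format.

```shell
# Sketch: assemble an enriched issue body from the fields above.
# Function name and headings are illustrative, not a prescribed format.
render_issue_body() {
  local problem="$1" approach="$2" criteria="$3"
  cat <<EOF
## Problem
$problem

## Recommended Approach
$approach

## Acceptance Criteria
$criteria
EOF
}

# Usage idea (not tested against a real repo):
#   render_issue_body "..." "..." "..." | gh issue create --title "..." --body-file -
```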
#### Step 3: Organize

Apply the org-wide standards (load `groom/references/org-standards.md`):

- Canonical labels (priority, type, horizon, effort, source, domain)
- Issue types via GraphQL
- Milestone assignment
- Project linking (Active Sprint, Product Roadmap)

Close stale issues identified in Phase 1 Step 3 (with user confirmation).
Migrate legacy labels.
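Label migration reduces to a pure mapping plus one `gh issue edit` per issue. The legacy-to-canonical pairs below are assumptions for illustration; the real table lives in `groom/references/org-standards.md`.

```shell
# Sketch: pure mapping from legacy to canonical labels.
# The pairs below are assumptions; the real table is in groom/references/org-standards.md.
legacy_to_canonical() {
  case "$1" in
    bug)         echo "type:bug" ;;
    enhancement) echo "type:feature" ;;
    urgent)      echo "priority:P0" ;;
    *)           echo "$1" ;;   # already canonical: pass through unchanged
  esac
}

# Per issue, roughly:
#   gh issue edit "$n" --remove-label "$old" --add-label "$(legacy_to_canonical "$old")"
```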
#### Step 4: Deduplicate

Three sources of duplicates:

- User observations that overlap with automated findings
- New issues from `log-*` skills that overlap with each other
- New issues that overlap with existing kept issues

Keep the most comprehensive issue; close the others with a link to the canonical one.
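Duplicate candidates can be surfaced mechanically before any closing decision. This sketch matches on normalized titles only (a deliberate simplification) and assumes a local tab-separated export whose format is illustrative.

```shell
# Sketch: surface duplicate candidates by normalized title before closing anything.
# Assumed input format, one issue per line: "<number><TAB><title>".
printf '12\tAdd request logging\n41\tFix login redirect\n57\tadd request logging\n' > issues.tsv

awk -F'\t' '{
  key = tolower($2)
  if (key in seen) print "dup candidate: #" $1 " ~ #" seen[key]
  else seen[key] = $1
}' issues.tsv
# prints: dup candidate: #57 ~ #12
```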
#### Step 5: Summarize

```
GROOM SUMMARY
=============

Themes Explored: [list]
Directions Locked: [list]

Issues by Priority:
- P0 (Critical): N
- P1 (Essential): N
- P2 (Important): N
- P3 (Nice to Have): N

Recommended Execution Order:
1. [P0] ...
2. [P1] ...

Ready for /autopilot: [issue numbers]
View all: gh issue list --state open
```
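The "Issues by Priority" counts can be tallied from label data, assuming the canonical `priority:*` label convention. The export format below is illustrative; an inline sample stands in for live `gh` output.

```shell
# Illustrative label export, one comma-separated label list per open issue;
# in practice something like:
#   gh issue list --state open --json labels --jq '.[].labels | map(.name) | join(",")' > labels.txt
cat > labels.txt <<'EOF'
priority:P0,type:bug
priority:P1,type:feature
priority:P1,source:groom
EOF

# Tally per canonical priority label:
for p in P0 P1 P2 P3; do
  printf -- '- %s: %s\n' "$p" "$(grep -c "priority:$p" labels.txt)"
done
```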
## Related Skills

### Audit Primitives (Phase 2)

Audit-only skills such as `/check-production`.

### Issue Creators (Phase 4)

`log-*` skills such as `/log-observability-issues` and `/log-product-standards-issues`.
### Standalone Domain Work

```bash
/check-production        # Audit only
/log-production-issues   # Create issues
/triage                  # Fix highest priority
```