Devil's advocate. Seek contrary evidence before locking in. Use when about to make a significant decision, when confidence is high but stakes are higher, or when the team is converging too quickly.
Install:

```sh
npx skill4agent add open-horizon-labs/skills dissent/dissent
```

Example prompts:

- "The current approach is [approach]. The reasoning is [reasoning]. The expected outcome is [outcome]. This is the strongest version of this position."
- "If this approach were wrong, what would we expect to see? Are we seeing any of that?"
- "This failed because [reason]. The warning signs we ignored were [signs]. The assumption that broke was [assumption]."
Assumption: [what we're taking for granted]
Evidence: [what supports this assumption]
Risk if wrong: [what happens if this assumption breaks]
Test: [how we could validate this before committing]

"PROCEED: The strongest counter-argument is [X], but it's addressed by [Y]. The key assumption is [Z], which we've validated by [how]."
| Dissent Section | ADR Section |
|---|---|
| Decision under review | Title |
| Steel-Man Position | Context |
| Contrary Evidence + Pre-Mortem | Options Considered |
| Hidden Assumptions | Consequences |
| Decision + Reasoning | Decision |
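The `NNNN` in the ADR filename is a zero-padded sequence number. A minimal sketch of how the next number could be derived by scanning `docs/adr/` (the helper name and directory layout are illustrative assumptions, not part of the skill):

```python
import re
from pathlib import Path

def next_adr_number(adr_dir: str = "docs/adr") -> str:
    """Return the next zero-padded ADR number, e.g. '0013'.

    Assumes ADR files are named NNNN-<decision-slug>.md.
    """
    pattern = re.compile(r"^(\d{4})-.+\.md$")
    numbers = [
        int(m.group(1))
        for p in Path(adr_dir).glob("*.md")
        if (m := pattern.match(p.name))
    ]
    # An empty directory yields 0001 for the first ADR.
    return f"{max(numbers, default=0) + 1:04d}"
```
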
`docs/adr/NNNN-<decision-slug>.md`

# ADR NNNN: [Decision Title]
## Status
Accepted
## Context
[From Steel-Man Position - why this decision needed to be made]
## Decision
[From Decision section - what we decided and why]
## Options Considered
[From Contrary Evidence - what alternatives were evaluated]
## Consequences
[From Hidden Assumptions - what we're accepting by making this decision]
## Notes
Generated from /dissent on [date]

## Dissent Report
**Decision under review:** [what's being challenged]
**Stakes:** [why this matters]
**Confidence before dissent:** [HIGH/MEDIUM/LOW]
### Steel-Man Position
[The strongest version of the current approach]
### Contrary Evidence
1. [Evidence point 1]
2. [Evidence point 2]
3. [Evidence point 3]
### Pre-Mortem Scenarios
1. **[Failure mode]:** [How this could fail]
2. **[Failure mode]:** [How this could fail]
3. **[Failure mode]:** [How this could fail]
### Hidden Assumptions
| Assumption | Evidence | Risk if Wrong | Test |
|------------|----------|---------------|------|
| [Assumption 1] | [Evidence] | [Risk] | [Test] |
| [Assumption 2] | [Evidence] | [Risk] | [Test] |
### Decision
**Recommendation:** [PROCEED / ADJUST / RECONSIDER]
**Reasoning:** [Why this recommendation]
**If ADJUST:** [Specific modifications]
**Confidence after dissent:** [HIGH/MEDIUM/LOW]
**Create ADR?** [Yes - offer to write docs/adr/NNNN-slug.md / No - not a one-way door]

/dissent
We're about to adopt a microservices architecture for the new
payments system. The team agrees this is the right approach.

## Dissent Report
**Decision under review:** Microservices architecture for payments
**Stakes:** 6+ month commitment, affects entire system
**Confidence before dissent:** HIGH (team unanimous)
### Steel-Man Position
Microservices will allow independent scaling of payment processing,
enable different teams to deploy independently, and provide fault
isolation so one service failure doesn't bring down payments.
### Contrary Evidence
1. We have 4 engineers. Netflix had 100+ when they went microservices.
2. Our deployment pipeline takes 45 minutes. We'll multiply that by N services.
3. The "independent teams" benefit requires teams we don't have.
### Pre-Mortem Scenarios
1. **Distributed systems complexity:** We spend 60% of time debugging
cross-service issues instead of building features.
2. **Operational overhead:** We can't hire SREs fast enough. System
is down because nobody understands the full architecture.
3. **Premature optimization:** The "scaling" benefit never materializes
because we never got enough users to need it.
### Hidden Assumptions
| Assumption | Evidence | Risk if Wrong | Test |
|------------|----------|---------------|------|
| We'll need to scale components independently | None - projected | Waste of complexity | Check actual load patterns |
| Team will grow to support this | Hiring plan exists | Stuck with complexity, no staff | Validate funding/timeline |
| Network latency acceptable | Not measured | Payment failures | Prototype and benchmark |
### Decision
**Recommendation:** RECONSIDER
**Reasoning:** The benefits of microservices assume scale and team
size we don't have. The strongest argument for microservices (independent
scaling) has no evidence supporting the need. A modular monolith gives
us the code organization benefits without the operational complexity.
**Confidence after dissent:** HIGH (clear mismatch)
**Create ADR?** No - we're reconsidering, not committing

/dissent
We're about to ship a recommendation system that suggests similar
items based on purchase history. Built over 3 sprints.

## Dissent Report
**Decision under review:** Ship purchase-history recommendation system
**Stakes:** 3 sprints invested, prominent homepage placement
**Confidence before dissent:** MEDIUM
### Steel-Man Position
Users who bought X often want Y. Amazon proved this works. We have
the purchase data. The model is trained and performs well in testing.
### Contrary Evidence
1. Our testing used historical data. Users might behave differently
when seeing recommendations in real-time.
2. We have 10K users. Amazon's model works with billions of data points.
3. Competitor tried this, removed it after 6 months (no lift).
### Pre-Mortem Scenarios
1. **Cold start problem:** New users see garbage recommendations,
leave before making a purchase we could learn from.
2. **Filter bubble:** System recommends what users already buy,
doesn't drive discovery of new product categories.
3. **Wrong metric:** Recommendations get clicks but not purchases.
We optimized for engagement, needed revenue.
### Hidden Assumptions
| Assumption | Evidence | Risk if Wrong | Test |
|------------|----------|---------------|------|
| Purchase history predicts intent | Industry standard | Wasted real estate | A/B test against simple "popular items" |
| 10K users is enough data | None | Poor recommendations | Check minimum viable data in literature |
| Homepage placement is right | None | Users ignore it | Heatmap existing traffic patterns |
### Decision
**Recommendation:** ADJUST
**Reasoning:** The approach is sound but we're shipping without
validation. Risk is manageable with modifications.
**Modifications:**
1. Ship with A/B test against "popular items" baseline
2. Add fallback to curated picks for users with <5 purchases
3. Define success metric (revenue lift, not clicks) before launch
**Confidence after dissent:** MEDIUM (reduced risk with A/B)
**Create ADR?** Yes - shall I write `docs/adr/0012-recommendation-system-rollout.md`? Documents why we chose A/B testing and what success metrics we committed to.

Dissent reports can be saved to the session file at `.oh/<session>.md`:

- `/dissent auth-decision` saves to `.oh/auth-decision.md`
- `/dissent` with no argument prompts: "Save to session? [suggested-name] [custom] [skip]"
## Dissent
**Updated:** <timestamp>
**Decision:** [PROCEED | ADJUST | RECONSIDER]
[dissent report content]

This section is written to `.oh/<session>.md`.

Related skills: /solution-space, /problem-statement, /execute, /ship, /review
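A minimal sketch of how the session section above might be written or refreshed, assuming the `.oh/<session>.md` layout shown (the function name and `root` parameter are illustrative):

```python
from datetime import datetime, timezone
from pathlib import Path

def save_dissent(session: str, decision: str, report: str, root: str = ".oh") -> Path:
    """Write or refresh the '## Dissent' section in <root>/<session>.md.

    Replaces everything from an existing '## Dissent' heading onward,
    so repeated runs update the section rather than duplicating it.
    """
    path = Path(root) / f"{session}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    section = (
        f"## Dissent\n"
        f"**Updated:** {stamp}\n"
        f"**Decision:** {decision}\n\n"
        f"{report}\n"
    )
    existing = path.read_text() if path.exists() else ""
    head, sep, _ = existing.partition("## Dissent")
    path.write_text(head + section if sep else existing + section)
    return path
```
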