okr-design

Compare original and translation side by side

🇺🇸

Original

English
🇨🇳

Translation

Chinese

OKR Design & Metrics Framework

Structure goals, decompose metrics into KPI trees, identify leading indicators, and design rigorous experiments.

OKR Structure

Objectives are qualitative and inspiring. Key Results are quantitative and outcome-focused — never a list of outputs.

```markdown
Objective: Qualitative, inspiring goal (70% achievable stretch)
+-- Key Result 1: [Verb] [metric] from [baseline] to [target]
+-- Key Result 2: [Verb] [metric] from [baseline] to [target]
+-- Key Result 3: [Verb] [metric] from [baseline] to [target]
```
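The `from [baseline] to [target]` pattern implies a simple linear progress score for grading KRs at quarter end. A minimal sketch; the clamping to [0, 1] and the function name are choices made here, not part of any OKR standard:

```python
def kr_progress(baseline: float, current: float, target: float) -> float:
    """Linear progress of a KR from baseline toward target, clamped to [0, 1].

    Works for both increasing KRs (raise NPS) and decreasing KRs
    (reduce time-to-value), since the sign of (target - baseline) cancels.
    """
    if target == baseline:
        raise ValueError("target must differ from baseline")
    raw = (current - baseline) / (target - baseline)
    return max(0.0, min(1.0, raw))

# NPS moved from 32 to 41 against a target of 50: halfway there.
print(kr_progress(32, 41, 50))  # 0.5
```

Overshooting the target clamps to 1.0 rather than rewarding sandbagged goals.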

Q1 OKRs

```
Objective: Become the go-to platform for enterprise teams

Key Results:
  • KR1: Increase enterprise NPS from 32 to 50
  • KR2: Reduce time-to-value from 14 days to 3 days
  • KR3: Achieve 95% feature adoption in first 30 days of onboarding
  • KR4: Win 5 competitive displacements from [Competitor]
```

OKR Quality Checks

| Check | Objective | Key Result |
|-------|-----------|------------|
| Has a number | NO | YES |
| Inspiring / energizing | YES | not required |
| Outcome-focused (not "ship X features") | YES | YES |
| 70% achievable (stretch, not sandbagged) | YES | YES |
| Aligned to higher-level goal | YES | YES |

See references/okr-workshop-guide.md for a full facilitation agenda (3-4 hours, dot voting, finalization template). See rules/metrics-okr.md for pitfalls and alignment cascade patterns.

KPI Tree & North Star

Decompose the top-level metric into components with clear cause-effect relationships.

```
Revenue (Lagging — root)
├── New Revenue = Leads × Conv Rate          (Leading)
├── Expansion   = Users × Upsell Rate        (Leading)
└── Retained    = Existing × (1 - Churn)     (Lagging)
```
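To make the decomposition concrete, the tree above can be evaluated as straightforward arithmetic. All numbers below are made up for illustration, and `avg_deal` (a dollars-per-converted-unit scaling factor) is an assumption not present in the tree itself:

```python
# Evaluating the KPI tree above with illustrative (made-up) inputs.
leads, conv_rate   = 400, 0.05    # leading inputs for New Revenue
users, upsell_rate = 1200, 0.02   # leading inputs for Expansion
existing, churn    = 900, 0.03    # inputs for Retained
avg_deal = 1000                   # hypothetical $ per unit, for scale

new_revenue = leads * conv_rate * avg_deal        # Leading branch
expansion   = users * upsell_rate * avg_deal      # Leading branch
retained    = existing * (1 - churn) * avg_deal   # Lagging branch
revenue     = new_revenue + expansion + retained  # Lagging root

print(f"Revenue = ${revenue:,.0f}")
```

Expressing the tree as code makes the cause-effect claim testable: moving a leading input (e.g. conversion rate) must visibly move the lagging root.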

North Star + Input Metrics Template


```markdown
Metrics Framework

North Star: [One metric that captures core value — e.g., Weekly Active Teams]

Input Metrics (leading, actionable by teams):
  1. New signups — acquisition
  2. Onboarding completion rate — activation
  3. Features used per user/week — engagement
  4. Invite rate — virality
  5. Upgrade rate — monetization

Lagging Validation (confirm inputs translate to value):
  • Revenue growth
  • Net retention rate
  • Customer lifetime value
```

North Star Selection by Business Type

| Business | North Star Example | Why |
|----------|--------------------|-----|
| SaaS | Weekly Active Users | Indicates ongoing value delivery |
| Marketplace | Gross Merchandise Value | Captures both buyer and seller sides |
| Media | Time spent | Engagement signals content value |
| E-commerce | Purchase frequency | Repeat = satisfaction |

See rules/metrics-kpi-trees.md for the full revenue and product health KPI tree examples.

Leading vs Lagging Indicators

Every lagging metric you want to improve needs 2-3 leading predictors.

Metric Pairs

```markdown
Lagging: Customer Churn Rate
Leading:
  1. Product usage frequency (weekly)
  2. Support ticket severity (daily)
  3. NPS score trend (monthly)

Lagging: Revenue Growth
Leading:
  1. Pipeline value (weekly)
  2. Demo-to-trial conversion (weekly)
  3. Feature adoption rate (weekly)
```

| Indicator | Review Cadence | Action Timeline |
|-----------|----------------|-----------------|
| Leading | Daily / Weekly | Immediate course correction |
| Lagging | Monthly / Quarterly | Strategic adjustments |

See [rules/metrics-leading-lagging.md](rules/metrics-leading-lagging.md) for a balanced dashboard template.


---

Metric Instrumentation

Every metric needs a formal definition before instrumentation.

Metric: Feature Adoption Rate

```markdown
Definition: % of active users who used [feature] at least once in their first 30 days.
Formula: (Users who triggered feature_activated in first 30 days) / (Users who signed up)
Data Source: Analytics — feature_activated event
Segments: By plan tier, by signup cohort
Calculation: Daily
Review: Weekly

Events:
  user_signed_up { user_id, plan_tier, signup_source }
  feature_activated { user_id, feature_name, activation_method }
```

Event naming: `object_action` in snake_case — `user_signed_up`, `feature_activated`, `subscription_upgraded`.

See [rules/metrics-instrumentation.md](rules/metrics-instrumentation.md) for the full metric definition template, alerting thresholds, and dashboard design principles.
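The Definition and Formula above can be computed directly from the two events. A toy sketch over in-memory records with made-up user IDs and dates; a real pipeline would query the analytics store instead:

```python
from datetime import datetime, timedelta

# Illustrative event data (invented). user_signed_up -> signup timestamp.
signups = {
    "u1": datetime(2024, 1, 1),
    "u2": datetime(2024, 1, 5),
    "u3": datetime(2024, 1, 10),
}
# feature_activated events as (user_id, timestamp) pairs.
activations = [
    ("u1", datetime(2024, 1, 20)),  # day 19 of u1's first 30 days: counts
    ("u2", datetime(2024, 3, 1)),   # 56 days after signup: does not count
]

# Users who triggered the feature within their first 30 days.
adopted = {
    uid for uid, ts in activations
    if uid in signups and ts - signups[uid] <= timedelta(days=30)
}
adoption_rate = len(adopted) / len(signups)
print(f"Feature adoption rate: {adoption_rate:.0%}")  # 1 of 3 users -> 33%
```

Anchoring the window to each user's own signup date (a per-user cohort window, not a calendar window) is what the "first 30 days" wording requires.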

---

Experiment Design

Every experiment must define guardrail metrics before launch. Guardrails prevent shipping a "win" that causes hidden damage.

```markdown
Experiment: [Name]

Hypothesis
If we [change], then [primary metric] will [direction] by [amount] because [reasoning based on evidence].

Metrics
  • Primary: [The metric you are trying to move]
  • Secondary: [Supporting context metrics]
  • Guardrails: [Metrics that MUST NOT degrade — define thresholds]

Design
  • Type: A/B test | multivariate | feature flag rollout
  • Sample size: [N per variant — calculated for statistical power]
  • Duration: [Minimum weeks to reach significance]

Rollout Plan
  1. 10% — 1 week canary, monitor guardrails daily
  2. 50% — 2 weeks, confirm statistical significance
  3. 100% — full rollout with continued monitoring

Kill Criteria
Any guardrail degrades > [threshold]% relative to baseline.
```
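The "Sample size: calculated for statistical power" line can be sketched with the standard normal-approximation formula for comparing two proportions. The defaults below (two-sided alpha = 0.05, power = 0.8) are conventional choices, not mandated by this template:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, mde_abs: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate N per variant to detect an absolute lift of `mde_abs`
    over a baseline conversion rate, via the two-proportion z-test formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_power = NormalDist().inv_cdf(power)          # e.g. 0.84 for power=0.8
    p_bar = p_baseline + mde_abs / 2               # average rate across arms
    variance = 2 * p_bar * (1 - p_bar)
    return ceil((z_alpha + z_power) ** 2 * variance / mde_abs ** 2)

# Detect a 2-point absolute lift on a 10% baseline conversion rate.
print(sample_size_per_variant(0.10, 0.02))  # roughly 3,800-3,900 per variant
```

Note the quadratic cost of precision: halving the minimum detectable effect roughly quadruples the required sample, which often dominates the Duration estimate.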

Pre-Launch Checklist

  • Hypothesis documented with expected effect size
  • Primary, secondary, and guardrail metrics defined
  • Sample size calculated for minimum detectable effect
  • Dashboard or alerts configured for guardrail metrics
  • Staged rollout plan with kill criteria at each stage
  • Rollback procedure documented
See rules/metrics-experiment-design.md for guardrail thresholds, performance and business guardrail tables, and alert SLAs.
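The kill criterion ("any guardrail degrades > [threshold]% relative to baseline") can be expressed as a small check suitable for a daily canary job. Function and parameter names here are invented for illustration:

```python
def guardrail_breached(baseline: float, current: float, threshold_pct: float,
                       higher_is_better: bool = True) -> bool:
    """True if `current` has degraded more than threshold_pct% vs baseline.

    `higher_is_better` distinguishes metrics like conversion rate (a drop
    is a degradation) from metrics like p95 latency (a rise is).
    """
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    change_pct = (current - baseline) / baseline * 100
    degradation_pct = -change_pct if higher_is_better else change_pct
    return degradation_pct > threshold_pct

# Conversion fell 100 -> 90 (a 10% drop) against a 5% threshold: breach.
print(guardrail_breached(100, 90, 5))  # True
```

Making the direction explicit per metric avoids the classic monitoring bug of celebrating a latency "increase".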


Common Pitfalls

| Pitfall | Mitigation |
|---------|------------|
| KRs are outputs ("ship 5 features") | Rewrite as outcomes ("increase conversion by 20%") |
| Tracking only lagging indicators | Pair every lagging metric with 2-3 leading predictors |
| No baseline before setting targets | Instrument and measure for 2 weeks before setting OKRs |
| Launching experiments without guardrails | Define guardrails before any code is shipped |
| Too many OKRs (>5 per team) | Limit to 3-5 objectives, 3-5 KRs each |
| Metrics without owners | Every metric needs a team owner |

Related Skills

  • prioritization
    — RICE, WSJF, ICE, MoSCoW scoring; OKRs define which KPIs drive RICE impact
  • product-frameworks
    — Full PM toolkit: value prop, competitive analysis, user research, business case
  • product-analytics
    — Instrument and query the metrics defined in OKR trees
  • write-prd
    — Embed success metrics and experiment hypotheses into product requirements
  • market-sizing
    — TAM/SAM/SOM that anchors North Star Metric targets
  • competitive-analysis
    — Competitor benchmarks that inform KR targets

Version: 1.0.0