# OKR Design & Metrics Framework

Structure goals, decompose metrics into KPI trees, identify leading indicators, and design rigorous experiments.
## OKR Structure
Objectives are qualitative and inspiring. Key Results are quantitative and outcome-focused — never a list of outputs.
```markdown
Objective: Qualitative, inspiring goal (70% achievable stretch)
+-- Key Result 1: [Verb] [metric] from [baseline] to [target]
+-- Key Result 2: [Verb] [metric] from [baseline] to [target]
+-- Key Result 3: [Verb] [metric] from [baseline] to [target]
```
```markdown
Q1 OKRs

Objective: Become the go-to platform for enterprise teams

Key Results:
- KR1: Increase enterprise NPS from 32 to 50
- KR2: Reduce time-to-value from 14 days to 3 days
- KR3: Achieve 95% feature adoption in first 30 days of onboarding
- KR4: Win 5 competitive displacements from [Competitor]
```
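The "[Verb] [metric] from [baseline] to [target]" template above can be machine-checked. This is a minimal sketch (the `parse_key_result` helper and its regex are hypothetical, not part of any framework), and it only recognizes the from/to form, so KRs phrased differently, like KR3 above, would need their own rule:

```python
import re

# Hypothetical checker for the "[Verb] [metric] from [baseline] to [target]"
# template. An Objective, which has no baseline/target numbers, fails it.
KR_PATTERN = re.compile(
    r"^(?P<verb>\w+)\s+(?P<metric>.+?)\s+from\s+(?P<baseline>[\d.]+)"
    r"(?:\s*\S+)?\s+to\s+(?P<target>[\d.]+)",
    re.IGNORECASE,
)

def parse_key_result(kr: str):
    """Return the parsed parts of a KR, or None if it is not in from/to form."""
    m = KR_PATTERN.search(kr)
    if not m:
        return None
    return {
        "verb": m.group("verb"),
        "metric": m.group("metric"),
        "baseline": float(m.group("baseline")),
        "target": float(m.group("target")),
    }
```

Running it against the examples above, `parse_key_result("Increase enterprise NPS from 32 to 50")` extracts baseline 32 and target 50, while the Objective text returns `None` because it carries no numbers, which is exactly the "Has a number" distinction in the quality checks below.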
## OKR Quality Checks
| Check | Objective | Key Result |
|---|---|---|
| Has a number | NO | YES |
| Inspiring / energizing | YES | not required |
| Outcome-focused (not "ship X features") | YES | YES |
| 70% achievable (stretch, not sandbagged) | YES | YES |
| Aligned to higher-level goal | YES | YES |
See references/okr-workshop-guide.md for a full facilitation agenda (3-4 hours, dot voting, finalization template).
See rules/metrics-okr.md for pitfalls and alignment cascade patterns.
## KPI Tree & North Star
Decompose the top-level metric into components with clear cause-effect relationships.
```
Revenue (Lagging — root)
├── New Revenue = Leads × Conv Rate (Leading)
├── Expansion = Users × Upsell Rate (Leading)
└── Retained = Existing × (1 - Churn) (Lagging)
```

### North Star + Input Metrics Template
```markdown
Metrics Framework

North Star: [One metric that captures core value — e.g., Weekly Active Teams]

Input Metrics (leading, actionable by teams):
- New signups — acquisition
- Onboarding completion rate — activation
- Features used per user/week — engagement
- Invite rate — virality
- Upgrade rate — monetization

Lagging Validation (confirm inputs translate to value):
- Revenue growth
- Net retention rate
- Customer lifetime value
```
### North Star Selection by Business Type
| Business | North Star Example | Why |
|---|---|---|
| SaaS | Weekly Active Users | Indicates ongoing value delivery |
| Marketplace | Gross Merchandise Value | Captures both buyer and seller sides |
| Media | Time spent | Engagement signals content value |
| E-commerce | Purchase frequency | Repeat = satisfaction |
See rules/metrics-kpi-trees.md for the full revenue and product health KPI tree examples.
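The revenue KPI tree above can be sketched as a single function, which makes the cause-effect claim testable: move a leading input and the lagging root moves. All figures here are illustrative placeholders, not benchmarks:

```python
# Minimal sketch of the revenue KPI tree: root = sum of its branches.
def revenue(leads, conv_rate, users, upsell_rate, existing, churn):
    new_revenue = leads * conv_rate        # Leading: New Revenue = Leads × Conv Rate
    expansion = users * upsell_rate        # Leading: Expansion = Users × Upsell Rate
    retained = existing * (1 - churn)      # Lagging: Retained = Existing × (1 - Churn)
    return new_revenue + expansion + retained

# Illustrative baseline, then a lift in one leading input (conv_rate):
baseline = revenue(leads=1000, conv_rate=0.02, users=500,
                   upsell_rate=0.05, existing=400, churn=0.10)
improved = revenue(leads=1000, conv_rate=0.03, users=500,
                   upsell_rate=0.05, existing=400, churn=0.10)
assert improved > baseline  # improving a leading branch lifts the lagging root
```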
## Leading vs Lagging Indicators
Every lagging metric you want to improve needs 2-3 leading predictors.
```markdown
Metric Pairs

Lagging: Customer Churn Rate
Leading:
- Product usage frequency (weekly)
- Support ticket severity (daily)
- NPS score trend (monthly)

Lagging: Revenue Growth
Leading:
- Pipeline value (weekly)
- Demo-to-trial conversion (weekly)
- Feature adoption rate (weekly)
```
| Indicator | Review Cadence | Action Timeline |
|-----------|----------------|-----------------|
| Leading | Daily / Weekly | Immediate course correction |
| Lagging | Monthly / Quarterly | Strategic adjustments |
See [rules/metrics-leading-lagging.md](rules/metrics-leading-lagging.md) for a balanced dashboard template.
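The "2-3 leading predictors per lagging metric" rule lends itself to a simple coverage check over a metrics registry. A sketch, with the pairs above as data (the structure and helper name are illustrative, not a prescribed schema):

```python
# Each lagging metric maps to its (leading predictor, review cadence) pairs.
METRIC_PAIRS = {
    "Customer Churn Rate": [
        ("Product usage frequency", "weekly"),
        ("Support ticket severity", "daily"),
        ("NPS score trend", "monthly"),
    ],
    "Revenue Growth": [
        ("Pipeline value", "weekly"),
        ("Demo-to-trial conversion", "weekly"),
        ("Feature adoption rate", "weekly"),
    ],
}

def check_coverage(pairs):
    """Return lagging metrics that lack the required 2-3 leading predictors."""
    return [lagging for lagging, leading in pairs.items()
            if not 2 <= len(leading) <= 3]

assert check_coverage(METRIC_PAIRS) == []  # both pairs above satisfy the rule
```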
---

## Metric Instrumentation
Every metric needs a formal definition before instrumentation.
```markdown
Metric: Feature Adoption Rate

Definition: % of active users who used [feature] at least once in their first 30 days.
Formula: (Users who triggered feature_activated in first 30 days) / (Users who signed up)
Data Source: Analytics — feature_activated event
Segments: By plan tier, by signup cohort
Calculation: Daily
Review: Weekly

Events:
user_signed_up { user_id, plan_tier, signup_source }
feature_activated { user_id, feature_name, activation_method }
```
Event naming: `object_action` in snake_case — `user_signed_up`, `feature_activated`, `subscription_upgraded`.
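As a sketch of how the definition above turns into a query, the adoption-rate formula can be computed from the two events it names. The event data here is illustrative, and a real pipeline would read from the analytics store rather than in-memory dicts:

```python
from datetime import datetime, timedelta

# Illustrative event data (user_signed_up and feature_activated).
signups = {  # user_id -> signup timestamp
    "u1": datetime(2024, 1, 1),
    "u2": datetime(2024, 1, 5),
    "u3": datetime(2024, 1, 10),
}
activations = [  # (user_id, feature_activated timestamp)
    ("u1", datetime(2024, 1, 15)),  # within 30 days of signup: counts
    ("u2", datetime(2024, 3, 1)),   # outside the 30-day window: does not count
]

def feature_adoption_rate(signups, activations, window_days=30):
    """(Users who activated in first N days) / (Users who signed up)."""
    window = timedelta(days=window_days)
    adopted = {uid for uid, ts in activations
               if uid in signups and ts - signups[uid] <= window}
    return len(adopted) / len(signups)

rate = feature_adoption_rate(signups, activations)  # 1 of 3 users adopted
```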
See [rules/metrics-instrumentation.md](rules/metrics-instrumentation.md) for the full metric definition template, alerting thresholds, and dashboard design principles.
---

## Experiment Design
Every experiment must define guardrail metrics before launch. Guardrails prevent shipping a "win" that causes hidden damage.
```markdown
Experiment: [Name]

Hypothesis
If we [change], then [primary metric] will [direction] by [amount]
because [reasoning based on evidence].

Metrics
- Primary: [The metric you are trying to move]
- Secondary: [Supporting context metrics]
- Guardrails: [Metrics that MUST NOT degrade — define thresholds]

Design
- Type: A/B test | multivariate | feature flag rollout
- Sample size: [N per variant — calculated for statistical power]
- Duration: [Minimum weeks to reach significance]

Rollout Plan
- 10% — 1 week canary, monitor guardrails daily
- 50% — 2 weeks, confirm statistical significance
- 100% — full rollout with continued monitoring

Kill Criteria
Any guardrail degrades > [threshold]% relative to baseline.
```
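The "Sample size" field can be filled with the standard two-proportion normal approximation. A sketch assuming a two-sided alpha of 0.05 and 80% power; the baseline and lift numbers are illustrative, and a real calculation should use a power library (e.g. statsmodels) rather than this simplified formula:

```python
from math import ceil

Z_ALPHA = 1.96  # z for two-sided alpha = 0.05
Z_BETA = 0.84   # z for 80% power

def sample_size_per_variant(p_baseline, min_detectable_lift):
    """N per variant to detect an absolute lift in a conversion rate
    (pooled-variance normal approximation)."""
    p_bar = p_baseline + min_detectable_lift / 2
    numerator = (Z_ALPHA + Z_BETA) ** 2 * 2 * p_bar * (1 - p_bar)
    return ceil(numerator / min_detectable_lift ** 2)

# Illustrative: 10% baseline conversion, detect +2 points absolute.
n = sample_size_per_variant(0.10, 0.02)
```

Note how the required N grows quadratically as the minimum detectable effect shrinks, which is why the template ties duration to "minimum weeks to reach significance."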
### Pre-Launch Checklist
- Hypothesis documented with expected effect size
- Primary, secondary, and guardrail metrics defined
- Sample size calculated for minimum detectable effect
- Dashboard or alerts configured for guardrail metrics
- Staged rollout plan with kill criteria at each stage
- Rollback procedure documented
See rules/metrics-experiment-design.md for guardrail thresholds, performance and business guardrail tables, and alert SLAs.
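The kill criteria above reduce to a mechanical comparison that a canary monitor can run daily. A sketch, where the metric names, thresholds, and helper are all illustrative assumptions:

```python
def guardrail_breaches(baseline, current, thresholds, lower_is_worse):
    """Return guardrails whose relative degradation vs. baseline exceeds
    their threshold (the kill criterion)."""
    breaches = []
    for name, base in baseline.items():
        delta = (current[name] - base) / base
        # For metrics where lower is worse (e.g. conversion), a drop degrades;
        # otherwise (e.g. latency), a rise degrades.
        degradation = -delta if lower_is_worse[name] else delta
        if degradation > thresholds[name]:
            breaches.append(name)
    return breaches

breaches = guardrail_breaches(
    baseline={"page_load_ms": 800, "checkout_conversion": 0.040},
    current={"page_load_ms": 900, "checkout_conversion": 0.039},
    thresholds={"page_load_ms": 0.05, "checkout_conversion": 0.05},
    lower_is_worse={"page_load_ms": False, "checkout_conversion": True},
)
# page_load_ms rose 12.5% (> 5% threshold): breach. Conversion fell 2.5%: within bounds.
```

A non-empty result at any rollout stage triggers the documented rollback procedure.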
## Common Pitfalls
| Pitfall | Mitigation |
|---|---|
| KRs are outputs ("ship 5 features") | Rewrite as outcomes ("increase conversion by 20%") |
| Tracking only lagging indicators | Pair every lagging metric with 2-3 leading predictors |
| No baseline before setting targets | Instrument and measure for 2 weeks before setting OKRs |
| Launching experiments without guardrails | Define guardrails before any code is shipped |
| Too many OKRs (>5 per team) | Limit to 3-5 objectives, 3-5 KRs each |
| Metrics without owners | Every metric needs a team owner |
## Related Skills
- prioritization — RICE, WSJF, ICE, MoSCoW scoring; OKRs define which KPIs drive RICE impact
- product-frameworks — Full PM toolkit: value prop, competitive analysis, user research, business case
- product-analytics — Instrument and query the metrics defined in OKR trees
- write-prd — Embed success metrics and experiment hypotheses into product requirements
- market-sizing — TAM/SAM/SOM that anchors North Star Metric targets
- competitive-analysis — Competitor benchmarks that inform KR targets
Version: 1.0.0
版本:1.0.0