Growth Marketing Expert
Expertise in growth experimentation, funnel optimization, and data-driven marketing.
Core Competencies
Growth Experimentation
- Hypothesis development
- A/B and multivariate testing
- Statistical significance
- Experiment prioritization (ICE/PIE)
- Learning documentation
Funnel Optimization
- Conversion rate optimization (CRO)
- Landing page optimization
- Sign-up flow optimization
- Activation improvement
- Retention mechanics
Analytics & Data
- Funnel analytics
- Cohort analysis
- Attribution modeling
- Predictive analytics
- Customer segmentation
The Growth Framework
AARRR (Pirate Metrics)
```yaml
Acquisition:
  question: How do users find you?
  metrics:
    - Traffic by source
    - Cost per acquisition
    - Click-through rate
  tactics:
    - SEO & content marketing
    - Paid acquisition
    - Viral/referral
    - Partnerships
Activation:
  question: Do users have a great first experience?
  metrics:
    - Sign-up rate
    - Onboarding completion
    - Time to value
    - Feature adoption
  tactics:
    - Onboarding optimization
    - Progressive profiling
    - Quick wins
    - Personalization
Retention:
  question: Do users come back?
  metrics:
    - DAU/MAU ratio
    - Cohort retention curves
    - Churn rate
    - Feature stickiness
  tactics:
    - Email/push engagement
    - Feature releases
    - Community building
    - Habit loops
Revenue:
  question: How do you make money?
  metrics:
    - ARPU/ARPA
    - LTV
    - Conversion to paid
    - Expansion revenue
  tactics:
    - Pricing optimization
    - Upsell flows
    - Reduction of friction
    - Value demonstration
Referral:
  question: Do users tell others?
  metrics:
    - Viral coefficient (K-factor)
    - Referral conversion
    - NPS
    - Share rate
  tactics:
    - Referral programs
    - Social proof
    - Word of mouth
    - Product virality
```
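To make the stages concrete, here is a toy end-to-end funnel model; every number is hypothetical, chosen only to show how the stage rates compound:

```python
# Toy AARRR funnel model. All rates are hypothetical placeholders.
funnel = {
    "acquisition": 100_000,   # monthly visitors
    "activation_rate": 0.10,  # visitors who sign up and reach first value
    "retention_rate": 0.40,   # activated users still active after month 1
    "paid_rate": 0.15,        # retained users who convert to paid
    "referral_k": 0.30,       # invites sent x invite conversion, per activated user
}

activated = funnel["acquisition"] * funnel["activation_rate"]
retained = activated * funnel["retention_rate"]
paying = retained * funnel["paid_rate"]
referred = activated * funnel["referral_k"]  # new users generated by referrals

print(f"activated={activated:.0f} retained={retained:.0f} "
      f"paying={paying:.0f} referred={referred:.0f}")
```

A small improvement at any stage multiplies through every stage below it, which is why growth teams rank levers before picking experiments.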
Growth Levers
```python
def calculate_growth_impact(metrics):
    """Calculate impact of improving each growth lever."""
    levers = {
        'traffic': {
            'current': metrics['monthly_visitors'],
            'improvement': 0.20,  # 20% more traffic
            'impact': metrics['monthly_visitors'] * 0.20 * metrics['conversion_rate'] * metrics['arpu']
        },
        'conversion': {
            'current': metrics['conversion_rate'],
            'improvement': 0.25,  # 25% better conversion
            'impact': metrics['monthly_visitors'] * (metrics['conversion_rate'] * 0.25) * metrics['arpu']
        },
        'frequency': {
            'current': metrics['purchases_per_year'],
            'improvement': 0.15,  # 15% more frequent
            'impact': metrics['customers'] * (metrics['purchases_per_year'] * 0.15) * metrics['aov']
        },
        'aov': {
            'current': metrics['aov'],
            'improvement': 0.10,  # 10% higher AOV
            'impact': metrics['customers'] * metrics['purchases_per_year'] * (metrics['aov'] * 0.10)
        },
        'retention': {
            'current': metrics['retention_rate'],
            'improvement': 0.05,  # 5% better retention
            # calculate_ltv_improvement is assumed to be defined elsewhere
            'impact': calculate_ltv_improvement(metrics, 0.05)
        }
    }
    return sorted(levers.items(), key=lambda x: x[1]['impact'], reverse=True)
```
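The retention lever above delegates to `calculate_ltv_improvement`, which is not defined in this document. One plausible sketch (an assumption, not the original implementation) models expected customer lifetime as `1 / (1 - retention_rate)`, so LTV is monthly ARPU over churn:

```python
def calculate_ltv_improvement(metrics, retention_lift):
    """Hypothetical helper: revenue impact of a relative retention improvement.

    Assumes 'arpu' is monthly revenue per customer and models expected
    lifetime (in months) as 1 / (1 - retention_rate).
    """
    current = metrics['retention_rate']
    improved = min(current * (1 + retention_lift), 0.99)  # cap to avoid divide-by-zero
    ltv_now = metrics['arpu'] / (1 - current)
    ltv_new = metrics['arpu'] / (1 - improved)
    return metrics['customers'] * (ltv_new - ltv_now)

# Example with hypothetical numbers:
impact = calculate_ltv_improvement(
    {'retention_rate': 0.80, 'arpu': 50, 'customers': 1_000}, 0.05)
print(round(impact))  # 62500
```

Because lifetime is a reciprocal of churn, small retention gains produce outsized LTV gains, which is why retention usually ranks near the top of the sorted levers.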
Experimentation Process
ICE Prioritization Framework
```python
def calculate_ice_score(experiments):
    """Score experiments using ICE framework."""
    scored = []
    for exp in experiments:
        ice_score = (
            exp['impact'] *       # 1-10: potential business impact
            exp['confidence'] *   # 1-10: confidence in hypothesis
            exp['ease']           # 1-10: ease of implementation
        ) / 3
        scored.append({
            'name': exp['name'],
            'hypothesis': exp['hypothesis'],
            'ice_score': ice_score,
            'impact': exp['impact'],
            'confidence': exp['confidence'],
            'ease': exp['ease']
        })
    return sorted(scored, key=lambda x: x['ice_score'], reverse=True)
```
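Note that the function above multiplies the three scores and divides by 3, a hybrid of the two common ICE variants (product and average). The classic formulation simply averages the three 1-10 scores; a self-contained sketch of that variant, with invented experiment names:

```python
def ice(impact, confidence, ease):
    # Classic ICE: simple average of the three 1-10 scores
    return (impact + confidence + ease) / 3

backlog = [
    ('Onboarding checklist', ice(8, 6, 5)),  # high impact, moderate ease
    ('Exit-intent popup', ice(4, 7, 9)),     # low impact, very easy
]
backlog.sort(key=lambda x: x[1], reverse=True)
print(backlog)
```

Both variants reward easy, high-confidence wins; the product variant punishes any single low score more aggressively, so pick one and apply it consistently across the backlog.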
Experiment Template
```yaml
Experiment Name: Homepage CTA Button Color Test
Hypothesis:
  statement: "Changing the CTA button from blue to orange will increase clicks"
  reasoning: "Orange creates more urgency and stands out from our blue brand"
Metrics:
  primary: CTA click rate
  secondary:
    - Sign-up conversion
    - Time on page
    - Bounce rate
Test Design:
  type: A/B test
  control: Blue button (#3498db)
  variant: Orange button (#e67e22)
  traffic_split: 50/50
  sample_size_needed: 10,000 per variant
  duration: 14 days minimum
Success Criteria:
  minimum_detectable_effect: 10%
  statistical_significance: 95%
Segmentation:
  - New vs returning visitors
  - Mobile vs desktop
  - Traffic source
```
Statistical Significance Calculator
```python
import scipy.stats as stats
import numpy as np


def calculate_sample_size(baseline_rate, mde, alpha=0.05, power=0.80):
    """Calculate required sample size per variant for an A/B test."""
    effect_size = mde * baseline_rate
    # Z-scores for significance level and power
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    # Average variance of the two arms (the 2x factor below restores the sum)
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2
    # Sample size per group
    n = (2 * pooled_var * (z_alpha + z_beta) ** 2) / (effect_size ** 2)
    return int(np.ceil(n))


def analyze_ab_test(control_visitors, control_conversions,
                    variant_visitors, variant_conversions):
    """Analyze A/B test results."""
    control_rate = control_conversions / control_visitors
    variant_rate = variant_conversions / variant_visitors
    # Lift calculation
    lift = (variant_rate - control_rate) / control_rate
    # Chi-squared test on the 2x2 contingency table
    contingency = [[control_conversions, control_visitors - control_conversions],
                   [variant_conversions, variant_visitors - variant_conversions]]
    chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
    return {
        'control_rate': control_rate,
        'variant_rate': variant_rate,
        'lift': lift,
        'lift_percent': f"{lift:.1%}",
        'p_value': p_value,
        'significant': p_value < 0.05,
        'confidence': 1 - p_value  # rough shorthand, not a formal confidence level
    }
```
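As a sanity check, the sample-size formula above compresses to a few lines. The loop below (hypothetical 5% baseline conversion) also illustrates the classic rule of thumb that halving the minimum detectable effect roughly quadruples the required sample:

```python
import numpy as np
from scipy import stats

def n_per_arm(p, mde, alpha=0.05, power=0.80):
    # Average variance of control (p) and variant (p * (1 + mde)) arms
    q = p * (1 + mde)
    var = (p * (1 - p) + q * (1 - q)) / 2
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * var * z**2 / (p * mde) ** 2))

for mde in (0.20, 0.10, 0.05):
    print(f"MDE {mde:.0%}: {n_per_arm(0.05, mde):,} users per arm")
```

This is why low-traffic sites should chase big swings (large MDE) rather than button-color tweaks: detecting a 5% relative lift at a 5% baseline takes well over 100,000 users per arm.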
Funnel Analysis
Conversion Funnel Tracking
```sql
-- Funnel analysis query
WITH funnel AS (
    SELECT
        user_id,
        MIN(CASE WHEN event = 'page_view' THEN timestamp END) as viewed,
        MIN(CASE WHEN event = 'signup_started' THEN timestamp END) as started,
        MIN(CASE WHEN event = 'signup_completed' THEN timestamp END) as completed,
        MIN(CASE WHEN event = 'first_purchase' THEN timestamp END) as purchased
    FROM events
    WHERE timestamp >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY user_id
)
SELECT
    COUNT(viewed) as step_1_viewed,
    COUNT(started) as step_2_started,
    COUNT(completed) as step_3_completed,
    COUNT(purchased) as step_4_purchased,
    -- Conversion rates
    ROUND(COUNT(started)::decimal / NULLIF(COUNT(viewed), 0) * 100, 2) as view_to_start,
    ROUND(COUNT(completed)::decimal / NULLIF(COUNT(started), 0) * 100, 2) as start_to_complete,
    ROUND(COUNT(purchased)::decimal / NULLIF(COUNT(completed), 0) * 100, 2) as complete_to_purchase,
    ROUND(COUNT(purchased)::decimal / NULLIF(COUNT(viewed), 0) * 100, 2) as overall_conversion
FROM funnel;
```
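The same step-wise logic can be run in plain Python over an in-memory event log; event names mirror the query above, and the sample events are invented:

```python
from collections import defaultdict

STEPS = ['page_view', 'signup_started', 'signup_completed', 'first_purchase']

def funnel_conversion(events):
    """events: iterable of (user_id, event_name) pairs."""
    seen = defaultdict(set)  # event name -> users who reached that step
    for user_id, event in events:
        if event in STEPS:
            seen[event].add(user_id)
    counts = [len(seen[s]) for s in STEPS]
    # Step-to-step conversion rates, guarding against empty steps
    rates = [counts[i + 1] / counts[i] if counts[i] else 0.0
             for i in range(len(STEPS) - 1)]
    return counts, rates

events = [(1, 'page_view'), (2, 'page_view'), (3, 'page_view'),
          (1, 'signup_started'), (2, 'signup_started'),
          (1, 'signup_completed'), (1, 'first_purchase')]
counts, rates = funnel_conversion(events)
# counts -> [3, 2, 1, 1]
```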
Cohort Retention Analysis
```sql
-- Weekly cohort retention
WITH cohort_data AS (
    SELECT
        user_id,
        DATE_TRUNC('week', first_seen) as cohort_week,
        DATE_TRUNC('week', activity_date) as activity_week
    FROM user_activity
),
cohort_size AS (
    SELECT
        cohort_week,
        COUNT(DISTINCT user_id) as users
    FROM cohort_data
    GROUP BY cohort_week
),
retention AS (
    SELECT
        c.cohort_week,
        -- Weeks elapsed since the cohort week. EXTRACT(WEEK FROM interval)
        -- is not supported in PostgreSQL, so derive it from epoch seconds.
        FLOOR(EXTRACT(EPOCH FROM (c.activity_week - c.cohort_week)) / 604800) as week_number,
        COUNT(DISTINCT c.user_id) as retained_users
    FROM cohort_data c
    GROUP BY c.cohort_week, week_number
)
SELECT
    r.cohort_week,
    cs.users as cohort_size,
    r.week_number,
    r.retained_users,
    ROUND(r.retained_users::decimal / cs.users * 100, 2) as retention_rate
FROM retention r
JOIN cohort_size cs ON r.cohort_week = cs.cohort_week
ORDER BY r.cohort_week, r.week_number;
```
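A minimal in-memory version of the same cohort arithmetic, assuming activity rows have already been reduced to `(user_id, cohort_week, activity_week)` tuples with weeks as integers:

```python
from collections import defaultdict

def cohort_retention(activity):
    """activity: iterable of (user_id, cohort_week, activity_week) tuples."""
    cohorts = defaultdict(set)   # cohort_week -> users in the cohort
    retained = defaultdict(set)  # (cohort_week, week_number) -> active users
    for user, cohort, week in activity:
        cohorts[cohort].add(user)
        retained[(cohort, week - cohort)].add(user)
    return {
        key: round(len(users) / len(cohorts[key[0]]) * 100, 2)
        for key, users in sorted(retained.items())
    }

activity = [(1, 0, 0), (2, 0, 0), (1, 0, 1), (3, 1, 1), (3, 1, 2)]
rates = cohort_retention(activity)
# {(0, 0): 100.0, (0, 1): 50.0, (1, 0): 100.0, (1, 1): 100.0}
```

Plotting each cohort's rates by week number gives the retention curves referenced earlier; a flattening curve indicates a retained core, while a curve that decays to zero signals a leaky product.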
Key Metrics
| Metric | Formula | Benchmark |
|---|---|---|
| Conversion Rate | Conversions / Visitors | 2-5% (varies) |
| CAC | Marketing Spend / New Customers | Varies by industry |
| LTV | ARPU × Average Lifetime | 3x CAC minimum |
| Payback Period | CAC / Monthly Revenue per Customer | <12 months |
| NRR | (Start + Expansion - Churn) / Start MRR | >100% |
| K-factor | Invites × Conversion Rate | >1 for virality |
| DAU/MAU | Daily Active / Monthly Active | 20-50% |
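The formulas in the table translate directly into code; a small helper with hypothetical inputs (the function name and numbers are illustrative, not a standard API):

```python
def unit_economics(marketing_spend, new_customers, arpu, avg_lifetime_months):
    """Compute CAC, LTV, LTV:CAC ratio, and payback period in months.

    Assumes 'arpu' is monthly revenue per customer.
    """
    cac = marketing_spend / new_customers
    ltv = arpu * avg_lifetime_months
    return {
        'cac': cac,
        'ltv': ltv,
        'ltv_to_cac': ltv / cac,
        'payback_months': cac / arpu,
    }

m = unit_economics(marketing_spend=50_000, new_customers=500,
                   arpu=25, avg_lifetime_months=24)
# cac=100.0, ltv=600, ltv_to_cac=6.0, payback_months=4.0
```

Against the benchmarks above, this hypothetical business is healthy: LTV is 6x CAC (minimum 3x) and payback is 4 months (target under 12).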
Viral Loop Design
```yaml
Types of Virality:
  inherent:
    description: Product requires others to use
    examples: Slack, Zoom, Dropbox sharing
    k_factor_potential: High (1.5-3.0)
  artificial:
    description: Incentivized referrals
    examples: Dropbox space, Uber credits
    k_factor_potential: Medium (0.5-1.5)
  word_of_mouth:
    description: Organic recommendations
    examples: Great products, NPS > 50
    k_factor_potential: Low-Medium (0.2-0.8)

Viral Loop Optimization:
  - Reduce friction in invite flow
  - Clear value proposition for inviter AND invitee
  - Multiple sharing channels
  - Timing of ask (after value delivered)
  - Social proof in referral message
```
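K-factor compounds generation over generation, which is why the thresholds above matter; a toy simulation with hypothetical seed sizes and K values:

```python
def viral_growth(seed_users, k_factor, generations):
    """Total users after N referral generations with a constant K-factor."""
    total, current = seed_users, seed_users
    for _ in range(generations):
        current = current * k_factor  # new users invited by the last wave
        total += current
    return round(total)

# K > 1 keeps compounding; K < 1 converges toward seed / (1 - K)
print(viral_growth(1_000, 1.2, 5))  # 9930
print(viral_growth(1_000, 0.5, 5))  # 1969
```

Even a sub-viral K of 0.5 nearly doubles the seed audience for free, which is why referral programs can pay off well below the K > 1 "true virality" bar.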
Tools Proficiency
Analytics
- Product: Amplitude, Mixpanel, Heap
- Web: Google Analytics 4, Plausible
- Data Warehouse: BigQuery, Snowflake
Testing
- A/B Testing: Optimizely, VWO, LaunchDarkly
- Feature Flags: Split, Flagsmith
- Session Recording: FullStory, Hotjar
Visualization
- BI: Tableau, Looker, Mode
- Dashboards: Metabase, Redash
Attribution
- Mobile: Branch, Adjust, AppsFlyer
- Web: Segment, mParticle
Automation
- Lifecycle: Iterable, Customer.io, Braze
- In-app: Appcues, Pendo, Intercom
Best Practices
- Hypothesis-driven: every experiment starts with a hypothesis
- Statistical rigor: ensure sufficient sample size and significance
- One variable: test one variable at a time
- Document learnings: even failed experiments are valuable
- Quick iterations: many small tests beat one big one
- North Star focus: optimize for the primary metric