
Reddit Opportunity Hunter

Mission

Product Opportunity Hunting, NOT Lead Hunting.
  • Input: Reddit discussions from high-purchasing-power markets
  • Output: Actionable product opportunity reports with build assessments
  • Goal: Surface 1-2 week MVP opportunities in USD/EUR/GBP markets
  • You are scanning for patterns of unmet need, not individual sales leads
  • Every recommendation must pass the Solo Dev Fit test before being highlighted
  • Reference data lives in references/ — subreddits, keywords, seasonal patterns

Quick Start / First Run

  1. Install dependencies: brew install curl jq
  2. Run health check: reddit.sh diagnose — verifies curl, jq, network connectivity
  3. The script auto-creates .reddit/ with reports/, opportunities/, archive/
  4. Verify .gitignore includes .reddit/ — the script warns if missing
  5. Review references/subreddits.json — confirm subreddits match the user's domain
  6. Review references/intent_keywords.json — adjust if the user targets a specific niche
  7. First scan: reddit.sh fetch --campaign global_english --sort new --pages 1
  8. Inspect the enriched JSON output, then proceed to Phase 2 (Analysis)
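Step 4 can be scripted; a minimal, idempotent sketch run from the repo root:

```shell
# Append .reddit/ to .gitignore only if it is not already listed.
touch .gitignore
grep -qxF '.reddit/' .gitignore || echo '.reddit/' >> .gitignore
```

Running it twice leaves a single entry, so it is safe to include in setup scripts.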

Core Workflow

Phase 1: Data Collection

Run reddit.sh fetch for each campaign defined in references/subreddits.json, ordered by scan_priority:

```bash
reddit.sh fetch --campaign global_english --sort new --pages 2
reddit.sh fetch --campaign dach_german --sort new --pages 1
reddit.sh fetch --campaign nordic_scandi --sort new --pages 1
```

Key details:
  • Multi-sub merge: the script combines subreddits as r/A+B+C/new.json to reduce API calls
  • Output is enriched JSON — jq computes _jq_enriched fields (intent matches, sentiment, age)
  • Do NOT re-compute what jq already provides; read the enriched fields directly
  • Rate limit budget: ~100 requests per ~260 seconds; a single fetch loop uses ~40-45
  • Fetch Tier S campaigns every loop, Tier A daily, Tier B weekly
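Reading the enriched fields directly might look like this (sample data inline; the exact _jq_enriched sub-field layout is an assumption — inspect your actual fetch output):

```shell
# Keep only posts with at least one intent-keyword match and print
# the title plus the matched keywords.
jq -r '
  .[]
  | select((._jq_enriched.intent_keywords_matched | length) > 0)
  | "\(.title)  matches: \(._jq_enriched.intent_keywords_matched | join(", "))"
' <<'EOF'
[
  {"title": "Looking for a tool to sync invoices",
   "_jq_enriched": {"intent_keywords_matched": ["looking for a tool"], "sentiment": "negative"}},
  {"title": "Weekly chat thread",
   "_jq_enriched": {"intent_keywords_matched": [], "sentiment": "neutral"}}
]
EOF
```

In practice you would pipe the fetch output file into the same filter instead of a heredoc.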

Phase 2: Analysis (Claude)

Read the enriched JSON from Phase 1. For each batch:
  1. Pain point clustering — group similar complaints across posts and subreddits
  2. Frequency counting — how many posts mention this pain this week?
  3. Intensity assessment — use intent_keywords_matched and negative_signals from the enriched data
  4. Market validation signals — look for: budget mentions, team size, already_tried products, willingness to pay
  5. Score each opportunity using the scoring algorithm below
  6. Deduplicate against seen_posts in .reddit/.reddit.json
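Deduplication can be sketched with jq; a minimal example with inline sample data (the seen_posts layout — an id-to-timestamp map — is an assumption about the state schema):

```shell
# Drop batch posts whose id is already recorded in seen_posts.
seen='{"t3_abc": "2025-01-01"}'
jq --argjson seen "$seen" \
   'map(select(.id as $id | ($seen | has($id)) | not))' <<'EOF'
[{"id": "t3_abc", "title": "already processed"},
 {"id": "t3_def", "title": "new post"}]
EOF
```

In practice $seen would be loaded from .reddit/.reddit.json rather than hard-coded.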

Phase 3: Deep Verification (score >= 8)

For opportunities scoring 8 or above:
  1. Fetch comment trees: reddit.sh comments <post_id> <subreddit>
  2. Search competitive landscape: reddit.sh search "competitor alternative" --global
  3. Add post to watched_threads for ongoing monitoring
  4. Optional: use WebSearch for cross-platform validation (Twitter/X, HN, G2, Capterra)

Phase 3.5: Micro-Validation

Before promoting an opportunity to "validated":
  • Suggest a landing page smoke test to the user
  • Cross-platform search: Twitter/X, Hacker News, Indie Hackers for the same pain
  • Search for failed attempts at similar products (important signal)
  • Check Product Hunt / GitHub for recent launches in the space

Phase 4: Report

  • Daily scan report -> .reddit/reports/YYYY-MM-DD-scan.md
  • High-value opportunities -> .reddit/opportunities/<slug>.md
  • Use the templates defined below
  • 每日扫描报告 ->
    .reddit/reports/YYYY-MM-DD-scan.md
  • 高价值机会报告 ->
    .reddit/opportunities/<slug>.md
  • 使用下方定义的模板

reddit.sh Reference

| Mode | Usage | Purpose |
|------|-------|---------|
| fetch | reddit.sh fetch --campaign X --sort new --pages 2 | Fetch & enrich posts |
| comments | reddit.sh comments <id> <sub> | Comment tree for deep-dive |
| search | reddit.sh search "query" [--global] [--type post\|user\|subreddit] | Reddit search |
| discover | reddit.sh discover <keyword> [--method keyword\|autocomplete] | Find new subreddits |
| profile | reddit.sh profile <user> [--enrich] | User history analysis |
| crosspost | reddit.sh crosspost [--campaign X] | Cross-poster detection |
| stickied | reddit.sh stickied [subreddit] | Stickied post mining |
| firehose | reddit.sh firehose [sub1+sub2] | Real-time comment stream |
| duplicates | reddit.sh duplicates <post_id> | Link propagation tracking |
| wiki | reddit.sh wiki <sub> [page] | Community wiki content |
| stats | reddit.sh stats | Database / state statistics |
| export | reddit.sh export [--format csv\|json] | CRM-ready export |
| cleanup | reddit.sh cleanup | Purge expired data |
| diagnose | reddit.sh diagnose | Health check (jq, dirs, state) |

Helper functions (called during loop cycles, not directly by user):
  • watch_check — check watched threads for new comments since last check
  • competitor_search <campaign> — expand competitor query templates from config
  • update_subreddit_quality <sub> <scanned> [opportunities] — track hit rates per subreddit

Scoring Algorithm

raw_score = intensity      * 0.20
          + competitive_gap * 0.20
          + build_feasibility * 0.20
          + market_value   * 0.20
          + frequency      * 0.15
          + timeliness     * 0.05
Each dimension is scored 1-10 individually.
Adjustments:
adjusted = raw_score
  + cross_market_bonus   (same pain in 3+ Tier S markets -> +1.5)
  + seasonal_bonus       (matches upcoming seasonal pattern -> +1.0; just passed -> -1.0)
  - false_positive_penalty (see below)

final_score = clamp(adjusted, 1, 10)
Weekly decay: if no new mentions this week, final_score *= 0.88
Market tier bonuses (applied to the market_value dimension, not the final score):
  • Tier S (US, UK, DE, FR, NL, JP, AU, KR, Nordics): +3
  • Tier A (IN, BR, SEA, LATAM, PL, CZ): +1
  • Tier B (Africa, South Asia, rest): +0
Thresholds:
  • >= 8: Deep verification (Phase 3) + highlight in report
  • >= 7: Show in daily report under New Opportunities
  • < 7: Aggregate only under Trending Pain Points
False positive penalties:
  • Single user mention, no corroboration: -3
  • One-time complaint (user has no topic history): -2
  • Strong open-source alternative (>5k GitHub stars): -2
  • Requires enterprise sales process: mark as "not solo dev fit", do not penalize score but flag
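The scoring arithmetic above can be sketched with awk (dimension values below are sample inputs, each on the 1-10 scale, with an illustrative +1.5 cross-market bonus and no penalty):

```shell
# Weighted raw score, bonus/penalty adjustment, then clamp to [1, 10].
awk -v intensity=8 -v gap=7 -v build=9 -v market=8 -v freq=6 -v timely=5 \
    -v bonus=1.5 -v penalty=0 'BEGIN {
  raw = intensity*0.20 + gap*0.20 + build*0.20 + market*0.20 + freq*0.15 + timely*0.05
  adjusted = raw + bonus - penalty
  if (adjusted > 10) adjusted = 10
  if (adjusted < 1)  adjusted = 1
  printf "raw=%.2f final=%.2f\n", raw, adjusted
}'
# -> raw=7.55 final=9.05
```

A final score of 9.05 would clear the >= 8 deep-verification threshold.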

Intent Tiers

Reference references/intent_keywords.json for the full keyword list. You classify intent tier from context — jq only provides raw keyword matches.

| Tier | Signal | Examples |
|------|--------|----------|
| 1 | Direct purchase intent | "willing to pay", "budget for", "take my money" |
| 2 | Active solution seeking | "looking for a tool", "switching from", "need alternative" |
| 3 | Pain expression | "frustrated with", "too expensive", "waste of time" |
| 4 | Research | "what do you use for", "best practices", "recommendations" |
| 5 | Indirect signals | Domain discussions implying unmet need |

Solo Dev Fit Assessment

Evaluate independently of opportunity score. All must be true for a pass:
  • Build time < 2 weeks for MVP
  • No ongoing compliance burden (HIPAA, SOC2, etc.)
  • Self-serve distribution (no enterprise sales cycle)
  • Subscription or usage-based pricing model viable
  • Known tech stack (no deep domain R&D)
  • No network effects required for initial value

Opportunity Report Template


Product Opportunity: [Name]

Score: X.X/10 | Intent Tier: N | Solo Dev Fit: Yes/No

Pain Point

[2-3 sentence summary of the unmet need]

Market Evidence

  • Frequency: N posts in last 7 days across M subreddits
  • Intensity: [low/medium/high] — key signals: ...
  • Geography: [primary markets]
  • Target user: [persona]
  • Budget signals: [quotes or indicators]

Competitive Landscape

  • Existing paid tools: [list with pricing]
  • Open-source alternatives: [list with GitHub stars]
  • Why they fail: [gap analysis]
  • Recent launches: [last 6 months]

Build Assessment

  • Complexity: [low/medium/high]
  • MVP scope: [3-5 core features]
  • Build time: [estimate]
  • Tech stack: [recommendation]
  • Technical moat: [if any]
  • Solo Dev Fit: [Yes/No + reasoning]

Revenue Model

  • Pricing anchor: [competitor pricing context]
  • Suggested tiers: [USD/EUR with PPP notes]
  • Distribution: [channels]
  • Market size estimate: [TAM/SAM]
  • Revenue potential: [12-month projection]
  • CAC / Payback: [estimate]
  • Churn risk: [assessment]

Cross-Market Signal

[Evidence from other markets/platforms]

Source Posts

  • post title — r/subreddit — N upvotes, M comments — YYYY-MM-DD

Daily Report Template


Reddit Opportunity Scan — YYYY-MM-DD

New Opportunities (score >= 7)

[Opportunity cards with score, pain summary, top source post]

Trending Pain Points

[Clusters below threshold but gaining frequency]

Time-Sensitive (< 2h old, high intent)

[Posts needing immediate attention — Tier 1-2 intent, fresh]

Scan Stats

  • Subreddits scanned: N
  • Posts analyzed: N
  • New opportunities: N
  • Watched threads updated: N
  • API calls used: N / 100

Loop Integration

Trigger with: /loop 30m /reddit
Each cycle:
  1. Read references/subreddits.json (hot reload — picks up edits between cycles)
  2. Fetch per scan_priority: Tier S every loop, Tier A daily, Tier B weekly
  3. Deduplicate against seen_posts in .reddit/.reddit.json
  4. watch_check for watched threads with new activity
  5. competitor_search for configured campaigns
  6. Analyze, score, cluster new posts
  7. Output incremental report (append to daily scan file)
  8. Score >= 8 -> alert: OPPORTUNITY: [title] (score X.X)
  9. update_subreddit_quality with hit rates
Scheduled reports:
  • Weekly summary: trigger on Sundays (or first loop after Sunday midnight)
  • Monthly summary: last day of month (or first loop after)
  • If a scheduled report was missed, generate it on next run
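The missed-report rule can be sketched as a date comparison (the stored last-run date would come from state; it is hard-coded here for illustration):

```shell
# A weekly summary is due if the last run predates the most recent Sunday.
last_weekly="2025-01-05"
# GNU date shown; the macOS/BSD equivalent is: date -v-sun +%F
last_sunday=$(date -d "last sunday" +%F 2>/dev/null || date -v-sun +%F)
if [[ "$last_weekly" < "$last_sunday" ]]; then
  echo "weekly summary due (last run: $last_weekly)"
fi
```

Lexicographic comparison works here because ISO dates sort chronologically.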

State Management

.reddit/.reddit.json tracks:

| Key | Purpose | TTL |
|-----|---------|-----|
| seen_posts | Deduplication | 30 days |
| watched_threads | Monitor for new comments | 7 days default |
| opportunities | Lifecycle tracking | Permanent |
| products_seen | Known tools/competitors | Permanent |
| influencers | High-value Reddit users | Permanent |
| community_overlap | Cross-sub posting patterns | 30 days |
| subreddit_quality | Hit rate per subreddit | Permanent |

Opportunity lifecycle:
discovered -> investigating -> validated -> building -> launched -> revenue -> archived
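TTL pruning of the state file can be sketched with jq (the id-to-date layout of seen_posts and the fixed cutoff are illustrative assumptions; in practice the cutoff would be derived with date(1) and the result written back to .reddit/.reddit.json):

```shell
# Drop seen_posts entries whose first-seen date is older than the cutoff.
jq --arg cutoff "2025-03-01" '
  .seen_posts |= with_entries(select(.value >= $cutoff))
' <<'EOF'
{"seen_posts": {"t3_old": "2025-01-15", "t3_new": "2025-03-10"}}
EOF
```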

Safety

  • Public data only — no PII beyond Reddit usernames
  • Rate limit compliant — respect the ~100 req/260s budget
  • .reddit/ must be in .gitignore — never commit user data
  • Reply drafts always marked [REVIEW BEFORE POSTING]
  • Suggest max 5 replies per day to avoid spam patterns

Skill Integration

Related skills for downstream workflows:
  • content-strategy — turn validated pain points into content calendars
  • copywriting — turn opportunities into landing page copy
  • competitor-alternatives — deep competitive analysis
  • cold-email — draft outreach / DM templates
  • social-content — repurpose Reddit insights for social posts