solo-research

/research

Deep research before PRD generation. Produces a structured research.md with competitive analysis, user pain points, SEO/ASO keywords, naming/domain options, and market sizing.

MCP Tools (use if available)

If MCP tools are available, prefer them over CLI:
  • kb_search(query, n_results)
    — search knowledge base for related docs
  • web_search(query, engines, include_raw_content)
    — web search with engine routing
  • session_search(query, project)
    — find how similar research was done before
  • project_info(name)
    — check project details and stacks
  • codegraph_explain(project)
    — architecture overview of an existing project (stack, patterns, deps)
  • codegraph_query(query)
    — raw Cypher queries against code graph (find shared packages, dependencies)
  • project_code_search(query, project)
    — semantic search over project source code
MCP web_search supports engine override: engines="reddit", engines="youtube", etc. If MCP tools are not available, use WebSearch/WebFetch as primary.

Reddit Search Best Practices

  • Max 3 keywords in reddit queries — more keywords = fewer results
  • Good: "product hunt outreach launch" — Bad: "product hunt scraper maker profiles linkedin outreach launch strategy"
  • include_raw_content=true rarely works for Reddit — use fallback chain below

Reddit Content Access — Fallback Chain

When a search finds a relevant Reddit post, reading its full content requires a fallback chain:
1. MCP Playwright (old.reddit.com)     ← BEST: bypasses CAPTCHA, full post + comments
2. PullPush API (api.pullpush.io)      ← search by query/subreddit/author/score/date
3. MCP web_search include_raw_content   ← sometimes works, often truncated
4. WebFetch / WebSearch snippets        ← last resort, partial data only
Method 1: MCP Playwright (recommended for full post content)
  • Use browser_navigate("https://old.reddit.com/r/...") — old.reddit.com loads without CAPTCHA
  • www.reddit.com shows CAPTCHA ("Prove your humanity"), always use old.reddit.com
  • Snapshot contains full post text + comments in structured YAML
  • Example: old.reddit.com/r/indiehackers/comments/abc123/post_title/
Method 2: PullPush API (for search/discovery)
  • Endpoint: https://api.pullpush.io/reddit/submission/search
  • Params: q, subreddit, author, score (e.g. >10,<100), since / until (unix timestamps), size (max 100)
  • Rate limits: soft 15 req/min, hard 30 req/min, 1000 req/hr. Sleep 4 sec between requests.
  • Returns JSON with full selftext, author, score, created_utc
  • Comment search: /reddit/comment/search (same params)
  • Can use via curl:

```bash
curl -s "https://api.pullpush.io/reddit/submission/search?q=product+hunt+launch&subreddit=indiehackers&size=10"
```
Method 3: Reddit .json endpoint (often blocked)
  • Append .json to any Reddit URL: reddit.com/r/sub/comments/id.json
  • Returns raw JSON with full post + comments
  • Frequently blocked (403/429) — use as opportunistic fallback only
Method 4: PRAW (Reddit Official API, for live search/user profiles)
  • praw-dev/praw — Python Reddit API Wrapper
  • OAuth2 auth, built-in rate limiting, sync/async support
  • Best for: live subreddit search, user profiles, comment trees
  • pip install praw / uv add praw
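The PullPush method above can be sketched as a tiny Python client. This is a hedged sketch, not an official SDK: the endpoint and the q/subreddit/size params are the documented ones above, and the 4-second sleep implements the soft 15 req/min limit.

```python
import json
import time
import urllib.parse
import urllib.request

PULLPUSH = "https://api.pullpush.io/reddit/submission/search"

def build_pullpush_url(query, subreddit=None, size=10):
    """Build a PullPush submission-search URL (size is capped at 100)."""
    params = {"q": query, "size": min(size, 100)}
    if subreddit:
        params["subreddit"] = subreddit
    return PULLPUSH + "?" + urllib.parse.urlencode(params)

def search_submissions(query, subreddit=None, size=10):
    """Fetch submissions, then sleep 4s to stay under the soft 15 req/min limit."""
    url = build_pullpush_url(query, subreddit=subreddit, size=size)
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    time.sleep(4)  # courtesy pause between successive requests
    # Each item carries full selftext, author, score, created_utc
    return data.get("data", [])
```

search_submissions("product hunt launch", subreddit="indiehackers") would then return dicts mirroring the curl example: full selftext, author, score, created_utc.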

Search Strategy: Hybrid (MCP + WebSearch)

Use multiple search backends together. Each has strengths:
| Step | Best backend | Why |
| --- | --- | --- |
| Competitors | WebSearch + site:producthunt.com + site:g2.com | Broad discovery + Product Hunt + B2B reviews |
| Reddit / Pain points | MCP web_search with engines: reddit (max 3 keywords!) + MCP Playwright for full posts | PullPush API, full selftext in content |
| YouTube reviews | MCP web_search with engines: youtube | Video reviews (views = demand) |
| Market size | WebSearch | Synthesizes numbers from 10 sources |
| SEO / ASO | WebSearch | Broader coverage, trend data |
| Page scraping | WebFetch or MCP web_search with include_raw_content | Up to 5000 chars of page content |
| Hacker News | WebSearch site:news.ycombinator.com | HN discussions and opinions |
| Funding / Companies | WebSearch site:crunchbase.com | Competitor funding, team size |
| Verified revenue | WebFetch trustmrr.com/startup/<slug> | Stripe-verified MRR, growth, tech stack, traffic |

Search Availability

Use WebSearch/WebFetch as primary. If the MCP web_search tool is available, use it for better results (supports engine routing and raw content extraction).

Steps

  1. Parse the idea from $ARGUMENTS. If empty, ask the user what idea they want to research.
  2. Detect product type — infer from the idea description:
    • Keywords like "app", "mobile", "iPhone", "Android" → mobile (ios/android)
    • Keywords like "website", "SaaS", "dashboard", "web app" → web
    • Keywords like "CLI", "terminal", "command line" → cli
    • Keywords like "API", "backend", "service" → api
    • Keywords like "extension", "plugin", "browser" → web (extension)
    • Default if unclear → web
    • Only ask via AskUserQuestion if truly ambiguous (e.g., "build a todo app" could be web or mobile)
    • This determines which research sections apply (ASO for mobile, SEO for web, etc.)
  3. Search knowledge base and past work:
    • If MCP kb_search available: kb_search(query="<idea keywords>", n_results=5)
    • If MCP session_search available: session_search(query="<idea keywords>") — check if this idea was researched before
    • Otherwise: Grep for keywords in .md files
    • Check if research.md or prd.md already exist for this idea.
  4. Check existing portfolio (if MCP codegraph tools available):
    • codegraph_explain(project="<similar project>") — architecture overview of related projects in the portfolio
    • project_code_search(query="<relevant pattern>", project="<sibling>") — find reusable code, patterns, infrastructure
    • codegraph_query("MATCH (p:Project)-[:DEPENDS_ON]->(pkg:Package) WHERE pkg.name CONTAINS '<relevant tech>' RETURN p.name, pkg.name") — find projects using similar tech
    • This helps assess: feasibility, reusable code, stack decisions, and time estimates
    • If no MCP tools available, skip this step.
  5. Competitive analysis — use WebSearch (primary) + MCP web_search (if available):
    • "<idea> competitors alternatives 2026" — broad discovery
    • "<idea> app review pricing" — pricing data
    • WebFetch or MCP include_raw_content=true: scrape competitor URLs for detailed pricing
    • MCP engines: reddit or WebSearch: "<idea> vs" — user opinions
    • "site:producthunt.com <idea>" — Product Hunt launches
    • "site:g2.com <idea>" or "site:capterra.com <idea>" — B2B reviews
    • "site:crunchbase.com <competitor>" — funding, team size
    • "site:trustmrr.com <idea>" or WebFetch trustmrr.com/startup/<slug> — Stripe-verified MRR, growth %, tech stack, traffic (24h/7d/30d)
    • For each competitor extract: name, URL, pricing, key features, weaknesses, verified MRR (if on TrustMRR)
  6. User pain points — use MCP web_search / WebSearch + YouTube:
    • MCP engines: reddit or WebSearch: "<problem>" — Reddit discussions (max 3 keywords!)
    • If a Reddit post is found but its content is unavailable → open via MCP Playwright: browser_navigate("https://old.reddit.com/r/...") — old.reddit.com bypasses CAPTCHA
    • MCP engines: youtube or WebSearch: "<problem> review" — video reviews
    • "site:news.ycombinator.com <problem>" — Hacker News opinions
    • WebSearch: "<problem> frustrating OR annoying" — broader sweep
    • Synthesis: top 5 pain points with quotes and source URLs
  7. SEO / ASO analysis (depends on product type from step 2):
    For web apps:
    • "<competitor> SEO keywords ranking" — competitor keywords
    • "<problem domain> search volume trends 2026" — demand signals
    • WebFetch or MCP include_raw_content: scrape competitor pages for meta tags
    • Result: keyword table (keyword, intent, competition, relevance)
    For mobile apps:
    • "<category> App Store top apps keywords 2026" — category landscape
    • "site:reddit.com <competitor app> review" — user complaints
    • Result: ASO keywords, competitor ratings, common complaints
  8. Naming, domains, and company registration:
    • Generate 7-10 name candidates (mix of descriptive + invented/brandable)
    • Domain availability: triple verification (whois → dig → RDAP)
    • Trademark + company name conflict checks
    See references/domain-check.md (bundled with this skill) for TLD priority tiers, bash scripts, gotchas, and trademark check methods.
  9. Market sizing (TAM/SAM/SOM) — use WebSearch (primary):
    • WebSearch: "<market> market size 2025 2026 report" — synthesize numbers
    • WebSearch: "<market> growth rate CAGR billion" — growth projections
    • Extrapolation: TAM → SAM → SOM (Year 1)
  10. Write research.md — write to docs/research.md in the current project directory. Create the directory if needed.
  11. Output summary:
    • Key findings (3-5 bullets)
    • Recommendation: GO / NO-GO / PIVOT with brief reasoning
    • Path to the generated research.md
    • Suggested next step: /validate <idea>
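The whois gotcha behind the triple verification in step 8 can be illustrated with a small heuristic. A sketch under stated assumptions: looks_registered is a hypothetical helper, and the field markers it checks (Name Server, Registrar, nserver) are common whois labels rather than an exhaustive set — always confirm with dig and RDAP as the step says.

```python
def looks_registered(whois_text: str) -> bool:
    """Heuristic: a domain is likely registered if its whois output
    names a registrar or name servers. Do NOT trust creation dates:
    for .app/.dev, whois may show the TLD's own creation date even
    for unregistered domains."""
    markers = ("name server:", "registrar:", "nserver:")
    for line in whois_text.lower().splitlines():
        line = line.strip()
        if line.startswith(markers) and line.split(":", 1)[1].strip():
            return True
    return False

# If whois says nothing useful, still confirm with dig NS and an RDAP
# lookup before declaring the domain available.
```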

research.md Format

See references/research-template.md (bundled with this skill) for the full output template (frontmatter, 6 sections, tables).

Notes

  • Always use kebab-case for project directory names
  • If research.md already exists, ask before overwriting
  • Run search queries in parallel when independent

Common Issues

MCP web_search not available

Cause: MCP server not running or not configured. Fix: Use WebSearch/WebFetch as primary. For better results with engine routing (Reddit, GitHub, YouTube), set up SearXNG (private, self-hosted, free) and configure solograph MCP.

Domain check returns wrong results

Cause: .app / .dev whois shows the TLD creation date for unregistered domains. Fix: Use the triple verification method (whois → dig → RDAP). Check the Name Server and Registrar fields, not the creation date.

research.md already exists

Cause: Previous research run for this idea. Fix: Skill asks before overwriting. Choose to merge new findings or start fresh.

Proactive Search Practices

Reddit Deep Dive

  1. MCP web_search or WebSearch — use for discovery (max 3 keywords for Reddit), get post URLs
  2. MCP Playwright — open old.reddit.com URLs to read full post + comments (bypasses CAPTCHA)
  3. Extract quotes — copy key phrases with attribution (u/username, subreddit, date)
  4. Cross-post detection — same post in multiple subreddits = higher signal

Product Hunt Research

  1. producthunt.com/visit-streaks — streak leaderboard (scrapeable via Playwright)
  2. producthunt.com/@username — profile with social links, maker history, points
  3. PH API v2 is broken — redacts usernames/Twitter since Feb 2023, use scraping
  4. Apify actors — check for DEPRECATED status before relying on them (mass deprecation Sep 2025)

TrustMRR Revenue Validation

  1. trustmrr.com/startup/<slug> — Stripe-verified MRR, growth %, subscriptions, traffic
  2. WebFetch works — no auth needed, returns full page with JSON-LD structured data
  3. Data fields: MRR, all-time revenue, last 30 days, active subs, tech stack, traffic (24h/7d/30d), category, founder X handle
  4. Use for: competitor revenue validation, market sizing with real data, tech stack discovery
  5. Search: "site:trustmrr.com <category or idea>" to find similar startups with verified revenue
  6. Apify scrapers: TrustMRR Scraper for bulk extraction
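Since the pages embed JSON-LD, the structured data can be pulled from fetched HTML with the standard library alone. A minimal sketch, assuming conventional script type="application/ld+json" tags; extract_json_ld is a hypothetical helper, and the fields inside each block are whatever the page ships:

```python
import json
import re

def extract_json_ld(html: str):
    """Return every JSON-LD object embedded in an HTML page."""
    pattern = re.compile(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    blocks = []
    for raw in pattern.findall(html):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the scrape
    return blocks
```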

GitHub Library Discovery

  1. MCP engines: github — often returns empty, use WebSearch as primary
  2. github.com/topics/<keyword> — browse topic pages via Playwright or WebFetch
  3. Check stars, last update, open issues — avoid abandoned repos

Blocked Content Fallback Chain

MCP Playwright (best) → PullPush API (Reddit) → WebFetch → WebSearch snippets → MCP web_search include_raw_content
If a page returns 403/CAPTCHA via WebFetch:
  1. Reddit: MCP Playwright → old.reddit.com (always works, no CAPTCHA)
  2. Reddit search: PullPush API api.pullpush.io (structured JSON, full selftext)
  3. Product Hunt / other sites: MCP Playwright browser_navigate (no CAPTCHA on most sites)
  4. General: WebSearch snippets + WebSearch synthesis
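The Reddit leg of this chain boils down to rewriting URLs onto old.reddit.com before navigating. A minimal helper sketch (to_old_reddit is a hypothetical name), assuming standard reddit.com hostnames:

```python
from urllib.parse import urlparse, urlunparse

def to_old_reddit(url: str) -> str:
    """Rewrite any reddit.com URL to old.reddit.com, which loads
    without the 'Prove your humanity' CAPTCHA that www shows."""
    parts = urlparse(url)
    host = parts.netloc.lower()
    if host in ("reddit.com", "www.reddit.com", "new.reddit.com"):
        parts = parts._replace(netloc="old.reddit.com")
    return urlunparse(parts)
```

Non-Reddit URLs pass through unchanged, so the same helper can sit in front of every browser_navigate call in the chain.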