# solo-research: /research

Deep research before PRD generation. Produces a structured `research.md` with competitive analysis, user pain points, SEO/ASO keywords, naming/domain options, and market sizing.

## MCP Tools (use if available)
If MCP tools are available, prefer them over CLI:

- `kb_search(query, n_results)` — search knowledge base for related docs
- `web_search(query, engines, include_raw_content)` — web search with engine routing
- `session_search(query, project)` — find how similar research was done before
- `project_info(name)` — check project details and stacks
- `codegraph_explain(project)` — architecture overview of an existing project (stack, patterns, deps)
- `codegraph_query(query)` — raw Cypher queries against the code graph (find shared packages, dependencies)
- `project_code_search(query, project)` — semantic search over project source code

MCP `web_search` supports engine override: `engines="reddit"`, `engines="youtube"`, etc.

If MCP tools are not available, fall back to WebSearch/WebFetch as primary.
## Reddit Search Best Practices
- Max 3 keywords in Reddit queries — more keywords = fewer results
- Good: `"product hunt outreach launch"` — Bad: `"product hunt scraper maker profiles linkedin outreach launch strategy"`
- `include_raw_content=true` rarely works for Reddit — use the fallback chain below
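The three-keyword cap can be enforced mechanically before a query is sent. A minimal sketch (the query string is only an illustration):

```shell
# Trim a long query to its first 3 keywords before sending it to Reddit search
QUERY="product hunt scraper maker profiles linkedin outreach"
SHORT=$(echo "$QUERY" | awk '{print $1, $2, $3}')
echo "$SHORT"
```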
## Reddit Content Access — Fallback Chain
When a search finds a relevant Reddit post, reading its full content requires a fallback chain:

1. MCP Playwright (old.reddit.com) ← BEST: bypasses CAPTCHA, full post + comments
2. PullPush API (api.pullpush.io) ← search by query/subreddit/author/score/date
3. MCP `web_search` with `include_raw_content` ← sometimes works, often truncated
4. WebFetch / WebSearch snippets ← last resort, partial data only

### Method 1: MCP Playwright (recommended for full post content)

- Use `browser_navigate("https://old.reddit.com/r/...")` — old.reddit.com loads without CAPTCHA
- www.reddit.com shows a CAPTCHA ("Prove your humanity"); always use old.reddit.com
- The snapshot contains the full post text + comments in structured YAML
- Example: `old.reddit.com/r/indiehackers/comments/abc123/post_title/`
### Method 2: PullPush API (for search/discovery)

- Endpoint: `https://api.pullpush.io/reddit/submission/search`
- Params: `q`, `subreddit`, `author`, `score` (e.g. `>10`, `<100`), `since`/`until` (unix timestamps), `size` (max 100)
- Rate limits: soft 15 req/min, hard 30 req/min, 1000 req/hr. Sleep 4 sec between requests.
- Returns JSON with full `selftext`, author, score, created_utc
- Comment search: `/reddit/comment/search` (same params)
- Can use via curl:

```bash
curl -s "https://api.pullpush.io/reddit/submission/search?q=product+hunt+launch&subreddit=indiehackers&size=10"
```

### Method 3: Reddit .json endpoint (often blocked)
- Append `.json` to any Reddit URL: `reddit.com/r/sub/comments/id.json`
- Returns raw JSON with full post + comments
- Frequently blocked (403/429) — use as an opportunistic fallback only
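Deriving the `.json` endpoint from a post URL is a one-line string edit. A sketch using the example URL from Method 1 (the User-Agent string is a made-up placeholder):

```shell
# Strip any trailing slash, then append .json (works for post and comment URLs)
POST="https://old.reddit.com/r/indiehackers/comments/abc123/post_title/"
JSON_URL="${POST%/}.json"
echo "$JSON_URL"
# curl -s -A "research-bot/0.1" "$JSON_URL"   # often 403/429 — treat as opportunistic
```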
### Method 4: PRAW (Reddit official API, for live search/user profiles)

- praw-dev/praw — Python Reddit API Wrapper
- OAuth2 auth, built-in rate limiting, sync/async support
- Best for: live subreddit search, user profiles, comment trees
- Install: `pip install praw` / `uv add praw`
## Search Strategy: Hybrid (MCP + WebSearch)
Use multiple search backends together. Each has strengths:

| Step | Best backend | Why |
|---|---|---|
| Competitors | WebSearch + MCP `web_search` | Broad discovery + Product Hunt + B2B reviews |
| Reddit / Pain points | MCP `web_search` `engines: reddit` | PullPush API, selftext in content |
| YouTube reviews | MCP `web_search` `engines: youtube` | Video reviews (views = demand) |
| Market size | WebSearch | Synthesizes numbers from 10 sources |
| SEO / ASO | WebSearch | Broader coverage, trend data |
| Page scraping | WebFetch or MCP `include_raw_content` | Up to 5000 chars of page content |
| Hacker News | WebSearch | HN discussions and opinions |
| Funding / Companies | WebSearch | Competitor funding, team size |
| Verified revenue | WebFetch | Stripe-verified MRR, growth, tech stack, traffic |
## Search Availability

Use WebSearch/WebFetch as primary. If the MCP `web_search` tool is available, use it for better results (it supports engine routing and raw content extraction).

## Steps
1. Parse the idea from `$ARGUMENTS`. If empty, ask the user what idea they want to research.

2. Detect product type — infer from the idea description:
   - Keywords like "app", "mobile", "iPhone", "Android" → mobile (ios/android)
   - Keywords like "website", "SaaS", "dashboard", "web app" → web
   - Keywords like "CLI", "terminal", "command line" → cli
   - Keywords like "API", "backend", "service" → api
   - Keywords like "extension", "plugin", "browser" → web (extension)
   - Default if unclear → web
   - Only ask via AskUserQuestion if truly ambiguous (e.g., "build a todo app" could be web or mobile)
   - This determines which research sections apply (ASO for mobile, SEO for web, etc.)
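The keyword mapping above can be sketched as a naive first-match-wins check; real detection should be semantic, since e.g. "web app" also contains "app":

```shell
# Toy keyword → product-type classifier; mobile keywords first, web is the default
IDEA="an iPhone app for tracking habits"
case "$(printf '%s' "$IDEA" | tr '[:upper:]' '[:lower:]')" in
  *iphone*|*android*|*mobile*|*" app"*) TYPE="mobile" ;;
  *cli*|*terminal*|*"command line"*)    TYPE="cli" ;;
  *api*|*backend*|*service*)            TYPE="api" ;;
  *)                                    TYPE="web" ;;
esac
echo "$TYPE"
```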
3. Search knowledge base and past work:
   - If MCP `kb_search` is available: `kb_search(query="<idea keywords>", n_results=5)`
   - If MCP `session_search` is available: `session_search(query="<idea keywords>")` — check if this idea was researched before
   - Otherwise: Grep for keywords in `.md` files
   - Check if `research.md` or `prd.md` already exist for this idea.
4. Check existing portfolio (if MCP codegraph tools available):
   - `codegraph_explain(project="<similar project>")` — architecture overview of related projects in the portfolio
   - `project_code_search(query="<relevant pattern>", project="<sibling>")` — find reusable code, patterns, infrastructure
   - `codegraph_query("MATCH (p:Project)-[:DEPENDS_ON]->(pkg:Package) WHERE pkg.name CONTAINS '<relevant tech>' RETURN p.name, pkg.name")` — find projects using similar tech
   - This helps assess feasibility, reusable code, stack decisions, and time estimates
   - If no MCP tools are available, skip this step.
5. Competitive analysis — use WebSearch (primary) + MCP `web_search` (if available):
   - `"<idea> competitors alternatives 2026"` — broad discovery
   - `"<idea> app review pricing"` — pricing data
   - WebFetch or MCP `include_raw_content=true`: scrape competitor URLs for detailed pricing
   - MCP `engines: reddit` or WebSearch: `"<idea> vs"` — user opinions
   - `"site:producthunt.com <idea>"` — Product Hunt launches
   - `"site:g2.com <idea>"` or `"site:capterra.com <idea>"` — B2B reviews
   - `"site:crunchbase.com <competitor>"` — funding, team size
   - `"site:trustmrr.com <idea>"` or WebFetch `trustmrr.com/startup/<slug>` — Stripe-verified MRR, growth %, tech stack, traffic (24h/7d/30d)
   - For each competitor extract: name, URL, pricing, key features, weaknesses, verified MRR (if on TrustMRR)
6. User pain points — use MCP `web_search` / WebSearch + YouTube:
   - MCP `engines: reddit` or WebSearch: `"<problem>"` — Reddit discussions (max 3 keywords!)
   - If a Reddit post is found but its content is not available → open it via MCP Playwright: `browser_navigate("https://old.reddit.com/r/...")` — old.reddit.com bypasses CAPTCHA
   - MCP `engines: youtube` or WebSearch: `"<problem> review"` — video reviews
   - `"site:news.ycombinator.com <problem>"` — Hacker News opinions
   - WebSearch: `"<problem> frustrating OR annoying"` — broader sweep
   - Synthesis: top 5 pain points with quotes and source URLs
7. SEO / ASO analysis (depends on product type from step 2):

   For web apps:
   - `"<competitor> SEO keywords ranking"` — competitor keywords
   - `"<problem domain> search volume trends 2026"` — demand signals
   - WebFetch or MCP `include_raw_content`: scrape competitor pages for meta tags
   - Result: keyword table (keyword, intent, competition, relevance)

   For mobile apps:
   - `"<category> App Store top apps keywords 2026"` — category landscape
   - `"site:reddit.com <competitor app> review"` — user complaints
   - Result: ASO keywords, competitor ratings, common complaints
8. Naming, domains, and company registration:
   - Generate 7-10 name candidates (a mix of descriptive + invented/brandable)
   - Domain availability: triple verification (whois → dig → RDAP)
   - Trademark + company name conflict checks
   - See `references/domain-check.md` (bundled with this skill) for TLD priority tiers, bash scripts, gotchas, and trademark check methods.
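The triple verification above can be sketched as follows, assuming a hypothetical candidate domain; the live probes are commented out, so the point here is the ordering rather than the output:

```shell
# Triple verification order: whois → dig → RDAP (rdap.org returns 404 for unregistered domains)
DOMAIN="examplename.app"
RDAP_URL="https://rdap.org/domain/$DOMAIN"
echo "$RDAP_URL"
# whois "$DOMAIN" | grep -Ei 'registrar|name server'    # check these fields, not creation date
# dig +short NS "$DOMAIN"                               # NS records present ⇒ registered
# curl -s -o /dev/null -w '%{http_code}\n' "$RDAP_URL"  # 200 ⇒ registered, 404 ⇒ likely free
```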
9. Market sizing (TAM/SAM/SOM) — use WebSearch (primary):
   - WebSearch: `"<market> market size 2025 2026 report"` — synthesizes numbers
   - WebSearch: `"<market> growth rate CAGR billion"` — growth projections
   - Extrapolation: TAM → SAM → SOM (Year 1)
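The extrapolation step is plain arithmetic. A toy sketch in which every number and ratio is an illustrative placeholder (real SAM/SOM ratios come out of the research itself):

```shell
# Toy TAM → SAM → SOM funnel; all numbers are illustrative placeholders
TAM=5000000000        # $5B total addressable market, synthesized from reports
SAM=$((TAM / 10))     # assume the serviceable segment is 10% of TAM
SOM=$((SAM / 100))    # assume a 1% year-1 capture of SAM
echo "TAM=$TAM SAM=$SAM SOM=$SOM"
```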
10. Write `research.md` — write it to `docs/research.md` in the current project directory. Create the directory if needed.

11. Output summary:
    - Key findings (3-5 bullets)
    - Recommendation: GO / NO-GO / PIVOT with brief reasoning
    - Path to the generated research.md
    - Suggested next step: `/validate <idea>`
## research.md Format
See `references/research-template.md` (bundled with this skill) for the full output template (frontmatter, 6 sections, tables).

## Notes
- Always use kebab-case for project directory names
- If research.md already exists, ask before overwriting
- Run search queries in parallel when independent
## Common Issues
### MCP web_search not available
Cause: MCP server not running or not configured.
Fix: Use WebSearch/WebFetch as primary. For better results with engine routing (Reddit, GitHub, YouTube), set up SearXNG (private, self-hosted, free) and configure solograph MCP.
### Domain check returns wrong results
Cause: whois shows the TLD creation date for unregistered `.app` / `.dev` domains.
Fix: Use the triple verification method (whois → dig → RDAP). Check the Name Server and Registrar fields, not the creation date.
### research.md already exists
Cause: Previous research run for this idea.
Fix: Skill asks before overwriting. Choose to merge new findings or start fresh.
## Proactive Search Practices

### Reddit Deep Dive
- MCP `web_search` or WebSearch — use for discovery (max 3 keywords for Reddit), get post URLs
- MCP Playwright — open `old.reddit.com` URLs to read the full post + comments (bypasses CAPTCHA)
- Extract quotes — copy key phrases with attribution (u/username, subreddit, date)
- Cross-post detection — the same post in multiple subreddits = higher signal
### Product Hunt Research
- producthunt.com/visit-streaks — streak leaderboard (scrapeable via Playwright)
- producthunt.com/@username — profile with social links, maker history, points
- PH API v2 is broken — redacts usernames/Twitter since Feb 2023, use scraping
- Apify actors — check for DEPRECATED status before relying on them (mass deprecation Sep 2025)
### TrustMRR Revenue Validation
- `trustmrr.com/startup/<slug>` — Stripe-verified MRR, growth %, subscriptions, traffic
- WebFetch works — no auth needed, returns the full page with JSON-LD structured data
- Data fields: MRR, all-time revenue, last 30 days, active subs, tech stack, traffic (24h/7d/30d), category, founder X handle
- Use for: competitor revenue validation, market sizing with real data, tech stack discovery
- Search: `"site:trustmrr.com <category or idea>"` to find similar startups with verified revenue
- Apify scrapers: TrustMRR Scraper for bulk extraction
### GitHub Library Discovery
- MCP `engines: github` — often returns empty, use WebSearch as primary
- `github.com/topics/<keyword>` — browse topic pages via Playwright or WebFetch
- Check stars, last update, open issues — avoid abandoned repos
### Blocked Content Fallback Chain
MCP Playwright (best) → PullPush API (Reddit) → WebFetch → WebSearch snippets → MCP `web_search` `include_raw_content`

If a page returns 403/CAPTCHA via WebFetch:
- Reddit: MCP Playwright → `old.reddit.com` (always works, no CAPTCHA)
- Reddit search: PullPush API `api.pullpush.io` (structured JSON, full selftext)
- Product Hunt / other sites: MCP Playwright `browser_navigate` (no CAPTCHA on most sites)
- General: WebSearch snippets + WebSearch synthesis