Deep Research
Iterative multi-round research across Web, Reddit, X/Twitter, GitHub, Hacker News, Substack, Financial Media, LinkedIn, and more with structured output frameworks (comparison, landscape, deep-dive, decision).
Architecture
| Source | Tool | Cost |
|---|---|---|
| Web | Claude Code | Free + $0.005/call |
| Reddit | Claude Code | Free + $0.005/call |
| X/Twitter | xAI | $0.005/call |
| GitHub | Claude Code | Free + $0.005/call |
| Hacker News | Claude Code | Free + $0.005/call |
| Substack | Claude Code | Free + $0.005/call |
| Financial Media | Claude Code | Free + $0.005/call |
| Wall Street Oasis | Claude Code | Free |
| LinkedIn | Claude Code | Free + $0.005/call |
| Crunchbase | Claude Code | Free |
| YouTube | Claude Code | Free |
| Tech Blogs | Claude Code | Free |
Results from multiple sources are merged and deduplicated for comprehensive coverage.
Prerequisites
Required Tools
| Tool | Purpose | Install |
|---|---|---|
| uv | Python package manager (handles dependencies) | `curl -LsSf https://astral.sh/uv/install.sh \| sh` |
Optional Tools
| Tool | Purpose | Install |
|---|---|---|
| scrapling | Headless browser fallback for sites that block WebFetch (403, captcha, empty responses) | `uv tool install 'scrapling[all]'` |
API Keys
| Service | Purpose | Required | Get Key |
|---|---|---|---|
| xAI | X/Twitter search + supplemental web/GitHub/HN search | Recommended | https://console.x.ai |
Note: The skill works without an xAI key (web-only mode via Claude Code), but X/Twitter data and broader coverage require xAI.
Keychain Setup (One-Time, for xAI)
1. Create a dedicated keychain

```bash
security create-keychain -p 'YourPassword' ~/Library/Keychains/claude-keys.keychain-db
```

2. Add keychain to search list

```bash
security list-keychains -s ~/Library/Keychains/claude-keys.keychain-db ~/Library/Keychains/login.keychain-db /Library/Keychains/System.keychain
```

3. Store your xAI API key

```bash
echo -n "Enter xAI API key: " && read -s key && security add-generic-password -s "xai-api" -a "$USER" -w "$key" ~/Library/Keychains/claude-keys.keychain-db && unset key && echo
```

Before using: `security unlock-keychain ~/Library/Keychains/claude-keys.keychain-db`

Workflow Overview
| Step | Action | Purpose |
|---|---|---|
| 0 | Detect xAI key | Determine Full vs Web-Only mode |
| 1 | Parse query | Extract TOPIC, FRAMEWORK, DEPTH |
| 2 | Round 1: Broad search | Discover entities, themes, initial findings |
| 3 | Gap analysis | Identify missing perspectives, unverified claims |
| 4 | Round 2: Targeted follow-up | Fill gaps, verify claims, deepen coverage |
| 5 | Round 3: Verification | (deep only) Primary source verification |
| 6 | Synthesis | Structure findings into framework template |
| 7 | Expert mode | Answer follow-ups from cached results |
Step 0: Detect xAI Key
MANDATORY — run before every research session.
```bash
security find-generic-password -s "xai-api" -w ~/Library/Keychains/claude-keys.keychain-db 2>/dev/null && echo "XAI_AVAILABLE=true" || echo "XAI_AVAILABLE=false"
```

- XAI_AVAILABLE=true: Use Full mode — Claude WebSearch AND xAI scripts in parallel.
- XAI_AVAILABLE=false: Use Web-Only mode — Claude WebSearch only. Append a note suggesting xAI setup.

This step is NOT optional. Always check before starting research.
Step 1: Parse Query
Extract from user input:
1a. TOPIC
The subject being researched. Strip framework indicators and depth modifiers.
1b. FRAMEWORK
Detect output framework from query patterns:
| Framework | Detection Patterns | Example |
|---|---|---|
| COMPARISON | "X vs Y", "compare X and Y", "X or Y", "which is better" | "React vs Vue for enterprise apps" |
| LANDSCAPE | "landscape", "ecosystem", "market", "what's out there", "overview of" | "AI agent frameworks landscape" |
| DEEP_DIVE | "deep dive", "how does X work", "explain", "tell me about", "what is" | "Deep dive into WebAssembly" |
| DECISION | "should I/we", "evaluate", "which should we use", "recommend" | "Should we use Kafka or RabbitMQ?" |
Explicit override: User can force a framework with `[comparison]`, `[landscape]`, `[deep-dive]`, or `[decision]` anywhere in the query.

Default: If no framework detected, use DEEP_DIVE.
1c. DEPTH
| Depth | Trigger | Rounds | Target Sources |
|---|---|---|---|
| quick | "quick", "brief", "overview" | 1 | 10-15 |
| default | (none) | 2 | 25-40 |
| deep | "deep", "comprehensive", "thorough" | 3 | 60-90 |
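The framework and depth tables above amount to keyword matching over the query. A minimal sketch (keyword lists mirror the tables; the function names are illustrative, not part of the skill's scripts — note DECISION is checked before COMPARISON so "should we use X or Y" does not match on " or "):

```python
# Sketch of Step 1 parsing; keyword lists mirror the detection tables above.
FRAMEWORK_PATTERNS = {
    # DECISION first, so "should we use X or Y" resolves to DECISION,
    # not COMPARISON via " or ".
    "DECISION": ["should i", "should we", "evaluate", "which should we use", "recommend"],
    "COMPARISON": [" vs ", "compare ", " or ", "which is better"],
    "LANDSCAPE": ["landscape", "ecosystem", "market", "what's out there", "overview of"],
    "DEEP_DIVE": ["deep dive", "how does", "explain", "tell me about", "what is"],
}
DEPTH_TRIGGERS = {
    "quick": ["quick", "brief", "overview"],
    "deep": ["deep", "comprehensive", "thorough"],
}

def detect_framework(query: str) -> str:
    q = query.lower()
    # Explicit override wins: [comparison], [landscape], [deep-dive], [decision]
    for name in ("comparison", "landscape", "deep-dive", "decision"):
        if f"[{name}]" in q:
            return name.replace("-", "_").upper()
    for framework, patterns in FRAMEWORK_PATTERNS.items():
        if any(p in q for p in patterns):
            return framework
    return "DEEP_DIVE"  # default when no pattern matches

def detect_depth(query: str) -> str:
    q = query.lower()
    for depth, triggers in DEPTH_TRIGGERS.items():
        if any(t in q for t in triggers):
            return depth
    return "default"
```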
Step 2: Round 1 — Broad Search
Query Generation
Generate 6-9 queries covering different angles of the TOPIC:
- Direct query: `"{TOPIC}"` — the topic as stated
- Temporal query: `"{TOPIC} 2026"` or `"{TOPIC} latest"`
- Reddit query: `site:reddit.com "{TOPIC}"`
- GitHub query: `site:github.com "{TOPIC}"`
- HN query: `site:news.ycombinator.com "{TOPIC}"`
- Framework-specific query:
  - COMPARISON: `"{Alt A} vs {Alt B}"`
  - LANDSCAPE: `"{TOPIC} ecosystem" OR "{TOPIC} landscape"`
  - DEEP_DIVE: `"how {TOPIC} works" OR "{TOPIC} explained"`
  - DECISION: `"{TOPIC}" experience OR recommendation`
- Substack query: `site:substack.com "{TOPIC}"`
- Financial media query: `site:tradingview.com OR site:benzinga.com OR site:seekingalpha.com "{TOPIC}"` (for finance/economics topics)
- LinkedIn query: `site:linkedin.com "{TOPIC}"` (when topic involves people or companies)
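The query list above can be generated mechanically. A sketch (the `site:` operators are copied verbatim from the list; `alternatives` is an assumed input for COMPARISON topics, and the conditional finance/LinkedIn queries are omitted for brevity):

```python
def generate_queries(topic: str, framework: str, alternatives=None) -> list[str]:
    # Core queries from the bullet list above
    queries = [
        f'"{topic}"',
        f'"{topic} 2026"',
        f'site:reddit.com "{topic}"',
        f'site:github.com "{topic}"',
        f'site:news.ycombinator.com "{topic}"',
        f'site:substack.com "{topic}"',
    ]
    # One framework-specific query
    if framework == "COMPARISON" and alternatives and len(alternatives) >= 2:
        queries.append(f'"{alternatives[0]} vs {alternatives[1]}"')
    elif framework == "LANDSCAPE":
        queries.append(f'"{topic} ecosystem" OR "{topic} landscape"')
    elif framework == "DEEP_DIVE":
        queries.append(f'"how {topic} works" OR "{topic} explained"')
    elif framework == "DECISION":
        queries.append(f'"{topic}" experience OR recommendation')
    return queries
```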
Parallel Execution
Run searches simultaneously:
Claude Code (free):
- `WebSearch`: direct query
- `WebSearch`: temporal query
- `WebSearch`: Reddit-targeted query
- `WebSearch`: GitHub-targeted query
- `WebSearch`: HN-targeted query
- `WebSearch`: Substack-targeted query
- `WebSearch`: financial media-targeted query (for finance/economics topics)
- `WebSearch`: LinkedIn-targeted query (when topic involves people/companies)
- `WebSearch`: YouTube-targeted query
- `WebSearch`: WSO-targeted query (for finance topics)
- `WebSearch`: Crunchbase-targeted query (for company/startup topics)
xAI scripts (if available, run as background Bash tasks):
```bash
uv run scripts/xai_search.py web "{TOPIC}" --json &
uv run scripts/xai_search.py reddit "{TOPIC}" --json &
uv run scripts/xai_search.py x "{TOPIC}" --json &
uv run scripts/xai_search.py github "{TOPIC}" --json &
uv run scripts/xai_search.py hn "{TOPIC}" --json &
uv run scripts/xai_search.py substack "{TOPIC}" --json &
uv run scripts/xai_search.py finance "{TOPIC}" --json &
uv run scripts/xai_search.py linkedin "{TOPIC}" --json &
```
Merge and Deduplicate
```
MERGED_WEB = dedupe(claude_web + xai_web)
MERGED_REDDIT = dedupe(claude_reddit + xai_reddit)
MERGED_GITHUB = dedupe(claude_github + xai_github)
MERGED_HN = dedupe(claude_hn + xai_hn)
MERGED_SUBSTACK = dedupe(claude_substack + xai_substack)
MERGED_FINANCE = dedupe(claude_finance + xai_finance)
MERGED_LINKEDIN = dedupe(claude_linkedin + xai_linkedin)
X_RESULTS = xai_x_results (no Claude equivalent)
WSO_RESULTS = claude_wso (WebSearch only)
CRUNCHBASE_RESULTS = claude_crunchbase (WebSearch only)
YOUTUBE_RESULTS = claude_youtube (WebSearch only)
```

Deduplication is by URL, keeping the entry with more metadata.
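A minimal sketch of the `dedupe` step, assuming each result is a dict with a `url` key plus arbitrary metadata fields (the entry with more populated fields wins; trailing slashes are normalized so both engines' URLs collide):

```python
def dedupe(*result_lists):
    """Merge result lists, deduplicating by URL and keeping the
    entry with more populated metadata fields."""
    best = {}
    for results in result_lists:
        for item in results:
            url = item.get("url", "").rstrip("/")
            if not url:
                continue  # skip malformed entries with no URL
            richness = sum(1 for v in item.values() if v not in (None, "", []))
            if url not in best or richness > best[url][0]:
                best[url] = (richness, item)
    return [item for _, item in best.values()]
```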
Round 1 Internal Notes
Record (internally, NOT in output):
```
KEY_ENTITIES: [specific tools, companies, people discovered]
THEMES: [recurring themes across sources]
GAPS: [what's missing — feed into Step 3]
CONTRADICTIONS: [conflicting claims]
LEADS: [URLs worth deep-reading via WebFetch (or scrapling fallback) in Round 2]
```
Step 3: Gap Analysis
After Round 1, run gap analysis. See `references/iterative-research.md` for the full checklist.

Gap Categories
| Gap | Check | Action |
|---|---|---|
| Missing perspective | Have developer, operator, and business views? | Target missing perspective |
| Unverified claims | Any claims from only 1 source? | Seek corroboration |
| Shallow coverage | Any entity mentioned but unexplained? | Deep-search that entity |
| Stale data | Key facts > 12 months old? | Search for recent updates |
| Missing source type | Missing Reddit / GitHub / HN / X / blogs? | Target that platform |
| Missing financial/business context | Missing Substack / financial media / LinkedIn / Crunchbase? | Target that platform |
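The two "missing source type" rows lend themselves to a mechanical check. A sketch, assuming merged results are keyed by a platform name (the platform keys here are illustrative shorthand for the sources listed in the table):

```python
CORE_PLATFORMS = {"reddit", "github", "hn", "x", "blogs"}
BUSINESS_PLATFORMS = {"substack", "finance", "linkedin", "crunchbase"}

def missing_source_types(results_by_platform: dict) -> set:
    """Return platforms with no results yet — candidates for Round 2 targeting."""
    covered = {p for p, results in results_by_platform.items() if results}
    return (CORE_PLATFORMS | BUSINESS_PLATFORMS) - covered
```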
Plan Round 2
Select the top 4-6 gaps. Generate targeted queries for each. See `references/search-patterns.md` for multi-round refinement patterns.

Skip to Step 6 if depth is `quick` (single round only).
Step 4: Round 2 — Targeted Follow-Up
Query Rules
- Never repeat Round 1 queries
- Entity-specific queries — target names/tools discovered in Round 1
- Source-type specific — target platforms underrepresented in Round 1
- Framework-adapted — see targeting table in `references/iterative-research.md`
Execution
Same parallel pattern as Round 1, but with targeted queries.
Additionally, use `WebFetch` for high-value URLs discovered in Round 1:

- Official documentation pages
- Benchmark result pages
- Engineering blog posts
- Comparison articles with methodology

Maximum 4-6 WebFetch calls in Round 2.

Scrapling fallback: If WebFetch returns 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

- `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
- If content is thin (JS-only shell, no data, mostly nav links) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
- If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
- All tiers fail → skip the URL and note "scrapling blocked"
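The escalation ladder can be encoded as an ordered list of argv templates, e.g. for driving the tiers via `subprocess`. A sketch (the commands and flags are copied from the bullets above; the helper itself is illustrative):

```python
# The three scrapling tiers as (base command, trailing flags) pairs.
ESCALATION_TIERS = [
    (["scrapling", "extract", "get"], []),
    (["scrapling", "extract", "fetch"], ["--network-idle", "--disable-resources"]),
    (["scrapling", "extract", "stealthy-fetch"], ["--solve-cloudflare"]),
]

def tier_command(tier: int, url: str, out: str = "/tmp/scrapling-fallback.md") -> list:
    """Build the argv for one escalation tier: run it, read the output file, validate."""
    base, flags = ESCALATION_TIERS[tier]
    return base + [url, out] + flags
```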
Confidence Update
After Round 2, re-assess all claims:
| Before | New Evidence | After |
|---|---|---|
| [LOW] | Second source found | [MEDIUM] |
| [MEDIUM] | Third source found | [HIGH] |
| Any | Contradicted | Note conflict, present both sides |
Skip to Step 6 if depth is `default`.
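The transition table above amounts to a small state update per claim. A sketch, where `corroborating_sources` is the total source count after Round 2 and `contradicted` flags a conflict (the function name and signature are illustrative):

```python
def update_confidence(current: str, corroborating_sources: int, contradicted: bool) -> str:
    """Re-assess one claim after Round 2, per the transition table above."""
    if contradicted:
        return "CONFLICT"  # note the conflict and present both sides
    if corroborating_sources >= 3:
        return "HIGH"
    if corroborating_sources == 2:
        return "MEDIUM"
    return current  # still single-sourced: no upgrade
```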
Step 5: Round 3 — Verification (Deep Only)
Round 3 is for verification only. No new discovery.
Budget
Maximum 6-10 WebFetch lookups targeting:
| Target | Purpose | Max Calls |
|---|---|---|
| Primary sources for key claims | Verify accuracy | 3-4 |
| Independent benchmark sites | Validate performance claims | 1-2 |
| Both sides of contradictions | Resolve conflicts | 1-2 |
| Official sites for versions/dates | Confirm recency | 1-2 |
Scrapling fallback: Same auto-escalation protocol as Round 2 — try HTTP tier, validate content, escalate to Dynamic/Stealthy if thin or blocked.
Rules
- Verify, don't discover — no new topic exploration
- Target highest-impact claims — those that would change the recommendation
- Check primary sources — go to the original, not summaries
- Update confidence — upgrade or downgrade based on findings
- Trust primary over secondary — if primary contradicts secondary, note it
Step 6: Synthesis
Framework Selection
Load `references/output-frameworks.md` and select the template matching the detected FRAMEWORK.

Filling the Template
- Header block — Framework type, topic, depth, source count, date
- Core content — Fill framework sections with research findings
- Confidence indicators — Mark each claim: `[HIGH]`, `[MEDIUM]`, or `[LOW]`
- Community perspective — Synthesize Reddit/X/HN/GitHub sentiment
- Statistics footer — Source counts and engagement metrics
- Sources section — Organized by platform with URLs and metrics
Engagement-Weighted Synthesis
Weight sources by signal strength. See `references/iterative-research.md` for the full weighting table.

| Signal | Threshold | Weight |
|---|---|---|
| Reddit upvotes | 100+ | High |
| X engagement | 50+ likes | High |
| GitHub stars | 1000+ | High |
| HN points | 100+ | High |
| Substack likes | 50+ | High |
| Multi-platform agreement | 3+ sources | High |
| Dual-engine match | Claude + xAI | High |
| Seeking Alpha comments | 20+ | Medium |
| WSO upvotes | 10+ | Medium |
| YouTube views | 10K+ | Medium |
| Recent (< 7 days) | Any | Medium |
| Single source only | Any | Low |
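The thresholds above can be checked mechanically. A sketch, assuming each source is a dict with numeric engagement fields (the field names like `reddit_upvotes` and `dual_engine` are illustrative; thresholds mirror the table):

```python
# Thresholds mirror the weighting table above.
HIGH_THRESHOLDS = {"reddit_upvotes": 100, "x_likes": 50, "github_stars": 1000,
                   "hn_points": 100, "substack_likes": 50}
MEDIUM_THRESHOLDS = {"seekingalpha_comments": 20, "wso_upvotes": 10,
                     "youtube_views": 10_000}

def signal_weight(source: dict) -> str:
    for field, threshold in HIGH_THRESHOLDS.items():
        if source.get(field, 0) >= threshold:
            return "High"
    # Multi-platform agreement or dual-engine (Claude + xAI) match
    if source.get("platform_count", 1) >= 3 or source.get("dual_engine"):
        return "High"
    for field, threshold in MEDIUM_THRESHOLDS.items():
        if source.get(field, 0) >= threshold:
            return "Medium"
    if source.get("age_days", 999) < 7:
        return "Medium"
    return "Low"  # single source, low engagement
```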
Statistics Footer Format
```
Research Statistics
├─ Reddit: {n} threads │ {upvotes} upvotes
├─ X: {n} posts │ {likes} likes │ {reposts} reposts
├─ GitHub: {n} repos │ {stars} total stars
├─ HN: {n} threads │ {points} total points
├─ Substack: {n} articles │ {likes} likes
├─ Financial: {n} articles │ {sources}
├─ LinkedIn: {n} profiles/articles
├─ YouTube: {n} videos
├─ WSO: {n} threads
├─ Crunchbase: {n} profiles
├─ Web: {n} pages │ {domains}
├─ Merged: {n} from Claude + {n} from xAI
└─ Top voices: r/{sub1} │ @{handle1} │ {blog1}
```

Web-Only Mode footer:

```
Research Statistics
├─ Reddit: {n} threads (via Claude WebSearch)
├─ GitHub: {n} repos (via Claude WebSearch)
├─ HN: {n} threads (via Claude WebSearch)
├─ Substack: {n} articles (via Claude WebSearch)
├─ Financial: {n} articles (via Claude WebSearch)
├─ LinkedIn: {n} profiles/articles (via Claude WebSearch)
├─ YouTube: {n} videos (via Claude WebSearch)
├─ Web: {n} pages
└─ Top sources: {site1}, {site2}

Add X/Twitter + broader coverage: Set up xAI API key (see Prerequisites)
```
Step 7: Expert Mode
After delivering research, enter Expert Mode:
- Answer follow-ups from cached results
- No new searches unless explicitly requested
- Cross-reference between sources
New search triggers (exit Expert Mode):
- "Search again for..."
- "Find more about..."
- "Update the research..."
- "Look deeper into..."
Operational Modes
| Mode | Sources | When |
|---|---|---|
| Full | Claude WebSearch + xAI (web + X + Reddit + GitHub + HN + Substack + Finance + LinkedIn) | Step 0 returns XAI_AVAILABLE=true |
| Web-Only | Claude WebSearch only | Step 0 returns XAI_AVAILABLE=false |
Mode is determined by Step 0 — never skip it or assume Web-Only without checking.
Depth Control
| Depth | Rounds | Sources | xAI Calls (Full) | Use Case |
|---|---|---|---|---|
| quick | 1 | 10-15 | 8 | Fast overview, time-sensitive |
| default | 2 | 25-40 | 13 | Balanced research |
| deep | 3 | 60-90 | 18 + 6-10 WebFetch | Comprehensive analysis, important decisions |
Cost Awareness
| Action | Cost |
|---|---|
| Claude Code WebSearch | Free (subscription) |
| xAI search call (any type) | $0.005/call |
| WebFetch (built-in) | Free |
| scrapling fallback (optional) | Free |
Estimated cost per research session:
| Depth | Full Mode | Web-Only |
|---|---|---|
| quick | ~$0.04 (8 xAI calls) | Free |
| default | ~$0.065 (13 xAI calls) | Free |
| deep | ~$0.09 (18 xAI calls) | Free |
Cost-Saving Strategy:
- Claude WebSearch handles most needs (free)
- xAI adds X/Twitter (unique value) + broader coverage per platform
- WebFetch for deep-reading specific URLs (free), with scrapling fallback for blocked sites
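The per-session estimates follow directly from the $0.005/call rate and the xAI call counts in the Depth Control table. A quick sketch:

```python
# xAI call counts per depth, from the Depth Control table.
XAI_CALLS = {"quick": 8, "default": 13, "deep": 18}
COST_PER_CALL = 0.005  # dollars per xAI search call

def estimated_cost(depth: str, full_mode: bool) -> float:
    """Estimated xAI spend per session; WebSearch and WebFetch are free."""
    return round(XAI_CALLS[depth] * COST_PER_CALL, 3) if full_mode else 0.0
```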
Critical Constraints
DO:
- Run Step 0 (xAI key detection) before every research session
- If xAI key exists: run Claude WebSearch AND xAI scripts in parallel (Full mode)
- If xAI key missing: use Claude WebSearch only (Web-Only mode)
- Run gap analysis between rounds — never skip it
- Merge and deduplicate results by URL
- Exclude archived GitHub repositories from results — they are unmaintained and read-only
- Mark every claim with confidence: `[HIGH]`, `[MEDIUM]`, or `[LOW]`
- Ground synthesis in actual research, not pre-existing knowledge
- Cite specific sources with URLs
- Use the `--json` flag when calling xAI scripts for programmatic parsing
- Load framework template from `references/output-frameworks.md`
DON'T:
- Skip Claude Code WebSearch (it's free)
- Run sequential searches when parallel is possible
- Display duplicate results from different search engines
- Quote more than 125 characters from any single source
- Repeat queries across rounds — each round targets gaps from previous
- Add Round 3 for quick or default depth — it's deep-only
- Discover new topics in Round 3 — verification only
Troubleshooting
xAI key not found:
```bash
security find-generic-password -s "xai-api" ~/Library/Keychains/claude-keys.keychain-db
```

If not found, run the keychain setup above.

Keychain locked:

```bash
security unlock-keychain ~/Library/Keychains/claude-keys.keychain-db
```

No X/Twitter results: Requires a valid xAI API key. Check at https://console.x.ai

WebFetch blocked (403/captcha/empty): Install scrapling and follow the auto-escalation protocol from cli-web-scrape (HTTP → validate → Dynamic → Stealthy):

```bash
uv tool install 'scrapling[all]'
scrapling install  # one-time: install browser engines for Dynamic/Stealthy tiers
```

Script errors: Ensure uv is installed: `which uv`. If missing: `curl -LsSf https://astral.sh/uv/install.sh | sh`
References
- `references/output-frameworks.md` — Framework templates (comparison, landscape, deep-dive, decision)
- `references/search-patterns.md` — Search operators and multi-round query patterns
- `references/iterative-research.md` — Gap analysis, round procedures, cross-referencing methodology