# res-deep — Deep Research

Iterative multi-round research across Web, Reddit, X/Twitter, GitHub, Hacker News, Substack, Financial Media, LinkedIn, and more, with structured output frameworks (comparison, landscape, deep-dive, decision).

## Architecture

| Source | Tool | Cost |
|---|---|---|
| Web | Claude Code `WebSearch` + xAI `web_search` | Free + $0.005/call |
| Reddit | Claude Code `WebSearch` (site:reddit.com) + xAI `web_search` | Free + $0.005/call |
| X/Twitter | xAI `x_search` only | $0.005/call |
| GitHub | Claude Code `WebSearch` (site:github.com) + xAI `web_search` | Free + $0.005/call |
| Hacker News | Claude Code `WebSearch` (site:news.ycombinator.com) + xAI `web_search` | Free + $0.005/call |
| Substack | Claude Code `WebSearch` (site:substack.com) + xAI `web_search` | Free + $0.005/call |
| Financial Media | Claude Code `WebSearch` (site-specific) + xAI `web_search` | Free + $0.005/call |
| Wall Street Oasis | Claude Code `WebSearch` (site:wallstreetoasis.com) | Free |
| LinkedIn | Claude Code `WebSearch` (site:linkedin.com) + xAI `web_search` | Free + $0.005/call |
| Crunchbase | Claude Code `WebSearch` (site:crunchbase.com) | Free |
| YouTube | Claude Code `WebSearch` (site:youtube.com) | Free |
| Tech Blogs | Claude Code `WebSearch` (site-specific) | Free |

Results from multiple sources are merged and deduplicated for comprehensive coverage.

## Prerequisites

### Required Tools

| Tool | Purpose | Install |
|---|---|---|
| uv | Python package manager (handles dependencies) | `curl -LsSf https://astral.sh/uv/install.sh \| sh` |

### Optional Tools

| Tool | Purpose | Install |
|---|---|---|
| scrapling | Headless browser fallback for sites that block WebFetch (403, captcha, empty responses) | `uv tool install 'scrapling[all]'` |

## API Keys

| Service | Purpose | Required | Get Key |
|---|---|---|---|
| xAI | X/Twitter search + supplemental web/GitHub/HN search | Recommended | https://console.x.ai |

Note: the skill works without an xAI key (web-only mode via Claude Code), but X/Twitter data and broader coverage require xAI.

## Keychain Setup (One-Time, for xAI)

1. Create a dedicated keychain:

```bash
security create-keychain -p 'YourPassword' ~/Library/Keychains/claude-keys.keychain-db
```

2. Add the keychain to the search list:

```bash
security list-keychains -s ~/Library/Keychains/claude-keys.keychain-db ~/Library/Keychains/login.keychain-db /Library/Keychains/System.keychain
```

3. Store your xAI API key:

```bash
echo -n "Enter xAI API key: " && read -s key && security add-generic-password -s "xai-api" -a "$USER" -w "$key" ~/Library/Keychains/claude-keys.keychain-db && unset key && echo
```

Before using: `security unlock-keychain ~/Library/Keychains/claude-keys.keychain-db`

## Workflow Overview

| Step | Action | Purpose |
|---|---|---|
| 0 | Detect xAI key | Determine Full vs Web-Only mode |
| 1 | Parse query | Extract TOPIC, FRAMEWORK, DEPTH |
| 2 | Round 1: Broad search | Discover entities, themes, initial findings |
| 3 | Gap analysis | Identify missing perspectives, unverified claims |
| 4 | Round 2: Targeted follow-up | Fill gaps, verify claims, deepen coverage |
| 5 | Round 3: Verification | (deep only) Primary-source verification |
| 6 | Synthesis | Structure findings into the framework template |
| 7 | Expert mode | Answer follow-ups from cached results |

## Step 0: Detect xAI Key

MANDATORY — run before every research session.

```bash
security find-generic-password -s "xai-api" -w ~/Library/Keychains/claude-keys.keychain-db 2>/dev/null && echo "XAI_AVAILABLE=true" || echo "XAI_AVAILABLE=false"
```

- `XAI_AVAILABLE=true`: Use Full mode — Claude WebSearch AND xAI scripts in parallel.
- `XAI_AVAILABLE=false`: Use Web-Only mode — Claude WebSearch only. Append a note suggesting xAI setup.

This step is NOT optional. Always check before starting research.

## Step 1: Parse Query

Extract from user input:

### 1a. TOPIC

The subject being researched. Strip framework indicators and depth modifiers.

### 1b. FRAMEWORK

Detect the output framework from query patterns:

| Framework | Detection Patterns | Example |
|---|---|---|
| COMPARISON | "X vs Y", "compare X and Y", "X or Y", "which is better" | "React vs Vue for enterprise apps" |
| LANDSCAPE | "landscape", "ecosystem", "market", "what's out there", "overview of" | "AI agent frameworks landscape" |
| DEEP_DIVE | "deep dive", "how does X work", "explain", "tell me about", "what is" | "Deep dive into WebAssembly" |
| DECISION | "should I/we", "evaluate", "which should we use", "recommend" | "Should we use Kafka or RabbitMQ?" |

Explicit override: the user can force a framework with `[comparison]`, `[landscape]`, `[deep-dive]`, or `[decision]` anywhere in the query.
Default: if no framework is detected, use DEEP_DIVE.

### 1c. DEPTH

| Depth | Trigger | Rounds | Target Sources |
|---|---|---|---|
| quick | "quick", "brief", "overview" | 1 | 10-15 |
| default | (none) | 2 | 25-40 |
| deep | "deep", "comprehensive", "thorough" | 3 | 60-90 |
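The Step 1 parsing rules can be sketched in code. This is a hypothetical illustration, not the skill's actual matcher: the pattern lists are abbreviated, and the `parse_query` name is invented for the example.

```python
import re

FRAMEWORK_PATTERNS = {
    "COMPARISON": [r"\bvs\.?\b", r"\bcompare\b", r"which is better"],
    "LANDSCAPE": [r"\blandscape\b", r"\becosystem\b", r"\bmarket\b", r"overview of"],
    "DECISION": [r"\bshould (i|we)\b", r"\bevaluate\b", r"\brecommend"],
    "DEEP_DIVE": [r"\bdeep dive\b", r"\bexplain\b", r"\bwhat is\b", r"tell me about"],
}
DEPTH_TRIGGERS = {"quick": ["quick", "brief", "overview"],
                  "deep": ["deep", "comprehensive", "thorough"]}

def parse_query(query: str) -> dict:
    q = query.lower()
    # An explicit [comparison]/[landscape]/[deep-dive]/[decision] tag wins.
    override = re.search(r"\[(comparison|landscape|deep-dive|decision)\]", q)
    if override:
        framework = override.group(1).replace("-", "_").upper()
    else:
        # First framework whose patterns match; DEEP_DIVE is the fallback.
        framework = next((fw for fw, pats in FRAMEWORK_PATTERNS.items()
                          if any(re.search(p, q) for p in pats)), "DEEP_DIVE")
    depth = next((d for d, words in DEPTH_TRIGGERS.items()
                  if any(w in q for w in words)), "default")
    return {"framework": framework, "depth": depth}
```

For example, `parse_query("React vs Vue for enterprise apps")` yields COMPARISON at default depth, while a trailing `[decision]` tag forces DECISION regardless of patterns.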

## Step 2: Round 1 — Broad Search

### Query Generation

Generate 6-9 queries covering different angles of the TOPIC:

1. Direct query: `"{TOPIC}"` — the topic as stated
2. Temporal query: `"{TOPIC} 2026"` or `"{TOPIC} latest"`
3. Reddit query: `site:reddit.com "{TOPIC}"`
4. GitHub query: `site:github.com "{TOPIC}"`
5. HN query: `site:news.ycombinator.com "{TOPIC}"`
6. Framework-specific query:
   - COMPARISON: `"{Alt A} vs {Alt B}"`
   - LANDSCAPE: `"{TOPIC} ecosystem" OR "{TOPIC} landscape"`
   - DEEP_DIVE: `"how {TOPIC} works" OR "{TOPIC} explained"`
   - DECISION: `"{TOPIC}" experience OR recommendation`
7. Substack query: `site:substack.com "{TOPIC}"`
8. Financial media query: `site:tradingview.com OR site:benzinga.com OR site:seekingalpha.com "{TOPIC}"` (for finance/economics topics)
9. LinkedIn query: `site:linkedin.com "{TOPIC}"` (when the topic involves people or companies)
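The query list above can be generated mechanically. A hypothetical sketch (the `generate_queries` helper is invented for illustration, and the COMPARISON case is simplified — the real skill substitutes the two alternatives into `"{Alt A} vs {Alt B}"`):

```python
def generate_queries(topic: str, framework: str = "DEEP_DIVE") -> list[str]:
    framework_queries = {
        "COMPARISON": f'"{topic}"',  # placeholder; real skill uses "{Alt A} vs {Alt B}"
        "LANDSCAPE": f'"{topic} ecosystem" OR "{topic} landscape"',
        "DEEP_DIVE": f'"how {topic} works" OR "{topic} explained"',
        "DECISION": f'"{topic}" experience OR recommendation',
    }
    return [
        f'"{topic}"',                            # 1. direct
        f'"{topic} latest"',                     # 2. temporal
        f'site:reddit.com "{topic}"',            # 3. Reddit
        f'site:github.com "{topic}"',            # 4. GitHub
        f'site:news.ycombinator.com "{topic}"',  # 5. HN
        framework_queries[framework],            # 6. framework-specific
        f'site:substack.com "{topic}"',          # 7. Substack
    ]
```

Finance- and people-oriented topics would append the financial-media and LinkedIn queries (items 8-9) conditionally.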

### Parallel Execution

Run searches simultaneously.

Claude Code (free):

- `WebSearch`: direct query
- `WebSearch`: temporal query
- `WebSearch`: Reddit-targeted query
- `WebSearch`: GitHub-targeted query
- `WebSearch`: HN-targeted query
- `WebSearch`: Substack-targeted query
- `WebSearch`: financial media-targeted query (for finance/economics topics)
- `WebSearch`: LinkedIn-targeted query (when the topic involves people/companies)
- `WebSearch`: YouTube-targeted query
- `WebSearch`: WSO-targeted query (for finance topics)
- `WebSearch`: Crunchbase-targeted query (for company/startup topics)

xAI scripts (if available, run as background Bash tasks):

```bash
uv run scripts/xai_search.py web "{TOPIC}" --json &
uv run scripts/xai_search.py reddit "{TOPIC}" --json &
uv run scripts/xai_search.py x "{TOPIC}" --json &
uv run scripts/xai_search.py github "{TOPIC}" --json &
uv run scripts/xai_search.py hn "{TOPIC}" --json &
uv run scripts/xai_search.py substack "{TOPIC}" --json &
uv run scripts/xai_search.py finance "{TOPIC}" --json &
uv run scripts/xai_search.py linkedin "{TOPIC}" --json &
```

### Merge and Deduplicate

```
MERGED_WEB = dedupe(claude_web + xai_web)
MERGED_REDDIT = dedupe(claude_reddit + xai_reddit)
MERGED_GITHUB = dedupe(claude_github + xai_github)
MERGED_HN = dedupe(claude_hn + xai_hn)
MERGED_SUBSTACK = dedupe(claude_substack + xai_substack)
MERGED_FINANCE = dedupe(claude_finance + xai_finance)
MERGED_LINKEDIN = dedupe(claude_linkedin + xai_linkedin)
X_RESULTS = xai_x_results  (no Claude equivalent)
WSO_RESULTS = claude_wso  (WebSearch only)
CRUNCHBASE_RESULTS = claude_crunchbase  (WebSearch only)
YOUTUBE_RESULTS = claude_youtube  (WebSearch only)
```

Deduplicate by URL, keeping the entry with more metadata.
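A minimal `dedupe` matching that rule might look like this — a hypothetical sketch assuming results are dicts with a `url` key plus optional metadata fields (title, score, etc.):

```python
def dedupe(*result_lists: list[dict]) -> list[dict]:
    def richness(entry: dict) -> int:
        # Count non-empty metadata fields besides the URL itself.
        return sum(1 for k, v in entry.items() if k != "url" and v)

    best: dict[str, dict] = {}
    for results in result_lists:
        for entry in results:
            url = entry.get("url", "").rstrip("/")  # normalize trailing slash
            if not url:
                continue
            # Keep whichever duplicate carries more metadata.
            if url not in best or richness(entry) > richness(best[url]):
                best[url] = entry
    return list(best.values())
```

So when Claude and xAI both return the same URL, the entry with engagement metrics survives the merge.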

### Round 1 Internal Notes

Record (internally, NOT in output):

```
KEY_ENTITIES: [specific tools, companies, people discovered]
THEMES: [recurring themes across sources]
GAPS: [what's missing — feed into Step 3]
CONTRADICTIONS: [conflicting claims]
LEADS: [URLs worth deep-reading via WebFetch (or scrapling fallback) in Round 2]
```

## Step 3: Gap Analysis

After Round 1, run gap analysis. See references/iterative-research.md for the full checklist.

### Gap Categories

| Gap | Check | Action |
|---|---|---|
| Missing perspective | Have developer, operator, and business views? | Target the missing perspective |
| Unverified claims | Any claims from only 1 source? | Seek corroboration |
| Shallow coverage | Any entity mentioned but unexplained? | Deep-search that entity |
| Stale data | Key facts > 12 months old? | Search for recent updates |
| Missing source type | Missing Reddit / GitHub / HN / X / blogs? | Target that platform |
| Missing financial/business context | Missing Substack / financial media / LinkedIn / Crunchbase? | Target that platform |
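Two rows of the checklist above — unverified claims and missing source types — lend themselves to a mechanical pass. A hypothetical sketch (`find_gaps` and the input shapes are invented for illustration):

```python
CORE_PLATFORMS = {"reddit", "github", "hn", "x", "blogs"}

def find_gaps(claims: list[dict], platforms_covered: set[str]) -> list[str]:
    gaps = []
    # Unverified claims: anything backed by fewer than two sources.
    for claim in claims:
        if claim["sources"] < 2:
            gaps.append(f"unverified claim: {claim['text']}")
    # Missing source types: core platforms with no Round 1 results.
    for platform in sorted(CORE_PLATFORMS - platforms_covered):
        gaps.append(f"missing source type: {platform}")
    return gaps
```

The remaining rows (missing perspectives, stale data, shallow coverage) are judgment calls made while reading the results, not computed checks.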

### Plan Round 2

Select the top 4-6 gaps and generate targeted queries for each. See references/search-patterns.md for multi-round refinement patterns.

Skip to Step 6 if depth is `quick` (single round only).

## Step 4: Round 2 — Targeted Follow-Up

### Query Rules

1. Never repeat Round 1 queries
2. Entity-specific queries — target names/tools discovered in Round 1
3. Source-type specific — target platforms underrepresented in Round 1
4. Framework-adapted — see the targeting table in references/iterative-research.md

### Execution

Same parallel pattern as Round 1, but with targeted queries.

Additionally, use `WebFetch` for high-value URLs discovered in Round 1:

- Official documentation pages
- Benchmark result pages
- Engineering blog posts
- Comparison articles with methodology

Maximum 4-6 WebFetch calls in Round 2.

Scrapling fallback: if WebFetch returns a 403, empty content, a captcha page, or a blocked response, retry using the auto-escalation protocol from cli-web-scrape:

1. `scrapling extract get "URL" /tmp/scrapling-fallback.md` → Read → validate content
2. If content is thin (JS-only shell, no data, mostly nav links) → `scrapling extract fetch "URL" /tmp/scrapling-fallback.md --network-idle --disable-resources` → Read → validate
3. If still blocked → `scrapling extract stealthy-fetch "URL" /tmp/scrapling-fallback.md --solve-cloudflare`
4. All tiers fail → skip the URL and note "scrapling blocked"

### Confidence Update

After Round 2, re-assess all claims:

| Before | New Evidence | After |
|---|---|---|
| [LOW] | Second source found | [MEDIUM] |
| [MEDIUM] | Third source found | [HIGH] |
| Any | Contradicted | Note the conflict, present both sides |

Skip to Step 6 if depth is `default`.
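The update rule amounts to: confidence climbs one level per corroborating source, capped at HIGH, and contradictions are flagged rather than re-scored. A hypothetical sketch of that rule (the `update_confidence` helper is invented for illustration):

```python
LEVELS = ["LOW", "MEDIUM", "HIGH"]

def update_confidence(level: str, corroborating_sources: int,
                      contradicted: bool = False) -> str:
    if contradicted:
        # Conflicts are surfaced to the reader, not silently downgraded.
        return f"{level} (conflict — present both sides)"
    idx = min(LEVELS.index(level) + corroborating_sources, len(LEVELS) - 1)
    return LEVELS[idx]
```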

## Step 5: Round 3 — Verification (Deep Only)

Round 3 is for verification only. No new discovery.

### Budget

Maximum 6-10 WebFetch lookups targeting:

| Target | Purpose | Max Calls |
|---|---|---|
| Primary sources for key claims | Verify accuracy | 3-4 |
| Independent benchmark sites | Validate performance claims | 1-2 |
| Both sides of contradictions | Resolve conflicts | 1-2 |
| Official sites for versions/dates | Confirm recency | 1-2 |

Scrapling fallback: same auto-escalation protocol as Round 2 — try the HTTP tier, validate content, escalate to Dynamic/Stealthy if thin or blocked.

### Rules

1. Verify, don't discover — no new topic exploration
2. Target the highest-impact claims — those that would change the recommendation
3. Check primary sources — go to the original, not summaries
4. Update confidence — upgrade or downgrade based on findings
5. Trust primary over secondary — if a primary source contradicts a secondary one, note it

## Step 6: Synthesis

### Framework Selection

Load references/output-frameworks.md and select the template matching the detected FRAMEWORK.

### Filling the Template

1. Header block — framework type, topic, depth, source count, date
2. Core content — fill framework sections with research findings
3. Confidence indicators — mark each claim `[HIGH]`, `[MEDIUM]`, or `[LOW]`
4. Community perspective — synthesize Reddit/X/HN/GitHub sentiment
5. Statistics footer — source counts and engagement metrics
6. Sources section — organized by platform with URLs and metrics

### Engagement-Weighted Synthesis

Weight sources by signal strength. See references/iterative-research.md for the full weighting table.

| Signal | Threshold | Weight |
|---|---|---|
| Reddit upvotes | 100+ | High |
| X engagement | 50+ likes | High |
| GitHub stars | 1000+ | High |
| HN points | 100+ | High |
| Substack likes | 50+ | High |
| Multi-platform agreement | 3+ sources | High |
| Dual-engine match | Claude + xAI | High |
| Seeking Alpha comments | 20+ | Medium |
| WSO upvotes | 10+ | Medium |
| YouTube views | 10K+ | Medium |
| Recent (< 7 days) | Any | Medium |
| Single source only | Any | Low |
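A few rows of the weighting table can be sketched as a lookup. This is a hypothetical, abbreviated version — the metric key names are invented, and the full table lives in references/iterative-research.md:

```python
HIGH_THRESHOLDS = {"reddit_upvotes": 100, "x_likes": 50,
                   "github_stars": 1000, "hn_points": 100, "substack_likes": 50}

def weight(source: dict) -> str:
    # Any high-signal metric at or above threshold wins outright.
    for metric, threshold in HIGH_THRESHOLDS.items():
        if source.get(metric, 0) >= threshold:
            return "High"
    # Medium signals: popular video or very recent content.
    if source.get("youtube_views", 0) >= 10_000 or source.get("age_days", 999) < 7:
        return "Medium"
    return "Low"  # e.g. a single uncorroborated source
```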

### Statistics Footer Format

```
Research Statistics
├─ Reddit: {n} threads │ {upvotes} upvotes
├─ X: {n} posts │ {likes} likes │ {reposts} reposts
├─ GitHub: {n} repos │ {stars} total stars
├─ HN: {n} threads │ {points} total points
├─ Substack: {n} articles │ {likes} likes
├─ Financial: {n} articles │ {sources}
├─ LinkedIn: {n} profiles/articles
├─ YouTube: {n} videos
├─ WSO: {n} threads
├─ Crunchbase: {n} profiles
├─ Web: {n} pages │ {domains}
├─ Merged: {n} from Claude + {n} from xAI
└─ Top voices: r/{sub1} │ @{handle1} │ {blog1}
```

Web-Only Mode footer:

```
Research Statistics
├─ Reddit: {n} threads (via Claude WebSearch)
├─ GitHub: {n} repos (via Claude WebSearch)
├─ HN: {n} threads (via Claude WebSearch)
├─ Substack: {n} articles (via Claude WebSearch)
├─ Financial: {n} articles (via Claude WebSearch)
├─ LinkedIn: {n} profiles/articles (via Claude WebSearch)
├─ YouTube: {n} videos (via Claude WebSearch)
├─ Web: {n} pages
└─ Top sources: {site1}, {site2}

Add X/Twitter + broader coverage: Set up xAI API key (see Prerequisites)
```

## Step 7: Expert Mode

After delivering the research, enter Expert Mode:

- Answer follow-ups from cached results
- No new searches unless explicitly requested
- Cross-reference between sources

New-search triggers (exit Expert Mode):

- "Search again for..."
- "Find more about..."
- "Update the research..."
- "Look deeper into..."

## Operational Modes

| Mode | Sources | When |
|---|---|---|
| Full | Claude WebSearch + xAI (web + X + Reddit + GitHub + HN + Substack + Finance + LinkedIn) | Step 0 returns XAI_AVAILABLE=true |
| Web-Only | Claude WebSearch only | Step 0 returns XAI_AVAILABLE=false |

Mode is determined by Step 0 — never skip it or assume Web-Only without checking.

## Depth Control

| Depth | Rounds | Sources | xAI Calls (Full) | Use Case |
|---|---|---|---|---|
| quick | 1 | 10-15 | 8 | Fast overview, time-sensitive |
| default | 2 | 25-40 | 13 | Balanced research |
| deep | 3 | 60-90 | 18 + 6-10 WebFetch | Comprehensive analysis, important decisions |

## Cost Awareness

| Action | Cost |
|---|---|
| Claude Code WebSearch | Free (subscription) |
| xAI search call (any type) | $0.005/call |
| WebFetch (built-in) | Free |
| scrapling fallback (optional) | Free |

Estimated cost per research session:

| Depth | Full Mode | Web-Only |
|---|---|---|
| quick | ~$0.04 (8 xAI calls) | Free |
| default | ~$0.065 (13 xAI calls) | Free |
| deep | ~$0.09 (18 xAI calls) | Free |

Cost-saving strategy:

- Claude WebSearch handles most needs (free)
- xAI adds X/Twitter (unique value) + broader per-platform coverage
- WebFetch for deep-reading specific URLs (free), with a scrapling fallback for blocked sites
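The per-session estimates above are just the xAI call count per depth times the flat $0.005/call rate; everything else is free. A hypothetical two-line sketch:

```python
XAI_CALLS = {"quick": 8, "default": 13, "deep": 18}
XAI_RATE = 0.005  # dollars per xAI search call

def estimate_cost(depth: str, full_mode: bool) -> float:
    # Web-Only mode makes no paid calls at all.
    return XAI_CALLS[depth] * XAI_RATE if full_mode else 0.0
```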

## Critical Constraints

DO:

- Run Step 0 (xAI key detection) before every research session
- If an xAI key exists: run Claude WebSearch AND xAI scripts in parallel (Full mode)
- If the xAI key is missing: use Claude WebSearch only (Web-Only mode)
- Run gap analysis between rounds — never skip it
- Merge and deduplicate results by URL
- Exclude archived GitHub repositories from results — they are unmaintained and read-only
- Mark every claim with a confidence level: `[HIGH]`, `[MEDIUM]`, or `[LOW]`
- Ground synthesis in actual research, not pre-existing knowledge
- Cite specific sources with URLs
- Use the `--json` flag when calling xAI scripts for programmatic parsing
- Load the framework template from references/output-frameworks.md

DON'T:

- Skip Claude Code WebSearch (it's free)
- Run sequential searches when parallel is possible
- Display duplicate results from different search engines
- Quote more than 125 characters from any single source
- Repeat queries across rounds — each round targets gaps from the previous one
- Add Round 3 for quick or default depth — it's deep-only
- Discover new topics in Round 3 — verification only

## Troubleshooting

xAI key not found:

```bash
security find-generic-password -s "xai-api" ~/Library/Keychains/claude-keys.keychain-db
```

If not found, run the keychain setup above.

Keychain locked:

```bash
security unlock-keychain ~/Library/Keychains/claude-keys.keychain-db
```

No X/Twitter results: requires a valid xAI API key. Check at https://console.x.ai

WebFetch blocked (403/captcha/empty): install scrapling and follow the auto-escalation protocol from cli-web-scrape (HTTP → validate → Dynamic → Stealthy):

```bash
uv tool install 'scrapling[all]'
scrapling install  # one-time: install browser engines for Dynamic/Stealthy tiers
```

Script errors: ensure uv is installed (`which uv`). If missing: `curl -LsSf https://astral.sh/uv/install.sh | sh`

## References

- references/output-frameworks.md — framework templates (comparison, landscape, deep-dive, decision)
- references/search-patterns.md — search operators and multi-round query patterns
- references/iterative-research.md — gap analysis, round procedures, cross-referencing methodology