tech-research


Tech Research

Orchestrate multi-source technical research by dispatching parallel subagents to gather intelligence from X/Twitter (via Grok), GitHub repositories (via DeepWiki), and the web (via WebSearch). Synthesize all findings into a single actionable report.
Architecture: The main agent orchestrates research using one of two modes — lightweight (Task Subagents) or heavyweight (Agent Teammates) — chosen based on research complexity.

Research Mode Selection

Before dispatching any agents, determine the appropriate mode:
| Signal | Mode |
|---|---|
| Single topic, multiple data sources (Grok + DeepWiki + WebSearch) | Light → Task Subagents |
| Multiple independent topics/competitors needing cross-comparison | Heavy → Agent Teammates |
| Research may produce follow-up questions requiring dynamic re-scoping | Heavy → Agent Teammates |
| Agent count ≥ 4 | Heavy → Agent Teammates |
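The table above is a heuristic the main agent applies when planning, not an actual script. As a hedged illustration only, the decision could be sketched in shell (the `select_mode` function and its arguments are hypothetical, not part of the skill):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the mode-selection heuristic above.
# Args: number of independent topics, planned agent count, and whether
# dynamic re-scoping is expected ("yes"/"no").
select_mode() {
  local topics=$1 agents=$2 rescoping=$3
  if [ "$agents" -ge 4 ] || [ "$topics" -gt 1 ] || [ "$rescoping" = "yes" ]; then
    echo "heavy"   # Agent Teammates
  else
    echo "light"   # Task Subagents
  fi
}

select_mode 1 3 no    # single topic, 3 subagents, no re-scoping → light
select_mode 2 2 no    # two independent topics to cross-compare → heavy
```

Any single "heavy" signal is enough to justify the coordination overhead of a team; only when all signals are light does the simpler subagent path win.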

Light Mode (default for single-topic research)

Dispatch up to 3 Task Subagents (`Task` with `subagent_type: "general-purpose"`). Each handles one data source independently. The main agent synthesizes results after all return.

Heavy Mode (for multi-topic / competitive research)

Use `TeamCreate` to create a research team → `TaskCreate` for each research task → spawn Agent Teammates (via `Task` with `team_name` and `name` parameters) → coordinate via `SendMessage`. Teammates can:
  • Communicate to avoid duplication ("I found Project A uses the same approach as B — focus on their differentiators")
  • Share discoveries across tasks ("The blog post I found compares all 3 frameworks, sending you the link")
  • Dynamically adjust scope based on what others have found

When to Use

  • Evaluating a technology, library, or framework for adoption
  • Comparing alternatives (e.g., "Zustand vs Jotai vs Redux")
  • Investigating a GitHub repo's architecture and community reception
  • Gauging developer sentiment on a new API, tool, or announcement
  • Cross-language research (Chinese/Japanese developer communities)

Research Sources

| Source | What It Provides | Best For |
|---|---|---|
| Grok (X/Twitter) | Real developer opinions, @handles, post URLs | Sentiment, expert discovery, niche recommendations |
| DeepWiki (GitHub) | AI-powered repo analysis, architecture, API docs | Understanding codebases, comparing repo internals. Only use `ask_question`, never `read_wiki_structure` or `read_wiki_contents` (they return massive dumps that easily exceed context limits) |
| WebSearch | Official docs, blog posts, benchmarks, tutorials | Facts, performance data, official announcements |

Workflow

1. Analyze the Research Question

Break the user's question into sub-queries for each source:
  • Grok query: Developer opinions, community sentiment, expert recommendations
  • DeepWiki query: Repository architecture, API design, code quality (requires `owner/repo`)
  • WebSearch query: Official docs, benchmarks, comparisons, recent announcements
Not every research task needs all 3 sources. Select sources based on the question:
| Research Type | Grok | DeepWiki | WebSearch |
|---|---|---|---|
| "Should we use library X?" | Yes | Yes (if OSS) | Yes |
| "What are devs saying about X?" | Yes | No | Maybe |
| "How does repo X work internally?" | No | Yes | Maybe |
| "Compare X vs Y performance" | Maybe | Yes (both repos) | Yes |
| "What's new in framework X?" | Yes | No | Yes |

2. Dispatch Research Agents

Choose the dispatch method based on the research mode selected above.

Light Mode: Task Subagents

Launch subagents concurrently using `Task`. See references/subagent_templates.md for complete prompt templates.
Grok subagent:
Task(subagent_type: "general-purpose", description: "Ask Grok about [topic]", prompt: <grok_template>)
DeepWiki subagent:
Task(subagent_type: "general-purpose", description: "DeepWiki research [repo]", prompt: <deepwiki_template>)
WebSearch subagent:
Task(subagent_type: "general-purpose", description: "Web research [topic]", prompt: <websearch_template>)

Heavy Mode: Agent Teammates

1. TeamCreate(team_name: "research-[topic]")
2. TaskCreate(subject: "Research [Project A]", description: "...", activeForm: "Researching [Project A]")
3. TaskCreate(subject: "Research [Project B]", description: "...", activeForm: "Researching [Project B]")
4. Task(subagent_type: "general-purpose", team_name: "research-[topic]", name: "researcher-a", prompt: "...")
5. Task(subagent_type: "general-purpose", team_name: "research-[topic]", name: "researcher-b", prompt: "...")
6. Coordinate via SendMessage — share findings, adjust scope, avoid duplication
7. Synthesize after all teammates report back
8. Shutdown teammates and TeamDelete when done
Each teammate should use all relevant data sources (Grok, DeepWiki, WebSearch) for their assigned topic, rather than splitting by data source.

3. Synthesize and Report

After all subagents return, merge findings into a unified report:
```markdown
# Tech Research: [Topic]

## TL;DR
[2-3 sentence executive summary with clear recommendation]

## Community Sentiment (from X/Twitter)
- [Key opinions with @username attribution]
- [Verified post URLs]

## Repository Analysis (from DeepWiki)
- Architecture overview
- Code quality observations
- API design patterns
- Activity and maintenance status

## Web Intelligence
- Official documentation highlights
- Benchmark data
- Blog post insights
- Recent announcements

## Comparison Matrix (if comparing alternatives)
| Criteria | Option A | Option B |
|---|---|---|
| [criterion] | [finding] | [finding] |

## Recommendation
[Clear, actionable recommendation based on all sources]

## Limitations
[What couldn't be verified or found]
```

Grok Query Strategies

Choose the right strategy based on the research goal. Strategy matters more than the topic itself. Every query MUST include X/Twitter-scoping keywords to avoid falling back to broad web search.
| Strategy | Best For | Template |
|---|---|---|
| Expert Discovery (highest success) | Finding people to follow, active practitioners | "Who are the most active [topic] developers on X worth following? Give @handles and link to recent posts" |
| Community Sentiment (high success) | New API launches, framework releases | "Search X posts about [topic]. What are developers saying about [aspect 1], [aspect 2]? Include @username and post URLs" |
| Technology Selection (medium) | Choosing between libraries | "What [type of tool] are developers currently recommending on X for [use case]? Include @username and post URLs" |
| CJK Research (medium) | Chinese/Japanese community | "搜索 X 上中文开发者关于 [topic] 的讨论。请给出推文链接和 @用户名" |
| Workflow Discovery | How devs use tools | "How are developers on X using [tool] for [domain]? Include skills, workflows, @username and post URLs" |
Avoid: overly specific technical queries (X is not Stack Overflow) and searches for niche library names (search for the underlying need instead).

Grok Pre-flight

Before dispatching a Grok subagent, run the pre-flight check:
```bash
bash ${CLAUDE_PLUGIN_ROOT}/skills/tech-research/scripts/grok_preflight.sh
```
| Exit Code | Status | Action |
|---|---|---|
| 0 | READY | Dispatch Grok subagent |
| 1 | NEEDS_LOGIN | Dispatch anyway; subagent will verify |
| 2 | NOT_CONFIGURED | Skip Grok source, note in report |
If Grok is unavailable, proceed with DeepWiki + WebSearch only.
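As a hedged sketch of how the orchestrator might branch on these exit codes (the `handle_preflight` helper is illustrative and not part of the skill's scripts):

```shell
#!/usr/bin/env bash
# Illustrative handler for the grok_preflight.sh exit codes described above.
handle_preflight() {
  case "$1" in
    0) echo "READY: dispatch Grok subagent" ;;
    1) echo "NEEDS_LOGIN: dispatch anyway; subagent verifies login" ;;
    2) echo "NOT_CONFIGURED: skip Grok source, note it in the report" ;;
    *) echo "UNKNOWN: treat as not configured" ;;
  esac
}

# A real invocation would capture the script's exit status:
#   bash "${CLAUDE_PLUGIN_ROOT}/skills/tech-research/scripts/grok_preflight.sh"
#   handle_preflight $?
handle_preflight 2
```

Note that exit code 1 is not fatal: the subagent is still dispatched and performs its own login verification, so only code 2 actually removes Grok from the source list.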

Tips

  • For CJK communities, query Grok in the target language directly
  • DeepWiki accepts up to 10 repos in a single query for comparisons
  • WebSearch is best for recent information (include current year in queries)
  • Always verify Grok post URLs before citing — accuracy is ~80%
  • Run subagents in parallel to minimize total research time