# Tech Research
Orchestrate multi-source technical research by dispatching parallel subagents to gather intelligence from X/Twitter (via Grok), GitHub repositories (via DeepWiki), and the web (via WebSearch). Synthesize all findings into a single actionable report.
**Architecture:** The main agent orchestrates research using one of two modes — lightweight (Task Subagents) or heavyweight (Agent Teammates) — chosen based on research complexity.
## Research Mode Selection
Before dispatching any agents, determine the appropriate mode:
| Signal | Mode |
|---|---|
| Single topic, multiple data sources (Grok + DeepWiki + WebSearch) | Light → Task Subagents |
| Multiple independent topics/competitors needing cross-comparison | Heavy → Agent Teammates |
| Research may produce follow-up questions requiring dynamic re-scoping | Heavy → Agent Teammates |
| Agent count ≥ 4 | Heavy → Agent Teammates |
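The decision above can be sketched as a small function. This is an illustrative heuristic only — the function name, parameters, and thresholds mirror the table, but the real decision is made by the orchestrating agent, not a script:

```shell
# Illustrative sketch of the mode-selection heuristic.
# Usage: select_mode TOPIC_COUNT AGENT_COUNT NEEDS_RESCOPING(0|1)
select_mode() {
  topics=$1; agents=$2; rescope=$3
  if [ "$topics" -gt 1 ] || [ "$agents" -ge 4 ] || [ "$rescope" -eq 1 ]; then
    echo "heavy"   # Agent Teammates
  else
    echo "light"   # Task Subagents
  fi
}
```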
### Light Mode (default for single-topic research)
Dispatch up to 3 Task Subagents (with `subagent_type: "general-purpose"`). Each handles one data source independently. The main agent synthesizes results after all return.
### Heavy Mode (for multi-topic / competitive research)
Use `TeamCreate` to create a research team → `TaskCreate` for each research task → spawn Agent Teammates (via `Task` with `team_name` and `name` parameters) → coordinate via `SendMessage`. Teammates can:
- Communicate to avoid duplication ("I found Project A uses the same approach as B — focus on their differentiators")
- Share discoveries across tasks ("The blog post I found compares all 3 frameworks, sending you the link")
- Dynamically adjust scope based on what others have found
## When to Use
- Evaluating a technology, library, or framework for adoption
- Comparing alternatives (e.g., "Zustand vs Jotai vs Redux")
- Investigating a GitHub repo's architecture and community reception
- Gauging developer sentiment on a new API, tool, or announcement
- Cross-language research (Chinese/Japanese developer communities)
## Research Sources
| Source | What It Provides | Best For |
|---|---|---|
| Grok (X/Twitter) | Real developer opinions, @handles, post URLs | Sentiment, expert discovery, niche recommendations |
| DeepWiki (GitHub) | AI-powered repo analysis, architecture, API docs | Understanding codebases, comparing repo internals. Only use `ask_question` — never `read_wiki_structure` or `read_wiki_contents` (they return massive dumps that easily exceed context limits) |
| WebSearch | Official docs, blog posts, benchmarks, tutorials | Facts, performance data, official announcements |
## Workflow
### 1. Analyze the Research Question
Break the user's question into sub-queries for each source:
- Grok query: Developer opinions, community sentiment, expert recommendations
- DeepWiki query: Repository architecture, API design, code quality (requires a public GitHub repo)
- WebSearch query: Official docs, benchmarks, comparisons, recent announcements
Not every research task needs all 3 sources. Select sources based on the question:
| Research Type | Grok | DeepWiki | WebSearch |
|---|---|---|---|
| "Should we use library X?" | Yes | Yes (if OSS) | Yes |
| "What are devs saying about X?" | Yes | No | Maybe |
| "How does repo X work internally?" | No | Yes | Maybe |
| "Compare X vs Y performance" | Maybe | Yes (both repos) | Yes |
| "What's new in framework X?" | Yes | No | Yes |
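The routing in the table can be sketched as a lookup. The research-type labels here are hypothetical shorthand for the table rows — in practice the agent makes this call from the question itself:

```shell
# Maps an illustrative research-type label to the sources worth querying.
sources_for() {
  case "$1" in
    adoption)   echo "grok deepwiki websearch" ;;  # "Should we use library X?"
    sentiment)  echo "grok" ;;                     # "What are devs saying about X?"
    internals)  echo "deepwiki" ;;                 # "How does repo X work internally?"
    benchmark)  echo "deepwiki websearch" ;;       # "Compare X vs Y performance"
    news)       echo "grok websearch" ;;           # "What's new in framework X?"
    *)          echo "websearch" ;;                # safe default
  esac
}
```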
### 2. Dispatch Research Agents
Choose the dispatch method based on the research mode selected above.
#### Light Mode: Task Subagents
Launch subagents concurrently using `Task`. See `references/subagent_templates.md` for complete prompt templates.
Grok subagent: `Task(subagent_type: "general-purpose", description: "Ask Grok about [topic]", prompt: <grok_template>)`
DeepWiki subagent: `Task(subagent_type: "general-purpose", description: "DeepWiki research [repo]", prompt: <deepwiki_template>)`
WebSearch subagent: `Task(subagent_type: "general-purpose", description: "Web research [topic]", prompt: <websearch_template>)`
#### Heavy Mode: Agent Teammates
1. `TeamCreate(team_name: "research-[topic]")`
2. `TaskCreate(subject: "Research [Project A]", description: "...", activeForm: "Researching [Project A]")`
3. `TaskCreate(subject: "Research [Project B]", description: "...", activeForm: "Researching [Project B]")`
4. `Task(subagent_type: "general-purpose", team_name: "research-[topic]", name: "researcher-a", prompt: "...")`
5. `Task(subagent_type: "general-purpose", team_name: "research-[topic]", name: "researcher-b", prompt: "...")`
6. Coordinate via `SendMessage` — share findings, adjust scope, avoid duplication
7. Synthesize after all teammates report back
8. Shut down teammates and `TeamDelete` when done
Each teammate should use all relevant data sources (Grok, DeepWiki, WebSearch) for their assigned topic, rather than splitting by data source.
### 3. Synthesize and Report
After all subagents return, merge findings into a unified report:
```markdown
## Tech Research: [Topic]

### TL;DR
[2-3 sentence executive summary with clear recommendation]

### Community Sentiment (from X/Twitter)
- [Key opinions with @username attribution]
- [Verified post URLs]

### Repository Analysis (from DeepWiki)
- Architecture overview
- Code quality observations
- API design patterns
- Activity and maintenance status

### Web Intelligence
- Official documentation highlights
- Benchmark data
- Blog post insights
- Recent announcements

### Comparison Matrix (if comparing alternatives)
| Criterion | [Option A] | [Option B] |
|----------|----------|----------|
| [criterion] | [finding] | [finding] |

### Recommendation
[Clear, actionable recommendation based on all sources]

### Limitations
[What couldn't be verified or found]
```
## Grok Query Strategies
Choose the right strategy based on the research goal. Strategy matters more than the topic itself. Every query MUST include X/Twitter-scoping keywords to avoid falling back to broad web search.
| Strategy | Best For | Template |
|---|---|---|
| Expert Discovery (highest success) | Finding people to follow, active practitioners | "Who are the most active [topic] developers on X worth following? Give @handles and link to recent posts" |
| Community Sentiment (high success) | New API launches, framework releases | "Search X posts about [topic]. What are developers saying about [aspect 1], [aspect 2]? Include @username and post URLs" |
| Technology Selection (medium) | Choosing between libraries | "What [type of tool] are developers currently recommending on X for [use case]? Include @username and post URLs" |
| CJK Research (medium) | Chinese/Japanese community | "搜索 X 上中文开发者关于 [topic] 的讨论。请给出推文链接和 @用户名" (i.e., "Search X for Chinese developers' discussions of [topic]; include post links and @usernames") |
| Workflow Discovery | How devs use tools | "How are developers on X using [tool] for [domain]? Include skills, workflows, @username and post URLs" |
Avoid: Too-specific technical queries (X is not Stack Overflow) and searching for niche library names (search for the need instead).
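The X-scoping rule can be enforced mechanically when building queries. A minimal sketch, reusing the Community Sentiment template wording from the table above (the function name is illustrative):

```shell
# Builds a Grok query that always carries X-scoping keywords, so Grok
# searches X posts instead of falling back to broad web search.
build_grok_query() {
  printf 'Search X posts about %s. Include @username and post URLs.' "$1"
}
```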
## Grok Pre-flight
Before dispatching a Grok subagent, run the pre-flight check:
```bash
bash ${CLAUDE_PLUGIN_ROOT}/skills/tech-research/scripts/grok_preflight.sh
```
| Result | Action |
|---|---|
| READY | Dispatch Grok subagent |
| NEEDS_LOGIN | Dispatch anyway; subagent will verify |
| NOT_CONFIGURED | Skip Grok source, note in report |
If Grok is unavailable, proceed with DeepWiki + WebSearch only.
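The dispatch decision for the preflight outcomes above can be sketched as (the function name is illustrative; the catch-all branch is a defensive assumption for unexpected output):

```shell
# Decide what to do with the grok_preflight.sh result.
handle_preflight() {
  case "$1" in
    READY)          echo "dispatch" ;;
    NEEDS_LOGIN)    echo "dispatch" ;;  # subagent verifies login itself
    NOT_CONFIGURED) echo "skip" ;;      # note the missing source in the report
    *)              echo "skip" ;;      # unknown output: fail safe
  esac
}
```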
## Tips
- For CJK communities, query Grok in the target language directly
- DeepWiki accepts up to 10 repos in a single query for comparisons
- WebSearch is best for recent information (include current year in queries)
- Always verify Grok post URLs before citing — accuracy is ~80%
- Run subagents in parallel to minimize total research time
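URL verification (second-to-last tip) can start with a cheap shape check before any network call. A sketch, assuming canonical `x.com`/`twitter.com` status permalinks; a follow-up HTTP request (e.g. `curl -o /dev/null -w '%{http_code}'`) would then confirm the post actually exists:

```shell
# Succeeds only for URLs shaped like an X/Twitter post permalink.
# Filters hallucinated or malformed URLs cheaply before fetching.
is_x_post_url() {
  case "$1" in
    https://x.com/*/status/*|https://twitter.com/*/status/*) return 0 ;;
    *) return 1 ;;
  esac
}
```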