# Deep Research (8-Step In-Depth Research Process)

Transform vague topics proposed by users into high-quality, deliverable research reports through a systematic method.
## Core Principles

- Conclusions come from mechanism comparison, not "I feel like it"
- Establish facts first, then make deductions
- Prioritize authoritative sources: L1 > L2 > L3 > L4
- Save intermediate results so they can be traced and reused
## Working Directory and Intermediate Product Management

### Working Directory Structure
At the start of research, you must create a topic-named working directory under `~/Downloads/research/`:

```
~/Downloads/research/<topic>/
├── 00_Problem_Decomposition.md   # Output of Step 0-1
├── 01_Source_Materials.md        # Output of Step 2: links to all consulted materials
├── 02_Fact_Cards.md              # Output of Step 3: extracted facts
├── 03_Comparison_Framework.md    # Output of Step 4: selected framework and content
├── 04_Derivation_Process.md      # Output of Step 6: deduction from facts to conclusions
├── 05_Verification_Records.md    # Output of Step 7: use-case verification results
├── FINAL_Research_Report.md      # Output of Step 8: final deliverable
└── raw/                          # Archive of original materials (optional)
    ├── source_1.md
    └── source_2.md
```
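The layout above can be scaffolded in one call. A minimal sketch, where the file list mirrors the tree and the helper name is an illustrative choice, not a required implementation:

```python
from pathlib import Path

# Step-output files, as named in the directory tree above.
STEP_FILES = [
    "00_Problem_Decomposition.md",
    "01_Source_Materials.md",
    "02_Fact_Cards.md",
    "03_Comparison_Framework.md",
    "04_Derivation_Process.md",
    "05_Verification_Records.md",
    "FINAL_Research_Report.md",
]

def scaffold_workspace(base: Path, topic: str) -> Path:
    """Create <base>/<topic>/ with empty step files and a raw/ archive dir."""
    root = base / topic
    (root / "raw").mkdir(parents=True, exist_ok=True)
    for name in STEP_FILES:
        (root / name).touch()
    return root
```

Creating the empty files up front makes the later "append immediately" save actions unconditional appends.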
### Saving Timing and Content

| Step | Save Immediately After Completion | File Name |
|---|---|---|
| Step 0-1 | Problem type judgment + sub-question list | 00_Problem_Decomposition.md |
| Step 2 | Link, hierarchy level, and summary of each consulted material | 01_Source_Materials.md |
| Step 3 | Each fact card (statement + source + confidence level) | 02_Fact_Cards.md |
| Step 4 | Selected comparison framework + initial content | 03_Comparison_Framework.md |
| Step 6 | Derivation process for each dimension | 04_Derivation_Process.md |
| Step 7 | Verification scenario + result + review checklist | 05_Verification_Records.md |
| Step 8 | Complete research report | FINAL_Research_Report.md |
### Saving Principles

- Save in real time: write to the corresponding file immediately after completing each step; do not wait until the end
- Update incrementally: the same file may be updated multiple times; append or replace content as needed
- Preserve the process: keep intermediate files even after their content is integrated into the final report
- Enable recovery: if research is interrupted, progress can be restored from the intermediate files
## Trigger Conditions

When users want to:
- Gain in-depth understanding of a concept/technology/phenomenon
- Compare similarities and differences between two or more things
- Collect information and basis for decision-making
- Write research reports or analysis documents
Keywords:
- "Deep Research", "In-Depth Study", "In-Depth Analysis"
- "Help me research", "Do a research", "Conduct a study"
- "Comparative Analysis", "Concept Comparison", "Technology Comparison"
- "Write a research report", "Produce a research report"
Differentiation from other skills:
- Need visual diagrams → use a dedicated diagramming skill
- Need writing output (articles/tutorials) → use a writing skill
- Need material organization → use an organization skill
- Need pure research reports → use this skill
## Workflow (8-Step Process)

### Step 0: Problem Type Judgment

First judge the type of research question and select the corresponding strategy:

| Problem Type | Core Task | Focus Dimensions |
|---|---|---|
| Concept Comparison Type | Establish comparison framework | Mechanism differences, application boundaries |
| Decision Support Type | Trade-off analysis | Cost, risk, benefits |
| Trend Analysis Type | Sort out the evolution context | History, driving factors, predictions |
| Problem Diagnosis Type | Root-cause analysis | Symptoms, causes, evidence chain |
| Knowledge Organization Type | Systematic organization | Definition, classification, relationships |
### Step 0.5: Timeliness Sensitivity Judgment (BLOCKING)

Before starting research, you must judge the timeliness sensitivity of the question; this determines the material screening strategy.

#### Timeliness Sensitivity Classification

| Sensitivity Level | Typical Fields | Material Time Window | Description |
|---|---|---|---|
| 🔴 Extremely High | AI/large models, blockchain, cryptocurrency | 3-6 months | Technology iterates extremely fast; information from a few months ago may be completely outdated |
| 🟠 High | Cloud services, frontend frameworks, API interfaces | 6-12 months | Versions update frequently; confirm the current version |
| 🟡 Medium | Programming languages, databases, operating systems | 1-2 years | Relatively stable but still evolving |
| 🟢 Low | Algorithm principles, design patterns, theoretical concepts | No restrictions | Basic principles change slowly |
#### Special Rules for 🔴 Extremely High Sensitivity Fields

When the research topic involves the following fields, you must apply special rules.

Trigger word recognition:
- AI-related: large model, LLM, GPT, Claude, Gemini, AI Agent, RAG, vector database, prompt engineering
- Cloud native: new Kubernetes versions, Serverless, container runtime
- Cutting-edge technologies: Web3, quantum computing, AR/VR
Mandatory rules:

1. Search with time constraints:
   - Use the `time_range` or `start_date` search parameters to limit results
   - Prefer restricting to the last 3 months
2. Elevated priority for official sources:
   - Consult official documents, official blogs, and official changelogs first
   - GitHub release notes, official X/Twitter announcements
   - Academic papers (preprint platforms such as arXiv)
3. Mandatory version-number labeling:
   - Any technical claim must be labeled with the current version number
   - Example: "Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) supports..."
   - Do not use vague expressions such as "the latest version supports..."
4. Disposal of outdated information:
   - Technical blogs/tutorials older than 6 months → use only as historical reference, not as factual basis
   - If version inconsistency is found → verify the current version before use
   - Obviously outdated statements (e.g., "will support in the future" when it is now supported) → discard directly
5. Cross-verification:
   - High-sensitivity information must be confirmed by at least 2 independent sources
   - Priority: official documents > official blogs > authoritative technical media > personal blogs
6. Direct verification of official download/release pages (BLOCKING):
   - Directly access official download pages to verify platform support (do not rely on search-engine cache)
   - Use `mcp__tavily-mcp__tavily-extract` or WebFetch to extract page content directly
   - Example: `https://product.com/download` or `https://github.com/xxx/releases`
   - "Coming soon" statements in search results may be outdated; verify in real time
   - Platform support changes frequently and cannot be inferred from old materials
7. Search for product-specific protocol/function names (BLOCKING):
   - In addition to the product name, also search for the names of protocols/standards the product supports
   - Common protocols/standards to search for:
     - AI tools: MCP, ACP (Agent Client Protocol), LSP, DAP
     - Cloud services: OAuth, OIDC, SAML
     - Data exchange: GraphQL, gRPC, REST
   - Search format: `"<Product Name> <Protocol Name> support"` or `"<Product Name> <Protocol Name> integration"`
   - Protocol integrations are often differentiating features, easily omitted from main documents but explained on dedicated pages
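The protocol-search expansion above is mechanical, so it can be sketched as a tiny query generator. The category keys and helper name are illustrative; the protocol lists mirror the examples in this section:

```python
# Protocols/standards to pair with a product name, per category above.
PROTOCOLS = {
    "ai_tools": ["MCP", "ACP", "LSP", "DAP"],
    "cloud": ["OAuth", "OIDC", "SAML"],
    "data": ["GraphQL", "gRPC", "REST"],
}

def protocol_queries(product: str, category: str) -> list[str]:
    """Expand one product name into the per-protocol search queries."""
    return [
        f'"{product} {proto} support" OR "{product} {proto} integration"'
        for proto in PROTOCOLS[category]
    ]
```

Running every generated query, rather than only the product name, is what catches easily missed integrations such as ACP alongside MCP.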
#### Timeliness Judgment Output Template

```markdown
## Timeliness Sensitivity Assessment
- **Research Topic**: [Topic]
- **Sensitivity Level**: 🔴 Extremely High / 🟠 High / 🟡 Medium / 🟢 Low
- **Judgment Basis**: [Why this level]
- **Material Time Window**: [X months/years]
- **Priority Official Sources to Consult**:
  1. [Official Source 1]
  2. [Official Source 2]
- **Key Version Information to Verify**:
  - [Product/Technology 1]: current version ____
  - [Product/Technology 2]: current version ____
```

📁 Save Action: Append the timeliness assessment to the end of `00_Problem_Decomposition.md`.
### Step 1: Problem Decomposition and Boundary Definition

Break the vague topic down into 2-4 researchable sub-questions:

- Sub-question A: "What is X and how does it work?" (definition and mechanism)
- Sub-question B: "How do X and Y relate/differ across dimensions?" (comparative analysis)
- Sub-question C: "In what scenarios is X applicable/not applicable?" (boundary conditions)
- Sub-question D: "What are X's development trends/best practices?" (extended analysis)

⚠️ Research Object Definition (BLOCKING - must be explicit):

When decomposing the problem, you must clearly define the boundaries of the research object:

| Dimension | Boundary to Define | Example |
|---|---|---|
| Population | Which group does the research target? | College students vs middle school students vs vocational school students vs all students |
| Region | Which region does the research target? | Chinese universities vs American universities vs global |
| Time | Which period does the research target? | After 2020 vs full history |
| Level | Which level does the research target? | Undergraduate vs postgraduate vs vocational education |

Typical mistake: the user asks about "college students' classroom problems", but the research includes policies targeting "middle school students"; a mismatched application object invalidates the entire research.
📁 Save Action:
- Create the working directory `~/Downloads/research/<topic>/`
- Write `00_Problem_Decomposition.md`, including:
  - The original question
  - The judged problem type and the reasons
  - The research object boundary definition (population, region, time, level)
  - The list of decomposed sub-questions
- Write to TodoWrite to track progress
### Step 2: Material Stratification and Authority Locking

Classify materials by authority and prioritize first-hand sources:

| Level | Material Type | Usage | Credibility |
|---|---|---|---|
| L1 | Official documents, papers, specifications, RFCs | Definitions, mechanisms, verifiable facts | ✅ High |
| L2 | Official blogs, technical talks, white papers | Design intent, architectural ideas | ✅ High |
| L3 | Authoritative media, expert interpretations, tutorials | Supplementary intuition, cases | ⚠️ Medium |
| L4 | Community discussions, personal blogs, forums | Discover blind spots, verify understanding | ❓ Low |
L4 Community Source Specification (mandatory for product-comparison research):

| Source Type | Acquisition Method | Value |
|---|---|---|
| GitHub Issues | Directly access `github.com/<org>/<repo>/issues` | Real user pain points, feature requests, bug feedback |
| GitHub Discussions | Access `github.com/<org>/<repo>/discussions` | Feature discussions, usage experiences, community consensus |
| Reddit | Search `site:reddit.com "<Product Name>"` | Real user reviews, comparison discussions |
| Hacker News | Search `site:news.ycombinator.com "<Product Name>"` | In-depth discussions in technical communities |
| Discord/Telegram | Official product communities | Feedback from active users (mark [Source Restricted]) |
Principles:
- Conclusions must be traceable to L1/L2
- L3/L4 are only for assistance and verification
- L4 community discussions are used to discover "what users really care about"
- Record all information sources
⏰ Timeliness Screening Rules (applied per the Step 0.5 sensitivity level):

| Sensitivity Level | Material Screening Rule | Suggested Search Parameters |
|---|---|---|
| 🔴 Extremely High | Only accept materials from the last 6 months as factual basis | `time_range` limited, or `start_date` within the last 3 months |
| 🟠 High | Prefer materials from the last year; flag anything older | |
| 🟡 Medium | Use materials up to 2 years old normally; verify validity if older | Default search |
| 🟢 Low | No time restrictions | Default search |
Search strategy for high-sensitivity fields:

1. First round: targeted search of official sources
   - Use `include_domains` to restrict to official domains
   - Example: `include_domains: ["anthropic.com", "openai.com", "docs.xxx.com"]`
2. Second round: direct verification of official download/release pages (BLOCKING)
   - Access official download pages directly; do not rely on search cache
   - Use tavily-extract or WebFetch to extract page content
   - Verify: platform support, current version number, release date
   - This step is mandatory; search engines may cache outdated "Coming soon" information
3. Third round: search for product-specific protocols/functions (BLOCKING)
   - Search for the protocol names the product supports (MCP, ACP, LSP, etc.)
   - Format: `"<Product Name> <Protocol Name>" site:<official domain>`
   - These integration features are often absent from the homepage but explained in dedicated documents
4. Fourth round: time-limited extensive search
   - Set `time_range: "month"` or `start_date` to a recent date
   - Explicitly exclude outdated sources
5. Fifth round: version verification
   - Cross-verify version numbers across search results
   - Check the official changelog immediately if inconsistencies are found
6. Sixth round: community voice mining (BLOCKING - mandatory for product-comparison research)
   - Visit the product's GitHub Issues page and review popular/pinned issues
   - Search Issues for key function words (e.g., "MCP", "plugin", "integration")
   - Review discussion trends over the last 3-6 months
   - Identify the features and differentiators users care about most
   - Value of this step: official documents rarely emphasize "features we have that others don't", but community discussions will
Specific operations for community voice mining:

GitHub Issues mining steps:
1. Access `github.com/<org>/<repo>/issues`
2. Sort by "Most commented" to surface popular discussions
3. Search keywords:
   - Feature-related: feature request, enhancement, MCP, plugin, API
   - Comparison-related: vs, compared to, alternative, migrate from
4. Review issue labels: enhancement, feature, discussion
5. Record frequently mentioned feature demands and user pain points

Value conversion:
- Frequently discussed features → possible differentiating highlights
- User complaints/requests → possible product shortcomings
- Comparison discussions → user-perspective difference analysis, obtained directly
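The "Most commented" sort above is also available programmatically: the public GitHub REST API's `GET /repos/{owner}/{repo}/issues` endpoint accepts `sort=comments`. A sketch that only builds the request URL and distills the JSON response (the actual HTTP fetch is left to whatever client is available; function names are illustrative):

```python
def issues_url(org: str, repo: str, per_page: int = 10) -> str:
    """Request URL for a repo's most-commented open issues."""
    return (
        f"https://api.github.com/repos/{org}/{repo}/issues"
        f"?state=open&sort=comments&direction=desc&per_page={per_page}"
    )

def summarize_issues(payload: list[dict]) -> list[dict]:
    """Boil an API response down to what a fact card needs."""
    return [
        {
            "title": issue["title"],
            "comments": issue["comments"],
            "labels": [label["name"] for label in issue.get("labels", [])],
            "url": issue["html_url"],
        }
        for issue in payload
    ]
```

Sorting by comment count is a crude but useful proxy for "what users care about most"; unauthenticated API requests are rate-limited, so heavy mining needs a token.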
Material timeliness labeling template (append to source material records):

```markdown
- **Release Date**: [YYYY-MM-DD]
- **Timeliness Status**: ✅ Currently Valid / ⚠️ Needs Verification / ❌ Outdated
- **Version Information**: [If applicable, note the version number involved]
```
Tool usage:
- Prefer `mcp__plugin_context7_context7__query-docs` to obtain technical documents
- Use `mcp__tavily-mcp__tavily-search` for extensive searches
- Use `mcp__tavily-mcp__tavily-extract` to extract specific page content
⚠️ Application Object Verification (BLOCKING - check before inclusion):

Before including any material, verify that its application object matches the research boundary:

| Material Type | Application Object to Verify | Verification Method |
|---|---|---|
| Policies/regulations | Who is it for? (middle school students / college students / everyone) | Check the document title and scope-of-application clauses |
| Academic research | Who are the samples? (vocational school students / undergraduates / postgraduates) | Check the methods/sample description sections |
| Statistical data | Which group is being counted? | Check the data source description |
| Case reports | Which type of institution is involved? | Confirm the institution type (university / middle school / vocational school) |

Handling of mismatched materials:
- Completely mismatched → do not include
- Partially overlapping (e.g., "students" includes college students) → include, but mark the application scope
- Analogous reference (e.g., middle-school policies as a trend reference) → include, but explicitly mark "For Reference Only"
📁 Save Action:
Immediately after consulting a material, append it to `01_Source_Materials.md`:

```markdown
## Material #[Serial Number]
- **Title**: [Material Title]
- **Link**: [URL]
- **Level**: L1/L2/L3/L4
- **Release Date**: [YYYY-MM-DD]
- **Timeliness Status**: ✅ Currently Valid / ⚠️ Needs Verification / ❌ Outdated (Reference Only)
- **Version Information**: [If a specific version is involved, it must be noted]
- **Application Object**: [Explicitly note the group/region/level this material targets]
- **Match with Research Boundary**: ✅ Fully Matched / ⚠️ Partially Overlapping / 📎 For Reference Only
- **Abstract**: [1-2 sentences of key content]
- **Association with Sub-questions**: [Corresponding sub-question]
```
### Step 3: Fact Extraction and Evidence Cards

Convert materials into verifiable fact cards:

```markdown
## Fact Cards

### Fact 1
- **Statement**: [Specific fact description]
- **Source**: [Link/document section]
- **Confidence Level**: High/Medium/Low

### Fact 2
...
```

Key discipline:
- Establish facts first, then make deductions
- Distinguish "what the official source says" from "what I inferred"
- When information is contradictory, mark the contradiction and retain both sides
- Mark the confidence level:
  - ✅ High: clearly stated in official documents
  - ⚠️ Medium: mentioned in official blogs but not formally documented
  - ❓ Low: inferred, or from non-official sources
📁 Save Action:
Immediately after extracting a fact, append it to `02_Fact_Cards.md`:

```markdown
## Fact #[Serial Number]
- **Statement**: [Specific fact description]
- **Source**: [Material #Serial Number] [Link]
- **Application Object**: [The group this fact applies to, inherited from the material or further refined]
- **Confidence Level**: ✅/⚠️/❓
- **Associated Dimension**: [Corresponding comparison dimension]
```

⚠️ Application object in fact statements:
- If a fact comes from a partially overlapping or reference-only material, its application scope must be stated explicitly
- Wrong example: "The Ministry of Education prohibits bringing mobile phones into classrooms" (does not specify whom it targets)
- Correct example: "The Ministry of Education prohibits middle school students from bringing mobile phones into classrooms (not applicable to college students)"
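The application-object guard can be enforced at fact-card creation time rather than caught later in review. A minimal sketch, where the field names mirror the markdown template and modeling the boundary as a plain set of allowed objects is my own simplifying assumption:

```python
def make_fact_card(statement: str, source: str, applies_to: str,
                   confidence: str, boundary: set[str]) -> dict:
    """Build a fact-card record, rejecting out-of-boundary facts up front."""
    if applies_to not in boundary:
        raise ValueError(
            f"application object {applies_to!r} is outside the research boundary")
    return {
        "statement": statement,
        "source": source,
        "applies_to": applies_to,
        "confidence": confidence,
    }
```

Failing loudly here is the point: a middle-school policy cited in college-student research should never reach the comparison framework.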
### Step 4: Establish a Comparison/Analysis Framework

Based on the problem type, select fixed analysis dimensions.

General dimensions (select as needed):
- Objectives/Problems solved
- Working mechanism/process
- Input/output/boundaries
- Advantages/disadvantages/trade-offs
- Application scenarios/boundary conditions
- Cost/benefit/risk
- Historical evolution/future trends
- Security/permission/controllability
Concept Comparison Type Special Dimensions:
- Definition and essence
- Trigger/call method
- Execution subject
- Input/output and type constraints
- Determinism and repeatability
- Resource and context management
- Combination and reuse methods
- Security boundaries and permission control
Decision Support Type Special Dimensions:
- Program overview
- Implementation cost
- Maintenance cost
- Risk assessment
- Benefit expectations
- Application scenarios
- Team capability requirements
- Migration difficulty
📁 Save Action:
Write `03_Comparison_Framework.md`:

```markdown
# Comparison Framework

## Selected Framework Type
[Concept Comparison Type / Decision Support Type / ...]

## Selected Dimensions
1. [Dimension 1]
2. [Dimension 2]
...

## Initial Content
| Dimension | X | Y | Evidence |
|---|---|---|---|
| [Dimension 1] | [Description] | [Description] | Fact #1, #3 |
| ... | | | |
```
### Step 5: Reference Object Alignment

Ensure every party in the comparison has a clear, unified definition.

Checklist:
### Step 6: Deduction Chain from Facts to Conclusions

Explicitly write out the "fact → comparison → conclusion" deduction process:

```markdown
## Deduction Process

### About [Dimension Name]
1. **Fact Confirmation**: According to [Source], the mechanism of X is...
2. **Comparison with Reference Object**: While the mechanism of Y is...
3. **Conclusion**: Therefore, the difference between X and Y in this dimension is...
```
Key Discipline:
- Conclusions come from mechanism comparison, not "I feel like it"
- Each conclusion can be traced back to specific facts
- Mark uncertain conclusions
📁 Save Action:
Write `04_Derivation_Process.md`:

```markdown
# Deduction Process

## Dimension 1: [Dimension Name]

### Fact Confirmation
According to [Fact #X], the mechanism of X is...

### Comparison with Reference Object
While the mechanism of Y is... (Source: [Fact #Y])

### Conclusion
Therefore, the difference between X and Y in this dimension is...

### Confidence Level
✅/⚠️/❓ + reasons

---

## Dimension 2: [Dimension Name]
...
```
### Step 7: Use Case Verification (Sanity Check)

Use a typical scenario to verify that the conclusions hold.

Verification questions:
- According to my conclusion, how should this scenario be handled?
- Is that actually the case?
- Are there counterexamples that need explanation?

Review checklist:
📁 Save Action:
Write `05_Verification_Records.md`:

```markdown
# Verification Records

## Verification Scenario
[Scenario description]

## Expected Outcome According to Conclusion
If using X: [expected behavior]
If using Y: [expected behavior]

## Actual Verification Result
[Actual situation]

## Counterexamples
[Yes/No; describe if yes]

## Review Checklist
- [x] Draft conclusion is consistent with the fact cards
- [x] No important dimensions are missing
- [x] No over-inference
- [ ] Issues found: [if any]

## Conclusions Needing Correction
[If any]
```
### Step 8: Deliverable Processing

Make the report readable by stakeholders, reproducible, and traceable.

Deliverable three-piece set:
- One-sentence summary: can be quoted directly in meetings
- Structured chapters: use subheadings to lay out the deduction chain
- Traceable evidence: attach source links to key facts
📁 Save Action:
Integrate all intermediate products and write `FINAL_Research_Report.md`:
- Extract the background from `00_Problem_Decomposition.md`
- Cite key facts from `02_Fact_Cards.md`
- Organize conclusions from `04_Derivation_Process.md`
- Generate references from `01_Source_Materials.md`
- Supplement use cases from `05_Verification_Records.md`
## Report Output Structure

```markdown
# Research Report on [Research Topic]

## Abstract
[One-sentence summary of the core conclusions]

## 1. Concept Alignment
### 1.1 What is X
[Definition + why it exists]
### 1.2 What is Y (Reference Object)
[As the comparison benchmark]

## 2. Working Mechanism
[How X operates; this is the core point of difference]

## 3. Relationships
[Common problems solved, 3-4 points]

## 4. Differences
[Compare dimension by dimension; highlight decisive differences]

## 5. Use Case Demonstration
[Ground the abstraction in concrete scenarios]

## 6. Summary and Recommendations
[Reproducible conclusions + actionable recommendations]

## References
[Links to all cited sources]
```
## Stakeholder Perspective

Adjust content depth to the audience:

| Audience | Focus | Detail Level |
|---|---|---|
| Decision-makers | Conclusions, risks, recommendations | Concise; emphasize actionability |
| Executors | Specific mechanisms, operating methods | Detailed; emphasize how to do it |
| Technical experts | Details, boundary conditions, limitations | In-depth; emphasize accuracy |
## Output Files

Default save location: `~/Downloads/research/<topic>/`

Mandatory files (generated automatically by the process):

| File | Content | Generation Timing |
|---|---|---|
| 00_Problem_Decomposition.md | Problem type, sub-question list | After completing Steps 0-1 |
| 01_Source_Materials.md | All material links and abstracts | Continuously updated during Step 2 |
| 02_Fact_Cards.md | Extracted facts and sources | Continuously updated during Step 3 |
| 03_Comparison_Framework.md | Selected framework and content | After completing Step 4 |
| 04_Derivation_Process.md | Deduction from facts to conclusions | After completing Step 6 |
| 05_Verification_Records.md | Use case verification and review | After completing Step 7 |
| FINAL_Research_Report.md | Complete deliverable report | After completing Step 8 |

Optional files:
- `raw/` - archive of original materials (save when content is long)
## Methodology Quick Reference Card

```
┌─────────────────────────────────────────────────────────────┐
│ 8-Step In-Depth Research Methodology │
├─────────────────────────────────────────────────────────────┤
│ 0. Judge problem type → Select corresponding framework template │
│ 1. Decompose problem → 2-4 researchable sub-questions │
│ 2. Stratify materials → L1 Official > L2 Blog > L3 Media > L4 Community │
│ 3. Extract facts → Each with source, mark confidence level │
│ 4. Establish framework → Fixed dimensions, structured comparison │
│ 5. Align reference object → Ensure unified definition │
│ 6. Deduction chain → Fact→Comparison→Conclusion, write explicitly │
│ 7. Use case verification → Sanity check, prevent armchair theorizing │
│ 8. Deliverable processing → One-sentence summary + structured chapters + traceable evidence │
├─────────────────────────────────────────────────────────────┤
│ Report Structure: Definition→Mechanism→Relationships→Differences→Use Cases→Summary │
│ Key Discipline: Conclusions come from mechanism comparison, not "I feel like it" │
└─────────────────────────────────────────────────────────────┘
```
## Usage Examples

### Example 1: Technical Concept Comparison
User: Help me conduct in-depth research on the differences between REST API and GraphQL
Execution Process:
- Judge type: Concept Comparison Type
- Decompose problems: Definition, mechanism, application scenarios, advantages and disadvantages
- Consult official specifications (REST papers, GraphQL official documents)
- Extract fact cards
- Analyze with 8-dimension comparison framework
- Verify with actual project scenarios
- Output structured report
### Example 2: Technical Decision Support
User: Should we choose PostgreSQL or MongoDB? Help me research it
Execution Process:
- Judge type: Decision Support Type
- Supplement questions: User's business scenarios, data characteristics, team experience
- Consult official documents and performance benchmarks
- Analyze with decision dimension framework
- Provide scenario-based recommendations
- Mark risks and preconditions
### Example 3: Trend Analysis
User: What are the development trends of AI Agent? Conduct an in-depth analysis
Execution Process:
- Judge type: Trend Analysis Type
- Sort out historical evolution context
- Collect first-hand materials (papers, official announcements)
- Identify driving factors
- Analyze current landscape
- Prudent trend prediction (mark uncertainties)
## Source Verifiability Requirements

Core principle: every piece of external information cited in the report must be directly verifiable by the reader.

Mandatory rules:

1. URL accessibility:
   - All cited links must be publicly accessible (no login or paywall required)
   - If cited content requires a login, this must be marked
   - For academic papers, prefer public versions such as arXiv or DOI links
2. Precise citation positioning:
   - For long documents, specify the section/page number/timestamp
   - Example: `[Source: OpenAI Blog, 2024-03-15, "GPT-4 Technical Report", §3.2 Safety]`
   - For video/audio citations, note the timestamp
3. Content correspondence:
   - Every cited fact must have a corresponding statement in the original text
   - Do not present over-inferred content as a "citation"
   - If there is interpretation or inference, explicitly mark it "Inferred based on [Source]"
4. Timeliness labeling:
   - Note the release/update date of each material
   - For technical documents, note the version number
   - Re-evaluate validity if a material is older than 2 years
5. Handling of unverifiable information:
   - If a source cannot be publicly verified (e.g., private communication, paid report abstracts), this must be reflected in the confidence level
   - Unverifiable information cannot be the sole support for a core conclusion
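Rule 1 lends itself to a mechanical pre-flight pass over the reference list. A minimal sketch, with the caveat that an HTTP status below 400 only shows the URL resolves publicly; paywalls that return 200 still need a manual check. The timeout and User-Agent values are arbitrary choices:

```python
import urllib.error
import urllib.request

def is_publicly_accessible(url: str, timeout: float = 10.0) -> bool:
    """Probe a cited URL with a HEAD request; False on any failure."""
    try:
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "research-link-check"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

Running this over every link before Step 8 catches dead or malformed citations early; failures still need a human decision (fix the link, find a mirror, or downgrade the confidence level).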
## Quality Checklist

Before completing the report, check the following items.

### ⏰ Timeliness Check (BLOCKING for high-sensitivity fields)

When the research topic is at the 🔴 Extremely High or 🟠 High sensitivity level, the following checks are mandatory.

Typical community-voice omission errors:
- ❌ Wrong: rely only on official documents and mention MCP as an ordinary feature in the report
- ✅ Correct: discover via GitHub Issues that MCP is the community's most-discussed feature, and analyze its value prominently in the report
- ❌ Wrong: "Both Alma and Cherry Studio support MCP" (no analysis of differences)
- ✅ Correct: discover via community discussions that "Alma's MCP implementation closely matches Claude Code, which is its core competitive edge"

Typical platform-support/protocol omission errors:
- ❌ Wrong: "Alma only supports macOS" (based on search-engine-cached "Coming soon" information)
- ✅ Correct: directly access the alma.now/download page to verify the currently supported platforms
- ❌ Wrong: "Alma supports MCP" (searched only MCP and missed ACP)
- ✅ Correct: search both "Alma MCP" and "Alma ACP", and discover that Alma also supports integrating CLI tools via the ACP protocol

Typical timeliness errors:
- ❌ Wrong: "Claude supports function calling" (no version noted; may refer to old-version capabilities)
- ✅ Correct: "Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) supports function calling through the Tool Use API, with a maximum of 8192 tokens for tool definitions"
- ❌ Wrong: "According to a 2023 blog, GPT-4's context length is 8K"
- ✅ Correct: "As of January 2024, GPT-4 Turbo supports 128K context (Source: OpenAI official documentation, updated 2024-01-25)"
### ⚠️ Application Object Consistency Check (BLOCKING)

This is the easiest check to overlook and also the most fatal:

Typical error case:
- Research topic: "College students' inattentiveness in class"
- Wrong citation: "In October 2025, the Ministry of Education issued a document prohibiting mobile phones in classrooms"
- Problem: this policy targets middle school students, not college students
- Consequence: readers mistakenly conclude that the Ministry of Education prohibits college students from bringing phones, causing serious misunderstanding
## Package Output (BLOCKING)

After completing the research, package the working directory:

```bash
tar -czvf ~/outcome.tar.gz -C <parent_dir> <workspace_name>
```

- If `~/outcome.tar.gz` already exists, overwrite it directly
- Inform the user that packaging is complete and where the file is located
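The same packaging step can be done portably with the standard library when a shell is not available; opening with `"w:gz"` truncates an existing archive, matching the overwrite rule above (the function name is illustrative):

```python
import tarfile
from pathlib import Path

def package_workspace(workspace: Path, out: Path) -> Path:
    """gzip the workspace directory into out, overwriting any old archive."""
    with tarfile.open(out, "w:gz") as tar:
        # arcname keeps only the folder name, like tar's `-C <parent_dir>`.
        tar.add(workspace, arcname=workspace.name)
    return out
```

Example: `package_workspace(Path.home() / "Downloads/research/my-topic", Path.home() / "outcome.tar.gz")`.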
## Final Response Specification

After completing the research, the reply to the user:

✅ Should include:
- A one-sentence core conclusion
- A summary of key findings (3-5 points)
- The package file location (`~/outcome.tar.gz`)
- If there are major uncertainties, the points needing further verification

❌ Must not include:
- A list of process files (e.g., `00_Problem_Decomposition.md`)
- Detailed descriptions of the research steps
- A display of the working directory structure
## Version History

- v1.6 (2026-01-12): Added community voice mining mechanism
  - Added the "L4 Community Source Specification" table, covering GitHub Issues/Discussions, Reddit, and HN
  - Expanded the search strategy from 5 rounds to 6, adding a "community voice mining" round
  - Added the "Specific operations for community voice mining" guide
  - Added to the quality checklist: GitHub Issues mined, community hotspots identified
  - Added typical community-voice omission error cases
  - Source: lesson from underestimating the importance of MCP in the Alma vs Cherry Studio research; official documents rarely emphasize "features we have that others don't", but community discussions do
- v1.5 (2026-01-12): Improved accuracy for high-sensitivity field research
  - Added Rule 6, "Direct verification of official download/release pages": access download pages in real time, do not rely on search cache
  - Added Rule 7, "Search for product-specific protocol/function names": search for protocol names such as MCP and ACP
  - Expanded the search strategy from 3 rounds to 5, adding download-page verification and protocol-search rounds
  - Added the "Final Response Specification" chapter: do not list process files in replies
  - Added to the quality checklist: download-page verification and protocol-search check items
  - Added typical error cases: platform-support omission (Alma Windows), protocol omission (Alma ACP)
  - Source: lesson from missing Windows support and the ACP function in the Alma vs Cherry Studio research
- v1.4 (2026-01-12): Added timeliness sensitivity judgment mechanism
  - Added Step 0.5, "Timeliness Sensitivity Judgment", dividing questions into 4 sensitivity levels (Extremely High/High/Medium/Low)
  - Mandatory rules for 🔴 Extremely High sensitivity fields (AI/large models, etc.): 6-month time window, priority to official sources, mandatory version-number labeling
  - Added "Timeliness Screening Rules" and "Search strategy for high-sensitivity fields" to Step 2
  - Added release date, timeliness status, and version information fields to the source material template
  - Added a "Timeliness Check" chapter to the quality checklist
  - Source: user feedback that technology research easily cites outdated information, leading to misunderstanding
- v1.3 (2026-01-11): Added source verifiability requirements
  - Added the "Source Verifiability Requirements" chapter with 5 mandatory rules
  - URL accessibility, precise citation positioning, content correspondence, timeliness labeling, handling of unverifiable information
  - Added to the quality checklist: "Every citation can be directly verified by the reader"
  - Source: user requirement that external citations be directly verifiable
- v1.2 (2026-01-11): Added application object verification mechanism
  - Added "Research Object Definition" to Step 1, requiring clear population/region/time/level boundaries
  - Added "Application Object Verification" to Step 2: the application object must be verified before a material is included
  - Added the "Application Object" field to the fact card template in Step 3
  - Added the "Application Object Consistency Check" chapter to the quality checklist
  - Source: lesson from mistakenly citing middle-school policies in the classroom attention research
- v1.1 (2026-01-11): Enhanced intermediate product management
  - Added the "Working Directory and Intermediate Product Management" chapter
  - Defined explicit save actions for each step (marked with 📁)
  - Changed intermediate files from "optional" to "mandatory"
  - Standardized file naming and directory structure
- v1.0 (2025-01-11): Initial version
  - Distilled from the Claude Skills vs Functions research case study
  - Complete 8-step process
  - 5 problem-type frameworks
  - Multi-dimensional comparison template