research-lit
Research Literature Review
Research topic: $ARGUMENTS
Constants
- PAPER_LIBRARY — Local directory containing the user's paper collection (PDFs). Check these paths in order:
  - papers/ — in the current project directory
  - literature/ — in the current project directory
  - Custom path specified by the user in CLAUDE.md under ## Paper Library
- MAX_LOCAL_PAPERS = 20 — Maximum number of local PDFs to scan (read first 3 pages each). If more are found, prioritize by filename relevance to the topic.
- ARXIV_DOWNLOAD = false — When true, download the top 3-5 most relevant arXiv PDFs to PAPER_LIBRARY after search. When false (default), only fetch metadata (title, abstract, authors) via the arXiv API — no files are downloaded.
- ARXIV_MAX_DOWNLOAD = 5 — Maximum number of PDFs to download when ARXIV_DOWNLOAD = true.
💡 Overrides:
- /research-lit "topic" — paper library: ~/my_papers/ — custom local PDF path
- /research-lit "topic" — sources: zotero, local — only search Zotero + local PDFs
- /research-lit "topic" — sources: zotero — only search Zotero
- /research-lit "topic" — sources: web — only search the web (skip all local)
- /research-lit "topic" — arxiv download: true — download top relevant arXiv PDFs
- /research-lit "topic" — arxiv download: true, max download: 10 — download up to 10 PDFs
Data Sources
This skill checks multiple sources in priority order. All are optional — if a source is not configured or not requested, skip it silently.
Source Selection
Parse $ARGUMENTS for a — sources: directive:
- If — sources: is specified: Only search the listed sources (comma-separated). Valid values: zotero, obsidian, local, web, all.
- If not specified: Default to all — search every available source in priority order.
Examples:
/research-lit "diffusion models" → all (default)
/research-lit "diffusion models" — sources: all → all
/research-lit "diffusion models" — sources: zotero → Zotero only
/research-lit "diffusion models" — sources: zotero, web → Zotero + web
/research-lit "diffusion models" — sources: local → local PDFs only
/research-lit "topic" — sources: obsidian, local, web → skip Zotero
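The directive parsing above can be sketched in shell. This is illustrative only: `parse_sources` is an invented name, the skill normally does this reasoning in-context, and the sketch assumes `— sources:` is the last directive on the line.

```bash
# Extract the "— sources:" directive from the arguments; default to "all".
parse_sources() {
  sources=$(printf '%s' "$1" | sed -n 's/.*— sources:[[:space:]]*//p' | tr -d ' ')
  printf '%s' "${sources:-all}"
}

parse_sources 'diffusion models — sources: zotero, web'   # prints zotero,web
parse_sources 'diffusion models'                          # prints all
```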
Source Table
| Priority | Source | ID | How to detect | What it provides |
|---|---|---|---|---|
| 1 | Zotero (via MCP) | zotero | Try calling any Zotero MCP tool | Collections, tags, annotations, PDF highlights, BibTeX, semantic search |
| 2 | Obsidian (via MCP) | obsidian | Try calling any Obsidian MCP tool | Research notes, paper summaries, tagged references, wikilinks |
| 3 | Local PDFs | local | PAPER_LIBRARY path contains PDFs | Raw PDF content (first 3 pages) |
| 4 | Web search | web | Always available (WebSearch) | arXiv, Semantic Scholar, Google Scholar |
Graceful degradation: If no MCP servers are configured, the skill works exactly as before (local PDFs + web search). Zotero and Obsidian are pure additions.
Workflow
Step 0a: Search Zotero Library (if available)
Skip this step entirely if Zotero MCP is not configured.
Try calling a Zotero MCP tool (e.g., search). If it succeeds:
- Search by topic: Use the Zotero search tool to find papers matching the research topic
- Read collections: Check if the user has a relevant collection/folder for this topic
- Extract annotations: For highly relevant papers, pull PDF highlights and notes — these represent what the user found important
- Export BibTeX: Get citation data for relevant papers (useful for /paper-write later)
- Compile results: For each relevant Zotero entry, extract:
- Title, authors, year, venue
- User's annotations/highlights (if any)
- Tags the user assigned
- Which collection it belongs to
📚 Zotero annotations are gold — they show what the user personally highlighted as important, which is far more valuable than generic summaries.
Step 0b: Search Obsidian Vault (if available)
Skip this step entirely if Obsidian MCP is not configured.
Try calling an Obsidian MCP tool (e.g., search). If it succeeds:
- Search vault: Search for notes related to the research topic
- Check tags: Look for notes tagged with relevant topics (e.g., #diffusion-models, #paper-review)
- Read research notes: For relevant notes, extract the user's own summaries and insights
- Follow links: If notes link to other relevant notes (wikilinks), follow them for additional context
- Compile results: For each relevant note:
- Note title and path
- User's summary/insights
- Links to other notes (research graph)
- Any frontmatter metadata (paper URL, status, rating)
📝 Obsidian notes represent the user's processed understanding — more valuable than raw paper content for understanding their perspective.
Step 0c: Scan Local Paper Library
Before searching online, check if the user already has relevant papers locally:
- Locate library: Check PAPER_LIBRARY paths for PDF files (Glob: papers/**/*.pdf, literature/**/*.pdf)
- De-duplicate against Zotero: If Step 0a found papers, skip any local PDFs already covered by Zotero results (match by filename or title).
- Filter by relevance: Match filenames and first-page content against the research topic. Skip clearly unrelated papers.
- Summarize relevant papers: For each relevant local PDF (up to MAX_LOCAL_PAPERS):
  - Read first 3 pages (title, abstract, intro)
  - Extract: title, authors, year, core contribution, relevance to topic
  - Flag papers that are directly related vs tangentially related
- Build local knowledge base: Compile summaries into a "papers you already have" section. This becomes the starting point — external search fills the gaps.
📚 If no local papers are found, skip to Step 1. If the user has a comprehensive local collection, the external search can be more targeted (focus on what's missing).
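The library-location step might be bootstrapped with a one-liner like this (a sketch under the default path assumptions from Constants; the actual per-PDF reading and summarizing happens afterwards, in-context):

```bash
# Collect candidate PDFs from the default PAPER_LIBRARY paths,
# capped at MAX_LOCAL_PAPERS; missing directories are silently skipped.
MAX_LOCAL_PAPERS=20
find papers/ literature/ -name '*.pdf' 2>/dev/null | head -n "$MAX_LOCAL_PAPERS"
```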
Step 1: Search (external)
- Use WebSearch to find recent papers on the topic
- Check arXiv, Semantic Scholar, Google Scholar
- Focus on papers from last 2 years unless studying foundational work
- De-duplicate: Skip papers already found in Zotero, Obsidian, or local library
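One way to do the cross-source de-duplication is to compare normalized titles; a rough sketch (the `norm_title` helper is invented for illustration, not part of the skill):

```bash
# Normalize a title so "Denoising Diffusion Probabilistic Models" (web result)
# and "denoising-diffusion-probabilistic-models" (local filename) compare equal.
norm_title() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9'
}
```

When comparing filenames, strip the .pdf extension before normalizing, otherwise the trailing "pdf" breaks the match.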
arXiv API search (always runs, no download by default):
Locate the fetch script and search arXiv directly:
```bash
# Try to find arxiv_fetch.py
SCRIPT=$(find tools/ -name "arxiv_fetch.py" 2>/dev/null | head -1)
# If not found, check ARIS install
[ -z "$SCRIPT" ] && SCRIPT=$(find ~/.claude/skills/arxiv/ -name "arxiv_fetch.py" 2>/dev/null | head -1)
# Search arXiv API for structured results (title, abstract, authors, categories)
python3 "$SCRIPT" search "QUERY" --max 10
```
If `arxiv_fetch.py` is not found, fall back to WebSearch for arXiv (same as before).
The arXiv API returns structured metadata (title, abstract, full author list, categories, dates) — richer than WebSearch snippets. Merge these results with WebSearch findings and de-duplicate.
**Optional PDF download** (only when `ARXIV_DOWNLOAD = true`):
After all sources are searched and papers are ranked by relevance:
```bash
# Download top N most relevant arXiv papers
python3 "$SCRIPT" download ARXIV_ID --dir papers/
```

- Only download papers ranked in the top ARXIV_MAX_DOWNLOAD by relevance
- Skip papers already in the local library
- 1-second delay between downloads (rate limiting)
- Verify each PDF > 10 KB
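The size check in the last bullet guards against error pages saved as PDFs; a minimal helper sketch (the name is invented here; 10 KB = 10240 bytes):

```bash
# Return success only if the file exists and is larger than 10 KB.
verify_pdf() {
  [ -f "$1" ] && [ "$(wc -c < "$1")" -gt 10240 ]
}
```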
Step 2: Analyze Each Paper
For each relevant paper (from all sources), extract:
- Problem: What gap does it address?
- Method: Core technical contribution (1-2 sentences)
- Results: Key numbers/claims
- Relevance: How does it relate to our work?
- Source: Where we found it (Zotero/Obsidian/local/web) — helps user know what they already have vs what's new
Step 3: Synthesize
- Group papers by approach/theme
- Identify consensus vs disagreements in the field
- Find gaps that our work could fill
- If Obsidian notes exist, incorporate the user's own insights into the synthesis
Step 4: Output
Present as a structured literature table:
| Paper | Venue | Method | Key Result | Relevance to Us | Source |
|-------|-------|--------|------------|-----------------|--------|

Plus a narrative summary of the landscape (3-5 paragraphs).
If Zotero BibTeX was exported, include a references.bib snippet for direct use in paper writing.
Step 5: Save (if requested)
- Save paper PDFs to papers/ or literature/
- Update related work notes in project memory
- If Obsidian is available, optionally create a literature review note in the vault
Key Rules
- Always include paper citations (authors, year, venue)
- Distinguish between peer-reviewed and preprints
- Be honest about limitations of each paper
- Note if a paper directly competes with or supports our approach
- Never fail because an MCP server is not configured — always fall back gracefully to the next data source
- Zotero/Obsidian tools may have different names depending on how the user configured the MCP server (e.g., mcp__zotero__search or mcp__zotero-mcp__search_items). Try the most common patterns and adapt.