# Literature Review Sprint

## Purpose
Turn "I need to read more papers" into a bounded, systematic review. This skill follows the handbook's literature review advice: search systematically, prioritize high-quality sources, assess institution/year/venue/impact, summarize key ideas and limitations, and stay current with top conferences and recent papers.
The output is a usable map of the field, not a long undigested bibliography.

## When to Use
- User is starting a new topic
- User wants to verify novelty of an idea
- User is writing related work
- User has many papers and no structure
- User needs a reading plan before an advisor or group meeting

## Workflow

### Stage 1: Define the Review Question
Ask for:
- Topic or problem
- Target field or venue family
- Purpose: novelty check, related work writing, method comparison, dataset search, or general catch-up
- Deadline and available reading time
- Existing seed papers, if any
Force a narrow review question. "Diffusion models" is too broad; "diffusion policies for robot manipulation under distribution shift" is workable.

### Stage 2: Build the Search Plan
Create search queries for:
- Google Scholar / Semantic Scholar
- arXiv
- Top venue proceedings in the field
- Hugging Face Papers or similar recent-paper hubs for ML topics
- References and citations of 2-3 seed papers
For current topics, prioritize the last 2 years first, then trace backward to canonical papers.
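The plan above can be captured as data rather than prose. A minimal sketch, assuming one query string per source (the source names and query shapes are illustrative, not real search-operator syntax for any engine):

```python
from datetime import date

def build_search_plan(topic: str, keywords: list[str], years_back: int = 2) -> dict[str, str]:
    """One query string per source; recent years first, then trace backward."""
    since = date.today().year - years_back
    terms = " ".join([topic, *keywords])
    return {
        "google_scholar": f"{terms} (custom range: {since}-present)",
        "arxiv": f"{terms} (sort by most recent)",
        "venue_proceedings": f"{terms} (per-venue search, {since}-present)",
        "seed_citations": "forward and backward citations of 2-3 seed papers",
    }

plan = build_search_plan("diffusion policies for robot manipulation", ["distribution shift"])
for source, query in plan.items():
    print(f"{source}: {query}")
```

Writing the queries down like this makes the recent-first, then-backward order explicit and keeps the plan reusable in the artifact.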

### Stage 3: Triage Papers
Classify papers into:
- Core: must read carefully
- Skim: read for framing or related work
- Backlog: keep if time remains
- Drop: irrelevant, low quality, or superseded
Assess each paper using:
- Institution or lab
- Year
- Venue
- Citation/impact signal
- Method relevance
- Evaluation relevance
- Clear limitation or gap

### Stage 4: Extract Notes
For each core paper, produce compact notes:
- One-sentence contribution
- Problem setting
- Key method
- Dataset/task
- Baselines
- Main result
- Limitation
- How it relates to the user's project
Keep notes short enough that they remain searchable.
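A fixed-field record enforces both the checklist and the brevity. A minimal sketch, assuming the field names above (labels like "Result" vs. "Main result" are an illustrative choice):

```python
from dataclasses import dataclass, fields

@dataclass
class PaperNote:
    """Compact, grep-able note for one core paper."""
    title: str
    contribution: str   # one sentence
    setting: str
    method: str
    dataset: str
    baselines: str
    result: str
    limitation: str
    relevance: str      # link back to the user's project

    def to_markdown(self) -> str:
        """Render as the per-paper section of the artifact."""
        lines = [f"### {self.title}"]
        for f in fields(self):
            if f.name == "title":
                continue
            lines.append(f"- {f.name.capitalize()}: {getattr(self, f.name)}")
        return "\n".join(lines)
```

Because every field is required, a note cannot silently omit the limitation or the relevance line.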

### Stage 5: Synthesize the Map
Create a field map organized by axes that matter for the topic, such as:
- Method families
- Dataset/task families
- Assumptions
- Evaluation metrics
- Failure modes
- Open gaps
Name 3-5 concrete gaps or tensions. Separate real gaps from "I have not read enough yet."

### Stage 6: Produce the Artifact
Save the artifact to `~/phd-log/literature/YYYY-MM-DD-[topic].md`:

```markdown
# Literature Review Sprint — [Topic]

## Review question
[Specific question]

## Search plan
- Queries:
- Venues / sources:
- Seed papers:

## Paper triage
| Paper | Year | Venue | Tier | Why |
|---|---:|---|---|---|

## Core paper notes
### [Paper title]
- Contribution:
- Setting:
- Method:
- Evaluation:
- Main result:
- Limitation:
- Relevance:

## Field map
- Method families:
- Evaluation setups:
- Common assumptions:
- Failure modes:

## Candidate gaps
1. [gap + evidence]
2. [gap + evidence]
3. [gap + evidence]

## Next actions
- [ ] Read:
- [ ] Reproduce/check:
- [ ] Ask:
- [ ] Update idea/related work:
```

## Tone
Be structured and skeptical. The user should leave with a smaller, clearer reading list and a sense of what matters.

## What Not to Do
- Do not produce generic paper summaries without synthesis.
- Do not treat citation count as the only quality signal.
- Do not over-prioritize old canonical papers when the user needs current trends.
- Do not let the user claim novelty before checking recent top venues and arXiv.