You are building a reader persona for the user based on their Readwise Reader library. This persona file is used by other skills (triage, quiz, etc.) to personalize their experience.
Check if Readwise MCP tools are available (e.g. `mcp__readwise__reader_list_documents`). If they are, use them throughout (and pass this context to the subagent). If not, use the equivalent CLI commands instead (e.g. `readwise highlights <query>`). The instructions below reference MCP tool names; translate to CLI equivalents as needed.
Build Persona · Readwise Reader
I'll analyze your reading history — saves, highlights, and tags — and build a
profile in the current directory. Other skills (triage, quiz) will use this to personalize their output to you.
I'll start with a quick pass (~1-2 min) and then you can decide if you want a deeper analysis.
- Gather data. Run ALL of these in parallel (one batch of tool calls):
  - 4 highlight searches: `mcp__readwise__readwise_search_highlights` with 4 broad queries (e.g. "ideas strategy product", "learning technology culture", "writing craft creativity", "business leadership growth"). These are semantic/vector searches, so broad multi-word queries work well. Highlights are cheap and high-signal; cast a wide net.
  - 4 document lists: `mcp__readwise__reader_list_documents`, one per non-feed location (new, later, shortlist, archive). If the combined results are very sparse (< 20 docs total), also try without a location filter as a fallback. Only fetch metadata: response_fields=["title", "author", "category", "tags", "site_name", "summary", "saved_at", "published_date"]. Do NOT fetch full content.
  - Tags: `mcp__readwise__reader_list_tags` to understand their organizational system.
- Parse results efficiently. The JSON responses from document lists can be large (25k+ tokens). Do NOT try to read them with the Read tool; it will hit token limits and waste retries. Instead, use a single Bash call with a python3 script to extract and summarize all the data at once. The script should parse all result files together and output:
- Document counts by category
- Top 20 sites, authors, and tags
- Save velocity by month
- All docs saved in the last 3 weeks (title, category, author, date)
- A representative sample of highlight texts with their source titles/authors
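To make the summarizer step concrete, here is a minimal sketch, assuming each saved result file parses to a list of metadata dicts with the response_fields requested above. The field names match that list; the sample documents (and the idea of loading result files with `json.load`) are illustrative, not prescribed by the tools.

```python
import json  # in practice, load each saved result file with json.load
from collections import Counter
from datetime import datetime, timedelta, timezone

def summarize(docs):
    """Summarize Reader document metadata (a list of dicts)."""
    by_category = Counter((d.get("category") or "unknown") for d in docs)
    top_sites = Counter(d["site_name"] for d in docs if d.get("site_name")).most_common(20)
    top_authors = Counter(d["author"] for d in docs if d.get("author")).most_common(20)
    top_tags = Counter(t for d in docs for t in (d.get("tags") or [])).most_common(20)
    # Save velocity: number of docs per "YYYY-MM" month prefix of saved_at
    velocity = Counter(d["saved_at"][:7] for d in docs if d.get("saved_at"))
    cutoff = datetime.now(timezone.utc) - timedelta(days=21)
    recent = [
        (d.get("title"), d.get("category"), d.get("author"), d["saved_at"])
        for d in docs
        if d.get("saved_at")
        and datetime.fromisoformat(d["saved_at"].replace("Z", "+00:00")) >= cutoff
    ]
    return {
        "by_category": by_category,
        "top_sites": top_sites,
        "top_authors": top_authors,
        "top_tags": top_tags,
        "velocity": velocity,
        "recent": recent,
    }

# Hypothetical sample documents, standing in for parsed tool results
sample = [
    {"title": "A", "category": "article", "site_name": "example.com",
     "author": "Jane Doe", "tags": ["ai"], "saved_at": "2020-01-15T12:00:00Z"},
    {"title": "B", "category": "article", "site_name": "example.com",
     "author": "Jane Doe", "tags": ["ai", "writing"], "saved_at": "2020-02-01T08:00:00Z"},
    {"title": "C", "category": "pdf", "saved_at": "2020-02-10T09:30:00Z"},
]
summary = summarize(sample)
print(summary["by_category"])
```

Running everything in one pass like this keeps the large JSON out of the conversation: only the compact summary dict needs to be printed back.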
- Write the persona. Write the persona file to the current working directory with these sections:
- Identity & Role — Who they appear to be (profession, role, industry)
- Core Interests — Top themes and topics, ranked by frequency and recency
- Reading Personality — How they read (saves a lot but reads selectively? highlights heavily? prefers short or long-form?)
- Current Obsessions — What they've been saving/reading most in the last 2-3 weeks
- Goals & Aspirations — What they seem to be working toward, inferred from patterns
- Taste & Sensibility — Thinkers and styles they gravitate toward (contrarian? practical? philosophical? technical?)
- Anti-interests — Topics notably absent or avoided
- Triage Guidance — Specific instructions for how to pitch documents to this person (e.g. "lead with practical applicability", "connect to their interest in X", "bar is high for AI content — flag when it's genuinely novel")
- Return a brief summary (3-5 sentences) of the persona AND the absolute path to the file.
After the quick-pass subagent returns, show the user the results and ask if they want a deeper analysis. If yes, launch a second subagent that: